#346 Slow ldapmodify operation time for large quantities of multi-valued attribute values
Closed: wontfix. Opened 11 years ago by beall.

Adding large quantities of values to a multi-valued attribute results in excessive delays in the ldapmodify operation, with extra overhead for the uniqueMember attribute in particular, but also for other attributes such as eduPersonEntitlement.

For instance, adding 100K members to an empty group takes about 2.5 hours with 389 on relatively new hardware, whereas the same operation with Sun DS 6.3 takes 1 minute on old Sun v440 hardware.

A similar test to add 100K values to eduPersonEntitlement on an account entry takes 48 minutes in 389 and 2.5 minutes on Sun DS.

The delay scales with the size of the modification rather than the existing size of the group, so changes of a few members on that 100K-member group are, luckily, quick enough.


Fix description:
1) Set SLAPI_ATTR_FLAG_NORMALIZED in a_flags for the already
normalized DN values. Then let slapi_attr_value_find
pass the flag to plugin_call_syntax_filter_ava, where
DN normalization is skipped based on that knowledge.
2) To avoid confusion, rename value_normalize_value and
valuearray_normalize_value to value_dn_normalize_value and
valuearray_dn_normalize_value, respectively.
3) In slapi_has8thBit, instead of checking the 8th bit byte
by byte, check 4 bytes at a time.

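The 4-bytes-at-a-time check in 3) can be sketched as follows. This is an illustrative reimplementation of the idea only; the helper name and exact loop are assumptions, not the actual slapi_has8thBit code.

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Illustrative sketch: test the high bit of four bytes at once by
 * masking a 32-bit word with 0x80808080, instead of testing one byte
 * at a time. Not the actual slapi_has8thBit implementation. */
static int
has_8th_bit(const unsigned char *s, size_t len)
{
    size_t i = 0;

    for (; i + 4 <= len; i += 4) {
        uint32_t w;
        memcpy(&w, s + i, sizeof(w)); /* memcpy avoids unaligned reads */
        if (w & 0x80808080U) {
            return 1;
        }
    }
    for (; i < len; i++) {            /* handle the 1-3 trailing bytes */
        if (s[i] & 0x80) {
            return 1;
        }
    }
    return 0;
}
```

The win comes from doing one word-sized test instead of four byte tests per iteration, which matters when every attribute value is scanned during normalization.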
How much did this improve the performance?

Thank you for the ack, Rich. I'm checking in this patch and will continue working on this ticket.

Replying to [comment:6 rmeggins]:

How much did this improve the performance?

With these modifications, adding 1000 uniqueMember attribute-value pairs
to an entry improved from 81 seconds to 69 seconds (about 14.8% gain).

Pushed the first patch to master.

$ git merge trac346
Updating 091c749..c0151f7
Fast-forward
ldap/servers/plugins/syntaxes/nameoptuid.c | 3 +--
ldap/servers/slapd/attr.c | 6 +++++-
ldap/servers/slapd/attrlist.c | 6 ++++--
ldap/servers/slapd/entry.c | 6 +++---
ldap/servers/slapd/entrywsi.c | 6 ++++--
ldap/servers/slapd/slapi-private.h | 4 ++--
ldap/servers/slapd/utf8compare.c | 19 ++++++++++++++++---
ldap/servers/slapd/value.c | 3 ++-
ldap/servers/slapd/valueset.c | 4 ++--
9 files changed, 39 insertions(+), 18 deletions(-)

$ git push
Counting objects: 31, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (16/16), done.
Writing objects: 100% (16/16), 2.16 KiB, done.
Total 16 (delta 14), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
091c749..c0151f7 master -> master

Fix Description:
Fix in "commit c0151f7" takes
advantage of the knowledge on normalized attribute DNs. When
generating index keys from the attribute value, it has to be
case normalized, but the attribute DN is normalized w/o the
cases lowered. This patch introduces attribute flags to
distinguish the 2 cases:
SLAPI_ATTR_FLAG_NORMALIZED_CES - normalized but not case-
normalized.
SLAPI_ATTR_FLAG_NORMALIZED_CIS - case-normalized
And SLAPI_ATTR_FLAG_NORMALIZED is the combination of the 2
macros.
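As a sketch, the flag combination can be pictured like this. The MY_ prefix and the numeric values are invented for illustration; the real SLAPI_ATTR_FLAG_* constants are defined in slapi-plugin.h and may differ.

```c
/* Illustrative flag layout; the MY_ names and numeric values are
 * made up, the real SLAPI_ATTR_FLAG_* constants live in slapi-plugin.h. */
#define MY_ATTR_FLAG_NORMALIZED_CES 0x0100 /* normalized, case preserved */
#define MY_ATTR_FLAG_NORMALIZED_CIS 0x0200 /* case-normalized */
#define MY_ATTR_FLAG_NORMALIZED \
    (MY_ATTR_FLAG_NORMALIZED_CES | MY_ATTR_FLAG_NORMALIZED_CIS)

/* A consumer can then skip re-normalization when either form is set: */
static int
value_needs_normalizing(unsigned int a_flags)
{
    return (a_flags & MY_ATTR_FLAG_NORMALIZED) == 0;
}
```

The point of the combined macro is that callers that only care about "already normalized in some form" can test one mask instead of two flags.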

Reviewed by Rich (Thank you!!)

Pushed to master.

$ git cherry-pick 4f7e06b41bde75a40cc2f72fb86b3f669c9db5a4
[master f6b74ad] Trac Ticket #346 - Slow ldapmodify operation time for large quantities of multi-valued attribute values
8 files changed, 68 insertions(+), 20 deletions(-)

$ git push
Counting objects: 70, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (37/37), done.
Writing objects: 100% (37/37), 4.72 KiB, done.
Total 37 (delta 32), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
d7876a2..f6b74ad master -> master

Pushed to 389-ds-base-1.2.11, as well.

$ git push redhat 389-ds-base-1.2.11:389-ds-base-1.2.11
Counting objects: 52, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (28/28), done.
Writing objects: 100% (32/32), 35.87 KiB, done.
Total 32 (delta 23), reused 9 (delta 4)
hooks/post-receive: line 1: /home/rmeggins/git-hooks/post-receive-email: No such file or directory
error: hooks/post-receive exited with error code 1
To ssh://git.engineering.redhat.com/srv/git/users/rmeggins/ds.git
607abfe..2bac5ab 389-ds-base-1.2.11 -> 389-ds-base-1.2.11


Hello, beall,
We are working on the performance improvement and seeing some progress. To establish common ground, can I have your sample use case? You pointed out that adding uniqueMember on 389-ds-base is slow. Could you tell us what your uniqueMember values look like and what the client-side operation is (e.g., the command line you used)? If you could also share how you measured the timing on Fedora/RHEL, that would be great. And I assume you are using exactly the same client tool for Sun.

Regarding eduPersonEntitlement, it's a Directory String, and the auxiliary objectclass eduPerson may carry it. Did you give a group the objectclass eduPerson and add 100K eduPersonEntitlement values to the group entry?

set default ticket origin to Community

git patch file (master): fixing a bug introduced by the previous patch.
0001-Trac-Ticket-346-Slow-ldapmodify-operation-time-for-l.2.patch

Attachment 0001-Trac-Ticket-346-Slow-ldapmodify-operation-time-for-l.2.patch added

ack

Reviewed by Rich (Thank you!!)

Pushed to master.
Counting objects: 17, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (9/9), done.
Writing objects: 100% (9/9), 1.07 KiB, done.
Total 9 (delta 6), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
82d2c51..6237131 master -> master

Pushed to the 1.2.11 branch.
git push origin 389-ds-base-1.2.11-ext:389-ds-base-1.2.11
Counting objects: 17, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (9/9), done.
Writing objects: 100% (9/9), 1.07 KiB, done.
Total 9 (delta 6), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
db6b354..063e151 389-ds-base-1.2.11-ext -> 389-ds-base-1.2.11

Added initial screened field value.

Apologies for not seeing this earlier. I thought I would see an e-mail when this ticket was modified, but I didn't.

To answer Rich's questions from a while back, yes, we have eduPerson as an object class on our accounts entries, and we use eduPersonEntitlement as a field we populate. These values look like this:
urn:mace:usc.edu:gds:entitlement:stmn7vt3

Regarding the uniqueMember attribute, these look like this:
uscrdn=usc.edu.scmd6rd3,ou=accounts,dc=usc,dc=edu

This is a DN of an account entry.

The lag time is shown by running an ldapmodify command over a file of such values, adding them to the group. The standard OpenLDAP ldapmodify was used, binding as Directory Manager. 'time' was put in front of this command line to measure how long it takes. The exact same command was then pointed at a Sun LDAP instance for comparison.
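A hedged sketch of such a measurement. The suffix, group name, and member DNs below are placeholders for illustration, not the reporter's actual data; the timed ldapmodify invocation is left commented out because it needs a live server and the Directory Manager password.

```shell
# Build a small LDIF in the shape described above (placeholder DNs).
cat > add-members.ldif <<'EOF'
dn: cn=testgroup,ou=groups,dc=example,dc=com
changetype: modify
add: uniqueMember
uniqueMember: uid=user00001,ou=accounts,dc=example,dc=com
uniqueMember: uid=user00002,ou=accounts,dc=example,dc=com
EOF

# Then time the modify against a running server:
# time ldapmodify -x -H ldap://localhost:389 \
#     -D "cn=Directory Manager" -W -f add-members.ldif
```

With a real test you would extend the LDIF to the full 100K member list; `time` then reports the wall-clock cost of the whole operation.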

I can't tell for sure from the remaining comments, but it looks like you may have incorporated a patch to improve this into the next version. Is that true?

Thanks,
Russ.

Replying to [comment:21 beall]:

Apologies for not seeing this earlier. I thought I would see an e-mail when this ticket was modified, but I didn't.

To answer Rich's questions from a while back, yes, we have eduPerson as an object class on our accounts entries, and we use eduPersonEntitlement as a field we populate. These values look like this:
urn:mace:usc.edu:gds:entitlement:stmn7vt3

Regarding the uniqueMember attribute, these look like this:
uscrdn=usc.edu.scmd6rd3,ou=accounts,dc=usc,dc=edu

This is a DN of an account entry.

The lag time is shown by running an ldapmodify command over a file of such values, adding them to the group. The standard OpenLDAP ldapmodify was used, binding as Directory Manager. 'time' was put in front of this command line to measure how long it takes. The exact same command was then pointed at a Sun LDAP instance for comparison.

I can't tell for sure from the remaining comments, but it looks like you may have incorporated a patch to improve this into the next version. Is that true?

We did some work in 1.2.11 - it should improve the performance slightly - we would appreciate it if you would try it out and let us know.

Thanks,
Russ.

The reason for the bad performance of adding large sets of values to an attribute is that it has to be checked whether any of the values
1] already exists, or
2] still exists after update resolution.

Case 1] is handled in the core modify in entry_apply_mod; there is already an optimization when adding more than one value: a binary tree (AVL) of the existing values is created, and for the new values this value tree is used to check presence.

Case 2] is in index_add_mods, where it is checked that each value is still in the attribute. It was introduced with ticket #242. This check is necessary: there can be cases where the server has already seen a delete for a value with a newer CSN, so the value was removed from the entry but is still in the original mods. To keep the indexes correct, this needs to be checked.

To resolve this performance problem there are two ways:

1] Reduce the need to check in index_add_mods, e.g. by updating the list of mods when a value is removed from the attribute's values; check whether this is necessary for replicated operations only; or find another way than looking up every single value (use valuearray_minus_valuearray? But maybe it has the same performance issues).
2] Speed up the value lookup to an acceptable or good performance.

I have two proposals for the speedup:
1] Keep the AVL tree once it is created and use it whenever a lookup is done.
Advantages: fast, code already there.
Disadvantages: memory overhead; extra complexity due to index references between the tree and the array; only done for initial tree creation, not for extensions and removals.

2] Keep the value array sorted (start sorting after a threshold) and use binary search for lookup.
Advantages: as fast as 1], no extra memory requirements.
Disadvantage: the original order of values is lost. This is not required by the RFCs, but customers might expect it. It could easily be resolved by keeping the original value array and using an array of indexes into it for sorting.

My favourite would be 2], with the extra reason that it would allow a code cleanup. We have a zoo of valueset_, valuearray_, valuearrayfast_, and valuetree_ functions. If we use 2], there is no longer any need for the AVL tree; valuearray/valuearrayfast/valueset could be simplified by using the valuearrayfast structure throughout, and the counting of values could also be avoided.
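A minimal sketch of proposal 2] with the index-array variant: the value array keeps its original order, while a separate index array is kept sorted by value and used for binary-search lookups. The types and names here are invented for illustration, not the real valueset/valuearray code.

```c
#include <stddef.h>
#include <string.h>

/* Illustrative structure: va keeps insertion order, sorted[] holds
 * indexes into va ordered by strcmp of the values they point at. */
typedef struct {
    const char **va;   /* values, in original (insertion) order */
    size_t num;        /* number of values */
    size_t *sorted;    /* indexes into va, sorted by value */
} sorted_valueset;

/* Binary search over the sorted index array; returns the position in
 * va, or -1 if the value is not present. */
static long
sorted_valueset_find(const sorted_valueset *vs, const char *v)
{
    size_t lo = 0, hi = vs->num;

    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        int c = strcmp(v, vs->va[vs->sorted[mid]]);
        if (c == 0)
            return (long)vs->sorted[mid];
        if (c < 0)
            hi = mid;
        else
            lo = mid + 1;
    }
    return -1;
}
```

This gives O(log n) lookups like the AVL variant, without per-node memory, while the untouched va array preserves the original value order for clients that expect it.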

Current status: I have implemented a basic version of 1], not handling the removal part, and compared test results. For the case of adding 50000 new members to a group with 50000 existing members, the current implementation takes 1h08m; with the new version it takes about 1 sec.
These numbers can be confirmed theoretically:
If 50k members are added to 50k existing members, the new attribute has 100k values. In index_add_mods, for each of the 50k new values it is checked whether the value is still among the new entry's 100k values. It will not be found in the first 50k values, so each lookup takes on average 75000 comparisons (times 50000 lookups). To represent 100k values, an AVL tree has a depth of 17, so in the worst case there are 17 comparisons. The new algorithm should therefore be about 75000/17 = 4411 times faster; the observed ratio of 1h08m to about 1 sec is 4080.
Next steps: implement proposal 2], see whether the performance is the same, and how much code cleanup can be achieved.
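The estimate above can be double-checked with a few lines of arithmetic. This is only a sanity check of the numbers quoted in the comment; nothing here is slapd code.

```c
/* Back-of-the-envelope check: 50k values added to 50k existing ones,
 * linear scan vs. balanced-tree lookup. */

/* Depth of a balanced binary tree holding n values: the smallest d
 * with 2^d >= n + 1. For n = 100000 this is 17. */
static unsigned
tree_depth(unsigned long n)
{
    unsigned d = 0;
    unsigned long span = 1;

    while (span < n + 1) {
        span <<= 1;
        d++;
    }
    return d;
}

/* Predicted speedup: average linear-scan comparisons per lookup
 * (value is found in the second half, so (existing + total) / 2)
 * divided by the tree depth. */
static double
predicted_speedup(double existing, double added)
{
    double total = existing + added;              /* 100k values       */
    double avg_linear = (existing + total) / 2.0; /* 75000 comparisons */
    return avg_linear / tree_depth((unsigned long)total); /* ~4411 */
}
```

Comparing the predicted ~4411x with the observed 1h08m -> ~1 sec ratio of 4080 shows the model matches the measurement reasonably well.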

I continued to work on this and implemented a first version of 2], which also shows good results, including the delete operation.
I will clean this up; I have written an initial design document:
http://port389.org/wiki/Static_group_performance

overall, looks good.

There are a couple of places where the formatting was changed.

{{{

1250                    slapi_ch_free((void **) &(attrs[ i ].sa_present_values.va)); 
1251                    slapi_ch_free((void **) &(attrs[ i ].sa_deleted_values.va));

}}}
could you use valuearray_free() here? Are the individual Slapi_Value* in va set to NULL?

{{{
874 if (vs1->num == 0)
}}}
and other places - is there a slapi_valueset_isempty()?

I think where slapi_ch_free is used, the values were moved "PASSIN" to a valueset and are NULL; there was a comment at one place not to use valuearray_free - I will check again.

no slapi_valueset_isempty yet, and there are a few functions which are no longer used; I will do some cleanup on this.

Replying to [comment:30 lkrispen]:

I think where slapi_ch_free is used, the values were moved "PASSIN" to a valueset and are NULL; there was a comment at one place not to use valuearray_free - I will check again.

Ok. I understand why the code wants to use slapi_ch_free as it doesn't want to free the contents - just don't like to use slapi_ch_free and casting unless it is necessary.

no slapi_valueset_isempty yet, and there are a few functions which are no longer used; I will do some cleanup on this.

Ok.

Your enhancement looks good to me, too.

A couple of requests: this is a bit confusing for readers... Can we remove this commented-out code (assuming it's not needed)?
{{{
attr_add_valuearray(Slapi_Attr *a, Slapi_Value **vals, const char *dn)
858 /* values are alredy inserted
895 859 if(rc==LDAP_SUCCESS) {
896 valueset_add_valuearray( &a->a_present_values, vals );
860 slapi_valueset_add_attr_valuearray_ext( a, &a->a_present_values, vals, numofvals, 0);
897 861 }
862
*/
}}}
I'm wondering about this comment... It does not mean "error handling is needed but not implemented yet", does it? ;)
{{{
slapi_valueset_add_attr_valuearray_ext
1228 return (rc); /* error handling still needs to be done */
}}}

Noriko, thanks for the comments.

1] you're right the code is no longer needed, I'll remove the commented code
2] the comment about error handling is obsolete, but I will add a comment before the function, what the return codes will be

I added the latest version of the patch for review. It passed the TET acceptance tests and I did a couple of performance test runs.
Here are a few numbers:
Adding 50000 members to a group of 50000 members: 395 sec ==> 10 sec
Adding 50000 members to a group of 200000 members: 1431 sec ==> 12 sec
Deleting 1000 members from a group of 100000: 239 sec ==> 1.8 sec

The indentation is off in a couple of places.

{{{

1311 valuearrayfast_done(&attrs[ i ].sa_present_values);
1312 valuearrayfast_done(&attrs[ i ].sa_deleted_values);
1313 valuetree_free( &attrs[ i ].sa_vtree );
1251 slapi_ch_free((void **) &(attrs[ i ].sa_present_values.va));
1252 slapi_ch_free((void **) &(attrs[ i ].sa_deleted_values.va));
}}}
valuearrayfast_done will free the individual Slapi_Value * in the array, but slapi_ch_free will not do that. Is this as intended? Where do the individual attrs[ i ].sa_present_values.va[i] get freed?

{{{
/* do we need to reset the array references ? */
}}}
We should find out.

valueset_value_syntax_cmp - why not use slapi_attr_value_cmp? Same with valueset_value_cmp - it will already handle the case of DN syntax, and the case of no syntax (which I think is an error - there should never be an attribute with no syntax).

{{{

906                 /* not sure how to deal with that, the caller will interprete any return code as 
907                  * comparison result. Do the best we can  
908                  */ 
909                 rc = strcasecmp(v1->bv.bv_val, v2->bv.bv_val);

}}}
How can this happen?
Why case-insensitive cmp?
Why not just do a berval compare e.g. first compare bv_len, then memcmp?
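The berval compare the reviewer suggests could look roughly like this. The struct mirrors the LDAP berval shape and the function name is illustrative; this is not the actual slapd code.

```c
#include <string.h>

/* Sketch of the suggested fallback: compare lengths first, then
 * memcmp the raw bytes. This yields a total order (length-first,
 * then lexicographic), which is all a sort needs. */
struct my_berval {
    unsigned long bv_len;
    char *bv_val;
};

static int
berval_order(const struct my_berval *a, const struct my_berval *b)
{
    if (a->bv_len != b->bv_len)
        return (a->bv_len < b->bv_len) ? -1 : 1;
    return memcmp(a->bv_val, b->bv_val, a->bv_len);
}
```

Unlike strcasecmp, this never misorders values that differ only in case-folded bytes, and it needs no NUL termination.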

Rich, thanks for the careful review, here are some answers.

1] Indentation. I had noticed this, but forgotten before preparing the patch, will do it now.

2] valuearrayfast_done. The values of sa_present_values are passed in to the attr valueset and sa_present_values.num is set to 0, so valuearrayfast_done would also only have performed slapi_ch_free of the array itself. And since sa_present_values is now a Slapi_ValueSet and not a valuearrayfast, I had replaced it directly - but it would probably be better to use slapi_valueset_done.

3] "we should find out". I had addded this comment when I started to change index_add_mods and after the tests were successful had missed it, so I will review this again, remove the comment and fix if necessary

4] valueset_value[syntax]_cmp: we cannot use slapi_value_compare or slapi_attr_value_compare since they only return "equal" or "not equal", and for sorting I need "equal", "greater than", and "smaller than". To use a slapi_... function for this we would need to extend all the underlying plugin syntax functions or add new APIs - it could be worthwhile to investigate this.

5] no syntax. I had copied this code from valuetree_find, where there was error handling for slapi_attr_values2keys_sv(). But valuetree_find returned operations_error, whereas here any return code would be treated as a sorting order, so I decided to add a fallback. I don't know if we will ever run into this, but I think during startup, when reading entries from dse.ldif or the schema files, the syntax plugins might not yet be set.

Replying to [comment:37 lkrispen]:

Rich, thanks for the careful review, here are some answers.

1] Indentation. I had noticed this, but forgotten before preparing the patch, will do it now.

Ok.

2] valuearrayfast_done. The values of sa_present_values are passed in to the attr valueset and sa_present_values.num is set to 0, so valuearrayfast_done would also only have performed slapi_ch_free of the array itself. And since sa_present_values is now a Slapi_ValueSet and not a valuearrayfast, I had replaced it directly - but it would probably be better to use slapi_valueset_done.

Ok. That's fine, either way. I just wanted to know that the Slapi_Value* were not leaked.

3] "we should find out". I had addded this comment when I started to change index_add_mods and after the tests were successful had missed it, so I will review this again, remove the comment and fix if necessary

Ok. Or, file a ticket for future work.

4] valueset_value[syntax]_cmp: we cannot use slapi_value_compare or slapi_attr_value_compare since they only return "equal" or "not equal", and for sorting I need "equal", "greater than", and "smaller than". To use a slapi_... function for this we would need to extend all the underlying plugin syntax functions or add new APIs - it could be worthwhile to investigate this.

I don't think they are exposed at the SLAPI layer, but each syntax and matching rule provides compare functions - otherwise keeping sorted indexes and doing server-side sorting would not work. See attr_get_value_cmp_fn and the places where it is used, for example.

5] no syntax. I had copied this code from valuetree_find, where there was error handling for slapi_attr_values2keys_sv(). But valuetree_find returned operations_error, whereas here any return code would be treated as a sorting order, so I decided to add a fallback. I don't know if we will ever run into this, but I think during startup, when reading entries from dse.ldif or the schema files, the syntax plugins might not yet be set.

I think the syntax plugins start very early - if they are not available, we should find out why. Note that there is a slapd_bootstrap_config phase that just parses the most important stuff from dse.ldif, like syntax and schema, then there is a later phase that parses the entire thing.

ad 5] the situation is when reading the schema files, e.g. 00core.ldif. dse_read_one_file() calls str2entry_dupcheck, but doesn't set the flag SLAPI_STR2ENTRY_REMOVEDUPVALS. So before this fix there was no need to compare attr values, but this fix switches to a sorted array after a threshold count of values is passed and so needs a comparison function, but the syntax plugins are not yet available.

Replying to [comment:39 lkrispen]:

ad 5] the situation is when reading the schema files, e.g. 00core.ldif. dse_read_one_file() calls str2entry_dupcheck, but doesn't set the flag SLAPI_STR2ENTRY_REMOVEDUPVALS. So before this fix there was no need to compare attr values, but this fix switches to a sorted array after a threshold count of values is passed and so needs a comparison function, but the syntax plugins are not yet available.

Ok. Maybe we could have some sort of dummy or default syntax plugin to handle this case - we should use the regular syntax plugins once they are available.

I've tested this patch (v3), as we have a serious performance issue when adding/updating/deleting large group memberships. Unfortunately this patch doesn't make things go fast for me. Adding or deleting 20000 members in a groupOfNames takes hours. The members' cn's go like 'bulk00001', 'bulk00002' up to 'bulk20000'. Besides 389, I also tried the same using sortvals in OpenLDAP, but that's also slow. I'm now doubting the input file. Can you share with me the ldif which you used for testing adding the 50k members? Thanks!

Thanks for your feedback. I've uploaded my test scripts and files - the complete test is done by "fulltest.sh <port> <dirmanager-password>".

If it is possible I would like to test your ldif as well.

slow and fast ldif for add and delete
ldifs.tar.bz2

Hi,

Thanks a lot for your fast response. I've executed your tests on the patched version only, and I can say it's fast, fast enough for what I would expect :-).
As I expected, the input ldif is different, and on second thought it is of course very logical that my original input file is slow.
I tested with a different input file of 20k members in both 1.2.11.15 and 1.3.1.0 with the patch, on my not-so-fast laptop in a VirtualBox VM:
389-Directory/1.2.11.15 B2013.021.196:
add 20k members to 3 groups: 3 x 37 seconds
del 20k members from 3 groups: 3 x 16 seconds
389-Directory/1.3.1.0 B2013.127.1228 (including this patch):
add 20k members to 3 groups: 3 x 16 seconds
del 20k members from 3 groups: 3 x 16 seconds

So adding was 56% faster, with no difference in deletion (there were actually 2 runs of 17 seconds, so there might be a tiny delay, but I ignore that).

Getting to the difference with the ldif input, I'll include my original and new ldifs. The original ldifs add member by member to every group instead of all at once; that's the reason adding (and deleting, same thing) takes hours. I don't expect this patch would be able to fix that. The problem is that the application we're using (some sort of orchestrator-type program) gets its input and adds it into the ldap this way. The ldif would normally also contain an inetOrgPerson entry which is then added to three groups, but I left out the inetOrgPersons a while ago when I found out I can add heaps of those very fast and that the delay is really in the group membership.
Maybe you can tell whether this patch should be able to make 20k one-by-one additions to a multi-valued attribute fast? I suspect I would need to look at ticket 411, which talks about optimizing mods, and continue there. Even with optimized mods, would it be able to handle mods together with the adds of inetOrgPersons, and within what timeframe would you bundle them?

To make the discussion complete, I can say that we've also looked at dynamic groups and filtered roles, but that doesn't cut it. We would need the dynamic group to expose the calculated membership list, as requested in ticket 128. (It's hard to find a solution for this specific performance issue; it would be great if 389 could perform fast in this case :-).
It's kind of hard to modify the client application, but if there's no other solution we'll just have to use other, more expensive, ldap queries (like dynamic groups do) to get the memberships, and change our design, I'm afraid.

I would appreciate your opinion on all of the above. See attached ldifs for a reference.

Thanks,

Dimitri

yes, your "slow" ldif is really bad, it will update the three groups in turn and you would not even benefit from ticket 411, which would help ldifs like

dN: cn=group1
changetype: modify
add:uniquemember
uniquemember: uid=1
-
add:uniquemember
uniquemember: uid=2
-
add:uniquemember
uniquemember: uid=2

How big are your groups in total? Do they all fit into the entry cache? Otherwise they need to be reloaded from the db for each member addition.

I can't say if you could rewrite your application to use dynamic groups.

I'm not sure which entry cache setting you're looking for; nsslapd-cachememsize is 1 GB. The 3 groups have about 10K members currently. Anyway, we've decided that we need to use something other than group membership to solve our particular case; there should be some way to change the client app, and we'll find out over the next couple of weeks. Thanks for helping!

$git merge ticket346
Updating 67b248e..54bbaed
Fast-forward
ldap/servers/slapd/attr.c | 60 ++-------
ldap/servers/slapd/attrlist.c | 2 +-
ldap/servers/slapd/attrsyntax.c | 6 +
ldap/servers/slapd/back-ldbm/import-threads.c | 2 +-
ldap/servers/slapd/back-ldbm/index.c | 22 ++--
ldap/servers/slapd/back-ldbm/ldbm_attr.c | 6 +-
ldap/servers/slapd/computed.c | 2 +-
ldap/servers/slapd/entry.c | 137 ++++++--------------
ldap/servers/slapd/entrywsi.c | 13 +-
ldap/servers/slapd/proto-slap.h | 13 +-
ldap/servers/slapd/schema.c | 106 +++++-----------
ldap/servers/slapd/slap.h | 8 ++
ldap/servers/slapd/slapi-plugin.h | 13 ++
ldap/servers/slapd/slapi-private.h | 5 +-
ldap/servers/slapd/valueset.c | 1084 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------------------------------------------------------------------
15 files changed, 639 insertions(+), 840 deletions(-)
$ git push origin master
Counting objects: 41, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (21/21), done.
Writing objects: 100% (21/21), 8.72 KiB, done.
Total 21 (delta 19), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
67b248e..54bbaed master -> master

Backport to 1.3.1

$ git merge 131-ticket346
Updating 62d0dbf..f6ef7dc
Fast-forward
ldap/servers/slapd/attr.c | 60 ++--------
ldap/servers/slapd/attrlist.c | 2 +-
ldap/servers/slapd/attrsyntax.c | 6 +
ldap/servers/slapd/back-ldbm/import-threads.c | 2 +-
ldap/servers/slapd/back-ldbm/index.c | 24 ++--
ldap/servers/slapd/back-ldbm/ldbm_attr.c | 6 +-
ldap/servers/slapd/computed.c | 2 +-
ldap/servers/slapd/entry.c | 123 +++++--------------
ldap/servers/slapd/entrywsi.c | 9 +-
ldap/servers/slapd/proto-slap.h | 14 +--
ldap/servers/slapd/schema.c | 101 +++++-----------
ldap/servers/slapd/slap.h | 8 ++
ldap/servers/slapd/slapi-plugin.h | 15 ++-
ldap/servers/slapd/slapi-private.h | 5 +-
ldap/servers/slapd/valueset.c | 1015 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------------------------------------------------------------------
15 files changed, 598 insertions(+), 794 deletions(-)
$ git push origin 389-ds-base-1.3.1
Counting objects: 41, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (21/21), done.
Writing objects: 100% (21/21), 8.54 KiB, done.
Total 21 (delta 19), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
62d0dbf..f6ef7dc 389-ds-base-1.3.1 -> 389-ds-base-1.3.1

To ssh://git.fedorahosted.org/git/389/ds.git
50ba5a0..ba70aac master -> master
commit ba70aac
Author: Rich Megginson rmeggins@redhat.com
Date: Wed Jul 31 10:52:21 2013 -0600
fix coverity 11915 - dead code - introduced with fix for ticket 346

389-ds-base-1.3.1
commit ea63f4f
Author: Ludwig Krispenz lkrispen@redhat.com
Date: Wed Jul 17 10:35:32 2013 +0200

Fix compiler warning

commit 8e54c53
Author: Ludwig Krispenz lkrispen@redhat.com
Date: Thu Jul 18 15:32:05 2013 +0200

fix compiler warnings

commit fcf4154
Author: Rich Megginson rmeggins@redhat.com
Date: Wed Jul 31 10:52:21 2013 -0600
coverity 11915

389-ds-base-1.3.0
commit 67694e7
Author: Rich Megginson rmeggins@redhat.com
Date: Wed Jul 31 10:52:21 2013 -0600

389-ds-base-1.2.11
commit dc9192e
Author: Rich Megginson rmeggins@redhat.com
Date: Wed Jul 31 10:52:21 2013 -0600

The last fix did not apply to 1.2.11 - have to revert

389-ds-base-1.2.11
commit 6b1cdd5
Author: Rich Megginson rmeggins@redhat.com
Date: Wed Jul 31 13:25:00 2013 -0600

Revert "fix coverity 11915 - dead code - introduced with fix for ticket 346"

git patch file (1.2.11) -- fix coverity 11915 introduced by "version 4 Slow ldapmodify operation time for large quantities of multi-valued attribute values"
0004-fix-coverity-11915-dead-code-introduced-with-fix-for.patch

https://fedorahosted.org/389/attachment/ticket/346/0001-Ticket-346-Slow-ldapmodify-operation-time-for-large-.2.patch
* the formatting is a little off - otherwise, ack

0002-Ticket-346-version-4-Slow-ldapmodify-operation-time-.patch
* some formatting issues here too, otherwise, ack

0004-fix-coverity-11915-dead-code-introduced-with-fix-for.patch
* ack

Would like Ludwig to review these as well.

Thank you, Rich!

This patch could be merged into the one in ticket #47369, so I'm removing it. (I fixed the formatting issue in #47369.)
Attachment 0001-Ticket-346-Slow-ldapmodify-operation-time-for-large-.2.patch added
git patch file (1.2.11) -- preparation for backporting "version 4 Slow ldapmodify operation time for large quantities of multi-valued attribute values"

I'm replacing this one with the new one with the formatting issue fixed.

0002-Ticket-346-version-4-Slow-ldapmodify-operation-time-.patch
some formatting issues here too, otherwise, ack

git patch file (1.2.11) -- backporting "version 4 Slow ldapmodify operation time for large quantities of multi-valued attribute values" (fixed as suggested by Ludwig)
0001-Ticket-346-version-4-Slow-ldapmodify-operation-time-.2.patch

comments by Ludwig. (Thanks!!)
email.txt

Thanks to Rich and Ludwig for reviewing the backported patches.

Pushed to 389-ds-base-1.2.11 branch:
737169e..8252d02 389-ds-base-1.2.11 -> 389-ds-base-1.2.11
commit 8252d02
commit 8a6b75f
commit e4ddba7
commit 890fc22

I just reran some tests to change large quantities of uniqueMember values, and it does not show the same lag I experienced two years ago when first reporting the issue. Instead of 2.5 hours to add 100K members to an empty group, I was able to add 130K members in a very acceptable 10 minutes. Adding 50K more after that took another 10 minutes. Re-deleting that same 50K set was not so great and took 168 minutes.

That was from a recent install of 1.2.11.15 on excellent hardware. My dev VM with an older install of 1.2.11.23 still takes a long time, 246 minutes (even with no I/O wait), to run that same add, and I didn't try the deletion. I'm guessing a significant difference in configuration or hardware contributed to the lag, but there doesn't seem to be one. I'm hoping this means that at least part of the improved code was already backported, despite the difference in version numbers.

In any case I am eager to test out the latest enhancements, so as soon as this is available in a testing repo, I would like to pull it down and try it.

Unfortunately, we have no plan to respin 1.2.11 for Fedora. What platform are you running the directory server on? Please note that 1.3.1.4 and newer on Fedora 19, and 1.3.2 on Fedora 20, contain the fix.

Replying to [comment:62 nhosoi]:

Unfortunately, we have no plan to respin 1.2.11 for Fedora. What platform are you running the directory server on? Please note that 1.3.1.4 and newer on Fedora 19, and 1.3.2 on Fedora 20, contain the fix.

I could do one for EPEL6. We have to backport the fix to the 1.2.11 branch, correct?

Replying to [comment:63 rmeggins]:

Replying to [comment:62 nhosoi]:

Unfortunately, we have no plan to respin 1.2.11 for Fedora. What platform are you running the directory server on? Please note that 1.3.1.4 and newer on Fedora 19 and 1.3.2 on Fedora 20 contain the fix.

I could do one for EPEL6. We have to backport the fix to the 1.2.11 branch, correct?

We are on RHEL 6. Since all of this is still experimental and not in production yet, I could move to 1.3.x if there were plans to make a stable release for RHEL 6. It looked like from the comments that such a backport was in progress, so that is what I was hoping for.

Maybe the only option is to try RHEL 7 (beta)?

Replying to [comment:63 rmeggins]:

I could do one for EPEL6. We have to backport the fix to the 1.2.11 branch, correct?

Yes, we already have the backport in the 1.2.11 branch! I've never done it (building for EPEL) myself. If it's the same as for Fedora, I could do it, too. Thanks, Rich!

Replying to [comment:65 nhosoi]:

Replying to [comment:63 rmeggins]:

I could do one for EPEL6. We have to backport the fix to the 1.2.11 branch, correct?

Yes, we already have the backport in the 1.2.11 branch! I've never done it (building for EPEL) myself. If it's the same as for Fedora, I could do it, too. Thanks, Rich!

Not exactly - 389-ds-base can't be in EPEL6 since it is in RHEL6. However, see http://port389.org/wiki/Download

is the 1.2.11 branch ready to go? Are we waiting for any more patches?

Replying to [comment:66 rmeggins]:

Not exactly - 389-ds-base can't be in EPEL6 since it is in RHEL6. However, see http://port389.org/wiki/Download

is the 1.2.11 branch ready to go? Are we waiting for any more patches?

A good question. We still have 2 open tickets for 1.2.11.26...

Replying to [comment:64 beall]:

Replying to [comment:63 rmeggins]:

Replying to [comment:62 nhosoi]:

Unfortunately, we have no plan to respin 1.2.11 for Fedora. What platform are you running the directory server on? Please note that 1.3.1.4 and newer on Fedora 19 and 1.3.2 on Fedora 20 contain the fix.

I could do one for EPEL6. We have to backport the fix to the 1.2.11 branch, correct?

We are on RHEL 6. Since all of this is still experimental and not in production yet, I could move to 1.3.x if there were plans to make a stable release for RHEL 6. It looked like from the comments that such a backport was in progress, so that is what I was hoping for.

Maybe the only option is to try RHEL 7 (beta)?

We did respin 1.2.11.28 for EPEL6. If you could give us your feedback, we'd greatly appreciate it.
http://directory.fedoraproject.org/wiki/Releases/1.2.11.28

I've only run a couple of tests, but I had to give some immediate feedback after this first round because of the overwhelming improvement now observed. Instead of multiple hours for an operation to complete, both adds and deletes of large quantities of uniquemember values are now incredibly fast. Adding 50K members to a 130K group took about 5 seconds. Deleting that same set took about 27 seconds. Adding 400K members took about 1 minute, and re-deleting that same set took about 11 minutes.

This is all within my dev VM that doesn't have enough memory while other operations were running as well. Looking forward to trying it out on the new hardware when the sysadmins here can get to it.

Thanks for the incredible boost!

Thank you so much for your patience and your update!

We are so happy to hear that...

I'm afraid I found a bug which affects all the versions.

This valueset_value_syntax_cmp is used to sort the values and set the order in the sorted array. We should not use the length difference for the comparison. The original code is wrapped in #if 0; the #else branch fixes the problem.

{{{
diff --git a/ldap/servers/slapd/valueset.c b/ldap/servers/slapd/valueset.c
index e83e740..b6c6594 100644
--- a/ldap/servers/slapd/valueset.c
+++ b/ldap/servers/slapd/valueset.c
@@ -898,6 +898,7 @@ valueset_value_syntax_cmp( const Slapi_Attr *a, const Slapi_Value *v1, const Slapi_Value *v2 )
         struct berval *bv1, *bv2;
         bv1 = &keyvals[0]->bv;
         bv2 = &keyvals[1]->bv;
+#if 0
         if ( bv1->bv_len < bv2->bv_len ) {
             rc = -1;
         } else if ( bv1->bv_len > bv2->bv_len ) {
             rc = 1;
         } else {
             rc = memcmp( bv1->bv_val, bv2->bv_val, bv1->bv_len );
         }
+#else
+        if ( bv1->bv_len == 0 ) {
+            if ( bv2->bv_len == 0 ) {
+                rc = 0;
+            } else {
+                rc = -1;
+            }
+        } else {
+            if ( bv2->bv_len == 0 ) {
+                rc = 1;
+            } else {
+                rc = memcmp( bv1->bv_val, bv2->bv_val,
+                             (bv1->bv_len > bv2->bv_len) ? bv2->bv_len : bv1->bv_len );
+            }
+        }
+#endif
}
if (keyvals != NULL)
valuearray_free( &keyvals );
}}}
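To make the disagreement concrete, here is a minimal standalone sketch (hypothetical helper names, not the actual slapd code) of the two orderings quoted above. An array sorted with one comparator cannot be searched reliably with the other: for "bb" vs. "aaa" the length-first order and the byte-first order disagree. Note the byte-first version also treats a value as equal to any longer value it prefixes, which is part of why the thread later moves to slapi_berval_cmp.

```c
#include <string.h>

/* Hypothetical stand-ins for the two orderings discussed above. */

/* Original ordering: shorter values always sort first. */
static int cmp_len_first(const char *s1, const char *s2)
{
    size_t l1 = strlen(s1), l2 = strlen(s2);
    if (l1 < l2) return -1;
    if (l1 > l2) return 1;
    return memcmp(s1, s2, l1);
}

/* Patched ordering: compare bytes over the shorter length only. */
static int cmp_bytes_first(const char *s1, const char *s2)
{
    size_t l1 = strlen(s1), l2 = strlen(s2);
    if (l1 == 0) return (l2 == 0) ? 0 : -1;
    if (l2 == 0) return 1;
    /* NB: returns 0 when one value is a prefix of the other. */
    return memcmp(s1, s2, (l1 > l2) ? l2 : l1);
}
```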

For instance, when this failure occurs while deleting "objectclass: nsTombstone" from the objectclass attribute list in ldbm_back_add, a created glue entry becomes an incomplete one.
{{{
slapi_entry_delete_string(addingentry->ep_entry, SLAPI_ATTR_OBJECTCLASS, SLAPI_ATTR_VALUE_TOMBSTONE);
}}}

why not use slapi_berval_cmp()?

Replying to [comment:73 rmeggins]:

why not use slapi_berval_cmp()?

You are right, Rich! They are almost identical... :p Let me update the patch and rerun the test...

Replying to [comment:72 nhosoi]:

I'm afraid I found a bug which affects all the versions.

This valueset_value_syntax_cmp is used to sort the values and set the order in the sorted array. We should not use the length difference for the comparison. The original is in #if 0, #else code fixes the problem.

Are you sure that this is the core of the problem?
Using the length first should not be a problem; the sorting order doesn't matter as long as it is consistent. You will get a list of sorted values like:
up
top
deep
down
tree
flour
It is not alphabetic, but a new value "four" will be inserted between "down" and "tree".
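Ludwig's list can be checked mechanically. The sketch below (standalone C with hypothetical names, not slapd code) sorts the same six values with a length-first comparator and reproduces exactly that order; under the same comparator, "four" would indeed bisect between "down" and "tree".

```c
#include <stdlib.h>
#include <string.h>

/* Length-first ordering, like the original valueset_value_syntax_cmp:
 * shorter values sort first; equal lengths fall back to memcmp. */
static int cmp_len_first(const void *p1, const void *p2)
{
    const char *s1 = *(const char *const *)p1;
    const char *s2 = *(const char *const *)p2;
    size_t l1 = strlen(s1), l2 = strlen(s2);
    if (l1 != l2)
        return (l1 < l2) ? -1 : 1;
    return memcmp(s1, s2, l1);
}

/* Sort an array of C strings with the length-first comparator. */
static void sort_values(const char **vals, size_t n)
{
    qsort(vals, n, sizeof(*vals), cmp_len_first);
}
```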

I think the situation more likely arises because the values were initially sorted before an attribute type was available, calling valueset_value_cmp(NULL,...), so they were sorted with ldaputf8casecmp(); later, an attribute was passed when looking for nsTombstone, which now uses valueset_value_syntax_cmp().

Do you have a simple test case ?

Not sure if we have a simple test case or not (looking...)

The function to remove "objectclass: nsTombstone" is slapi_entry_delete_string. Another place where it is called is pw_add_allowchange_aci....

I think the problem is a result of using slapi_entry_add_value() to add the tombstone value. It does not pass an attribute and so will use a different sorting mechanism than slapi_entry_delete_string().
Either slapi_entry_add_string() or slapi_valueset_add_attr_value_ext() could be used.

Using slapi_berval_cmp without checking the length first makes it very similar to ldaputf8casecmp for normal strings like objectclasses and will also work.

Thanks for your comments, Ludwig. I'm still a bit confused... :)

So, there are some value add/delete pairs which must be used together, or the delete cannot find the attribute value(s)? That is,
slapi_entry_delete_string works with slapi_entry_add_string() or slapi_valueset_add_attr_value_ext(),
but not with slapi_entry_add_value()?

Then, is there a corresponding delete function for slapi_entry_add_value()?

There are quite a number of slapi_entry_add_value calls (I found 13 of them)... Are they all okay?

Description: slapi_entry_add_value was used to add values for sorting
without passing the attribute syntax info, so the fallback compare
algorithm was used. Its ordering sometimes differs from the attribute-
syntax-based sorting, and a lookup then failed to find an attribute
value. This patch passes the attribute syntax info for sorting.

In addition, slapi_berval_cmp now serves as the fallback compare algorithm;
it is closer to the syntax-based sorting.
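For reference, the ordering the fix settles on can be sketched as follows. The struct and comparator below are self-contained approximations (the real struct berval and slapi_berval_cmp live in the server source tree): compare bytes over the shorter length, then let the length break ties, so a value never compares equal to a longer value it merely prefixes.

```c
#include <string.h>

/* Local stand-in for the LDAP berval; the real one comes from lber.h. */
struct berval {
    unsigned long bv_len;
    char *bv_val;
};

/* Approximation of the slapi_berval_cmp semantics discussed above:
 * byte-compare over the shorter length, then shorter sorts first. */
static int berval_cmp_sketch(const struct berval *bv1, const struct berval *bv2)
{
    unsigned long minlen =
        (bv1->bv_len < bv2->bv_len) ? bv1->bv_len : bv2->bv_len;
    int rc = minlen ? memcmp(bv1->bv_val, bv2->bv_val, minlen) : 0;
    if (rc == 0 && bv1->bv_len != bv2->bv_len)
        rc = (bv1->bv_len < bv2->bv_len) ? -1 : 1;
    return rc;
}
```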

Reviewed by Rich (Thank you!!)

Pushed to master:
fb71aa9..136fa64 master -> master
commit 136fa64

Pushed to 389-ds-base-1.3.2:
4ddc6dc..821297d 389-ds-base-1.3.2 -> 389-ds-base-1.3.2
commit 821297d

Pushed to 389-ds-base-1.3.1:
cef6ae4..e97f2b8 389-ds-base-1.3.1 -> 389-ds-base-1.3.1
commit e97f2b8f2a42d4ceca4020614fd7388e4e319bab

Pushed to 389-ds-base-1.2.11:
ac8ada3..5e45f45 389-ds-base-1.2.11 -> 389-ds-base-1.2.11
commit 5e45f45

Bug Description:
The memory leak fix in this commit was invalid.
commit 708a56b
Ticket #47834 - Tombstone_to_glue: if parents are also converted to glue, the target entry's DN must be adjusted.
{{{
@@ -312,6 +312,13 @@ int attrlist_replace(Slapi_Attr **alist, const char *type, struct berval **vals)
             (*a)->a_flags |= SLAPI_ATTR_FLAG_NORMALIZED_CES;
         }
         rc = attr_replace(*a, values);
+        if (rc) {
+            slapi_log_error(SLAPI_LOG_FATAL, "attrlist_replace",
+                            "attr_replace (%s, %s) failed.\n",
+                            type, vals[0]->bv_val);
+            valuearray_free(&values);
+            slapi_attr_free(a);
+        }
     }
     return rc;
}}}

Fix Description:
If attr_replace fails and the Slapi_Attr being replaced was newly
created in attrlist_replace/attrlist_replace_with_flags, the
Slapi_Attr is removed from the attrlist (alist) and freed.
Otherwise, the values and the Slapi_Attr are not freed, since the
Slapi_Attr just points at the attribute in the attrlist and the
values are consumed in attr_replace.

Note: commit 708a56b is in master
and 389-ds-base-1.3.2 branch, but this fix needs to be put into
1.3.1 and 1.2.11, as well.

{{{
attrlist_delete(alist, type);
}}}
This won't work - attrlist_find_or_create does not add the newly created a to the alist.

Replying to [comment:85 rmeggins]:

{{{
attrlist_delete(alist, type);
}}}
This won't work - attrlist_find_or_create does not add the newly created a to the alist.

Sure? It looks to me that when {{{**a == NULL}}} at line 82, *a is pointing to the a_next field of the last Slapi_Attr, and that NULL is replaced with the pointer to the newly created Slapi_Attr at line 84...?
{{{
69 int
70 attrlist_find_or_create_locking_optional(Slapi_Attr **alist, const char *type, Slapi_Attr ***a, PRBool use_lock)
71 {
72     int rc= 0; /* found */
73     if ( *a==NULL )
74     {
75         for ( *a = alist; **a != NULL; *a = &(**a)->a_next ) {
76             if ( strcasecmp( (**a)->a_type, type ) == 0 ) {
77                 break;
78             }
79         }
80     }
81
82     if( **a==NULL )
83     {
84         **a = slapi_attr_new();
85         slapi_attr_init_locking_optional(**a, type, use_lock);
86         rc= 1; /* created */
87     }
88     return rc;
89 }
}}}
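The pointer dance above is easy to misread, which is what this exchange was about. A self-contained sketch with a plain linked list (hypothetical names, not the slapd types) shows why the newly created node does end up on the list: after a failed search, *cur holds the address of the last node's next field (or of the head pointer itself when the list is empty), so assigning through it links the new node in.

```c
#include <stdlib.h>
#include <string.h>

/* Minimal list node standing in for Slapi_Attr. */
typedef struct node {
    const char *type;
    struct node *next;
} node;

/* Mirrors the pointer dance in attrlist_find_or_create_locking_optional:
 * on a miss, *cur ends up holding the address of the tail link, so
 * writing through it appends the new node to the list. Returns 1 if a
 * node was created, 0 if an existing one was found. */
static int find_or_create(node **head, const char *type, node ***cur)
{
    int created = 0;
    if (*cur == NULL) {
        for (*cur = head; **cur != NULL; *cur = &(**cur)->next) {
            if (strcmp((**cur)->type, type) == 0)
                break;
        }
    }
    if (**cur == NULL) {            /* not found: **cur is the tail link */
        **cur = calloc(1, sizeof(node));
        (**cur)->type = type;       /* new node is now reachable from head */
        created = 1;
    }
    return created;
}
```

Calling it twice with the same type demonstrates the point: the first call creates and links the node, and the second call finds it through the head pointer.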

Thank you, Rich!!

Pushed to master:
ca3d08a..c9b914b master -> master
commit c9b914b

Pushed to 389-ds-base-1.3.2:
8484ce2..b393c36 389-ds-base-1.3.2 -> 389-ds-base-1.3.2
commit b393c36

Pushed to 389-ds-base-1.3.1:
ba60488..3777f9c 389-ds-base-1.3.1 -> 389-ds-base-1.3.1
commit 3777f9c71bc53595d4d62a500da7a2897d056267

Pushed to 389-ds-base-1.2.11:
2b08517..783fb90 389-ds-base-1.2.11 -> 389-ds-base-1.2.11
commit 783fb90

Metadata Update from @nhosoi:
- Issue assigned to lkrispen
- Issue set to the milestone: 1.2.11.30

7 years ago

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/346

If you want to receive further updates on the issue, please navigate to the github issue
and click on subscribe button.

Thank you for understanding. We apologize for all inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: Fixed)

3 years ago
