#512 improve performance of vattr code
Closed: wontfix None Opened 11 years ago by rmeggins.

The vattr code is a performance bottleneck because it is called twice for every search entry: once to check the entry against the filter, in case virtual attributes match the filter, and once when the entry is returned to the client, to see if there are any virtual attributes to add to the entry. This involves a lot of additional code paths and heavy use of malloc/free. In addition, systemtap analysis shows that the the_map->lock rwlock (set up in vattr_init) is heavily contended. We need to

  • improve the efficiency of this code
  • remove malloc/free as much as possible
  • investigate lock-free techniques to access the_map
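
On the last point, one direction (not pursued later in this ticket) would be to publish the_map through an atomic pointer to an immutable snapshot, so that lookups need no lock at all. The following C11 sketch only illustrates the idea; the names vattr_map_snapshot, lookup_nolock and publish_map are hypothetical and not part of the DS code base, and safe reclamation of old snapshots (RCU/epoch style) is deliberately left out:

{{{
#include <stdatomic.h>
#include <stddef.h>
#include <strings.h>

/* Hypothetical immutable snapshot of the vattr map (sketch only). */
typedef struct vattr_map_snapshot {
    size_t count;
    const char **types;   /* attribute type names */
    void **entries;       /* corresponding vattr_map_entry pointers */
} vattr_map_snapshot;

/* Readers load the current snapshot without taking any lock. */
static _Atomic(vattr_map_snapshot *) current_map;

static void *lookup_nolock(const char *basetype)
{
    vattr_map_snapshot *snap =
        atomic_load_explicit(&current_map, memory_order_acquire);
    if (snap == NULL) {
        return NULL;
    }
    for (size_t i = 0; i < snap->count; i++) {   /* linear scan; a real version would hash */
        if (strcasecmp(snap->types[i], basetype) == 0) {
            return snap->entries[i];
        }
    }
    return NULL;
}

/* A writer builds a brand-new snapshot and publishes it atomically.
 * Old snapshots may only be freed once no reader can still hold a
 * pointer to them, which is the hard part in practice. */
static void publish_map(vattr_map_snapshot *new_snap)
{
    atomic_store_explicit(&current_map, new_snap, memory_order_release);
}
}}}

Since vattr registrations change far less often than lookups happen, this trades write-side complexity for a lock-free read path.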

'''Here is the current status'''

  • The bug is related to several issues (contention on the the_map hash table, a lot of malloc/free, code called twice)

  • I am focusing on the contention issue

  • I was not able to measure the contention directly.
    I have had difficulties getting a reliable stap script to measure contention: I can obtain measurements but I am not confident in the reported values.
    So I decided to measure contention with a search workload, assuming that if there is contention, search throughput should be impacted.

  • I failed to reproduce the contention.
    I prepared 3 versions of DS: the first (original) with a rwlock, the second with a simple lock and the third without any lock (the configuration was not changed during the tests).
    The search throughput (rsearch) was almost identical with the 3 versions (for a given number of threads: 5, 10, 20 or 30).

  • The test case uses cos+rsearch (shown below). Each search triggers 64 calls to vattr_map_lookup (which accesses the hash table while holding the lock)

{{{
dn: cn=ZipTemplate,SUFFIX
objectclass: top
objectclass: LDAPsubentry
objectclass: extensibleobject
objectclass: cosTemplate
postalCode: value1
cosPriority: 0

dn: cn=pointerCoS,SUFFIX
objectclass: top
objectclass: LDAPsubentry
objectclass: cosSuperDefinition
objectclass: cosPointerDefinition
cosTemplateDn: cn=ZipTemplate,SUFFIX
cosAttribute: postalCode

rsearch -h localhost -p 1389 -D "cn=directory manager" -w Secret123 -s "dc=com" -f "(&(uid=tcos1)(|(postalcode=1) (postalcode=2) (postalcode=3) (postalcode=4) (postalcode=5) (postalcode=6) (postalcode=7) (postalcode=8) (postalcode=9) (postalcode=10) (postalcode=11) (postalcode=12) (postalcode=13) (postalcode=14) (postalcode=15) (postalcode=16) (postalcode=17) (postalcode=18) (postalcode=19) (postalcode=20) (postalcode=21) (postalcode=22) (postalcode=23) (postalcode=24) (postalcode=25) (postalcode=26) (postalcode=27) (postalcode=28) (postalcode=29) (postalcode=30) (postalcode=30) (postalcode=31) (postalcode=32) (postalcode=33) (postalcode=34) (postalcode=35) (postalcode=35) (postalcode=37) (postalcode=38) (postalcode=39) (postalcode=40) (postalcode=value1)))" -t 30
}}}

'''Here are the next steps'''

  • I will try to increase the workload (adding more entries)

  • I will improve the stap script to better measure the contention on the_map lock

'''Here is the current status'''

  • I increased the workload by adding more entries (1000) to return.
    Each individual search acquires/releases the virtual attribute lock 50K times.
    The throughput was around 6 op/sec and each test lasted between 20-60 sec.
    So each measurement covers [6M-20M] lock/unlock operations on the virtual attribute lock

  • I improved the stap script to get consistent data and to measure the contention (time spent waiting when the lock is not free)

  • The test machine is an i7 (4 cores) @ 2.9 GHz running Fedora 17

  • With the test scenario (see attachments ticket_test_desc and do_it.sh) I did not reproduce contention
    on the vattr lock (the_map->lock).
    For example, on the rsearch workload with 20 clients, only 62K out of 82M lock acquisitions (<0.1%)
    experienced contention (had to wait for the lock to be acquired).
    The contention remains at a low level, with the vast majority (85%) of
    the waiting times below 0.5ms and 96% below 10ms.

I tested changing the RWLock into a simple Lock and had similar results
in terms of contention and throughput.

  • On other platforms (like SPARC/Solaris), the RWLock implementation is/was very poor
    in terms of performance and may require a change in the way the vattr lock is designed

'''In conclusion''':
On my test platform, with my test case, I failed to reproduce contention on that lock.
That does not mean this lock is not subject to high contention, but without being able to
reproduce it I am not able to verify the need for, and the result of, that improvement.

'''Note''': during the investigation I isolated, with the same test case, 2 locks impacted by high contention:
* entry cache lock
* cos_cache lock
I will open separate tickets for them

'''Here are the next steps'''

  • I am changing the focus of the ticket to how to optimise the evaluation of the vattrs (filter test + returning the result)

'''Here is the current status'''

  • I have been unable to reproduce contention on vattr (using an 8*4-core machine), even though the test case I am using accesses the_map->lock intensively and should produce contention.
    I changed the test case to return only one entry but to require the evaluation of many vattrs:
    "(&(uid=tcos1)(|(xyz=1)(xyz=2)...(xyz=300)(givenname=test)))"

  • Reading the code, the contention should be very limited. The lookup code is
    {{{
    slapi_rwlock_rdlock(the_map->lock);
    *result = (vattr_map_entry)PL_HashTableLookupConst(the_map->hashtable,
                                                       (void *)basetype);
    /* Release ze lock */
    slapi_rwlock_unlock(the_map->lock);
    }}}
    So if there is contention, it is either due to a slow rwlock or a slow NSPR hash table lookup.
    A possibility is that these implementations (rwlock and hash lookup) have improved and no longer trigger contention; a stand-alone micro-benchmark (sketched below) can help quantify the raw lock cost.
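
To put a number on the "slow rwlock" hypothesis, one can measure the uncontended cost of a read-lock/unlock pair in isolation. This sketch uses POSIX pthread_rwlock as a stand-in for slapi_rwlock (an assumption; slapi_rwlock wraps a similar primitive) and does not cover the PL_HashTableLookupConst part of the path:

{{{
/* Micro-benchmark sketch: uncontended rdlock/unlock cost.
 * pthread_rwlock stands in for slapi_rwlock (assumption). */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    enum { ITERATIONS = 10 * 1000 * 1000 };
    pthread_rwlock_t lock;
    struct timespec t0, t1;

    pthread_rwlock_init(&lock, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < ITERATIONS; i++) {
        pthread_rwlock_rdlock(&lock);
        pthread_rwlock_unlock(&lock);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("uncontended rdlock/unlock: %.1f ns per pair\n", ns / ITERATIONS);
    pthread_rwlock_destroy(&lock);
    return 0;
}
}}}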

'''Here are the next steps'''

  • Check if optimisation is possible between the filter test and sending the result

'''Here is the current status'''

  • A last attempt to reproduce contention is still failing.
    I was using a filter containing a virtual attribute many times:
    "(&(uid=tcos1)(| (postalcode=1)...(postalcode=300) (postalcode=value1))". In my test, 'postalcode' is a virtual attribute whose value is 'value1'. Each search returns 1 entry.

  • Description of the virtual attribute evaluation for a search (a condensed code sketch follows this list):

1 - Create a list of candidates.
2 - For each candidate entry the filter is evaluated (slapi_vattr_filter_test).
For a given filter component, if the attribute is a virtual attribute (a Service Provider exists for it), the computed valueset is cached into the entry's "e_virtual_attrs" (slapi_entry_vattrcache_merge_sv).
The condition for caching the value is that the attribute must be cacheable.

3 - The entries matching the filter are returned. Attribute values are retrieved from "e_virtual_attrs" (slapi_vattr_namespace_values_get_sp), if they have been cached.

In the 389 DS implementation, '''only''' the 'nsrole' attribute is cacheable.

  • I was doing some tests with cos. I think it would be valuable to cache the returned cos values, but further investigation is needed to know whether it is worthwhile (a cos cache already exists) and feasible.
    In fact a cos attribute is not a hardcoded attribute name (like "nsrole"); it can be any attribute name, and we need to look into the cos definitions to know whether an attribute is a cos attribute or not.
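
The sketch below condenses steps 2 and 3 above into one helper (illustration only; the Entry/Valueset types and the helper names are hypothetical, not the actual slapi API). It shows why caching matters: both the filter test and the result sending ask for the same virtual attribute, and without a cache hit each request goes back to the Service Provider (e.g. a full cos evaluation).

{{{
/* Illustration only: hypothetical, simplified types and helpers. */
typedef struct entry Entry;
typedef struct valueset Valueset;

/* Call the Service Provider (e.g. cos) to compute the values; returns 0 on success. */
extern int sp_compute(const Entry *e, const char *type, Valueset **out);
/* Look in / add to the per-entry cache (e_virtual_attrs). */
extern Valueset *vattr_cache_get(const Entry *e, const char *type);
extern void vattr_cache_put(Entry *e, const char *type, Valueset *vals);

/* Called once per filter component during the filter test, and again for each
 * requested attribute when the matching entry is sent back. */
static Valueset *get_virtual_values(Entry *e, const char *type, int cacheable)
{
    Valueset *vals = vattr_cache_get(e, type);
    if (vals != NULL) {
        return vals;            /* already evaluated for this entry */
    }
    if (sp_compute(e, type, &vals) != 0) {
        return NULL;            /* no Service Provider value for this entry */
    }
    if (cacheable) {
        vattr_cache_put(e, type, vals);   /* today only 'nsrole' gets here */
    }
    return vals;
}
}}}

With only 'nsrole' marked cacheable, a filter that repeats a cos attribute, or a result set that returns one, re-runs the cos evaluation every time.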

'''Here are the next steps'''

  • continue investigations

'''Here is the current status'''

  • Description of the virtual attribute evaluation for a search:
    1 - Create a list of candidates.
    2 - For each candidate entry the filter is evaluated (slapi_vattr_filter_test).
    For a given filter component, if the attribute is a virtual attribute (a Service Provider exists for it),
    the computed valueset is cached into the entry's "e_virtual_attrs" (slapi_entry_vattrcache_merge_sv).
    The condition for caching the value is that the attribute must be cacheable.
    3 - The entries matching the filter are returned.
    Attribute values are retrieved from "e_virtual_attrs" (slapi_vattr_namespace_values_get_sp),
    if they have been cached.

    In the 389 DS implementation, only the 'nsrole' attribute is cacheable. So values from Service Providers like 'COS'
    are not cached. If a filter contains the same vattr multiple times, the vattr is evaluated (cos evaluated)
    several times. Similarly, if the set of returned attributes contains a vattr, it is evaluated again even if
    it was already evaluated during the filter test.
    
  • A fix is proposed in attachment

  • The test case data are attached: a file to provision the DB (cos_add); a filter with no impact, ticket512.noimpact (no vattr in the filter/returned attributes); a filter with no gain, ticket512.nogain (vattrs appear only once in the filter/returned attributes); and a filter with a big gain, ticket512.gain.

ticket512.noimpact: 5400/s vs 5200/s
ticket512.nogain: 5000/s vs 4200/s
ticket512.gain: 1800/s vs 700/s

'''Here are the next steps'''

  • Waiting for code review

  • Waiting for acceptance completion (basic/cos/roles tests are ok)

Looks good, but I have a question.

One thing that confuses me about the vattr code is the tests for e->e_virtual_lock. e_virtual_lock is allocated in only 1 place - slapi_entry_init() - and freed in only 1 place - slapi_entry_free(). So if the Slapi_Entry is valid at all, e->e_virtual_lock will always be non-NULL, and I don't understand why there are tests for e->e_virtual_lock at all. That is, everywhere there is a test
{{{
if (e->e_virtual_lock) {
}}}
it could be replaced with
{{{
if (1) {
}}}
and the code would work the same. So why all the tests for e->e_virtual_lock?

Hi Rich,

Thanks for your review. You are right, I do not see the point of testing e_virtual_lock.
It was actually a typo in my code: I tested 'e_virtual_lock' in place of 'e_virtual_attrs'!
It did not crash because attrlist_find checks that 'attrs' is not NULL before using it.
I will correct this and rerun some tests.

best regards
thierry

{{{
a    b    slapi_entry_vattrcache_findAndTest( const Slapi_Entry *e, const char *type,
2438 2438     int r = SLAPI_ENTRY_VATTR_NOT_RESOLVED; /* assume not resolved yet */
2439 2439     rc = -1;
2440 2440
2441          if( slapi_vattrcache_iscacheable(type) &&
2442              slapi_entry_vattrcache_watermark_isvalid(e) ) {
2443
2444              if(e->e_virtual_lock == NULL) {
2445                  return r;
2446              }
     2441     if ((e->e_virtual_attrs == NULL) || ! slapi_entry_vattrcache_watermark_isvalid(e)) {
     2442         /* there is not virtual attribute cached or they are all invalid,
     2443          * just return
     2444          */
     2445         return r;
     2446     }
}}}
{{{
…         slapi_entry_vattrcache_find_values_and_type_ex( const Slapi_Entry *e,
2507 2505 {
2508 2506     Slapi_Attr *tmp_attr = NULL;
2509 2507
2510          int r = SLAPI_ENTRY_VATTR_NOT_RESOLVED; /* assume not resolved yet */
     2508     int r = SLAPI_ENTRY_VATTR_NOT_RESOLVED; /* assume not resolved yet */
2511 2509
2512          if( slapi_vattrcache_iscacheable(type) &&
2513              slapi_entry_vattrcache_watermark_isvalid(e) && e->e_virtual_attrs)
2514          {
     2510     if ((e->e_virtual_attrs == NULL) || ! slapi_entry_vattrcache_watermark_isvalid(e)) {
     2511         /* there is not virtual attribute cached or they are all invalid,
     2512          * just return
     2513          */
     2514         return r;
     2515     }
}}}

At lines 2445 and 2514, it returns SLAPI_ENTRY_VATTR_NOT_RESOLVED if e->e_virtual_attrs is NULL. I'm wondering how the case where the virtual attribute is valid and the value is empty is represented in the code... If "( slapi_vattrcache_iscacheable(type) && slapi_entry_vattrcache_watermark_isvalid(e) && (e->e_virtual_attrs == NULL))" is TRUE, aren't we missing it? In such a case, should we return SLAPI_ENTRY_VATTR_RESOLVED_ABSENT instead of SLAPI_ENTRY_VATTR_NOT_RESOLVED?

Hi Noriko,

Thanks for your review and your remarks. I agree with you that this part of the code needs additional comments.

e_virtual_attrs is a cache specific to a given entry. It contains values returned by Service Providers. Before the fix, the only Service Provider that stored values in the cache was the "roles plugin", and only for the "nsrole" attribute.

With the fix, a Service Provider returns a flag stating whether a returned value is cacheable or not. The fix lets the "cos plugin" return the flag and store cos attributes in the cache.

When the server looks up a virtual attribute (for a search filter or a returned attribute), it checks whether the attribute is already in the cache before calling the SPs.
If the cache is empty (e_virtual_attrs == NULL) or the attribute is not in the cache, it means that the attribute either was not yet evaluated or is not cacheable. The server should call the SPs for that attribute to get its evaluation. To tell the server it has to call the SPs, the lookup functions (slapi_entry_vattrcache_find*) return SLAPI_ENTRY_VATTR_NOT_RESOLVED.

If the cache contains the attribute, the lookup functions return SLAPI_ENTRY_VATTR_RESOLVED_EXISTS.
We have a special case here: if the cache contains the attribute but the attribute value is empty, the lookup functions return SLAPI_ENTRY_VATTR_RESOLVED_ABSENT. The server then considers that there is no SP for that attribute and tries to take it from the entry. This behavior (empty value) is not changed by the fix.

To return SLAPI_ENTRY_VATTR_RESOLVED_ABSENT, the attribute must be in the cache but with an empty value. That means SLAPI_ENTRY_VATTR_RESOLVED_ABSENT is only returned if e_virtual_attrs != NULL. If e_virtual_attrs == NULL, it really means the attribute is not in the cache (not yet evaluated or not cacheable) and we need to (re)evaluate the attribute => SLAPI_ENTRY_VATTR_NOT_RESOLVED.
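
In C-like terms, the intended mapping is roughly the following (a sketch with hypothetical helper names, not the actual entry.c code):

{{{
/* Sketch only: hypothetical helpers, simplified types. */
struct entry;
struct cached_vattr;

extern int watermark_isvalid(const struct entry *e);
extern void *entry_vattr_cache(const struct entry *e);        /* e->e_virtual_attrs */
extern struct cached_vattr *cache_lookup(const struct entry *e, const char *type);
extern int cached_vattr_has_values(const struct cached_vattr *v);

enum { VATTR_NOT_RESOLVED, VATTR_RESOLVED_EXISTS, VATTR_RESOLVED_ABSENT };

static int vattrcache_resolve(const struct entry *e, const char *type)
{
    if (entry_vattr_cache(e) == NULL || !watermark_isvalid(e)) {
        return VATTR_NOT_RESOLVED;       /* nothing cached, or cache is stale: call the SPs */
    }
    struct cached_vattr *v = cache_lookup(e, type);
    if (v == NULL) {
        return VATTR_NOT_RESOLVED;       /* this type was never cached: call the SPs */
    }
    if (cached_vattr_has_values(v)) {
        return VATTR_RESOLVED_EXISTS;    /* cached with values */
    }
    return VATTR_RESOLVED_ABSENT;        /* cached, but the SP returned no value */
}
}}}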

I hope I understood your remarks correctly and that my answer clarifies my reasoning.

Thanks for the clarification, Thierry! Your explanation convinced me. My only concern was that this fix might bring this issue back again... After reading your notes, I do not think it will, though...
https://fedorahosted.org/389/ticket/490
If you could run some tests with lots of Roles, that would be great. Thanks!

Hi Noriko,

I did some performance comparisons of master vs ticket512. The database contained 13000 entries (all of them cached), 1000 managed roles and 100 filtered roles. I ran search comparisons like

{{{
time ldapsearch -p 1389 -h localhost -D "cn=directory manager" -w Secret123 -b "dc=com" "(nsrole=cn=role_group<xxx>,ou=People,dc=com)" nsrole
}}}

where the number of entries belonging to the role (in the filter) ranges from 100 to 3100. For all tests, the times (real/sys/user) are equivalent between master and ticket512.

best regards
thierry

Fix reviewed by Rich and Noriko.

git merge ticket512
Updating 4b2ee49..44a9808
Fast-forward
ldap/servers/plugins/cos/cos_cache.c | 2 +-
ldap/servers/slapd/entry.c | 238 ++++++++++++++++++++++++++++++++++++----------------------------------------
ldap/servers/slapd/slapi-plugin.h | 1 +
ldap/servers/slapd/slapi-private.h | 2 +-
ldap/servers/slapd/tools/rsearch/searchthread.c | 2 +-
ldap/servers/slapd/vattr.c | 14 ++---
6 files changed, 124 insertions(+), 135 deletions(-)

commit 44a9808ca9661adf6c0921aa910c8cf43250cc66
Author: Thierry bordaz (tbordaz) tbordaz@redhat.com
Date: Tue Apr 9 16:33:37 2013 +0200

'''Note''': the previous 'git merge' and commit are invalid. The previous merge actually contained two commits.
I wanted to completely get rid of one of them (458055d), which was between 4b2ee49..44a9808:

{{{
44a9808 Ticket 512 - improve performance of vattr code
458055d Ticket XXX - Ticket description
4b2ee49 Ticket 528 - RFE - get rid of instance specific scripts
}}}

So I did a git rebase -i 4b2ee49, and removed 458055d. Then I did the following push.

git push origin master

Counting objects: 23, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (12/12), done.
Writing objects: 100% (12/12), 2.87 KiB, done.
Total 12 (delta 10), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
4b2ee49..86f8b9f master -> master

commit 86f8b9f
Author: Thierry bordaz (tbordaz) tbordaz@redhat.com
Date: Tue Apr 9 16:33:37 2013 +0200

Reopening this ticket.

I can confirm that this commit has broken the functionality gained from the fix provided by Noriko in ticket #490.

commit 86f8b9f (ticket 512) appears to have broken the nsrole fix implemented in ae7e811 (ticket 490).

Test case:
The same exact data & config on 389-Directory/1.3.1.pre.a1.gitfebd0db and 389-Directory/1.3.2.a1.gita154ecf yield significantly different results in terms of time:
the role search on 1.3.1 takes 0m2.279s;
the role search on 1.3.2 takes 2h37m.

Hi James,

sorry to hear that it broke the fix for ticket 490.
I will try to reproduce it with the following test case (taken from 490):

{{{
Using test data having 86568 entries in total; 98 nsRoleDefinition entries and 61542 nsRoleDn among them...

Sample command line:
ldapsearch -LLLx -h localhost -p 389 -D 'cn=directory manager' -w password -b "dc=example,dc=com" "(nsrole=cn=CN0,o=O0,dc=example,dc=com)" nsrole
It returns 3291 entries with 8321 nsrole attribute values.
}}}

If by any chance you have your own test case on hand, it would help ;)

regards
thierry

Here is the current status

  • I created a small DB (10000 entries) with 100 groups of 100 entries each

  • I wanted to double check that we are on the same page:
    The first search is there to provision the entry cache.
    The second one is close to the test case found in https://fedorahosted.org/389/ticket/490
    The third one is the same as the second, but we can see that the response time of the second is 50 times slower than the third. IMHO this is expected, because the second search fills the 'nsroledn' attribute into the entries, and the third benefits from it.

{{{
[tbordaz@pctbordaz ticket512]$ time ldapsearch -LLL -h localhost -p 1512 -D "cn=directory manager" -w Secret123 -b "dc=com" "objectclass=*" dn | wc
20002 20002 268012

real 0m0.651s
user 0m0.019s
sys 0m0.023s
[tbordaz@pctbordaz ticket512]$ time ldapsearch -LLL -h localhost -p 1512 -D "cn=directory manager" -w Secret123 -b "dc=com" "(nsrole=cn=group0,dc=com)" nsroledn | wc
300 400 5290

real 0m2.620s
user 0m0.002s
sys 0m0.002s
[tbordaz@pctbordaz ticket512]$ time ldapsearch -LLL -h localhost -p 1512 -D "cn=directory manager" -w Secret123 -b "dc=com" "(nsrole=cn=group0,dc=com)" nsroledn | wc
300 400 5290

real 0m0.051s
user 0m0.004s
sys 0m0.004s

}}}

Here are the next steps

  • continue to try to reproduce a performance hit

  • could you confirm that the performance you measured on a given request was still very bad on the second attempt?

  • could you provide your test case?

Here is the current status

  • I confirm that my fix (ticket512) broke the fix for ticket 490. Thanks to Noriko, who provided me with the test ldif. In the following logs, we can see that the second search (once entries are primed) is much slower after fix 512:

{{{
- rebase after 512
time ldapsearch -LLLx -h localhost -p 1512 -D "cn=directory manager" -w Secret123 -b "dc=mycorp,dc=com" "(nsrole=cn=Treasury-DataAdmin,o=Intra,dc=mycorp,dc=com)" nsroledn |wc
115 216 5923

real 2m13.149s
user 0m0.005s
sys 0m0.004s
time ldapsearch -LLLx -h localhost -p 1512 -D "cn=directory manager" -w Secret123 -b "dc=mycorp,dc=com" "(nsrole=cn=Treasury-DataAdmin,o=Intra,dc=mycorp,dc=com)" nsroledn |wc
115 216 5923

real 0m53.071s
user 0m0.001s
sys 0m0.006s

    - before 512

time ldapsearch -LLLx -h localhost -p 1512 -D "cn=directory manager" -w Secret123 -b "dc=mycorp,dc=com" "(nsrole=cn=Treasury-DataAdmin,o=Intra,dc=mycorp,dc=com)" nsroledn |wc
115 216 5923

real 2m16.509s
user 0m0.002s
sys 0m0.006s
time ldapsearch -LLLx -h localhost -p 1512 -D "cn=directory manager" -w Secret123 -b "dc=mycorp,dc=com" "(nsrole=cn=Treasury-DataAdmin,o=Intra,dc=mycorp,dc=com)" nsroledn |wc
115 216 5923

real 0m0.286s
user 0m0.002s
sys 0m0.006s
}}}

Here are the next steps

  • I am investigating the root cause (RC) of that regression

Here is the current status

  • The RC is identified. Each entry keeps a cache of its virtual attributes. Before https://fedorahosted.org/389/ticket/512, this cache only contained the 'nsrole' attribute. To know whether the values stored in the cache were valid, we used a "watermark" mechanism. When the watermark was "valid" it meant two things: the values in the virtual attribute cache are valid AND the only cacheable attribute, "nsrole", had been evaluated. So if the watermark was valid and the vattr list was empty, it meant this entry has no 'nsrole' (this was the purpose of ticket 490).
    Now, with ticket 512, an entry may cache many virtual attributes. So when the watermark is valid, it still means that the cached virtual attributes are valid, but it no longer means that the searched attribute was evaluated.

In short, the ticket 512 fix is incomplete: it needs an additional mechanism to keep track of the virtual attributes already evaluated for an entry (the sketch below makes the ambiguity concrete).
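
Here is a minimal sketch (hypothetical helpers, not the actual code) of the check before and after the change: with a single cacheable attribute the watermark alone could answer "was nsrole evaluated?", while with several cacheable attributes the same question needs a per-type record, including a record for evaluations that produced no value.

{{{
/* Sketch only: hypothetical helpers. */
struct entry;
extern int watermark_isvalid(const struct entry *e);
extern int cache_contains_type(const struct entry *e, const char *type); /* includes empty results */

/* Pre-#512: 'nsrole' was the only cacheable type, so a valid watermark by
 * itself meant "nsrole has been evaluated" (possibly with no value). */
static int nsrole_already_evaluated_old(const struct entry *e)
{
    return watermark_isvalid(e);
}

/* Post-#512: many types may be cached, so "evaluated" must be recorded per
 * type -- this is the additional mechanism the first fix was missing. */
static int type_already_evaluated_new(const struct entry *e, const char *type)
{
    return watermark_isvalid(e) && cache_contains_type(e, type);
}
}}}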

Here are the next steps

  • investigate how to fix this.

'''Here is the current status'''

    - The good news is that I implemented a new fix that stores the cacheable virtual attributes
      in a hash table (one per entry).
      This fix provides the same level of performance as ticket #490:

{{{
time ldapsearch -LLLx -h localhost -p 1512 -D "cn=directory manager" -w Secret123 -b "dc=mycorp,dc=com" "(nsrole=cn=Treasury-DataAdmin,o=Intra,dc=mycorp,dc=com)" nsroledn | wc
            115     216    5923

        real    2m10.073s
        user    0m0.001s
        sys     0m0.007s
time ldapsearch -LLLx -h localhost -p 1512 -D "cn=directory manager" -w Secret123 -b "dc=mycorp,dc=com" "(nsrole=cn=Treasury-DataAdmin,o=Intra,dc=mycorp,dc=com)" nsroledn | wc
            115     216    5923

        real    0m0.288s
        user    0m0.004s
        sys     0m0.003s
ldapsearch -LLLx -h localhost -p 1512 -D "cn=directory manager" -w Secret123 -b "dc=mycorp,dc=com" "(nsrole=cn=Treasury-DataAdmin,o=Intra,dc=mycorp,dc=com)" nsroledn | wc
            115     216    5923

        real    0m0.295s
        user    0m0.003s
        sys     0m0.005s
}}}

    - The bad news is that the fix triggers a leak that I am currently investigating

Here are the next steps

* investigating the leak

Here is the current status

  • There was no leak in the new fix. What I was seeing is memory fragmentation that is bigger than before, likely because of the hash table per entry.
    It is fragmentation because after some time the memory growth slows down and finally becomes stable.
    With the #490 test case, the memory stabilizes at 4G, while it stabilized at 3.3G on a rebase of #490.

  • Performance with the last fix is equivalent to #490 (0.29 sec after role priming), and on the #512 (cos) test case the throughput is 2700/s (with #512) vs 1700/s (without).

Here are the next steps

  • Waiting for second fix review

Why a hash table? Why not just a simple linked list as for the list of attributes in an entry? I think a hash table per entry will add significant overhead unless there are more than a few vattrs per entry.

If a hash table, then this code is problematic:
{{{

2638            e->e_virtual_attrs = PL_NewHashTable(10, 
2639                    entry_vattr_hash_fn, entry_vattr_hash_compare_keys, 
2640                    entry_vattr_hash_compare_values, NULL, NULL);

}}}
This will pre-allocate 10 hash buckets for each entry - if there are 10 million entries, that's a lot of pre-allocated hash buckets, especially if they are not being used.

Hi Rich,

thanks for your review. I agree that most often the number of virtual attributes is zero or low.
Given the fragmentation, and how precious RAM is for the file system cache, I will reimplement this cache as a linked list.

regards
thierry
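
For illustration, here is a minimal sketch of such a per-entry linked list of cached virtual attributes (hypothetical names and details, loosely modeled on the struct _entry_vattr discussed below; not the committed fix):

{{{
#include <stddef.h>
#include <strings.h>

struct slapi_attr;   /* opaque here */

/* One record per evaluated virtual attribute type on the entry.
 * attr == NULL means "evaluated, no value" (the ticket #490 case). */
typedef struct entry_vattr_sketch {
    char *attrname;
    struct slapi_attr *attr;
    struct entry_vattr_sketch *next;
} entry_vattr_sketch;

/* Distinguish "never evaluated" (returns 0) from "evaluated"
 * (returns 1, *out possibly NULL). */
static int entry_vattr_find(const entry_vattr_sketch *head, const char *type,
                            struct slapi_attr **out)
{
    for (const entry_vattr_sketch *v = head; v != NULL; v = v->next) {
        if (strcasecmp(v->attrname, type) == 0) {
            *out = v->attr;
            return 1;
        }
    }
    *out = NULL;
    return 0;
}
}}}

Compared with a 10-bucket hash table per entry, the list costs nothing when an entry has no cached vattrs and only a short scan when it has a few.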

If the entry_vattr_LOCK() functions are now local, why not just use a macro, if you don't want to call the slapi_rwlock_* functions directly?
Otherwise it looks good to me.

If I understand this correctly, struct _entry_vattr doesn't need both attr and attrname - it needs attrname only when there is no vattr defined for this attribute for this entry (attr == NULL) - when attr != NULL, you could use attr->a_type instead of attrname.

Also, where is struct vattr_evaluated used?

Hi Rich,

You are right. When there is a service provider for a given virtual attribute but that virtual attribute does not exist for the entry, then attr == NULL. This was the purpose of the ticket 490 enhancement: to cache the fact that 'nsrole' was already evaluated for a given entry even if the evaluation returned no value.
When a service provider exists and returns a value, then the attribute name is stored twice, in 'attrname' and in 'attr->a_type'.
I will change the fix so that it does not keep 'attrname' when 'attr != NULL'.

'vattr_evaluated' was a bad cut/paste, I will remove it from the fix

regards
thierry

Instead of using void * for the e_virtual_attrs type, it would be better to define an actual type for it. In slap.h just forward declare a type e.g.

{{{
typedef struct _entry_vattr Slapi_Vattr;
...
Slapi_Attr *e_virtual_attrs;   /* list of virtual attributes */
Slapi_Vattr *e_virtual_attrs;  /* cache of virtual attributes */
}}}

Then, in entry.c, Slapi_Vattr will be defined by the structure definition.

{{{
slapi_entry_vattrcache_merge_sv()
...

2710                    /* Add the vals in the virtual attribute cache  */ 
2711                    vattr = entry_vattr_lookup_nolock(e, type); 
2712                    if (vattr) { 
2713                            /* virtual attribute already cached, add the value */ 
2714                            valueset_add_valuearray( &vattr->attr->a_present_values,

}}}

Is it possible for vattr != NULL but vattr->attr == NULL at this point in the code? I see other places in the code where vattr != NULL and vattr->attrname != NULL but vattr->attr == NULL.

Hi Rich,

Thanks for your feedback. I created the Slapi_Vattr type; that makes the code much cleaner!

Regarding your concern: in order to store the attribute name only once, we have
either vattr->attrname != NULL and vattr->attr == NULL,
or vattr->attrname == NULL and vattr->attr != NULL.

In that portion of code (adding a value to an already cached attribute), vattr != NULL and vattr->attr == NULL occurs if an SP returns a NULL value (like ticket 490).
In that case the NULL value for the attribute is cached and further lookups will find it without calling slapi_entry_vattrcache_merge_sv.
A risk would exist if the attribute is not found (not cached yet) and several SPs attempt to merge their values into the cache (multiple calls to slapi_entry_vattrcache_merge_sv in a row). However, each loop to find an SP stops as soon as an SP is found that implements the virtual attribute, so slapi_entry_vattrcache_merge_sv is called once.
I am unsure it could crash, but I agree with you that this part of the code could easily be more robust, so I changed it to prevent a possible crash.

git merge ticket512

Updating 2d6d9ac..e6fd70b
Fast-forward
ldap/servers/slapd/entry.c | 325 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------------------------
ldap/servers/slapd/proto-slap.h | 4 --
ldap/servers/slapd/slap.h | 3 +-
ldap/servers/slapd/vattr.c | 16 -----
4 files changed, 235 insertions(+), 113 deletions(-)

git push origin master

Counting objects: 17, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (9/9), done.
Writing objects: 100% (9/9), 3.71 KiB, done.
Total 9 (delta 7), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
2d6d9ac..e6fd70b master -> master

commit e6fd70b
Author: Thierry bordaz (tbordaz) tbordaz@redhat.com
Date: Fri Jun 21 17:58:44 2013 +0200

Metadata Update from @tbordaz:
- Issue assigned to tbordaz
- Issue set to the milestone: 1.3.2 - 08/13 (August)

7 years ago

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/512

If you want to receive further updates on the issue, please navigate to the github issue
and click on subscribe button.

Thank you for understanding. We apologize for all inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: Fixed)

3 years ago

