#48906 Allow nsslapd-db-locks to be configurable online
Closed: wontfix. Opened 4 years ago by tbordaz.

nsslapd-db-locks is configured without CONFIG_FLAG_ALLOW_RUNNING_CHANGE,

So when the server is running, ldbm_config_set forbids the update.

nsslapd-db-locks should be configurable online (like nsslapd-dbcachesize), even though it requires a restart to take effect.


Fix looks good, you have my ack, but I noticed there is no config validation being done in the set function. For example, you could set "-10" or "0" for the db locks. We should probably add some basic validation tests.

Replying to [comment:2 mreynolds]:

Fix looks good, you have my ack, but I noticed there is no config validation being done in the set function. For example, you could set "-10" or "0" for the db locks. We should probably add some basic validation tests.

Sorry I missed that it gets cast to size_t, but I guess you could still set it to zero.

What happens if someone tries to set the value to "magic" or a string?
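The kind of validation discussed above can be sketched as follows. This is a minimal, hypothetical helper (parse_db_locks is not an existing 389-ds function): it rejects non-numeric strings like "magic" as well as "0" and negative values before anything is cast to size_t.

```c
#include <errno.h>
#include <stdlib.h>

/* Hypothetical sketch: parse and validate an nsslapd-db-locks value.
 * Returns the parsed count on success, or -1 if the string is not a
 * strictly positive integer ("magic", "0", "-10" are all rejected). */
static long
parse_db_locks(const char *val)
{
    char *endp = NULL;
    long locks;

    errno = 0;
    locks = strtol(val, &endp, 10);
    if (errno != 0 || endp == val || *endp != '\0') {
        return -1; /* not a number, trailing garbage, or overflow */
    }
    if (locks <= 0) {
        return -1; /* zero or negative lock counts are meaningless */
    }
    return locks;
}
```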

Hi Thierry,

I wonder if you may need to do something similar for db_lock to what is done for db_cachesize?

Not just adding this field:
{{{
574 int li_new_dblock;
}}}
Don't you also need to add "int li_dblock" to ldbminfo so the change can be compared at startup?

And you need to do a similar trick to this code for the dbcache in ldbm_config_dbcachesize_set?
{{{
if (CONFIG_PHASE_RUNNING == phase) {
    li->li_new_dbcachesize = val;
    LDAPDebug(LDAP_DEBUG_ANY, "New db cache size will not take affect until the server is restarted\n",
              0, 0, 0);
} else {
    li->li_new_dbcachesize = val;
    li->li_dbcachesize = val;
}
}}}
And we need to set the current li_dblock into dblayer_lock_config for it to take effect?

I think this current line is a bug... :(
{{{
1350 priv->dblayer_lock_config = li->li_dbncache;
}}}
It should be
{{{
1350 priv->dblayer_lock_config = li->li_dblock;
}}}
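The suggestion above can be sketched as a stand-alone, hypothetical rendition of the proposed ldbm_config_db_lock_set, following the ldbm_config_dbcachesize_set pattern just quoted. The struct and the li_dblock / li_new_dblock fields are stand-ins for the proposed ldbminfo additions, not existing API:

```c
#include <stdio.h>

/* Minimal stand-ins for the real ldbm structures (the real code lives in
 * ldbm_config.c); li_dblock / li_new_dblock mirror the proposed fields. */
enum config_phase { CONFIG_PHASE_STARTUP, CONFIG_PHASE_RUNNING };

struct ldbminfo {
    int li_dblock;     /* lock count the running env was created with */
    int li_new_dblock; /* pending value, applied at the next restart */
};

/* Sketch of a phase-aware setter for nsslapd-db-locks. */
static void
db_lock_set(struct ldbminfo *li, int val, enum config_phase phase)
{
    if (CONFIG_PHASE_RUNNING == phase) {
        /* Online change: only record the pending value. */
        li->li_new_dblock = val;
        fprintf(stderr, "New db lock count will not take effect "
                        "until the server is restarted\n");
    } else {
        /* Startup: apply immediately. */
        li->li_new_dblock = val;
        li->li_dblock = val;
    }
}
```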

Replying to [comment:7 nhosoi]:

Hi Thierry,

I wonder if you may need to do something similar for db_lock to what is done for db_cachesize?

Not just adding this field:
{{{
574 int li_new_dblock;
}}}
Don't you also need to add "int li_dblock" to ldbminfo so the change can be compared at startup?

And you need to do a similar trick to this code for the dbcache in ldbm_config_dbcachesize_set?
{{{
if (CONFIG_PHASE_RUNNING == phase) {
    li->li_new_dbcachesize = val;
    LDAPDebug(LDAP_DEBUG_ANY, "New db cache size will not take affect until the server is restarted\n",
              0, 0, 0);
} else {
    li->li_new_dbcachesize = val;
    li->li_dbcachesize = val;
}
}}}
And we need to set the current li_dblock into dblayer_lock_config for it to take effect?

Hi Noriko,

Thank you so much for reviewing that patch.
I am surprised; while fixing that ticket I looked at how the db_cachesize update was implemented.
I may have missed something.

It requires updating 'new_dblock' instead of 'dblayer_lock_config', so that the guardian file is created with the current setting (from dblayer_lock_config) while the config is created with the new setting (new_dblock, read via ldbm_config_get). So at restart the guardian and config will differ, and the env will be recreated based on the config.

So ldbm_config_db_lock_get (used by ldbm_config_get) needs to return new_dblock,
and ldbm_config_db_lock_set needs to update new_dblock to update the config, instead of dblayer_private (which is used to create the guardian file).
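The guardian-vs-config split described here can be illustrated with a hypothetical stand-alone sketch. Field and function names (li_dblock, li_new_dblock, db_lock_get) mirror the proposal in this ticket rather than existing code:

```c
#include <stdbool.h>

/* Stand-in for the relevant ldbminfo fields. */
struct ldbminfo {
    int li_dblock;     /* value the env/guardian file was created with */
    int li_new_dblock; /* pending value, saved to the config entry */
};

/* The getter used by ldbm_config_get returns the *pending* value, so the
 * saved config reflects the new setting... */
static int
db_lock_get(const struct ldbminfo *li)
{
    return li->li_new_dblock;
}

/* ...while the guardian file keeps the value actually in use.  At restart,
 * a mismatch means the BDB environment must be recreated from the config. */
static bool
env_needs_recreate(const struct ldbminfo *li)
{
    return db_lock_get(li) != li->li_dblock;
}
```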

I think this current line is a bug... :(
{{{
1350 priv->dblayer_lock_config = li->li_dbncache;
}}}
It should be
{{{
1350 priv->dblayer_lock_config = li->li_dblock;
}}}

I cannot find li_dbncache in the patch; did I attach/send another patch?

Replying to [comment:8 tbordaz]:

So ldbm_config_db_lock_get (used by ldbm_config_get) needs to return new_dblock,
and ldbm_config_db_lock_set needs to update new_dblock to update the config, instead of dblayer_private (which is used to create the guardian file).

I think this current line is a bug... :(
{{{
1350 priv->dblayer_lock_config = li->li_dbncache;
}}}
It should be
{{{
1350 priv->dblayer_lock_config = li->li_dblock;
}}}

I cannot find li_dbncache in the patch; did I attach/send another patch?

Sorry, my comment was not clear. This portion is not in the patch; I'm asking you to fix it in addition to your patch. li_dbncache is a number 'n' that splits the dbcache into 'n' consecutive chunks of memory. So, there's NO relationship with dblayer_lock_config... :(

I agree handling these BDB resource files is quite complicated... The code has to take the config params from cn=ldbm database,cn=plugins,cn=config. Plus, since {{{__db.00#}}} is persistent on disk, changes need to be detected and the files regenerated if needed. The resizing is done via dblayer_start. These requirements make the code more complicated...

Looks good. Thanks, Thierry!!

{{{
git push origin master
Counting objects: 14, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (14/14), done.
Writing objects: 100% (14/14), 4.99 KiB | 0 bytes/s, done.
Total 14 (delta 11), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
125486d..cd55c8f master -> master
}}}

commit cd55c8f
Author: Thierry Bordaz tbordaz@redhat.com
Date: Thu Sep 8 19:29:25 2016 +0200

Metadata Update from @tbordaz:
- Issue assigned to tbordaz
- Issue set to the milestone: 1.3.6.0

3 years ago

@tbordaz In the CI test you check the dbcachesize. I was wondering why? With William's autotuning this is breaking the test, and I'd like to remove it. Is that okay? Why would we want to test it anyway?

Thanks!

Metadata Update from @mreynolds:
- Issue close_status updated to: None (was: Fixed)

3 years ago

@mreynolds: There is no need to test dbcachesize; I do not recall exactly why I wanted to check that value. Autotuning breaks this test, and I agree it should be removed. Do you want me to make that change?

@tbordaz William already removed it :)

389-ds-base is moving from Pagure to GitHub. This means that new issues and pull requests
will be accepted only in 389-ds-base's GitHub repository.

This issue has been cloned to GitHub and is available here:
- https://github.com/389ds/389-ds-base/issues/1965

If you want to receive further updates on the issue, please navigate to the GitHub issue
and click on the subscribe button.

Thank you for understanding. We apologize for any inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix

2 months ago
