At the moment we define a maximum of 1024 descriptors in the server. This caps the number of FDs available for connections. Really, we should not define this in the config (libglobs.c), but should take it from our environment and have the ability to consume "as much as needed".
Alternatively, we should raise this default to a more sensible number reflective of the current scale of computers.
Metadata Update from @mreynolds: - Custom field origin adjusted to None - Custom field reviewstatus adjusted to None - Issue set to the milestone: 1.4.1
Pretty sure we read this from the environment and the sysctls we have? So I'm wondering how we default to 1024? Saying this, the code around this configuration is pretty complex. Should we have a max descriptors limit at all? Should we actually just read from the sysctl file limits instead?
DS sets its soft limit according to the defined maxdescriptors. It may also try to raise the hard limit if it is too low.
An idea would be to use the defined value (dse.ldif); if it is not defined, fall back to the environment value; if that is not defined either, fall back to the default value (1024).
That sounds reasonable @tbordaz and reads very much like the design of libglobs.c now. :)
Metadata Update from @mreynolds: - Issue assigned to mreynolds
https://pagure.io/389-ds-base/pull-request/50321
Phase 1:
commit 8ca1420
1d13ff2..2c583a9 389-ds-base-1.4.0 -> 389-ds-base-1.4.0
4a01095..a6112a4 389-ds-base-1.3.9 -> 389-ds-base-1.3.9
Next phase: look into deprecating/removing nsslapd-maxdescriptors in 1.4.1 ...
Metadata Update from @mreynolds: - Issue close_status updated to: fixed - Issue status updated to: Closed (was: Open)
Systemd >=240 bumped fs.nr_open and fs.file-max to their largest possible values - https://github.com/systemd/systemd/commit/a8b627aaed409a15260c25988970c795bf963812
fs.nr_open
fs.file-max
In some cases (F31 host with systemd-243 and a RHEL8 container) fs.nr_open is reported as 1073741816, which causes ns-slapd to allocate a lot of memory on startup and trigger the OOM killer. We should have a sensible limit and not rely on system limits that can be too high.
Metadata Update from @vashirov: - Issue status updated to: Open (was: Closed)
I'll add a hard limit... But what? :-) 100,000? Less, more?
I think we can safely use the kernel's default limit of 1024*1024 (1048576): https://www.kernel.org/doc/Documentation/sysctl/fs.txt This was also the limit in systemd before v240.
Enforce a hard max limit
https://pagure.io/389-ds-base/pull-request/50852
Commit 54b941d relates to this ticket
202953d..f361de4 389-ds-base-1.4.2 -> 389-ds-base-1.4.2
10f7d40..25ba648 389-ds-base-1.4.1 -> 389-ds-base-1.4.1
389-ds-base is moving from Pagure to Github. This means that new issues and pull requests will be accepted only in 389-ds-base's github repository.
This issue has been cloned to Github and is available here: - https://github.com/389ds/389-ds-base/issues/3049
If you want to receive further updates on the issue, please navigate to the github issue and click on subscribe button.
Thank you for understanding. We apologize for any inconvenience.
Metadata Update from @spichugi: - Issue close_status updated to: wontfix (was: fixed)