Running the master branch on a VM with 8 GB of memory and more than 4 GB free:
top - 15:11:40 up 22 days, 22:37,  1 user,  load average: 0.00, 0.01, 0.05
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.1 us,  0.0 sy,  0.0 ni, 99.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  8157580 total,  4614608 free,   190148 used,  3352824 buff/cache
KiB Swap:    65532 total,    65532 free,        0 used.  7778536 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
    1 root      20   0  194900   9444   5988 S   0.0  0.1   3:35.03 systemd

top - 15:11:43 up 22 days, 22:37,  1 user,  load average: 0.00, 0.01, 0.05
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni, 99.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  8157580 total,  4614592 free,   190160 used,  3352828 buff/cache
KiB Swap:    65532 total,    65532 free,        0 used.  7778520 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
    1 root      20   0  194900   9444   5988 S   0.0  0.1   3:35.03 systemd
If I start the instance with a small nsslapd-cachememsize of 10 MB, it is fine. If I stop the instance and set nsslapd-cachememsize to 400 MB, at startup I see this warning in the error logs:
[03/Jun/2016:15:10:10.379232664 +0200] util_is_cachesize_sane - WARNING adjusted cachesize to 221881344
[03/Jun/2016:15:10:10.380578175 +0200] util_is_cachesize_sane - WARNING: Cachesize not sane
[03/Jun/2016:15:10:10.381595084 +0200] Error: cachememsize value is too large.
[03/Jun/2016:15:10:10.382498890 +0200] Error with config attribute nsslapd-cachememsize : Error: cachememsize value is too large.
[03/Jun/2016:15:10:10.383434287 +0200] Error parsing the config DSE
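The first warning shows the check clamping the 400 MB (419430400-byte) request down to 221881344 bytes and then treating the original request as an error. A minimal illustrative sketch of that clamp-then-reject behavior (in Python, not the actual 389-ds C code; the function name, the 25% threshold, and the available-memory figure here are assumptions for illustration only):

```python
# Illustrative sketch of a cache-size sanity check that clamps a
# requested cache to a fraction of available memory and flags the
# original request as "not sane". Names and numbers are assumptions,
# not the real util_is_cachesize_sane implementation.

def is_cachesize_sane(requested_bytes, available_bytes, max_fraction=0.25):
    """Return (sane, adjusted_bytes).

    If the request exceeds max_fraction of available memory, clamp it
    and report it as not sane; the caller may then refuse to start,
    as the server did in this report.
    """
    limit = int(available_bytes * max_fraction)
    if requested_bytes <= limit:
        return True, requested_bytes
    return False, limit

# 400 MB requested against an assumed ~900 MB of "available" memory:
# the request is clamped and flagged, even though the host has 4+ GB free.
sane, adjusted = is_cachesize_sane(400 * 1024 * 1024, 900 * 1024 * 1024)
print(sane, adjusted)
```

The point of the sketch is that if the check computes "available" memory from the wrong base (e.g. the process's own vmsize rather than system free memory), a request that is comfortably sane for the host still gets rejected, which matches the regression discussed below.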
Note that the tuning appears to persist, matching what is in dse.ldif:
ldapsearch -LLL -o ldif-wrap=no -D "cn=directory manager" -w Secret123 -b "cn=userRoot,cn=ldbm database,cn=plugins,cn=config" -s base
dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
cn: userRoot
objectClass: top
objectClass: extensibleObject
objectClass: nsBackendInstance
nsslapd-suffix: <suffix>
nsslapd-cachesize: -1
nsslapd-cachememsize: 419430400
nsslapd-readonly: off
nsslapd-require-index: off
nsslapd-directory: /var/lib/dirsrv/slapd-<realm>/db/userRoot
nsslapd-dncachememsize: 10485760
A first issue is that I would not expect 400 MB to be rejected on such a platform.
A second issue is that DS seems unusable when started with 400 MB:
ldapsearch -LLL -o ldif-wrap=no -D "cn=directory manager" -w Secret123 -b "<suffix>" dn | wc
No such object (32)
     12      12     441
After editing dse.ldif to set nsslapd-cachememsize back to 10 MB, I get:
ldapsearch -LLL -o ldif-wrap=no -D "cn=directory manager" -w Secret123 -b "<suffix>" dn | wc
    796    1566   42266
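For reference, while the instance is stopped the value is changed by editing dse.ldif directly, as above; on a running instance the same backend attribute can also be modified over LDAP with ldapmodify. A sketch of the LDIF (the backend DN is taken from the search output above; 10485760 bytes is 10 MB):

```ldif
dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cachememsize
nsslapd-cachememsize: 10485760
```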
This was a regression from:
https://fedorahosted.org/389/ticket/48863
William just provided me a patch in the above ticket, and it solved the problem.
Replying to [comment:1 mreynolds]:
This was a regression from: https://fedorahosted.org/389/ticket/48863

Might be another one, since the 48863 patch is not pushed to the git repo yet at this moment...
Could be this? Ticket 48617 - Server ram checks work in isolation
So, this patch solves the problem? https://fedorahosted.org/389/attachment/ticket/48863/0001-Ticket-48863-remove-check-for-vmsize-from-util_info_.patch
Yes, this is a duplicate. #48863 will solve this.
Metadata Update from @mreynolds: - Issue set to the milestone: 0.0 NEEDS_TRIAGE
Metadata Update from @vashirov: - Issue set to the milestone: None (was: 0.0 NEEDS_TRIAGE)
389-ds-base is moving from Pagure to Github. This means that new issues and pull requests will be accepted only in 389-ds-base's github repository.
This issue has been cloned to Github and is available here: - https://github.com/389ds/389-ds-base/issues/1928
If you want to receive further updates on the issue, please navigate to the github issue and click on subscribe button.
Thank you for understanding. We apologize for all inconvenience.
Metadata Update from @spichugi: - Issue close_status updated to: wontfix (was: Duplicate)