Ticket was cloned from Red Hat Bugzilla (product Red Hat Enterprise Linux 7): Bug 1466191
Description of problem: I have not tested auto-tuning of caches in RHEL 7.4 yet, but we currently have a very severe issue with customers in IPA environments. All updates go to the retro changelog even if DNS is the only component that needs them. When the retro changelog is enabled, its database is created with the default cache size defined in `./ldap/servers/plugins/retrocl/retrocl.h`:

    #define RETROCL_BE_CACHEMEMSIZE "2097152"

This size is not reasonable. While it should only cause performance issues, it is also leading to database corruption and inconsistencies. Let's change this to a reasonable value as soon as possible; it's just a one-liner commit. I am seeing corruption and inconsistencies in nearly all customers running IPA in large environments. For instance:

    [28/Jun/2017:11:41:17 -0400] - libdb: BDB0689 changelog/id2entry.db page 15397 is on free list with type 5
    [28/Jun/2017:11:41:17 -0400] - libdb: BDB0061 PANIC: Invalid argument
    [28/Jun/2017:11:41:17 -0400] - libdb: BDB0060 PANIC: fatal region error detected; run recovery
    [28/Jun/2017:11:41:17 -0400] - Serious Error---Failed in dblayer_txn_abort, err=-30973 (BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery)
    [28/Jun/2017:11:41:17 -0400] DSRetroclPlugin - replog: an error occured while adding change number 13028533, dn = changenumber=13028533,cn=changelog: Operations error.
    [28/Jun/2017:11:41:17 -0400] retrocl-plugin - retrocl_postob: operation failure [1]
    [28/Jun/2017:11:41:17 -0400] - libdb: BDB0060 PANIC: fatal region error detected; run recovery
    [28/Jun/2017:11:41:17 -0400] - libdb: BDB0060 PANIC: fatal region error detected; run recovery
    [28/Jun/2017:11:41:17 -0400] - Serious Error---Failed in dblayer_txn_begin, err=-30973 (BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery)
    [15/Jun/2017:21:34:48 -0400] DSRetroclPlugin - delete_changerecord: could not delete change record 12684904 (rc: 1)
    [15/Jun/2017:21:35:22 -0400] - libdb: BDB3017 unable to allocate space from the buffer cache
    [15/Jun/2017:21:35:22 -0400] DSRetroclPlugin - delete_changerecord: could not delete change record 12684946 (rc: 1)
    [15/Jun/2017:21:35:52 -0400] - libdb: BDB3017 unable to allocate space from the buffer cache
    [15/Jun/2017:21:35:52 -0400] DSRetroclPlugin - delete_changerecord: could not delete change record 12685002 (rc: 1)
    [15/Jun/2017:21:36:06 -0400] - libdb: BDB3017 unable to allocate space from the buffer cache

(This last one is probably related to insufficient locks rather than the database itself.) But there are multiple issues with the retro changelog, since every update impacts it and the cache is not big enough. Please give this bug some priority if it has not already been solved by auto-tuning. Thanks a lot, German.
Metadata Update from @firstyear: - Custom field rhbz adjusted to https://bugzilla.redhat.com/show_bug.cgi?id=1466191
Metadata Update from @firstyear: - Issue assigned to firstyear
Metadata Update from @firstyear: - Custom field type adjusted to defect - Issue set to the milestone: 1.3.7 backlog (was: 0.0 NEEDS_TRIAGE)
Metadata Update from @firstyear: - Issue priority set to: major
Metadata Update from @tbordaz: - Issue assigned to tbordaz (was: firstyear)
I have not been able to reproduce the issue. I tried with the following settings:
After a discussion with @gparente, @lkrispen and @firstyear, we do not have a clear understanding of why increasing the entry cache prevents the DB corruption, BUT we know that setting the entry cache to 200 MB does prevent it. So the fix is known, although we do not understand why it works :(
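For anyone hitting this before the fix lands, the 200 MB setting above can be applied as a workaround with an LDIF modify against the changelog backend. This is a sketch, not part of the fix: it assumes the standard backend entry `cn=changelog,cn=ldbm database,cn=plugins,cn=config` and the standard `nsslapd-cachememsize` attribute; verify the DN on your instance, and note that cache-size changes typically take effect only after a restart of the directory server:

```ldif
dn: cn=changelog,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cachememsize
nsslapd-cachememsize: 209715200
```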
Preparing a fix along those lines.
Metadata Update from @tbordaz: - Custom field component adjusted to None - Custom field origin adjusted to None - Custom field reviewstatus adjusted to None - Custom field version adjusted to None
<img alt="0001-Ticket-49313-Change-the-retrochangelog-default-cache.patch" src="/389-ds-base/issue/raw/files/e631efcc3decb4ba70a7bde2f6b6bd4cc435acdc1ee7436fd33107bb3087d40a-0001-Ticket-49313-Change-the-retrochangelog-default-cache.patch" />
Metadata Update from @tbordaz: - Custom field reviewstatus adjusted to review (was: None)
Metadata Update from @firstyear: - Custom field reviewstatus adjusted to ack (was: review)
    git push origin master
    Counting objects: 7, done.
    Delta compression using up to 8 threads.
    Compressing objects: 100% (7/7), done.
    Writing objects: 100% (7/7), 996 bytes | 0 bytes/s, done.
    Total 7 (delta 5), reused 0 (delta 0)
    remote: Sending to redis to log activity and send commit notification emails
    remote: Emitting a message to the fedmsg bus.
    remote: * Publishing information for 1 commits
    remote: Sending notification emails to: 389-commits@lists.fedoraproject.org
    To ssh://git@pagure.io/389-ds-base.git
       fb0c84b..28ad77e  master -> master
Metadata Update from @tbordaz: - Issue close_status updated to: fixed - Issue status updated to: Closed (was: Open)
    git push origin 389-ds-base-1.3.6
    Counting objects: 7, done.
    Delta compression using up to 8 threads.
    Compressing objects: 100% (7/7), done.
    Writing objects: 100% (7/7), 998 bytes | 0 bytes/s, done.
    Total 7 (delta 5), reused 0 (delta 0)
    remote: Sending to redis to send commit notification emails
    remote: Emitting a message to the fedmsg bus.
    remote: * Publishing information for 1 commits
    remote: Sending notification emails to: 389-commits@lists.fedoraproject.org
    To ssh://git@pagure.io/389-ds-base.git
       17aee5e..ef43dd8  389-ds-base-1.3.6 -> 389-ds-base-1.3.6
Metadata Update from @tbordaz: - Issue set to the milestone: 1.3.6 backlog (was: 1.3.7 backlog)
389-ds-base is moving from Pagure to Github. This means that new issues and pull requests will be accepted only in 389-ds-base's github repository.
This issue has been cloned to Github and is available here: - https://github.com/389ds/389-ds-base/issues/2372
If you want to receive further updates on this issue, please navigate to the GitHub issue and click on the Subscribe button.
Thank you for understanding. We apologize for any inconvenience.
Metadata Update from @spichugi: - Issue close_status updated to: wontfix (was: fixed)