Ticket was cloned from Red Hat Bugzilla (product Red Hat Enterprise Linux 7): Bug 1463204
When a backend instance is deleted, the server checks whether the instance is busy, and that check itself also marks the instance busy. The flag needs to be reset to "not busy" after the delete completes. Otherwise, if you create a new database with the same name, the server hangs at shutdown because the instance is still seen as busy.
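For context, the busy handling in the delete path follows roughly the pattern sketched below. This is a minimal illustration, not the actual 389-ds-base source: instance_set_busy()/instance_set_not_busy() and INST_FLAG_BUSY mirror the real back-ldbm helpers, but the struct layout, flag value, and control flow here are simplified assumptions.

```c
/* Simplified sketch of the busy-flag pattern described above.
 * The helper names mirror back-ldbm, but the logic is illustrative. */
#define INST_FLAG_BUSY 0x01 /* actual flag value is an assumption */

typedef struct ldbm_instance {
    int inst_flags;
    /* ... */
} ldbm_instance;

/* Test-and-set: returns -1 if the instance was already busy,
 * 0 after marking it busy. */
static int
instance_set_busy(ldbm_instance *inst)
{
    if (inst->inst_flags & INST_FLAG_BUSY) {
        return -1; /* e.g. an import task owns the instance */
    }
    inst->inst_flags |= INST_FLAG_BUSY;
    return 0;
}

static void
instance_set_not_busy(ldbm_instance *inst)
{
    inst->inst_flags &= ~INST_FLAG_BUSY;
}

static int
ldbm_instance_delete(ldbm_instance *inst)
{
    /* The busy check itself marks the instance busy... */
    if (instance_set_busy(inst) != 0) {
        return -1; /* refuse to delete a busy instance */
    }
    /* ... tear down indexes, caches, config entries ... */

    /* The bug: returning without this call leaves INST_FLAG_BUSY set,
     * so a re-created instance with the same name is still seen as
     * "busy" and shutdown hangs waiting on it. */
    instance_set_not_busy(inst);
    return 0;
}
```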
Metadata Update from @mreynolds: - Custom field rhbz adjusted to https://bugzilla.redhat.com/show_bug.cgi?id=1463204
Metadata Update from @mreynolds: - Issue assigned to mreynolds
<img alt="0001-Ticket-49402-Server-can-hang-at-shutdown-after-backe.patch" src="/389-ds-base/issue/raw/files/94d2a48fcc18913ca65b2a37ddef25c84427794e51482a2ad64fda1284bd7afd-0001-Ticket-49402-Server-can-hang-at-shutdown-after-backe.patch" />
Metadata Update from @mreynolds: - Custom field component adjusted to None - Custom field origin adjusted to None - Custom field reviewstatus adjusted to review - Custom field type adjusted to None - Custom field version adjusted to None
This patch fixes the issue, but I am not sure if it fixes it in the right place.
The hang is at shutdown in task_destroy, where it hangs on a "busy" backend. Resetting the busy flag fixes it. But we have a delete of a backend followed by an add of the same backend again. Shouldn't the add, when it completes, reset the flag?
But it's the delete function that sets the instance busy, and it never unsets the flag after it has done its work. To me, resetting it in the delete path makes sense.
And if I try to set it "not busy" when we create the backend, it causes the server to crash during instance creation. I'm investigating the crash, but I feel this is the wrong direction to take. We'll see what I find...
Maybe I was unclear. What I don't understand is:

- we add a backend
- we run an import task
- we delete the backend and add it again
- the shutdown fails to destroy a task

Why does the reference to the deleted backend still exist? Shouldn't the delete of the backend completely remove it? It should be irrelevant what flags are set.
Funny you ask that, I'm also looking into how the old flags are getting resurrected... Perhaps the task has a corrupted inst struct after it was destroyed. Investigating this....
Also setting the flags to "not busy" during backend creation does not fix the issue.
Yeah, so it's the destroy function. This also fixes the problem:
```diff
+++ b/ldap/servers/slapd/back-ldbm/instance.c
@@ -392,7 +392,7 @@ ldbm_instance_destructor(void **arg)
     attrinfo_deletetree(inst);
     slapi_ch_free((void **)&inst->inst_dataversion);
     /* cache has already been destroyed */
-
+    inst->inst_flags = 0;
     slapi_ch_free((void **)&inst);
 }
```
This is still a hack fix though. Perhaps the delete instance function needs to destroy any tasks as well. Still investigating...
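To illustrate why clearing the flags inside the destructor is only a band-aid: if an import task keeps a raw pointer to its instance, the task destructor ends up reading freed memory once the backend has been deleted, no matter what that memory was set to. A contrived sketch with hypothetical struct and function names:

```c
/* Contrived sketch (hypothetical names) of the dangling-reference
 * problem: the task outlives the instance it points at. */
#include <stddef.h>
#include <stdlib.h>

typedef struct ldbm_instance {
    int inst_flags;
} ldbm_instance;

typedef struct import_task {
    ldbm_instance *inst; /* not refcounted: dangles after delete */
} import_task;

static void
ldbm_instance_destructor(ldbm_instance **inst)
{
    /* Zeroing the flags before the free only "works" as long as the
     * freed memory is not reused before the task destructor reads
     * it - undefined behavior either way. */
    (*inst)->inst_flags = 0;
    free(*inst);
    *inst = NULL; /* the task's copy of the pointer still dangles */
}

static void
import_task_destroy(import_task *task)
{
    /* After the backend was deleted, task->inst is dangling; a stale
     * "busy" bit read here is what makes shutdown spin. The robust
     * fix is to stop consulting the instance at all and track the
     * task's own completion state. */
    while (task->inst != NULL && (task->inst->inst_flags & 0x01)) {
        ; /* waits for a "not busy" that never comes */
    }
    free(task);
}
```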
I needed to change how the import task destructor checks whether the task is done.
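The shape of that change, as a rough sketch: have the destructor consult the task's own completion state rather than the (possibly already deleted) backend instance. The names and states below are hypothetical, not the actual patch:

```c
/* Rough sketch of a destructor-side check (hypothetical names). */
#include <stdlib.h>

typedef enum { TASK_RUNNING, TASK_FINISHED, TASK_CANCELED } task_state_t;

typedef struct import_task {
    task_state_t state; /* owned by the task, not the backend */
    void *job;          /* import job data, freed when the job ends */
} import_task;

static void
import_task_destroy(import_task *task)
{
    if (task->state == TASK_RUNNING && task->job != NULL) {
        /* The import is genuinely still running: signal it to stop
         * and let the job's own teardown free the task later. */
        task->state = TASK_CANCELED;
        return;
    }
    /* The job is done (or its backend is gone): safe to free the
     * task without touching the possibly-deleted instance. */
    free(task);
}
```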
Please review this patch:
<img alt="0001-Ticket-49402-Adding-a-database-entry-with-the-same-d.patch" src="/389-ds-base/issue/raw/files/07495973b43685a1828e7f760e135f6cefc477afff348ae2c460dcfe2c882fbd-0001-Ticket-49402-Adding-a-database-entry-with-the-same-d.patch" />
Metadata Update from @lkrispen: - Custom field reviewstatus adjusted to ack (was: review)
3eb443b..bc6dbf1 master -> master
746abe7..2ef4e81 389-ds-base-1.3.7 -> 389-ds-base-1.3.7
Metadata Update from @mreynolds: - Issue close_status updated to: fixed - Issue status updated to: Closed (was: Open)
80c8795..009800b 389-ds-base-1.3.6 -> 389-ds-base-1.3.6
389-ds-base is moving from Pagure to GitHub. This means that new issues and pull requests will be accepted only in 389-ds-base's GitHub repository.
This issue has been cloned to GitHub and is available here:
- https://github.com/389ds/389-ds-base/issues/2461
If you want to receive further updates on the issue, please navigate to the GitHub issue and click on the Subscribe button.
Thank you for understanding. We apologize for any inconvenience.
Metadata Update from @spichugi: - Issue close_status updated to: wontfix (was: fixed)