#48184 clean up and delete connections at shutdown
Closed: wontfix 4 years ago Opened 7 years ago by rmeggins.

The server relies on the idletimeout setting to add the I/O timeout job. If there is no idletimeout, or the user (e.g. the directory manager) has no idletimeout, the connection is added to nunc-stans with no timeout. If the client never closes the connection, the connection remains in nunc-stans indefinitely and is not closed at shutdown. We need some mechanism in 389 (or possibly in nunc-stans) to clean up these types of jobs at shutdown.


Per triage, push the target milestone to 1.3.6.

Metadata Update from @nhosoi:
- Issue set to the milestone: 1.3.6.0

5 years ago

Metadata Update from @firstyear:
- Issue assigned to firstyear

5 years ago

I think this mechanism needs to be in Directory Server: we need to walk the connection table/tree and close all of our connections (even the ones with idletimeouts).

Today we probably just let these leak, and even during a shutdown the event may still fire and be sent to the work queues. It's an annoying issue, because we probably need to stop the event thread before the worker threads, to prevent more worker jobs from arriving and to allow the work queue to drain. That way, even if an event was due to fire, it won't (because the event loop is stopped), and we can safely clean these up.

Metadata Update from @firstyear:
- Custom field reviewstatus adjusted to new
- Issue close_status updated to: None

5 years ago

Metadata Update from @mreynolds:
- Custom field reviewstatus reset (from new)
- Issue set to the milestone: 1.3.7 backlog (was: 1.3.6.0)

5 years ago

Metadata Update from @firstyear:
- Custom field reviewstatus adjusted to None
- Issue set to the milestone: 1.4 backlog (was: 1.3.7 backlog)

4 years ago

Metadata Update from @firstyear:
- Custom field reviewstatus adjusted to review (was: None)

4 years ago

Are you sure you tear down all conns before doing the ps_search cleanup?

Let me check, but you are likely correct :)

@vashirov @lkrispen can you check this please? I think this now passes for your tests, so would be great to have this merged,

Thanks!

That's correct, it passes my tests.

@lkrispen Can you review this again please given it passes the tests. Thank you so much!

Metadata Update from @lkrispen:
- Custom field reviewstatus adjusted to ack (was: review)

4 years ago

commit 1418fc3
To ssh://git@pagure.io/389-ds-base.git
d9ad6fd..1418fc3 master -> master

Metadata Update from @firstyear:
- Issue close_status updated to: fixed
- Issue status updated to: Closed (was: Open)

4 years ago

4316907..cf6d12c 389-ds-base-1.3.7 -> 389-ds-base-1.3.7

Metadata Update from @tbordaz:
- Custom field rhbz adjusted to https://bugzilla.redhat.com/show_bug.cgi?id=1517383

4 years ago

The previous patch introduced a rare regression (a hang); a new patch is under test. Attaching it.

I think I understand this: the main fix is in closing_try, checking whether we can safely detach the temporary job. Can you explain the logic of the fix so I'm sure? But I think this looks good :) Great work @tbordaz

commit e562157 --> master

40178b5..941360b 389-ds-base-1.3.8 -> 389-ds-base-1.3.8

dc719ea..660508d 389-ds-base-1.3.7 -> 389-ds-base-1.3.7

Third fix

085e99f --> master
e0e739d..770cfd8 389-ds-base-1.3.7 -> 389-ds-base-1.3.7
5aed2f4..7ff19c6 389-ds-base-1.3.8 -> 389-ds-base-1.3.8

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/1515

If you want to receive further updates on the issue, please navigate to the Github issue
and click the Subscribe button.

Thank you for understanding. We apologize for any inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: fixed)

2 years ago
