#49410 An opened connection can stop being polled and appear to hang
Closed: wontfix 6 years ago Opened 6 years ago by tbordaz.

Issue Description

The issue is difficult to reproduce and not completely understood.
During IPA tests, KRB5 has closed all of its connections to DS, but at the same time the connections are still open in the DS connection table.

What is strange is that the connections are in gettingber status, but no thread is reading them and the work queue is empty.

Package Version and Platform

1.3.7.4 Fedora

Steps to reproduce

Not identified

Actual results

Expected results


Some of the data:

Monitoring shows that the ip=local connections are open and in gettingber status ('r', the 6th field in the connection record):

ldapsearch -LLL -o ldif-wrap=no  -b "cn=monitor" -s base                                                                        
dn: cn=monitor
cn: monitor
objectClass: top
objectClass: extensibleObject
version: 389-Directory/1.3.7.4 B2017.258.1513
threads: 19
connection: 64:20171019131816Z:3:3:-:uid=pkidbuser,ou=people,o=ipaca:0:0:0:5:ip=<client_ip>
connection: 66:20171019131816Z:21:21:r:cn=Directory Manager:0:0:0:3:ip=local
connection: 70:20171019131816Z:2:2:-:uid=pkidbuser,ou=people,o=ipaca:0:0:0:6:ip=<client_ip>
connection: 78:20171019131822Z:912:912:-:uid=pkidbuser,ou=people,o=ipaca:0:0:0:10:ip=<client_ip>
connection: 98:20171019132047Z:600:600:-:uid=pkidbuser,ou=people,o=ipaca:0:0:0:19:ip=<client_ip>
connection: 99:20171019131825Z:13:13:-:krbprincipalname=dns/<vm>@<domain>,cn=services,cn=accounts,<suffix>:0:0:0:15:ip=local
connection: 100:20171019131825Z:4:4:-:krbprincipalname=dns/<vm>@<domain>,cn=services,cn=accounts,<suffix>:0:0:0:16:ip=local
connection: 101:20171019132329Z:2384:2384:r:cn=Directory Manager:0:0:0:21:ip=local
connection: 102:20171019225611Z:5651:5651:r:cn=Directory Manager:0:0:0:2322:ip=local
connection: 103:20171019152054Z:9047:9047:r:cn=Directory Manager:0:0:0:477:ip=local
connection: 104:20171020141111Z:8:8:-:fqdn=<vm-fqdn>,cn=computers,cn=accounts,<suffix>:0:3:0:6111:ip=<client_ip>
connection: 105:20171020093406Z:5815:5815:-:cn=Directory Manager:0:0:0:4884:ip=local
connection: 106:20171020034044Z:7030:7030:r:cn=Directory Manager:0:0:0:3442:ip=local
connection: 108:20171020141826Z:2:0:-:cn=directory manager:0:0:0:6140:ip=<client_ip>
currentconnections: 14

In addition, we can see that the work queue is empty (except for the cn=monitor SRCH request itself):

opsinitiated: 49411
opscompleted: 49410

Metadata Update from @tbordaz:
- Custom field component adjusted to None
- Custom field origin adjusted to None
- Custom field reviewstatus adjusted to None
- Custom field type adjusted to None
- Custom field version adjusted to None

6 years ago

At the same time, NO workers are reading connections.

The sample of connections shows that this can happen at any time during the lifetime of a connection, after a few or after many requests:

[19/Oct/2017:15:18:16.485833425 +0200] conn=3 fd=66 slot=66 connection from local to /var/run/slapd-<dom>.socket
[19/Oct/2017:15:18:16.486498501 +0200] conn=3 AUTOBIND dn="cn=Directory Manager"
[19/Oct/2017:15:18:16.486504714 +0200] conn=3 op=0 BIND dn="cn=Directory Manager" method=sasl version=3 mech=EXTERNAL
[19/Oct/2017:15:18:16.486535389 +0200] conn=3 op=0 RESULT err=0 tag=97 nentries=0 etime=0.0000557540 dn="cn=Directory Manager"
...
[19/Oct/2017:15:18:25.349406969 +0200] conn=3 op=19 SRCH base="cn=<domain>,cn=kerberos,<SUFFIX>" scope=0 filter="(objectClass=krbticketpolicyaux)" attrs="krbMaxTicketLife krbMaxRenewableAge krbTicketFlags"
[19/Oct/2017:15:18:25.350380515 +0200] conn=3 op=19 RESULT err=0 tag=101 nentries=1 etime=0.0001004272


[19/Oct/2017:15:23:29.206870046 +0200] conn=21 fd=101 slot=101 connection from local to /var/run/slapd-<dom>.socket
[19/Oct/2017:15:23:29.208298752 +0200] conn=21 AUTOBIND dn="cn=Directory Manager"
[19/Oct/2017:15:23:29.208305194 +0200] conn=21 op=0 BIND dn="cn=Directory Manager" method=sasl version=3 mech=EXTERNAL
[19/Oct/2017:15:23:29.208381956 +0200] conn=21 op=0 RESULT err=0 tag=97 nentries=0 etime=0.0001378785 dn="cn=Directory Manager"
...
[19/Oct/2017:17:15:49.615770458 +0200] conn=21 op=2382 SRCH base="cn=<domain>,cn=kerberos,<SUFFIX>" scope=0 filter="(objectClass=krbticketpolicyaux)" attrs="krbMaxTicketLife krbMaxRenewableAge krbTicketFlags"
[19/Oct/2017:17:15:49.616398813 +0200] conn=21 op=2382 RESULT err=0 tag=101 nentries=1 etime=0.0000652590


[20/Oct/2017:00:56:11.579542425 +0200] conn=2322 fd=102 slot=102 connection from local to /var/run/slapd-<dom>.socket
[20/Oct/2017:00:56:11.580506042 +0200] conn=2322 AUTOBIND dn="cn=Directory Manager"
[20/Oct/2017:00:56:11.580512416 +0200] conn=2322 op=0 BIND dn="cn=Directory Manager" method=sasl version=3 mech=EXTERNAL
[20/Oct/2017:00:56:11.580554029 +0200] conn=2322 op=0 RESULT err=0 tag=97 nentries=0 etime=0.0000866140 dn="cn=Directory Manager"
...
[20/Oct/2017:05:34:55.584044261 +0200] conn=2322 op=5649 SRCH base="cn=<domain>,cn=kerberos,<SUFFIX>" scope=0 filter="(objectClass=krbticketpolicyaux)" attrs="krbMaxTicketLife krbMaxRenewableAge krbTicketFlags"
[20/Oct/2017:05:34:55.584718475 +0200] conn=2322 op=5649 RESULT err=0 tag=101 nentries=1 etime=0.0000696549
netstat -ntulp | grep 8413
 tcp6       0      0 :::389                  :::*                    LISTEN      8413/ns-slapd       
 tcp6       0      0 :::636                  :::*                    LISTEN      8413/ns-slapd

OK, this is worse than we thought. I got a strace log of the KDC, thanks to @mreznik, where the problem is finally clearly shown (on the KDC side):

16:22:22.516821 write(4, "0\202\0039\2\1\25c\202\0032\4\16dc=ipa,dc=test\n\1\2\n\1\0\2\1\0\2\2\1,\1\1\0\240\201\366\241]\243\36\4\vobjectclass\4\17krbprincipalaux\243\33\4\vobjectclass\4\fkrbprincipal\243\36\4\vobjectclass\4\17ipakrbprincipal\241\201\224\243@\4\24ipakrbprincipalalias\4(ipa-dnskeysyncd/master.ipa.test@IPA.TEST\251P\201\22caseIgnoreIA5Match\202\20krbprincipalname\203(ipa-dnskeysyncd/master.ipa.test@IPA.TEST0\202\2\25\4\20krbPrincipalName\4\20krbCanonicalName\4\fkrbUPEnabled\4\17krbPrincipalKey\4\30krbTicketPolicyReference\4\26krbPrincipalExpiration\4\25krbPasswordExpiration\4\25krbPwdPolicyReference\4\20krbPrincipalType\4\rkrbPwdHistory\4\20krbLastPwdChange\4\23krbPrincipalAliases\4\25krbLastSuccessfulAuth\4\21krbLastFailedAuth\4\23krbLoginFailedCount\4\23krbPrincipalAuthInd\4\fkrbExtraData\4\22krbLastAdminUnlock\4\23krbObjectReferences\4\16krbTicketFlags\4\20krbMaxTicketLife\4\22krbMaxRenewableAge\4\rnsaccountlock\4\17passwordHistory\4\17ipaKrbAuthzData\4\17ipaUserAuthType\4\30ipatokenRadiusConfigLink\4\vobjectClass", 829) = 829 <0.000007>
16:22:22.516846 gettimeofday({tv_sec=1507998142, tv_usec=516851}, NULL) = 0 <0.000002>
16:22:22.516859 poll([{fd=4, events=POLLIN|POLLPRI}], 1, 300000) = 0 (Timeout) <300.092228>
16:27:22.609185 write(4, "0\6\2\1\26P\1\25", 8) = 8 <0.000345>
16:27:22.610422 write(4, "0\5\2\1\27B\0", 7) = 7 <0.000014>
16:27:22.610491 close(4)                = 0 <0.000019>

This is the moment things go south.
The KDC had exchanged dozens of messages before this one, and now it sends one last request down FD=4, which is our LDAP socket.
... and it waits ...
... and it waits ...

Finally the timeout kicks in and it gives up, closing the socket.
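For reference, a minimal sketch of the client-side pattern visible in the strace, assuming nothing about the actual KDC/libldap code: write the request, poll the LDAP fd for up to 300 s, and on timeout give up and close the socket.

    /* Hedged sketch of the client-side pattern seen in the strace above.
     * This is NOT the KDC/libldap source, just an illustration: send the
     * request, wait up to 300s for a reply, give up and close on timeout. */
    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    static int send_and_wait(int fd, const char *req, size_t len)
    {
        if (write(fd, req, len) != (ssize_t)len) {   /* the LDAP search request */
            return -1;
        }

        struct pollfd pfd = { .fd = fd, .events = POLLIN | POLLPRI };
        int rc = poll(&pfd, 1, 300000);              /* 300000 ms, as in the strace */
        if (rc == 0) {                               /* timeout: the server never answered */
            fprintf(stderr, "no reply after 300s, giving up on fd=%d\n", fd);
            close(fd);                               /* the real client also sends abandon/unbind first */
            return -1;
        }
        return (rc < 0) ? -1 : 0;                    /* 0: a reply is ready to be read */
    }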

Note that from previous DS debugging we see that no request is "seen" by DS (at least nothing is written to the access log about this last request).
This means DS somehow completely forgets about this socket, and that is why it is still open on the DS side. The socket buffers are not drained by DS, so the OS has to keep the socket open toward DS until it is drained of the last data sent by the KDC.

It seems like DS forgets about this socket and stops listening for requests for some reason.

@simo: do you have the DS access log from this time?

No, I have only the strace @mreznik gave me, but the behavior is consistent with the machine @tbordaz analyzed after I pointed out the strange hanging sockets.

Metadata Update from @tbordaz:
- Custom field origin adjusted to IPA (was: None)

6 years ago

Observation:

The problem occurs more frequently on a 1-CPU machine than on a 2-CPU machine (according to CI tests).
Also, the number of KDC subprocesses, and therefore of ldapi connections, increases with the number of CPUs,
so a 1-CPU machine has fewer ldapi connections.

@tbordaz my intuition is that we see it happen less frequently on 2 CPUs because there we have 2 KDCs, and Kerberos clients retry to obtain credentials after only 1 second if they do not get an answer.
So both KDC processes need to be stuck at the same time for the issue to show up in CI.
So the number of CPUs is probably not relevant to the case and should not distract us.

  • The problem occurs with and without nunc-stans

  • During last failure: status of the krb connections

    4 ldapi connections:
       conn=3, conn=14, conn=15, conn=20
    connection: 64:20171026070411Z:21:21:r:cn=Directory Manager:0:0:0:3:ip=local
    connection: 65:20171026070920Z:1874:1874:-:cn=Directory Manager:0:0:0:20:ip=local
    connection: 95:20171026070417Z:13:13:-:krbprincipalname=dns/<fqdn>@ipa.test,cn=services,cn=accounts,dc=ipa,dc=test:0:0:0:14:ip=local
    connection: 96:20171026070418Z:4:4:-:krbprincipalname=dns/<fqdn>@ipa.test,cn=services,cn=accounts,dc=ipa,dc=test:0:0:0:15:ip=local
    

    All of them have completed their last request.
    conn=3 is in gettingber

  • Impacted by https://pagure.io/389-ds-base/issue/48341
    Since 1.3.5.2 (https://pagure.io/389-ds-base/issue/48341) a connection flagged c_gettingber
    is ignored, on the assumption that the worker reading it will clear the flag.
    The problem here is that there is no worker (a sketch of that behavior follows this list).
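For readers following along, a hedged sketch of what "a gettingber connection is ignored" means in practice; the struct and function names are illustrative stand-ins, not the real 389-ds connection-table code.

    /* Hedged sketch of the #48341 behavior referenced above; illustrative only. */
    #include <stddef.h>

    typedef struct conn {
        int fd;
        int c_gettingber;      /* set while a worker is reading a BER off this fd */
        struct conn *next;
    } conn_t;

    /* Listener pass over the connection table: pick the fds to poll for read. */
    static void setup_poll_set(conn_t *table, void (*add_fd)(int fd))
    {
        for (conn_t *c = table; c != NULL; c = c->next) {
            if (c->c_gettingber) {
                /* Ignored on purpose: the worker reading this connection is
                 * expected to clear the flag and hand the fd back. The failure
                 * mode in this ticket is that no worker ever does. */
                continue;
            }
            add_fd(c->fd);
        }
    }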

I am not sure if #48341 makes a difference. The patch ignores a conn in c_gettingber when iterating the connection table, because the thread handling the conn could hold a lock.
Before the patch it would handle the connection, but since it is in c_gettingber it would do nothing, because it is an active connection; so the ignore was just delayed.

The question is: why is there no longer a worker for a conn in c_gettingber?

@lkrispen I agree, the question is why there is no worker for a gettingber-flagged connection.

I just mentioned that because of #48341 a gettingber connection is ignored. That is the right thing to do.

  • Conn=3 activity

    [26/Oct/2017:03:04:11.137102596 -0400] conn=3 fd=64 slot=64 connection from local to /var/run/slapd-IPA-TEST.socket
    [26/Oct/2017:03:04:11.137365605 -0400] conn=3 AUTOBIND dn="cn=Directory Manager"
    [26/Oct/2017:03:04:11.137370251 -0400] conn=3 op=0 BIND dn="cn=Directory Manager" method=sasl version=3 mech=EXTERNAL
    [26/Oct/2017:03:04:11.137435284 -0400] conn=3 op=0 RESULT err=0 tag=97 nentries=0 etime=0.0000291336 dn="cn=Directory Manager"
    ...
    [26/Oct/2017:03:04:17.872487214 -0400] conn=3 op=19 SRCH base="cn=IPA.TEST,cn=kerberos,dc=ipa,dc=test" scope=0 filter="(objectClass=krbticketpolicyaux)" attrs="krbMaxTicketLife krbMaxRenewableAge krbTicketFlags"
    [26/Oct/2017:03:04:17.872862737 -0400] conn=3 op=19 RESULT err=0 tag=101 nentries=1 etime=0.0000391816

  • The workqueue is empty:
    (gdb) print tail_work_q
    $9 = (struct Slapi_work_q *) 0x0
    (gdb) print work_q_size
    $10 = 0

  • A worker took the operation but failed to decode it
    The next operation is op=20 and it is the only one on the connection;
    the attempt to decode it failed:
    (gdb) print c->c_ops->o_opid
    $25 = 20
    (gdb) print c->c_ops->o_next
    $26 = (struct op *) 0x0
    (gdb) print c->c_private->c_buffer
    $27 = 0x56058f2b5200 "0\201\231\002\001\024c\201\223\004&cn=IPA.TEST,cn=kerberos,dc=ipa,dc=test\n\001"

    (gdb) x/8a 0x56058f2b5200
    0x56058f2b5200: 0x8163140102998130 0x50493d6e63260493
    0x56058f2b5210: 0x632c545345542e41 0x72656272656b3d6e
    0x56058f2b5220: 0x70693d63642c736f 0x7365743d63642c61
    0x56058f2b5230: 0x200010a00010a74 0x1012c0102020001
    (gdb) print (void *) c->c_ops->o_tag
    $33 = (void *) 0xffffffffffffffff
    (gdb) print (void *) c->c_ops->o_ber->ber_tag
    $34 = (void *) 0xffffffffffffffff

    In conclusion:
    On conn=3, activity was detected.
    It was added to the work queue.
    The work queue is now empty.
    A worker took it but failed to decode it.
    It is as if, under that failure, the worker did not reset gettingber.

Metadata Update from @mreynolds:
- Issue set to the milestone: 1.3.7.0

6 years ago
  • DS (calling ber_get_next) failed to decode the received BER tag (it remained -1)

  • At the same time, the buffer counters look as if the failure was not detected by DS.
    Indeed, c_buffer_offset is not 0 (whereas it is initialized before calling ber_get_next)

    (gdb) print c->c_private->c_buffer_offset
    $36 = 156

  • According to DS's get_next_from_buffer, failure detection relies on specific values returned by ber_get_next (bytes_scanned and errno); a minimal sketch of that classification follows.
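A hedged sketch of the classification the caller has to make, assuming only what is stated above (tag, bytes_scanned and errno); it is an illustration of the idea, not the actual get_next_from_buffer code.

    /* Hedged sketch of the detection logic described above; illustrative only. */
    #include <errno.h>

    typedef enum { BER_NEED_MORE, BER_COMPLETE, BER_FATAL } ber_status_t;

    /* tag           : tag returned by the BER decoder (-1, i.e. LBER_DEFAULT, on failure)
     * bytes_scanned : how many bytes the decoder consumed from the buffer
     * saved_errno   : errno captured right after the decoder returned */
    static ber_status_t classify_ber_result(long tag, long bytes_scanned, int saved_errno)
    {
        if (tag != -1) {
            return BER_COMPLETE;                 /* a full request was decoded */
        }
        if (bytes_scanned == 0 &&
            (saved_errno == EWOULDBLOCK || saved_errno == EAGAIN)) {
            return BER_NEED_MORE;                /* benign: wait for more data */
        }
        /* Anything else is a real failure: the connection should be closed and
         * per-connection state such as gettingber must not be left set. */
        return BER_FATAL;
    }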

Next steps

  • preparing a new debug version to trace ber_get_next failures

Logs with connection debug level

[27/Oct/2017:02:29:24.782557063 -0400] conn=3 fd=66 slot=66 connection from local to /var/run/slapd-IPA-TEST.socket
[27/Oct/2017:02:29:24.786416372 -0400] conn=3 AUTOBIND dn="cn=Directory Manager"
[27/Oct/2017:02:29:24.786422046 -0400] conn=3 op=0 BIND dn="cn=Directory Manager" method=sasl version=3 mech=EXTERNAL
[27/Oct/2017:02:29:24.786451540 -0400] conn=3 op=0 RESULT err=0 tag=97 nentries=0 etime=0.0003189731 dn="cn=Directory Manager"
...
[27/Oct/2017:02:29:44.218066264 -0400] conn=3 op=70 SRCH base="cn=<host_fqdn>,cn=masters,cn=ipa,cn=etc,dc=ipa,dc=test" scope=0 filter="(objectClass=*)" attrs=ALL
[27/Oct/2017:02:29:44.220209326 -0400] conn=3 op=70 RESULT err=0 tag=101 nentries=1 etime=0.0002510529

[27/Oct/2017:02:29:44.176515077 -0400] - ERR - connection_release_nolock_ext - conn=3 fd=66 Attempt to release connection that is not acquired
[27/Oct/2017:02:29:44.180509033 -0400] - ERR - connection_release_nolock_ext - conn=3 fd=66 Attempt to release connection that is not acquired
[27/Oct/2017:02:29:44.182716954 -0400] - ERR - connection_release_nolock_ext - conn=3 fd=66 Attempt to release connection that is not acquired
[27/Oct/2017:02:29:44.194812989 -0400] - ERR - connection_release_nolock_ext - conn=3 fd=66 Attempt to release connection that is not acquired
[27/Oct/2017:02:29:44.202926411 -0400] - ERR - connection_release_nolock_ext - conn=3 fd=66 Attempt to release connection that is not acquired
[27/Oct/2017:02:29:44.204742175 -0400] - ERR - connection_release_nolock_ext - conn=3 fd=66 Attempt to release connection that is not acquired
[27/Oct/2017:02:29:44.217268795 -0400] - ERR - connection_release_nolock_ext - conn=3 fd=66 Attempt to release connection that is not acquired
[27/Oct/2017:02:29:44.232292428 -0400] - ERR - connection_release_nolock_ext - conn=3 fd=66 Attempt to release connection that is not acquired
[27/Oct/2017:02:29:45.234274777 -0400] - ERR - connection_release_nolock_ext - conn=3 fd=66 Attempt to release connection that is not acquired
  • A debug build confirms that a worker picked up the connection from the work queue

  • When the problem occurs, connection_read_operation exits without setting op->msgid and the message tag, meaning connection_read_operation jumped to 'done'. But there are many 'goto done;' statements.

  • Suspect that get_next_from_buffer failed but that disconnect_server_nomutex was then not called.
    Currently testing a fix. I will attach a patch; it is only a test patch.

As I read this it says 'if the connection doesn't have the information to proceed, disconnect it'?

Is that correct?

I think it would be interesting to know what state the connection is in when we jump to 'done'. Perhaps a way to debug this (forcefully) is to put a PR_ASSERT(op->msgid) before each 'goto done', because then we'll SIGABRT on the branch that has the logic fault?
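A minimal sketch of that suggestion, using illustrative stand-ins rather than the real connection_read_operation(): the same assertion before every early exit means a debug build aborts on exactly the branch that leaves the operation half-initialized.

    /* Hedged sketch of the PR_ASSERT idea above; the struct and the error
     * branches are illustrative stand-ins, not connection_read_operation(). */
    #include "prlog.h"               /* NSPR's PR_ASSERT (compiled out in optimized builds) */

    struct op_stub { long msgid; long tag; };

    static int read_operation_sketch(struct op_stub *op, int decode_failed, int timed_out)
    {
        int ret = -1;

        if (decode_failed) {
            PR_ASSERT(op->msgid != 0);   /* debug build aborts on the faulty branch */
            goto done;
        }
        if (timed_out) {
            PR_ASSERT(op->msgid != 0);   /* the same check before every early exit */
            goto done;
        }
        ret = 0;                         /* normal path: msgid and tag were filled in */

    done:
        return ret;
    }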

A few updates

  • the tentative fix did NOT fix the connection issue, nor did it provide details to understand the issue
  • a new debug build was tested and showed interesting data, although some of it is missing and new debug builds are needed
  • the interesting point is that some (all?) of the hanging ldapi connections entered turbo mode
    This mode dedicates a thread to handle a given connection, and the connection's activity is ignored (#48341) except by the dedicated thread.
  • logs are missing to know whether the connection exits turbo mode and, if not, why it remains in that situation

Next steps

  • new tests will be done with a standard 389-ds version, with and without turbo mode enabled, to confirm this track
  • analyze the current logs and prepare a new debug build focused on turbo mode

Interesting. The c_gettingber flag is reset when the connection is made readable and handed back to the listener. In turbo mode the connection will be continued without giving it back to the listener.
Question: is it possible that it incorrectly stays in turbo mode if there is no more data, and then for some reason doesn't get back to the listener?

There was a small code change in 1.3.6 for turbo mode (ticket #49193), but it seems not to change the logic, so the question is still why this does not happen in 1.3.5.

What if current_mode is changing between the two checks?

Actually, look at the patch.

I think it's missing

} else if (!current_mode) {
    new_mode = 0;
}

How would it change? It is a local variable and is set before the checks,
and new_mode is initialized to 0.

Well, it must have been there for a reason, if new_mode was set to 1 earlier. It's a really unclean piece of code, so I'm worried this change could be it.

Humour me and try it :)
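To make the exchange above concrete, here is a hedged sketch of this kind of toggle, with illustrative names and thresholds rather than the code actually touched by #49193. It shows both points being made: where the suggested '} else if (!current_mode)' branch would sit, and why it is redundant when new_mode is initialized to 0.

    /* Hedged sketch of the turbo-mode toggle being debated; illustrative only. */
    #define TURBO_ENTER_THRESHOLD 3            /* hypothetical: busy reads needed to enter turbo */

    static int reevaluate_turbo(int current_mode, int consecutive_busy_reads)
    {
        int new_mode = 0;                              /* initialized to "not turbo" */

        if (consecutive_busy_reads >= TURBO_ENTER_THRESHOLD) {
            new_mode = 1;                              /* busy enough: enter/stay in turbo */
        } else if (current_mode) {
            new_mode = 1;                              /* keep turbo a bit longer */
        } else if (!current_mode) {
            new_mode = 0;                              /* the branch suggested above; redundant
                                                        * here because new_mode starts at 0 */
        }
        return new_mode;
    }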

Here is the current status

  • The symptoms are:
    - KDC is hanging because DS does not respond to its requests
    - DS has some (sometimes all) KDC connections flagged getting-ber, but no workers are reading them
    - a hanging connection (gettingber) had activity detected and a worker worked on it
    - the worker was not able to read/decode the received request
    - the worker exited, leaving the connection with the gettingber flag set

    • When/where does it occur
      - It does not occur systematically
      - It occurs more frequently on "small" machines (2 CPUs)
      - It occurs on 1.3.7
      - It does not occur before 1.3.5.2. IMHO the fix for #48341 reveals a bug that may
      have been there for a long time
  • The problem occurs independently of:
    - nunc-stans enabled or not
    - connection being in turbo mode or not

  • The problem happens under the following conditions:
    - When activity is detected, the worker needs to do an IO read to start decoding the request
    (the beginning of the request was not in a previously read buffer)
    - When activity is detected, the worker does an IO read that fails with EAGAIN
    - DS then read_polls the connection, which again times out (1s)
    - Upon that timeout, gettingber should be reset. Either it is not, or it is set again.

In conclusion:
The problem occurs when the server fails to read the socket and receives an EAGAIN.
The read_poll then times out (despite the EAGAIN), and the handling of this specific condition leaves gettingber set (sketched below).
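A minimal sketch of that sequence, with illustrative stand-ins for the real 389-ds connection code: a non-blocking read returns EAGAIN, the follow-up 1 s poll times out, and the cleanup on that exit path has to reset gettingber, which is exactly what fails to happen here.

    /* Hedged sketch of the failing sequence summarized above; illustrative only. */
    #include <errno.h>
    #include <poll.h>
    #include <unistd.h>

    struct conn_stub { int fd; int gettingber; };

    static int read_request_sketch(struct conn_stub *c, char *buf, size_t len)
    {
        int ret = -1;

        c->gettingber = 1;                          /* a worker now owns this connection */

        ssize_t n = read(c->fd, buf, len);          /* non-blocking read */
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            struct pollfd pfd = { .fd = c->fd, .events = POLLIN };
            if (poll(&pfd, 1, 1000) == 0) {
                /* 1s read-poll timed out: this is the exit path where the flag
                 * must be cleared; leaving it set makes the listener skip the
                 * fd forever, which is the hang described in this ticket. */
                goto done;
            }
            n = read(c->fd, buf, len);              /* retry after the poll */
        }
        ret = (n > 0) ? 0 : -1;

    done:
        c->gettingber = 0;                          /* always reset before returning */
        return ret;
    }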

Next step:
still debugging+test patch

@tbordaz If we disable turbomode, does that correct it?

@firstyear, unfortunately not. The problem happens with and without turbo mode enabled.
IIRC it is more frequently reproduced with turbo mode enabled. Turbo mode is an accelerator, but not a required condition to reproduce.

When timeout on read there are two aspects:

  • allow to reevaluate turbo mode toggle on the connection
  • checking of an excessive number of timeout => closure on ioblocktimeout
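A hedged sketch of how those two aspects combine, assuming a simple accumulated-wait model rather than the actual nsslapd-ioblocktimeout implementation.

    /* Hedged sketch of the read-timeout handling mentioned above; illustrative only. */
    static int handle_read_timeout(int *waited_ms, int poll_interval_ms, int ioblocktimeout_ms)
    {
        *waited_ms += poll_interval_ms;      /* one more short read-poll expired */

        if (*waited_ms >= ioblocktimeout_ms) {
            return -1;                       /* excessive timeouts: close on ioblocktimeout */
        }
        /* Otherwise this is also the natural point to re-evaluate the turbo
         * mode toggle for the connection before polling again. */
        return 0;
    }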

@tbordaz just to get confirmation: you say it is independent of turbo and nunc-stans, but there are four combinations of these settings; does the problem happen in all combinations?

@lkrispen I think these two settings are unrelated. NS only manages accept() and pushing events to the workers for select(), whereas turbo is about keeping the event in the worker to call read() if it's busy, rather than putting it back into the connection table for select(). So I doubt this will have an effect.

@tbordaz There have been two ioblocktimeout changes in the last year: one to lower the default value, one to fix the IO timeouts with NS. Could these be related?

While adding new logs I hit a strange behavior. Upon timeout, while reading new data, the worker is in charge of resetting the gettingber flag before exiting.
The additional logs added around resetting gettingber were skipped (it was as if the worker exited without doing anything).
In fact they were not skipped: the worker was taking another branch, because the operation was flagged as a persistent search!

So an unread operation (timeout) was flagged as a persistent search. Quite unexpected.
Indeed, on the persistent search branch, gettingber is not reset.

Now, why this flag on the operation? I have no clue, but I will investigate some changes in the way operations are retrieved from the work queue and set in the pblock. I suspect we are using an incorrect pblock or operation pointer.
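To illustrate why a wrong pblock/operation pointer would produce exactly this symptom, here is a hedged sketch: slapi_pblock_get() and SLAPI_OPERATION are real Slapi APIs, but the helper, the conn_stub struct and the surrounding flow are illustrative stand-ins only.

    /* Hedged sketch of the branch described above; illustrative flow only. */
    #include "slapi-plugin.h"

    struct conn_stub { int gettingber; };                      /* illustrative stand-in */
    extern int op_is_persistent_search(Slapi_Operation *op);   /* hypothetical helper */

    static void finish_read_sketch(Slapi_PBlock *pb, struct conn_stub *c)
    {
        Slapi_Operation *op = NULL;

        /* If the wrong pblock field is read or written here (the kind of typo a
         * pblock rework such as #49097 could introduce), 'op' may describe the
         * wrong operation and carry flags that never belonged to this request. */
        slapi_pblock_get(pb, SLAPI_OPERATION, &op);

        if (op_is_persistent_search(op)) {
            /* Persistent-search branch: the connection stays with this thread,
             * so gettingber is intentionally not reset here. */
            return;
        }

        c->gettingber = 0;   /* normal branch: hand the connection back to the listener */
    }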

Metadata Update from @tbordaz:
- Custom field reviewstatus adjusted to review (was: None)

6 years ago

This bug is a side effect of https://pagure.io/389-ds-base/issue/49097

So this bug applies to 1.3.6 and later. That is why we did not reproduce it on 1.3.5.

Metadata Update from @mreynolds:
- Custom field reviewstatus adjusted to ack (was: review)

6 years ago

git push origin master

Counting objects: 6, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 1.13 KiB | 0 bytes/s, done.
Total 6 (delta 4), reused 0 (delta 0)
To ssh://git@pagure.io/389-ds-base.git
  bdcbf5b..1b2a2d6  master -> master

git push origin 389-ds-base-1.3.6

Counting objects: 6, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 1.11 KiB | 0 bytes/s, done.
Total 6 (delta 4), reused 0 (delta 0)
To ssh://git@pagure.io/389-ds-base.git
   23a09e1..628a927  389-ds-base-1.3.6 -> 389-ds-base-1.3.6

git push origin 389-ds-base-1.3.7

To ssh://git@pagure.io/389-ds-base.git
   913bc29..f209fea  389-ds-base-1.3.7 -> 389-ds-base-1.3.7

@tbordaz I knew I was the cause of this! Doh! Great work finding this, and the patch is really simple. Thanks for all your investigation on this.

@firstyear there is no concern regarding #49097; making pblock a real black-box structure was a long-pending improvement and a great one. The review process missed the typo (the faulty line was read many times without anyone noticing it), but Coverity caught it.
So if we have something to improve, it is to do a better job on the Coverity side.

Now, to be honest, this part of the code is so complex (turbo mode, more data, persistent search, timeout, ...) that there is a chance that, when reviewing a Coverity report on this specific line, we would also have missed it ;)

@tbordaz Do you think it is worth adding additional debug information for connection logging? Like the one you used in the temporary debug patch?

Metadata Update from @tbordaz:
- Custom field rhbz adjusted to https://bugzilla.redhat.com/show_bug.cgi?id=1514033

6 years ago

Well, I'm hoping that when I finally get some time I'll really simplify that connection code a bit, and improve the quality by making it state-machine based, so we can revisit this soon :)

Metadata Update from @tbordaz:
- Issue close_status updated to: fixed
- Issue status updated to: Closed (was: Open)

6 years ago

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/2469

If you want to receive further updates on the issue, please navigate to the GitHub issue
and click on the Subscribe button.

Thank you for understanding. We apologize for any inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: fixed)

3 years ago
