#49818 For a replica, the bindDNGroup should be fetched the first time it is used, not when the replica is started
Closed: wontfix 5 years ago Opened 5 years ago by tbordaz.

Issue Description

When a replica is created, the time of the last_group_check is set to the current time.
So if a replica contains a nsds5replicabinddngroup, this group will only be fetched after a delay of nsDS5ReplicaBindDnGroupCheckInterval.

So during the period [replica_creation, replica_creation + nsDS5ReplicaBindDnGroupCheckInterval], any incoming replication connection will fail with NSDS50_REPL_PERMISSION_DENIED, even if the group actually contains the bound DN.
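The interval-gated check described above can be modeled with a small, self-contained Python sketch. This is not the server's actual C code; `Replica`, `last_group_check`, `maybe_refresh_group`, and `acquire` are illustrative names following the description in this report:

```python
import time

class Replica:
    """Toy model of the interval-gated bindDNGroup check.

    Mirrors the reported behavior: last_group_check is initialized at
    creation time, so the group is not fetched until one full
    check_interval has elapsed. Names are illustrative, not the server's.
    """

    def __init__(self, check_interval, fetch_group, now=time.monotonic):
        self.check_interval = check_interval
        self.fetch_group = fetch_group        # callable returning the current group members
        self.now = now
        self.last_group_check = now()         # set at creation: this is the reported problem
        self.group = set()                    # empty until the first refresh

    def maybe_refresh_group(self):
        # Refetch only when the last fetch is older than the interval.
        if self.now() - self.last_group_check >= self.check_interval:
            self.group = set(self.fetch_group())
            self.last_group_check = self.now()

    def acquire(self, bind_dn):
        self.maybe_refresh_group()
        return bind_dn in self.group          # False -> NSDS50_REPL_PERMISSION_DENIED

# Simulated clock so the failure window is easy to demonstrate.
clock = [0.0]
replica = Replica(check_interval=60,
                  fetch_group=lambda: {"cn=repl_mgr"},
                  now=lambda: clock[0])

clock[0] = 30   # inside [creation, creation + interval]
print(replica.acquire("cn=repl_mgr"))   # False: group not fetched yet

clock[0] = 61   # past the interval
print(replica.acquire("cn=repl_mgr"))   # True: group finally fetched
```

Even though `cn=repl_mgr` is in the group the whole time, the bind is rejected until the first interval has elapsed, which is exactly the window described above.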

On the supplier side we can see a message like

[29/Jun/2018:17:21:58.943439172 +0200] - ERR - NSMMReplicationPlugin - acquire_replica - agmt="cn=meTo<server_fqdn>" (<server>:389): Unable to acquire replica: permission denied. The bind dn "" does not have permission to supply replication updates to the replica. Will retry later.

Package Version and Platform

Any version

Steps to reproduce

  1. Install an IPA server replica, or
  2. Create a replica with a bind DN group and check interval, and verify that for the first check_interval period all replication sessions fail

Actual results

Replication sessions fail during the first check interval.

Expected results

They should not fail.


Metadata Update from @tbordaz:
- Custom field component adjusted to None
- Custom field origin adjusted to IPA
- Custom field reviewstatus adjusted to None
- Custom field type adjusted to None
- Custom field version adjusted to None

5 years ago

Metadata Update from @tbordaz:
- Issue assigned to tbordaz

5 years ago

A workaround is to set nsDS5ReplicaBindDnGroupCheckInterval to a small value (e.g. 3s) so that the group DN is fetched quickly, then reset it to a more appropriate value (e.g. 60s).
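The workaround can be applied with an ldapmodify operation against the replica entry. This is a hedged sketch: the replica entry DN depends on your suffix (shown here for dc=example,dc=com), and the interval should be restored to its normal value once the group has been fetched:

```ldif
dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
changetype: modify
replace: nsDS5ReplicaBindDnGroupCheckInterval
nsDS5ReplicaBindDnGroupCheckInterval: 3
```

After the first replication session has succeeded, apply the same modify with the value set back to 60.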

When the bindDNGroup was implemented, it was a deliberate choice not to listen for group changes and not to always re-evaluate the group, since this could be a large overhead.

Another option to fix it could be to rebuild the bind DN group if authentication fails, and then retry.
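The "rebuild on failure and retry" idea can be sketched in a few lines of self-contained Python. This is illustrative only, not the server's implementation; `acquire_with_retry` and `fetch_group` are hypothetical names:

```python
def acquire_with_retry(group, fetch_group, bind_dn):
    """Sketch of the proposal: if authorization fails, refetch the bind DN
    group once and retry, instead of waiting out the full check interval.
    Returns (authorized, possibly_updated_group)."""
    if bind_dn in group:
        return True, group
    # Authorization failed: the cached group may be stale. Refetch once.
    group = set(fetch_group())
    return bind_dn in group, group

# Group as cached at replica creation: the member joined only afterwards.
cached = set()
ok, cached = acquire_with_retry(cached, lambda: {"cn=repl_mgr"}, "cn=repl_mgr")
print(ok)   # True: the refetch picked up the new member immediately
```

A nice property of this approach is that the extra fetch only happens on a would-be failure, so a stable group adds no overhead to successful sessions.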

@lkrispen thanks for your feedback. I fully agree it looks like overkill to listen for group changes.

I tried to reproduce the bug with a test case and surprisingly had difficulties. First, lib389 does some magic, so it took me time to realize the config was not what I was expecting.
The second reason is that my first analysis was wrong.

Actually, the group is fetched at replica creation/update, so we are good. The problem in IPA is that the member joins the group after the replica creation (TBC), so the fetched group is not up to date for the first replication sessions.

Now I think this part of the code could be improved, so that the group is not fetched at replica creation but the first time it is needed (at replication session start). I also like the idea of fetching the group on authentication failure. However, authentication succeeding against an "old" group is also a concern. Should we systematically fetch the group? I do not think so. IMHO such a group should be very stable, and most of the time fetching it is a waste of time.

So I feel that we may keep this ticket, but change when the fetch is done.

After additional investigation, fetching of the bindDnGroup is working as designed.
The issue I would like to report in this ticket is that fetching is done at replica startup/creation and then on demand (during replication session start), on the condition that the last fetch is older than bindDnGroupCheckInterval.

If, at startup, the group does not contain the replica_mgr, no replication session will succeed until the bindDnGroupCheckInterval delay has elapsed.

For FreeIPA, it is a common scenario that for the first 60 seconds (bindDnGroupCheckInterval) no incoming replication session succeeds.

Metadata Update from @mreynolds:
- Issue set to the milestone: 1.3.7.0

5 years ago

Metadata Update from @mreynolds:
- Custom field rhbz adjusted to https://bugzilla.redhat.com/show_bug.cgi?id=1598478

5 years ago

Metadata Update from @spichugi:
- Custom field reviewstatus adjusted to ack (was: None)

5 years ago

master -> 4206d27
df357f5..19d2bcb 389-ds-base-1.3.8 -> 389-ds-base-1.3.8
c907bca..244c940 389-ds-base-1.3.7 -> 389-ds-base-1.3.7

Metadata Update from @tbordaz:
- Issue close_status updated to: fixed
- Issue status updated to: Closed (was: Open)

5 years ago

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/2877

If you want to receive further updates on the issue, please navigate to the github issue
and click on subscribe button.

Thank you for understanding. We apologize for any inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: fixed)

3 years ago
