#5479 test ipa-restore with at least 2 promoted replicas
Closed: Invalid. Opened 8 years ago by pvoborni.

And fix it if it fails.

I'm afraid that it might fail during the disable_agreements phase, because the code modifies the replication agreements directly.


Seems to run fine with schemes B <-> A <-> C, with A being the master and B being the restored replica; and B <-> A <-> C <-> D, with A being the master and C being the restored replica. Tested with CAs on all systems.

However, ipa_restore.py:504 throws an exception if the patch from freeipa-devel is not included. This error does not block the command from finishing successfully, though.

Before we close the ticket, I would like to know Ludwig's opinion on ipa-restore's handling of replication agreements.

There is no change since 4.2:

ipa-restore will:

  • disable replication agreements between the restored replica and the ones connected to it, using the 'old' method: ReplicationManager.disable_agreement(host), which modifies the replication agreement entry directly
  • after the restore, the admin is encouraged to re-initialize each master from the restored server
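
Under the hood, the 'old' method amounts to a direct modify of the agreement entry under cn=config. A minimal sketch of what such a modify looks like (the agreement name and suffix in the DN below are illustrative, not taken from a real deployment):

```ldif
# Disable one replication agreement directly in cn=config.
# "meToreplica2.example.com" and the dc=example,dc=com suffix
# are example values only.
dn: cn=meToreplica2.example.com,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
changetype: modify
replace: nsds5ReplicaEnabled
nsds5ReplicaEnabled: off
```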

How does it behave if some master is reinitialized and then some update happens on not-reinitialized server?

Does the re-initialized server refuse to replicate with not-reinitialized servers?

Re-initialize in ipa-replica-manage also enables both replication agreements prior to re-initialization. How does that work together with managed topology? E.g. if we would like to initiate the reinitialization using ipa topologysegment-reinitialize $SUFFIX $SEGMENT --left|--right. If the agreement was disabled by setting nsds5ReplicaEnabled to 'off', then this info is not reflected in the topology segment, and even if it were, the same attribute in the topology segment is read-only in the API, so it can't be changed by the admin.

Replying to [comment:4 pvoborni]:

Does the re-initialized server refuse to replicate with not-reinitialized servers?

No. Whether two servers are able to replicate with each other depends only on the database generation ID. As long as databases are created by initializing from another server, this is the case.
Only if a database is created by an LDIF import without replication metadata is a new generation ID generated.
BUT: if one or more servers are restored and some are not, it will depend on the age of the backup used for the restore; if a non-restored server is not able to find the starting CSN in its changelog (e.g. if it was trimmed), replication can fail.
BUT 2: the other problem, if not all servers are restored, is that there could be unexpected data states. E.g. if you have A <--> B <--> C in sync, take a backup on A, then add a new entry (E) on A, it will be replicated to B and C. Now restore A from the backup and initialize B. When replication restarts, C will find the add of (E) in its changelog and replicate it back to B and to A.
So, if the goal is to have the state of the backup on all servers, all servers need to be reinitialized.
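
The "BUT 2" scenario can be illustrated with a toy model: plain Python sets stand in for the server databases and a list stands in for C's changelog. This sketches the data flow only; it is not 389-ds code.

```python
# Toy model of the BUT 2 scenario: A <--> B <--> C in sync, backup on A,
# then an entry added after the backup sneaks back in via C's changelog.
def simulate():
    data = {"A": {"x"}, "B": {"x"}, "C": {"x"}}  # all servers in sync
    backup_a = set(data["A"])                    # backup taken on A
    for server in data:                          # entry E added on A and
        data[server].add("E")                    # replicated to B and C
    changelog_c = [("add", "E")]                 # C remembers the add
    data["A"] = set(backup_a)                    # restore A from the backup
    data["B"] = set(data["A"])                   # re-initialize B from A
    for op, entry in changelog_c:                # replication restarts: C
        if op == "add":                          # replays its changelog
            for server in ("A", "B"):            # back to B and A
                data[server].add(entry)
    return data

# E is back on all three servers despite restoring A and re-initializing B.
print(simulate())
```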

How does it behave if some master is reinitialized and then some update happens on not-reinitialized server?

The change will be accepted on the non-initialized server and stored in its changelog; it will then try to replicate this change to the other servers, including the initialized ones.

Re-initialize in ipa-replica-manage also enables both replication agreements prior to re-initialization. How does that work together with managed topology? E.g. if we would like to initiate the reinitialization using ipa topologysegment-reinitialize $SUFFIX $SEGMENT --left|--right. If the agreement was disabled by setting nsds5ReplicaEnabled to 'off', then this info is not reflected in the topology segment, and even if it were, the same attribute in the topology segment is read-only in the API, so it can't be changed by the admin.

The nsds5ReplicaEnabled and nsds5BeginReplicaRefresh attributes are managed, but not restricted, so they can be changed directly in cn=config or by modifying the segment. That's why the old method still works. For direct changes, the change is not propagated to the segment, which probably should be improved. If you enable the agreement via the segment, it is visible:

dn: cn=ca-repair,cn=ipaca,cn=topology,cn=ipa,cn=etc,dc=abc,dc=idm,dc=lab,dc=eng,dc=brq,dc=redhat,dc=com
objectClass: iparepltoposegment
objectClass: top
ipaReplTopoSegmentLeftNode: vm-192.abc.idm.lab.eng.brq.redhat.com
cn: ca-repair
ipaReplTopoSegmentDirection: both
ipaReplTopoSegmentRightNode: vm-072.abc.idm.lab.eng.brq.redhat.com

# ldapmodify -h vm-192.abc.idm.lab.eng.brq.redhat.com -p 389 -D "cn=directory manager" -w secret123
dn: cn=ca-repair,cn=ipaca,cn=topology,cn=ipa,cn=etc,dc=abc,dc=idm,dc=lab,dc=eng,dc=brq,dc=redhat,dc=com
changetype: modify
replace: nsds5ReplicaEnabled;left
nsds5ReplicaEnabled;left: on

dn: cn=ca-repair,cn=ipaca,cn=topology,cn=ipa,cn=etc,dc=abc,dc=idm,dc=lab,dc=eng,dc=brq,dc=redhat,dc=com
nsds5ReplicaEnabled;left: on
objectClass: iparepltoposegment
objectClass: top
ipaReplTopoSegmentLeftNode: vm-192.abc.idm.lab.eng.brq.redhat.com
cn: ca-repair
ipaReplTopoSegmentDirection: both
ipaReplTopoSegmentRightNode: vm-072.abc.idm.lab.eng.brq.redhat.com

If the attribute is read-only in the API, then the "old" method needs to be used, or it would have to be changed to behave similarly to "reinitialize".

But if the "old" method still works, maybe we want to change this only after direct mods in cn=config are reflected in the segment.

Thank you Ludwig,

IMO the goal is to have the restored state on all servers. But looking more into the ipa_restore.disable_agreements method, I see that it disables all agreements in the topology, which should effectively prevent tainting re-initialized servers from not-yet-initialized ones, so no issue here.

But I'll open tickets for:

For direct changes the change is not propagated to the segment, which probably should be improved

And for

it would have to be changed to behave similar as "reinitialize"

So that it can be done from the API. Maybe we should implement a script which would do the work, because the servers need to be reinitialized in the correct order to reflect the topology.

For direct changes the change is not propagated to the segment, which probably should be improved

Thinking about it again, I am no longer convinced that it is useful:

The change to nsds5BeginReplicaRefresh is temporary and removed after init, and there is some effort to clean it from the segment when the init was done via the ipa command, so propagating the temporary state to the segment is probably not needed.

The change of nsds5ReplicaEnabled would be good to see in the segment, but if the disabling is done directly in the agreement object, replication will be disabled, and even if the segment is updated in the postop phase, it will probably not be replicated unless there is another active agreement on this server; so it could only be visible in the segment if it is queried on that specific server.

True. I think this whole effort would need the topology status functionality, so we know that we are not receiving updates from some parts (all, in this case) of the topology.

But for the case where we want to re-initialize the whole topology: would it be possible to do something like the following using managed topology?

Topology: A <-> B <-> C

  1. A is restored, all agreements are disabled
  2. enable A -> B, reinitialize B from A, enable B -> A
  3. enable B -> C, reinitialize C from B, enable C -> B

Apply the same principle to more complex topologies.
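
The per-segment scheme above amounts to a breadth-first walk of the topology graph starting from the restored server. A minimal sketch of computing that order (the segment representation and hostnames are hypothetical; a real tool would wrap the enable and ipa topologysegment-reinitialize calls around each pair):

```python
from collections import deque

def reinit_order(segments, restored):
    """Return (source, target) pairs in the order the targets should be
    re-initialized, walking the topology breadth-first from the restored
    server. `segments` is an iterable of (left, right) hostname pairs."""
    neighbors = {}
    for left, right in segments:
        neighbors.setdefault(left, set()).add(right)
        neighbors.setdefault(right, set()).add(left)
    order, seen, queue = [], {restored}, deque([restored])
    while queue:
        source = queue.popleft()
        # sorted() only to make the order deterministic for equal choices
        for target in sorted(neighbors.get(source, ())):
            if target not in seen:
                seen.add(target)
                order.append((source, target))
                queue.append(target)
    return order

# Topology A <-> B <-> C, restored server A:
print(reinit_order([("A", "B"), ("B", "C")], "A"))
# [('A', 'B'), ('B', 'C')]
```

For each emitted pair, the tool would enable source -> target, reinitialize target from source, then enable target -> source, exactly as in the numbered steps above.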

I assume that enabling the direction B -> A will be replicated to B by the agreement A -> B, so it could be possible to propagate the changes.

Another question is whether it is worth writing a tool which would do this in an automated/semi-automated manner.

The discussion is reflected in ticket #5543.

Metadata Update from @pvoborni:
- Issue assigned to lkrispen
- Issue set to the milestone: FreeIPA 4.3

7 years ago
