#48414 cleanallruv should also clean the ruvs in the replication agreement
Closed: wontfix None Opened 8 years ago by lkrispen.

Looks like cleanallruv can leave a replica ID in the RUV of a replication agreement.

I think this is fine and can be ignored. That replica ID should be cleaned up on the next update that the agreement processes. I will investigate to make sure.

The agreement RUV does get cleaned up after some time (or a restart). It does not seem to cause any problems while it is present. The consumer RUV (the RUV stored in the agreement) is only used for changelog purging, and cleanallruv already does the purge at the end of the task.
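For reference, the consumer RUV kept on an agreement entry is the multi-valued `nsds50ruv` attribute, with elements like `{replica 7 ldap://host:389} <mincsn> <maxcsn>`. A minimal sketch of checking whether a cleaned RID still lingers in those values (the regex and the helper name are my own illustration, not code from the server, which is written in C):

```python
import re

# An nsds50ruv element looks like one of:
#   {replicageneration} 4e6a27ca000000010000
#   {replica 7 ldap://host:389} 4e6a27ca000000070000 4e6a2a9c000100070000
_RID_RE = re.compile(r"^\{replica (\d+) ")

def rid_in_ruv(ruv_values, rid):
    """Return True if the given replica ID still appears in the RUV values."""
    for value in ruv_values:
        m = _RID_RE.match(value)
        if m and int(m.group(1)) == rid:
            return True
    return False

# Example: RID 7 was cleaned everywhere except this agreement's RUV.
agreement_ruv = [
    "{replicageneration} 4e6a27ca000000010000",
    "{replica 1 ldap://m1.example.com:389} 4e6a27ca000000010000 4e6a2a9c000100010000",
    "{replica 7 ldap://m2.example.com:389} 4e6a27ca000000070000 4e6a2a9c000100070000",
]
print(rid_in_ruv(agreement_ruv, 7))   # True: RID 7 still lingers in the agreement
```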

Can this ticket be closed? Was there anything else I should look at?

It will be removed by an update if everything works fine, but what about the cases
- where a replica cannot be reached, temporarily or forever. In force mode we do not wait for the replica to come online, and we clean the RUV in the database and changelog, but in the agreement it will be kept until the replica can be reached again
- where the server crashes before the update: will the RUV be recreated from the dse.ldif, with the old RID still in the consumer RUV?

If it is not too hard, I still think it would be nice to clean the consumer RUV.
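Cleaning it would essentially amount to dropping the matching element from the agreement's `nsds50ruv` values before they are written back. A rough, hypothetical sketch of that filtering step (my own illustration, not the actual server code):

```python
import re

# Matches RUV elements of the form "{replica <rid> ldap://host:port} ..."
_RID_RE = re.compile(r"^\{replica (\d+) ")

def clean_rid_from_ruv(ruv_values, rid):
    """Return the RUV values with any element for the given replica ID removed."""
    kept = []
    for value in ruv_values:
        m = _RID_RE.match(value)
        if m and int(m.group(1)) == rid:
            continue  # drop the cleaned RID's element
        kept.append(value)
    return kept

# Usage: remove the element for the cleaned RID 7.
ruv = [
    "{replicageneration} 4e6a27ca000000010000",
    "{replica 7 ldap://m2.example.com:389} 4e6a27ca000000070000 4e6a2a9c000100070000",
]
print(clean_rid_from_ruv(ruv, 7))
```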

d46a0f6..11a5b1e master -> master
commit 11a5b1e
Author: Mark Reynolds mreynolds@redhat.com
Date: Tue Sep 6 10:39:46 2016 -0400

Metadata Update from @mreynolds:
- Issue assigned to mreynolds
- Issue set to the milestone: 1.3.6 backlog

7 years ago

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/1745

If you want to receive further updates on the issue, please navigate to the GitHub issue
and click on the Subscribe button.

Thank you for understanding. We apologize for any inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: Fixed)

3 years ago
