At the end of a replication session the URL of the supplier is added to (or updated in) the consumer RUV. This also happens if the replication session is terminated because supplier and consumer have different database generation IDs.
In this situation the consumer will send clients a referral list that includes masters holding different data; if the client chooses one of these to apply an update, that update will never be replicated to the consumer.
attachment 0001-ticket-48995-WIP-only-set-referrals-in-the-RUV-if-da.patch
Metadata Update from @lkrispen: - Issue set to the milestone: 1.3.5.14
Metadata Update from @mreynolds: - Custom field component reset (from Replication - General) - Custom field reviewstatus adjusted to review - Issue close_status updated to: None - Issue set to the milestone: 1.3.7.0 (was: 1.3.5.14)
The patch looks good to me; just for safety, would you check that local_gen/suppl_gen are not NULL before the logging/strcasecmp?
What happens if all the masters differ? Would we never send a referral?
Say I have two masters, A and B, and A is out of sync with B, specifically behind.
When I reinit B -> A, A's data generation will not match B's. As a result, A will not present a referral to B during the sync process. When B -> A completes, B and A will match, so all good from there, but it may break the referral process at that point.
This may also affect some other conditions (online db2index: if B -> A and you reindex A while B is ahead of A, it won't set a referral either).
So I think we have to set referrals to servers whose data generation is the same as or greater than our own, or, in the case of only two masters, always set the referral regardless.
The suggested fix affects only the end of a replication session for an incremental update request; the setting of referrals after a total update is not affected.
We need to avoid situations like this: you have e.g. masters M1 and M2 and consumer C. If you reimport M1 and then init C, M1 and C have the same data generation. If, in the current implementation, M2 connects to C, it will get the different-db-generation error, but C's referrals will now also contain M2, and a client of C could be redirected to either M1 or M2.
Right. So this only affects consumers. This won't affect MMR?
No, but masters usually do not send referrals, only when temporarily read-only.
What I was saying is that a total update is not affected, and for incremental updates, as soon as the repl gen matches again, the referrals will be updated in the next session.
Okay, I'm happy to ack this then if @tbordaz is okay too :)
While I think it is unlikely that the supplier and local RUV would be missing the repl_gen, we should still check for NULL before dereferencing it. Then you have my ack!
Metadata Update from @mreynolds: - Custom field component adjusted to None
Metadata Update from @lkrispen: - Issue assigned to lkrispen
will update the patch
Well, looks like I didn't.
And there is a similar issue with csngen adjustment: we always do it for a startRepl request, even if we have different db generations. Right now we decide this only on the supplier, if we have the same gen after examining the consumer RUV. It should already have been handled on the consumer.
@lkrispen - is this patch still applicable? If so please push it
Metadata Update from @mreynolds: - Issue set to the milestone: 1.4.2 (was: 1.3.7.0)
Metadata Update from @vashirov: - Issue set to the milestone: 1.4.3 (was: 1.4.2)
Metadata Update from @mreynolds: - Custom field rhbz adjusted to https://bugzilla.redhat.com/show_bug.cgi?id=1859228
Issue linked to Bugzilla: Bug 1859228
389-ds-base is moving from Pagure to Github. This means that new issues and pull requests will be accepted only in 389-ds-base's github repository.
This issue has been cloned to Github and is available here: - https://github.com/389ds/389-ds-base/issues/2054
If you want to receive further updates on the issue, please navigate to the GitHub issue and click the subscribe button.
Thank you for understanding. We apologize for all inconvenience.
Metadata Update from @spichugi: - Issue close_status updated to: wontfix - Issue status updated to: Closed (was: Open)