#49864 RFE: Error (18) Replication error acquiring replica: Incremental update transient error. Backing off, will retry update later. (transient error) needs better description
Closed: wontfix 5 years ago Opened 5 years ago by mreynolds.

Description of problem:
Customer has replication stuck in "last update status: Error (18) Replication
error acquiring replica: Incremental update transient error.  Backing off, will
retry update later. (transient error)"

Version-Release number of selected component (if applicable):


How reproducible:
Same status for hours

[/root]# ipa-replica-manage list -v `hostname`
Directory Manager password:

nsc-prd-ipa-151.ipa.ba.ssa.gov: replica
  last init status: None
  last init ended: 1970-01-01 00:00:00+00:00
  last update status: Error (0) Replica acquired successfully: Incremental
update succeeded
  last update ended: 2018-02-12 17:05:39+00:00
ssc-prd-ipa-073.ipa.ba.ssa.gov: replica
  last init status: None
  last init ended: 1970-01-01 00:00:00+00:00
  last update status: Error (18) Replication error acquiring replica:
Incremental update transient error.  Backing off, will retry update later.
(transient error)
  last update ended: 1970-01-01 00:00:00+00:00
[/root]# ipa-replica-manage re-initialize --from ssc-prd-ipa-073.ipa.ba.ssa.gov
Directory Manager password:

Update in progress, 3 seconds elapsed
Update succeeded

[/root]# ipa-replica-manage list -v `hostname`
Directory Manager password:

nsc-prd-ipa-151.ipa.ba.ssa.gov: replica
  last init status: None
  last init ended: 1970-01-01 00:00:00+00:00
  last update status: Error (0) Replica acquired successfully: Incremental
update succeeded
  last update ended: 2018-02-12 17:05:39+00:00
ssc-prd-ipa-073.ipa.ba.ssa.gov: replica
  last init status: None
  last init ended: 1970-01-01 00:00:00+00:00
  last update status: Error (18) Replication error acquiring replica:
Incremental update transient error.  Backing off, will retry update later.
(transient error)
  last update ended: 1970-01-01 00:00:00+00:00

[/root]# ipa-replica-manage re-initialize --from ssc-prd-ipa-073.ipa.ba.ssa.gov --force
Directory Manager password:

Update in progress, 3 seconds elapsed
Update succeeded

[/root]# ipa-replica-manage list -v `hostname`
Directory Manager password:

nsc-prd-ipa-151.ipa.ba.ssa.gov: replica
  last init status: None
  last init ended: 1970-01-01 00:00:00+00:00
  last update status: Error (0) Replica acquired successfully: Incremental
update succeeded
  last update ended: 2018-02-12 17:05:39+00:00
ssc-prd-ipa-073.ipa.ba.ssa.gov: replica
  last init status: None
  last init ended: 1970-01-01 00:00:00+00:00
  last update status: Error (18) Replication error acquiring replica:
Incremental update transient error.  Backing off, will retry update later.
(transient error)
  last update ended: 1970-01-01 00:00:00+00:00
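
For reference, the strings ipa-replica-manage reports here come straight from the
replication agreement entries under cn=config. A minimal python-ldap sketch for
reading them directly (assumes python-ldap is installed; the URI and credentials
below are placeholders, not values from this ticket):

# Read the raw per-agreement status attributes that ipa-replica-manage
# summarizes above.  URI, bind DN and password are placeholders.
import ldap

conn = ldap.initialize("ldap://localhost:389")
conn.simple_bind_s("cn=Directory Manager", "password")

# Replication agreements live under cn=config with this objectclass.
results = conn.search_s(
    "cn=config",
    ldap.SCOPE_SUBTREE,
    "(objectClass=nsds5replicationAgreement)",
    ["nsds5replicaLastUpdateStatus", "nsds5replicaLastUpdateEnd"],
)

for dn, attrs in results:
    attrs = {k.lower(): v for k, v in attrs.items()}  # normalize attribute name case
    status = attrs.get("nsds5replicalastupdatestatus", [b"(none)"])[0].decode()
    ended = attrs.get("nsds5replicalastupdateend", [b"(none)"])[0].decode()
    print("%s\n  last update status: %s\n  last update ended: %s" % (dn, status, ended))

conn.unbind_s()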

Metadata Update from @mreynolds:
- Custom field rhbz adjusted to https://bugzilla.redhat.com/show_bug.cgi?id=1544973

5 years ago

Metadata Update from @mreynolds:
- Custom field component adjusted to None
- Custom field origin adjusted to None
- Custom field reviewstatus adjusted to None
- Custom field type adjusted to None
- Custom field version adjusted to None
- Issue set to the milestone: 1.4.0 (was: 0.0 NEEDS_TRIAGE)

5 years ago

Metadata Update from @mreynolds:
- Issue assigned to mreynolds

5 years ago

Proposing changing this agmt status message from:

Incremental update transient error.  Backing off, will retry update later.

To

Backing off, will retry update later.

This would change the output from

last update status: Error (18) Replication error acquiring replica: Incremental update transient error.  Backing off, will retry update later. (transient error)

To:

last update status: Error (18) Replication error acquiring replica: Backing off, will retry update later. (transient error)

It's slightly friendlier and less alarming. I am open to other suggestions, but without changing the agmt status format there isn't much we can do to make it less alarming. It's still going to say "Error (18) Replication error acquiring replica: NEW_MESSAGE"

This will also impact the docs about replication status errors.
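
As an aside, a consumer that classifies these status lines by the numeric code and the
trailing "(transient error)" marker, rather than by the free-text message, is unaffected
by the rewording. A hypothetical sketch of such a check (not code from 389-ds-base):

# Hypothetical classifier: keys on the numeric code and the trailing
# "(transient error)" marker, so the proposed message change is a no-op for it.
import re

STATUS_RE = re.compile(r"Error \((\d+)\)\s*(.*)")

def is_transient(status_line):
    m = STATUS_RE.search(status_line)
    if not m:
        return False
    code, rest = int(m.group(1)), m.group(2)
    return code != 0 and "(transient error)" in rest

old = ("Error (18) Replication error acquiring replica: Incremental update "
       "transient error.  Backing off, will retry update later. (transient error)")
new = ("Error (18) Replication error acquiring replica: Backing off, "
       "will retry update later. (transient error)")

assert is_transient(old) and is_transient(new)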

Metadata Update from @mreynolds:
- Issue set to the milestone: 1.3.9 (was: 1.4.0)

5 years ago

commit bb335e0 Master

181ff36..b757a07 389-ds-base-1.3.9 -> 389-ds-base-1.3.9

Metadata Update from @mreynolds:
- Issue close_status updated to: fixed
- Issue status updated to: Closed (was: Open)

5 years ago

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/2923

If you want to receive further updates on the issue, please navigate to the github issue
and click on the subscribe button.

Thank you for understanding. We apologize for any inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: fixed)

3 years ago
