#2772 server to server sasl mapping fails on replica on F-17
Closed: Invalid. Opened 11 years ago by rmeggins.

I'm using F-17 with a pre-release 389-ds-base-1.2.11.4

successful ipa-server-install on master, and ipa-replica-install on replica.

kinit admin@TESTDOMAIN.COM works fine

ipa user-add fails on replica - operations error

the replica attempts to make a DNA range request to the master - this fails because the sasl mapping on the master is incorrect

There are two matching entries for krbPrincipalName on the master - sasl expects only 1

dn: krbPrincipalName=ldap/f17x8664n2.testdomain.com@TESTDOMAIN.COM,cn=TESTDOMAIN.COM,cn=kerberos,dc=testdomain,dc=com

dn: krbprincipalname=ldap/f17x8664n2.testdomain.com@TESTDOMAIN.COM,cn=services,cn=accounts,dc=testdomain,dc=com

The sasl mappings are:
dn: cn=Full Principal,cn=mapping,cn=sasl,cn=config
nsSaslMapRegexString: (.*)@(.*)
nsSaslMapBaseDNTemplate: dc=testdomain,dc=com
nsSaslMapFilterTemplate: (krbPrincipalName=\1@\2)

dn: cn=Name Only,cn=mapping,cn=sasl,cn=config
nsSaslMapRegexString: ^[^:@]+$
nsSaslMapBaseDNTemplate: dc=testdomain,dc=com
nsSaslMapFilterTemplate: (krbPrincipalName=&@TESTDOMAIN.COM)
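As a rough illustration of why two matching entries break the bind (this is a sketch, not the actual 389 DS code; `apply_sasl_map` is a hypothetical helper): the mapping applies the regex to the SASL identity and substitutes the capture groups into the filter template, then searches under the base DN template. SASL mapping requires exactly one entry to match.

```python
import re

# Hypothetical re-implementation of how a 389 DS SASL mapping turns an
# authenticated principal into an LDAP search (base DN, filter).
def apply_sasl_map(principal, regex, base_template, filter_template):
    m = re.match(regex, principal)
    if m is None:
        return None
    # In the filter template, '&' stands for the whole matched identity
    # and \1, \2, ... refer to the regex capture groups.
    filt = filter_template.replace("&", m.group(0))
    for i, grp in enumerate(m.groups(), start=1):
        filt = filt.replace("\\%d" % i, grp)
    return (base_template, filt)

base, filt = apply_sasl_map(
    "ldap/f17x8664n2.testdomain.com@TESTDOMAIN.COM",
    r"(.*)@(.*)",
    "dc=testdomain,dc=com",
    r"(krbPrincipalName=\1@\2)",
)
print(base, filt)
# The resulting subtree search under dc=testdomain,dc=com matches *two*
# entries here (one under cn=kerberos, one under cn=services), so the
# SASL bind fails: the mapping must resolve to exactly one entry.
```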

Not sure when/how the extra ldap krb principal was added, but it looks like nsSaslMapBaseDNTemplate needs to be changed for IPA to specify either cn=services,cn=accounts,dc=testdomain,dc=com or cn=kerberos,dc=testdomain,dc=com.


No, we cannot restrict the location, because we have principals in both locations.
During install we initially create the ldap principal under cn=kerberos, as that's where the krb5 utilities place it; later we 'move' the object under cn=services.

The bug is the fact that you have two entries; there should really be only one.
Do you have the access log for the whole install? Can you check when the one in cn=services was created, and whether a delete operation immediately followed (and perhaps failed)?

There is something really strange going on here:

[21/May/2012:11:04:03 -0600] conn=5 op=8 ADD dn="krbPrincipalName=ldap/f17x8664n2.testdomain.com@TESTDOMAIN.COM,cn=TESTDOMAIN.COM,cn=kerberos,dc=testdomain,dc=com"
[21/May/2012:11:04:04 -0600] conn=5 op=9 UNBIND
[21/May/2012:11:04:04 -0600] conn=5 op=9 fd=65 closed - U1
[21/May/2012:11:04:04 -0600] conn=5 op=8 RESULT err=0 tag=105 nentries=0 etime=1 csn=4fba7584000000030000
[21/May/2012:11:04:04 -0600] conn=4 op=10 SRCH base="krbprincipalname=ldap/f17x8664n2.testdomain.com@TESTDOMAIN.COM,cn=TESTDOMAIN.COM,cn=kerberos,dc=testdomain,dc=com" scope=0 filter="(objectClass=*)" attrs=ALL
[21/May/2012:11:04:04 -0600] conn=4 op=10 RESULT err=0 tag=101 nentries=1 etime=0
[21/May/2012:11:04:04 -0600] conn=4 op=11 DEL dn="krbprincipalname=ldap/f17x8664n2.testdomain.com@TESTDOMAIN.COM,cn=TESTDOMAIN.COM,cn=kerberos,dc=testdomain,dc=com"
[21/May/2012:11:04:04 -0600] conn=4 op=11 RESULT err=0 tag=107 nentries=0 etime=0
[21/May/2012:11:04:04 -0600] conn=4 op=12 ADD dn="krbprincipalname=ldap/f17x8664n2.testdomain.com@TESTDOMAIN.COM,cn=services,cn=accounts,dc=testdomain,dc=com"
[21/May/2012:11:04:04 -0600] conn=4 op=12 RESULT err=0 tag=105 nentries=0 etime=0 csn=4fba7585000200030000

The entry under cn=kerberos is added with csn=4fba7584000000030000, then deleted almost immediately afterward, with no csn. The very next operation adds the entry under cn=services with csn=4fba7585000200030000; the delete should have been assigned csn=4fba7585000100030000. The delete is not in the changelog either, which means it was not replicated to the master. This looks like some sort of 389 problem.
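One way to hunt for this pattern across a whole access log is to pair each DEL with its RESULT line and flag successful deletes whose RESULT carries no csn=. A sketch (the log format is assumed from the excerpt above; `unreplicated_deletes` is a hypothetical helper):

```python
import re

# Sketch: scan 389 DS access-log lines for DEL operations whose RESULT
# line has no csn= field. On a replicated backend every successful
# write should get a CSN, so a missing one suggests the delete bypassed
# the replication plugin and will never reach the other master.
def unreplicated_deletes(lines):
    pending = {}   # (conn, op) -> DN of an in-flight DEL
    missing = []
    for line in lines:
        m = re.search(r'conn=(\d+) op=(\d+) DEL dn="([^"]+)"', line)
        if m:
            pending[(m.group(1), m.group(2))] = m.group(3)
            continue
        m = re.search(r'conn=(\d+) op=(\d+) RESULT err=0\b(.*)', line)
        if m and (m.group(1), m.group(2)) in pending:
            dn = pending.pop((m.group(1), m.group(2)))
            if "csn=" not in m.group(3):
                missing.append(dn)
    return missing

log = [
    '[21/May/2012:11:04:04 -0600] conn=4 op=11 DEL dn="ou=test,dc=example,dc=com"',
    '[21/May/2012:11:04:04 -0600] conn=4 op=11 RESULT err=0 tag=107 nentries=0 etime=0',
]
print(unreplicated_deletes(log))   # ['ou=test,dc=example,dc=com']
```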

krbprincipalname=ldap/f17-ipa-clone not deleted on replica
ipareplica-install.snipped.log

Well Rich, I'm about 33 minutes after you ;) Just added another log showing the same thing here. I believe the same problem occurs with the host/f17-ipa-clone... entry as well: I also have two of those entries, which appear on the master KDC after trying to install the replica.

Additional information:

After setting up MMR with plain 389-ds-base (no IPA), I did the following:

  1. On the master, create ou=test,dc=virt,dc=messinet,dc=com

    ldapadd -x -D'cn=directory manager' -W -f ou.ldif

    ou.ldif

    dn: ou=test,dc=virt,dc=messinet,dc=com
    objectclass: top
    objectclass: organizationalunit
    ou: test

  2. Delete it

    ldapdelete -x -D'cn=directory manager' -W ou=test,dc=virt,dc=messinet,dc=com

Find the following:

[22/May/2012:15:17:30 -0500] conn=41 fd=64 slot=64 connection from ::1 to ::1
[22/May/2012:15:17:30 -0500] conn=41 op=0 BIND dn="cn=directory manager" method=128 version=3
[22/May/2012:15:17:30 -0500] conn=41 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="cn=directory manager"
[22/May/2012:15:17:30 -0500] conn=41 op=1 ADD dn="ou=test,dc=virt,dc=messinet,dc=com"
[22/May/2012:15:17:30 -0500] conn=41 op=1 RESULT err=0 tag=105 nentries=0 etime=0 csn=4fbbf45c000000070000
[22/May/2012:15:17:30 -0500] conn=41 op=2 UNBIND
[22/May/2012:15:17:30 -0500] conn=41 op=2 fd=64 closed - U1
[22/May/2012:15:17:30 -0500] conn=42 fd=65 slot=65 connection from 192.168.1.208 to 192.168.1.202
[22/May/2012:15:17:30 -0500] conn=42 op=0 BIND dn="cn=replication manager,cn=config" method=128 version=3
[22/May/2012:15:17:30 -0500] conn=42 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="cn=replication manager,cn=config"
[22/May/2012:15:17:30 -0500] conn=42 op=1 SRCH base="" scope=0 filter="(objectClass=*)" attrs="supportedControl supportedExtension"
[22/May/2012:15:17:30 -0500] conn=42 op=1 RESULT err=0 tag=101 nentries=1 etime=0
[22/May/2012:15:17:30 -0500] conn=42 op=2 SRCH base="" scope=0 filter="(objectClass=*)" attrs="supportedControl supportedExtension"
[22/May/2012:15:17:30 -0500] conn=42 op=2 RESULT err=0 tag=101 nentries=1 etime=0
[22/May/2012:15:17:30 -0500] conn=42 op=3 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
[22/May/2012:15:17:30 -0500] conn=42 op=3 RESULT err=0 tag=120 nentries=0 etime=0
[22/May/2012:15:17:30 -0500] conn=42 op=4 EXT oid="2.16.840.1.113730.3.5.5" name="Netscape Replication End Session"
[22/May/2012:15:17:30 -0500] conn=42 op=4 RESULT err=0 tag=120 nentries=0 etime=0
[22/May/2012:15:17:32 -0500] conn=43 fd=64 slot=64 connection from ::1 to ::1
[22/May/2012:15:17:32 -0500] conn=43 op=0 BIND dn="cn=directory manager" method=128 version=3
[22/May/2012:15:17:32 -0500] conn=43 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="cn=directory manager"
[22/May/2012:15:17:32 -0500] conn=43 op=1 DEL dn="ou=test,dc=virt,dc=messinet,dc=com"
[22/May/2012:15:17:32 -0500] conn=43 op=1 RESULT err=0 tag=107 nentries=0 etime=0
[22/May/2012:15:17:32 -0500] conn=43 op=2 UNBIND
[22/May/2012:15:17:32 -0500] conn=43 op=2 fd=64 closed - U1

After doing this a few times, an ldapsearch on the server where I originally added and deleted the ou=test... entry returns no results:

ldapsearch -x -h 127.0.0.1 -b dc=virt,dc=messinet,dc=com ou=test
# extended LDIF
#
# LDAPv3
# base <dc=virt,dc=messinet,dc=com> with scope subtree
# filter: ou=test
# requesting: ALL
#

# search result
search: 2
result: 0 Success

# numResponses: 1

Now on the 'other' server (the replica) you can see that the ou=test... entry still exists:

ldapsearch -x -h 127.0.0.1 -b dc=virt,dc=messinet,dc=com ou=test
# extended LDIF
#
# LDAPv3
# base <dc=virt,dc=messinet,dc=com> with scope subtree
# filter: ou=test
# requesting: ALL
#

# test, virt.messinet.com
dn: ou=test,dc=virt,dc=messinet,dc=com
objectClass: top
objectClass: organizationalunit
ou: test

# df607001-a44a11e1-ba00f3e5-ad59dd83, virt.messinet.com
dn: nsuniqueid=df607001-a44a11e1-ba00f3e5-ad59dd83,dc=virt,dc=messinet,dc=com
objectClass: top
objectClass: organizationalunit
ou: test

# df607002-a44a11e1-ba00f3e5-ad59dd83, virt.messinet.com
dn: nsuniqueid=df607002-a44a11e1-ba00f3e5-ad59dd83,dc=virt,dc=messinet,dc=com
objectClass: top
objectClass: organizationalunit
ou: test

# 26e6fc01-a44b11e1-ba00f3e5-ad59dd83, virt.messinet.com
dn: nsuniqueid=26e6fc01-a44b11e1-ba00f3e5-ad59dd83,dc=virt,dc=messinet,dc=com
objectClass: top
objectClass: organizationalunit
ou: test

# search result
search: 2
result: 0 Success

# numResponses: 5
# numEntries: 4
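The entries whose RDN is an nsuniqueid=... value are characteristic of 389 DS replication conflict handling: the server renames an entry by its nsUniqueId when it cannot reconcile concurrent operations. Seeing several of them for a plain add/delete sequence is another sign the delete was lost or mis-applied on the replica. A quick sketch for flagging such DNs in ldapsearch LDIF output (`conflict_dns` is a hypothetical helper, and whether a given nsuniqueid entry is a conflict or a tombstone would need a closer look at its attributes):

```python
# Sketch: flag likely replication-conflict entries in ldapsearch LDIF
# output by looking for DNs whose RDN is an nsuniqueid=... value, as in
# the search result above.
def conflict_dns(ldif_text):
    return [
        line[len("dn: "):]
        for line in ldif_text.splitlines()
        if line.startswith("dn: nsuniqueid=")
    ]

sample = """\
dn: ou=test,dc=virt,dc=messinet,dc=com
ou: test

dn: nsuniqueid=df607001-a44a11e1-ba00f3e5-ad59dd83,dc=virt,dc=messinet,dc=com
ou: test
"""
print(conflict_dns(sample))
```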

https://fedorahosted.org/389/ticket/383 is fixed in 389-ds-base-1.2.11.4

I don't see this issue any more using that version of 389

Metadata Update from @rmeggins:
- Issue assigned to someone
- Issue set to the milestone: 0.0 NEEDS_TRIAGE

7 years ago
