#49218 Support FreeIPA 4.5+ certmap features
Closed: wontfix 3 years ago by spichugi. Opened 7 years ago by abbra.

With FreeIPA 4.5 it is possible to define certificate mappings using flexible rules. This mechanism is now supported in MIT Kerberos (to be released in 1.16, backported to Fedora 26/RHEL 7.4) via the 'certauth' plugin interface which FreeIPA 4.5 implements, and in SSSD (for local PKINIT authentication).

It would be nice to extend 389-ds to support the same certificate mapping rules. Right now certificate mapping in 389-ds is static and defined in a read-only configuration file. Aside from that, this file needs to be manually copied across all replicas to be consistent.

FreeIPA design page: http://www.freeipa.org/page/V4/Certificate_Identity_Mapping
SSSD design page: https://docs.pagure.org/SSSD.sssd/design_pages/matching_and_mapping_certificates.html
MIT Kerberos design page: https://k5wiki.kerberos.org/wiki/Projects/Certificate_authorization_pluggable_interface


This looks like a really complex feature: it involves reasonably complex templating, string parsing for the mappings, interaction with the certmap code, and the schema you use in IPA .... well, it's not accessible for us to use out of the box. Additionally, it looks like it will be complex to test due to the large number of possible configuration combinations that interact.

I think we need to study this request carefully, and come up with a proper design and approach to how we will implement this in directory server in a correct and reliable manner. Please be patient with us while we do this, as it may take some time.

Metadata Update from @firstyear:
- Custom field type adjusted to defect

7 years ago

You don't need any of that, really. All the work was already done for you by the SSSD developers. All you need to do is use the SSSD library like the FreeIPA KDB driver does.

Please check the complexity of the code in https://pagure.io/freeipa/blob/master/f/daemons/ipa-kdb/ipa_kdb_certauth.c, function ipa_certauth_authorize(). You'd initialize an SSSD library context, add rules to it, and then just ask it to evaluate an incoming certificate against them. The result of the evaluation is an LDAP filter you'd need to use to find a valid LDAP object.

I am hesitant to call out to, and depend on, an external process like SSSD, as it may cause stability and significant performance issues within Directory Server. Above all else, we must design our server and features to prioritise correctness and stability.

I think that I would like the chance to investigate this in a more detailed way, and come back with a solid proposal about a correct way to implement that works correctly for all of us, rather than rushing to complete the feature.

Thanks,

You are, again, basing your judgement on prejudice rather than facts. I'm talking about the SSSD certmap library, which does not depend on SSSD processes being run. It does not even link to the OpenLDAP libraries. This is purely a library that loads rules as strings/lists of strings and matches certificates against those rules.

Please spend a bit of time understanding what the scope and task are before making unfounded decisions. Thank you in advance.

# rpm -q --requires libsss_certmap
/sbin/ldconfig
/sbin/ldconfig
libc.so.6()(64bit)
libc.so.6(GLIBC_2.14)(64bit)
libc.so.6(GLIBC_2.2.5)(64bit)
libc.so.6(GLIBC_2.3)(64bit)
libc.so.6(GLIBC_2.3.4)(64bit)
libc.so.6(GLIBC_2.4)(64bit)
libdl.so.2()(64bit)
libnspr4.so()(64bit)
libnss3.so()(64bit)
libnss3.so(NSS_3.10)(64bit)
libnss3.so(NSS_3.12)(64bit)
libnss3.so(NSS_3.12.5)(64bit)
libnss3.so(NSS_3.2)(64bit)
libnss3.so(NSS_3.2.1)(64bit)
libnss3.so(NSS_3.3)(64bit)
libnss3.so(NSS_3.4)(64bit)
libnss3.so(NSS_3.6)(64bit)
libnss3.so(NSS_3.8)(64bit)
libnss3.so(NSS_3.9)(64bit)
libnssutil3.so()(64bit)
libnssutil3.so(NSSUTIL_3.12)(64bit)
libplc4.so()(64bit)
libplds4.so()(64bit)
libpthread.so.0()(64bit)
libsmime3.so()(64bit)
libssl3.so()(64bit)
libtalloc.so.2()(64bit)
libtalloc.so.2(TALLOC_2.0.2)(64bit)
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(FileDigests) <= 4.6.0-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1
rpmlib(PayloadIsXz) <= 5.2-1
rtld(GNU_HASH)

"""
You are, again, basing your judgement on prejudice rather than facts. I'm talking about the SSSD certmap library, which does not depend on SSSD processes being run. It does not even link to the OpenLDAP libraries. This is purely a library that loads rules as strings/lists of strings and matches certificates against those rules.
Please spend a bit of time understanding what the scope and task are before making unfounded decisions. Thank you in advance.
"""

I literally just said:
"""
I think that I would like the chance to investigate this in a more detailed way, and come back with a solid proposal about a correct way to implement that works correctly for all of us, rather than rushing to complete the feature.
"""

I want to complete an investigation into this matter first. I don't see how your reaction is appropriate in this context, and I would like to ask you to step away from this topic to prevent further responses like this from occurring. Please allow me some time to do my work as an engineer and create a proposal on this matter based on the information you have provided.

Thank you.

Please take your time. I do not understand why you needed to start with strong words on your decision making if you did not intend to make decisions yet.

@abbra - what do the certmap filters generated by sss_certmap_get_search_filter() look like? Do they contain just components of the certificate, or the cert itself? If you could provide some examples that would be great.

@mreynolds The SSSD design page includes a few examples of configuration on the FreeIPA side. They are imaginary, but they give some understanding. The main point is that a rule contains a matching part and a mapping part. The mapping part is a template of an LDAP filter in which substitutions are made based on the content of the certificate. Below is a real example.

I created a mapping rule that maps by the certificate's binary value itself and matches by the issuer and EKU. The resulting filter is the maprule after expansion, which should be something like (userCertificate;binary=<some binary value>):

ipa certmaprule-add testrule \
                    --matchrule='<ISSUER>^CN=Certificate Authority,O=XS.IPA.COOL$<EKU>clientAuth' \
                    --maprule='(userCertificate;binary={cert!bin})' \
                    --priority=42

--------------------------------------------------
Added Certificate Identity Mapping Rule "testrule"
--------------------------------------------------
  Rule name: testrule
  Mapping rule: (userCertificate;binary={cert!bin})
  Matching rule: <ISSUER>^CN=Certificate Authority,O=XS.IPA.COOL$<EKU>clientAuth
  Priority: 42
  Enabled: TRUE

When I asked SSSD to do a search for a user with a client certificate issued by this IPA instance, it generated the following LDAP search against 389-ds:

 filter="(&(usercertificate;binary=0\82\04\1B0\82\03\03\A0\03\02\01\02\02\01\100\0D\06\09\2A\86H\86\F7\0D\01\01\0B\05)(objectClass=posixAccount)(uid=*)(&(uidNumber=*)(!(uidNumber=0))))" 

attrs="objectClass uid userPassword uidNumber gidNumber gecos homeDirectory loginShell krbPrincipalName cn memberOf ipaUniqueID ipaNTSecurityIdentifier modifyTimestamp entryusn shadowLastChange shadowMin shadowMax shadowWarning shadowInactive shadowExpire shadowFlag krbLastPwdChange krbPasswordExpiration pwdattribute authorizedService accountexpires useraccountcontrol nsAccountLock host logindisabled loginexpirationtime loginallowedtimemap ipaSshPubKey ipaUserAuthType usercertificate;binary mail"

As you can see, the search filter has a few additional components for POSIX users; these come from SSSD itself and are not part of the certmap library code. The actual filter from the certmap library is (usercertificate;binary=0\82\04\1B0\82\03\03\A0\03\02\01\02\02\01\100\0D\06\09\2A\86H\86\F7\0D\01\01\0B\05). In this case I have the actual certificate stored in the user's LDAP entry as a userCertificate attribute value.

For a certificate component mapping it works the same way. An application (SSSD or the KDB driver in the examples in the ticket description) reads the rules, builds up a certmap object, and then passes the certificate to the certmap library together with the certmap object. The library iterates over all rules. If the matching-rule comparison against the certificate succeeds, the mapping-rule part is collected. Then, for all collected mapping rules, the fields specified in their templates are resolved against the certificate's content. This results in a list of filter components that are returned to the application.

@abbra - Just a quick observation - this is an unindexed search. The entire database will have to be sequentially scanned. In fact it's "unindexable" due to the NOT filter, the binary value (which you would never index), and the presence filters. If this search occurs for every bind then you are going to see a significant performance impact on the entire server - especially on large databases.

You can probably remove the NOT filter using a range search on uidNumber, and move it to the front of the filter:

filter="(&(uidNumber>=1)(usercertificate;binary=0\82\04\1B0\82\03\03\A0\03\02\01\02\02\01\100\0D\06\09\2A\86H\86\F7\0D\01\01\0B\05)(objectClass=posixAccount)(uid=*))"

But this is still going to be unindexable. This is fine on small DBs, but once you get above 5,000 entries it's going to become problematic, and on 100K+ databases it's going to be devastating (it will just hang the entire server at 100% CPU).

I would definitely pass on the word that this needs to be performance tested on large databases (at least 100k entries). If I'm correct, and I hope I'm not, this isn't going to work in a live production environment. You need to find a way to do it without using an unindexed search per user authentication. Again, I hope I'm misunderstanding how it works.

I think you are missing my point here. We may ask SSSD to change the LDAP filter used to weed out non-POSIX users in the POSIX user search case, but that has nothing to do with the certificate mapping request I'm trying to discuss here. This is the search SSSD has run for almost a decade now for all POSIX searches; you can check yourself.

However, it is irrelevant to the certmap-generated output. Certificate mapping generates a filter along the lines you specified in the certmap rule, that's all. You asked me to show what such rules look like, and I provided a result.

Please tell me what you consider wrong with this particular ticket. Let's keep things separate.

"""
I think you are missing my point here. We may ask SSSD to change the LDAP filter used to weed out non-POSIX users in the POSIX user search case, but that has nothing to do with the certificate mapping request I'm trying to discuss here. This is the search SSSD has run for almost a decade now for all POSIX searches; you can check yourself.
However, it is irrelevant to the certmap-generated output. Certificate mapping generates a filter along the lines you specified in the certmap rule, that's all. You asked me to show what such rules look like, and I provided a result.
Please tell me what you consider wrong with this particular ticket. Let's keep things separate.
"""

I was just making an observation that it was using an unindexed search - which is very bad. That's all. As I said, I have not "fully" investigated this - I just had a question, and I made an observation about what you shared with me. My suggestion still stands - this needs to be tested on large databases (even if it's been around for years).

Could you please file that comment as a ticket against SSSD to make sure they know there is a need for improvement?

I'd like to keep these discussions separate.

"""
Could you please file that comment as a ticket against SSSD to make sure they know there is a need for improvement?
"""

Actually, they probably are indexing userCertificate - they must be, or else they would have seen problems. It's not an attribute that gets updated often, so although it's a massive index key and very expensive to maintain, it should be fine. Please ignore my "observation".
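For reference, equality indexing in 389-ds is configured per attribute under the backend's index subtree; a minimal entry for userCertificate might look like the following (the `userRoot` backend name is an assumption about the deployment):

```ldif
dn: cn=userCertificate,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
objectClass: top
objectClass: nsIndex
cn: userCertificate
nsSystemIndex: false
nsIndexType: eq
```

After adding the entry, the attribute must be reindexed (e.g. with a db2index task) before existing entries become searchable through the index.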

"""
I'd like to keep these discussions separate.
"""

Of course...

Metadata Update from @mreynolds:
- Issue set to the milestone: 1.3.7.0

6 years ago

Metadata Update from @firstyear:
- Issue assigned to firstyear

6 years ago

Thank you, William.

Couple comments:

  • What does INVALID_DN mean in terms of a plugin response? It is unclear what you would like to indicate through this return code. I can only think of a locked entry, but that would probably be better handled by the caller. Or is it currently the duty of a "password" bind plugin callback to prevent such LDAP binds, and you modeled it here the same way?
  • It would, perhaps, be nice to be able to separate the non-existing DN case from the invalid DN case. This would be needed if we ever decide to create a placeholder DN based on the authenticated certificate's properties. The latter could be accomplished by returning a synthetic entry on success, but it might be easier for a caller to handle that if the return code is different from the normal SUCCESS one.

Invalid DN means that you did a map and the DN you got back from it doesn't have an attached entry. So imagine you map the cert CN to, say, UID in the directory. You have a cert with CN=foo, but ldapsearch '(uid=foo)' returns no entries. This is the invalid DN case.

The account access controls are still managed in the current form in do_bind, which obviously happens in the success path.

The only concern I have about the invalid_dn case, and the desire to create a placeholder/shadow entry, is that this map is done in an RO transaction. So perhaps there can be some future logic added where we say:

for p in certmap_plugins() {
    r = p.do_cert_map_plugin()
    if (r == continue) {
        continue;
    } else if (r == success) {
        // entry is found
        break;
    } else if (r == invalid_dn) {
        plugin call invalid_dn handlers ....
    }
}

The benefit to this idea is that when we move on to the invalid dn handlers we can drop the ro_txn and upgrade to rw, and we can call this from various places - for example, your krb PAC shadow entries during a sasl bind, or if you do a simple bind and the dn doesn't exist we can call the exact same handling routines for it. This gives you a really consistent and nice way to handle this case IMO.

In the future we'll want to try and keep as much as possible in ro_txns, because we'll get much better parallel performance from them, and that is why the structure above is what I'm proposing.

0001-Ticket-49218-Certmap-plugin-capabalities-for-freeipa.patch

This change ended up being quite significant: the main reason is that, instead of extending plugin v3 and then requiring a rewrite, I decided to implement the plugin v4 specification. Because this is a limited area of code, it gives us a good area to examine and reason about.

Please see the patch for further comments and discussion.

Important aims were simplicity of the API and module for future plugin authors, complete thread safety, and FFI improvements for future Rust API integrations.

Metadata Update from @firstyear:
- Custom field component adjusted to None
- Custom field origin adjusted to None
- Custom field reviewstatus adjusted to review
- Custom field version adjusted to None

6 years ago

Thanks @firstyear. I read through the patchset today and overall it looks good; my only nit is that I'd have formatted the code across more lines, as reading long lines tends to throw me off a bit. The API names need some adjustment (the prefixes are too long), but I get most of the points you wanted to make with the changes.

It would probably be better to split these changes into smaller commits (eventually).

Yes, really, there are a few patches in here conceptually.

  • slapi_v4 api foundations
  • v4 plugin library handler and dispatch
  • removal of the old certmap code
  • addition of a certmap plugin
  • addition of an "on upgrade" mechanism.

I think we can split this for review, but as a commit it should probably go in "as a whole" because the parts depend on each other.

As for the api prefix, I did think about changing this to something like s4_ to make it easier on authors.

There are some other key elements that need to be pointed out around the transactions of plugins and some constructs, but those can be discussed later. TL;DR: plugins don't need mutexes, as the per-plugin context data is guaranteed to be safe for you and should be read-only. Additionally, it's possible to have multiple open plugin contexts at once, i.e. when you disable/enable, an existing operation keeps the old ctx data while new calls get new ctx data, so you can have overlapping data lifetimes.

Very likely, I'll add a mechanism for "on change of subtree" in cn=config (or the main db tree), so the plugin reloads on config change. Again, the idea is the framework should be atomic and safe, and plugin authors should just focus on "doing their business".

What's going to be really important is that plugins must not use mutexes, or else you will impact performance, and any calls to IPC should be removed or minimised, because in this new architecture they will, again, impact performance, and you could generate some excellent race conditions.

I would also like to mention a problem statement that I discussed, mostly based on information from Alexander Bokovoy, i.e. why all this is being done, as it is not apparent from the description:

The FreeIPA LDAP server right now defines access control based on the group membership of a bind DN. For non-SIMPLE LDAP binds, a DN which the bind operation impersonates needs to be established first. For SASL GSSAPI/SPNEGO, FreeIPA has some mapping rules that allow us to find a Kerberos principal's LDAP entry. For certificate-based authentication, FreeIPA has certmap rules in the LDAP server's dse.ldif that are different from the FreeIPA certificate mapping rules. The latter difference could be a source of security issues.

Further, for AD users we map their SASL GSSAPI/SPNEGO access to an ID override in the Default Trust View. The latter is not included anywhere in the group membership that actually defines how access control works. The same applies to certificate-based authentication of AD users to LDAP. We need to extend the process by which the LDAP server:

  • maps certificates to a bind DN
  • establishes group membership of the bound DN for the purpose of ACL enforcement

Right now a full-cert mapping is the only possible option with the existing certmap implementation in 389-ds. Users can upload their own public certs, but that only gives them access as their own DN. However, we should definitely make an effort to have the same rules apply everywhere.

Proposed user story:

As a Security Architect of the IdM system, I want to make sure that direct authentication using an LDAP bind against IPA with a user certificate and authentication via the SSSD client into a client system behave in the same exact way, to prevent security issues. Right now the mapping is different, and an Administrator could get into a situation where an unauthorized person gains access to information in IPA over an LDAP bind when they should not have such access. Such a situation might be viewed as a vulnerability, and as a Security Architect I want to avoid those in the product.

Starting review of patch....

Initial question... Why did you create another error logging function? This brings the total to 3 error logging functions now. It's way too redundant in my opinion. We should be reducing the number of logging functions, not increasing it :-)

Still reviewing....

Because the new one takes an enum, not an int. This way it's compiler-checked, and the idea would be to replace our existing calls with it. The main reason for an enum is Rust FFI compatibility - you can't produce #defines for Rust, but you can use enums.

@mkosek This ticket only provides an extension to certmapping so you can define a plugin with custom rules for TLS CN -> entry DN maps.

For entry generation on bind related to trusts that requires a different ticket / feature, and I have already done design work on this: http://www.port389.org/docs/389ds/design/pad-pac-anchor-creation.html

So while I appreciate the input and comments, it's actually not related to this issue much at all ;)

"""
Because the new one takes an enum, not an int. This way it's compiler-checked, and the idea would be to replace our existing calls with it. The main reason for an enum is Rust FFI compatibility - you can't produce #defines for Rust, but you can use enums.
"""

This should be a different ticket then (for 1.4.0), as adding a new redundant logging function has nothing to do with the certmap plugin.

Ahhhh, but it relates to the v4 plugin api which I want certmap to consume. So that's why it's part of this change :)

Metadata Update from @mreynolds:
- Issue tagged with: RFE

6 years ago

I have rebased this to master and split the patches into different parts. However, I found a bug in pyldap during the update which I want to resolve first.

@mreynolds As requested, only a single error log interface.

This passes the test suites I've converted to python 3 so far (basic, filter, etc), as well as expanding our TLS suites to exercise many more features that we support and offer.

"""
@mreynolds As requested, only a single error log interface.
"""

:thumbsup:

Looking good, I'm going to work on reviewing as much of this as I can this week.

"""
This passes the test suites I've converted to python 3 so far (basic, filter, etc), as well as expanding our TLS suites to exercise many more features that we support and offer.
"""

I'm still going to follow @lkrispen's request to split this patchset into two changes - one for the API, one for the plugin (they are kinda related, but kinda not; I can see his point very clearly, but I also see value in leaving it together)

Thanks for the review @mreynolds ! I've actually been testing this for weeks with TLS tests and now the TLS replication suites :)

Metadata Update from @mreynolds:
- Issue set to the milestone: 1.4.2 (was: 1.3.7.0)

4 years ago

Metadata Update from @vashirov:
- Issue set to the milestone: 1.4.4 (was: 1.4.2)

4 years ago

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/2277

If you want to receive further updates on the issue, please navigate to the github issue
and click on subscribe button.

Thank you for your understanding. We apologize for any inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix
- Issue status updated to: Closed (was: Open)

3 years ago
