#26 Add back the design pages
Closed 6 years ago by jhrozek. Opened 6 years ago by fidencio.
SSSD/ fidencio/docs wip/design_pages  into  master

@@ -0,0 +1,901 @@ 

+ Note well: this is at the *spitballing* stage, so it can all get shot

+ down.

+ 

+ Better Integration with The Desktop

+ -----------------------------------

+ 

+ What we're suggesting as the point of focus for integration is to have

+ SSSD provide a superset of the *org.freedesktop.Accounts* D-Bus API.

+ 

+ The org.freedesktop.Accounts D-Bus API as currently implemented by *accountsservice*

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ `The current

+ API <https://live.gnome.org/ThreePointThree/Features/UserPanel/Service>`__

+ offers:

+ 

+ -  Creation/removal of user accounts.

+ -  Enumeration of user accounts which are known to the service.

+ 

+    -  Currently this resolves to a subset of the local system's

+       accounts.

+ 

+ -  Signals broadcast when a user account is added/removed/modified.

+ -  Ability to mark the user account as a "Standard" user or an

+    "Administrator".

+ 

+    -  Under the covers, the user is added to or removed from the *wheel*

+       group, but UID=0 is always considered to be an administrator.

+ 

+ -  Ability to lock or unlock the user's account and query its lock

+    status.

+ -  Ability to check, set, or reset whether the user must change their

+    password at next login.

+ -  Password can be changed to a new **hashed** value, as returned by

+    *crypt(3)*, or it can be removed, after which no password is required

+    for login.

+ -  Attributes exposed via specific get/set methods and the properties

+    interface:

+ 

+    -  login name

+    -  full name

+    -  email

+    -  preferred locale

+    -  preferred X session name

+    -  physical location

+    -  home directory pathname

+    -  login shell

+    -  login frequency

+    -  icon/thumbnail filename

+ 

+       -  the contents of this file are copied to

+          /var/lib/AccountsService/icons/$user

+ 

+    -  autologin

+    -  password hint
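
+ The *Standard*/*Administrator* distinction above can be sketched as a
+ toy model (this is **not** the accountsservice implementation, just an
+ illustration of the stated rule: "Administrator" means membership in the
+ *wheel* group, except that UID=0 is always an administrator):

```python
# Toy model of the account-type rule described above. The constant values
# match the AccountType property documented later (0 = Standard,
# 1 = Administrator).

ACCOUNT_TYPE_STANDARD = 0
ACCOUNT_TYPE_ADMINISTRATOR = 1

def account_type(uid, supplementary_groups):
    """Classify an account: wheel membership or UID 0 means Administrator."""
    if uid == 0 or "wheel" in supplementary_groups:
        return ACCOUNT_TYPE_ADMINISTRATOR
    return ACCOUNT_TYPE_STANDARD

def set_account_type(supplementary_groups, admin):
    """Changing the type adds or removes the wheel membership."""
    groups = set(supplementary_groups)
    if admin:
        groups.add("wheel")
    else:
        groups.discard("wheel")
    return groups
```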

+ 

+ The org.freedesktop.Accounts D-Bus API as it would be provided by *SSSD*

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ -  Local users live in the SSSD *local* provider's domain, full

+    creation/removal support.

+ -  Local **groups** are now exposed and managed.

+ 

+    -  Groups can contain users and other groups.

+ 

+ -  Enumeration of users defaults to returning those known to the *local*

+    domain and all identities from other domains that are in SSSD's

+    cache.

+ -  Signals broadcast when a user account or group is added to the

+    *local* domain's database, or an entry is added to SSSD's cache for

+    any other domains. Likewise, signals emitted when knowledge of a user

+    or group is updated or removed from either location.

+ -  *Local* users can be marked as *Standard* or *Administrator*

+    accounts, and this information can be retrieved.

+ 

+    -  [STRIKEOUT:This will add or remove the user from the *local*

+       *wheel* group. Some POSIX applications may be confused by this.]

+    -  [STRIKEOUT:Users from other domains may also be added or removed

+       from the *local* *wheel* group.]

+ 

+ -  *Local* user accounts can be locked or unlocked, and their account

+    lock status can be checked.

+ -  *Local* user accounts can have their account flagged to require a

+    password change at the next login.

+ 

+    -  When the user is part of a non-\ *local* domain, this may be known

+       up-front, or it may be discovered as a side-effect of performing

+       an authentication attempt.

+ 

+ -  *Local* user accounts can have their password changed to a new

+    **clear** or **hashed** value, or removed.

+ 

+    -  User accounts in non-\ *local* domains can have their password

+       changed to a new **clear** value if the old value is also

+       provided.

+ 

+ -  *Local* user accounts have their attributes stored in the database as

+    entry attributes along with the already-kept POSIX attributes, and

+    can be modified.

+ 

+    -  For user accounts in non-\ *local* domains, if an attribute is

+       configured to be writable, its value is fetched from the identity

+       provider only if there is no value for it, for the user, already

+       present in the cache. Because SSSD does not know how to push

+       updated information to identity providers, if the attribute is

+       writable, only the cached value is updated.
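
+ The cache-first policy for writable attributes in non-\ *local* domains
+ can be summarized with a small sketch (again not SSSD code, just the
+ policy stated above: reads consult the cache and fall back to the
+ identity provider on a miss; writes touch only the cache):

```python
# Toy sketch of the attribute policy for users in non-local domains: a
# read asks the identity provider only when the cache has no value, and a
# write updates only the cached copy, since SSSD cannot push changes back
# to the provider.

class CachedAttributes:
    def __init__(self, fetch_from_provider):
        self._fetch = fetch_from_provider  # callable: (user, attr) -> value
        self._cache = {}                   # (user, attr) -> value

    def get(self, user, attr):
        key = (user, attr)
        if key not in self._cache:
            self._cache[key] = self._fetch(user, attr)
        return self._cache[key]

    def set(self, user, attr, value):
        # Only the cached copy changes; the provider never sees the write.
        self._cache[(user, attr)] = value
```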

+ 

+ API additions useful to non-desktop cases

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Some of these look like reasonable extensions to the existing D-Bus

+ APIs; others don't.

+ 

+ -  A new method for obtaining a list of SSSD's domains.

+ 

+    -  The answer can depend on who's asking, as reported by the bus.

+ 

+ -  New methods for creating and deleting *local* groups, and for adding

+    or removing a *local* user or group from the list of a *local*

+    group's members.

+ -  A new method for enumerating the groups known to the *local* domain

+    and any identities from other domains that are in SSSD's cache.

+ 

+    -  [STRIKEOUT:A variation on this method] [STRIKEOUT:An optional

+       argument for this method] A variation on this method which narrows

+       the scope to a specified domain.

+ 

+ -  [STRIKEOUT:A variation on] [STRIKEOUT:An optional argument for the]

+    An additional user enumeration method which narrows the scope to a

+    specified domain.

+ -  A new method for performing authentication checks.

+ 

+    -  Conceptually similar to the application's part of a PAM

+       conversation, but explicitly includes the concept of an

+       authentication domain and enough context to tell if we're asking

+       for a password, an OTP, a smart card PIN, etc.

+    -  Can be multi-step.

+    -  RHEV-M would likely use this instead of nsswitch+PAM because its

+       users wouldn't be (and wouldn't need to be) complete POSIX users.

+    -  A user's secondary identities, if serviced by a mechanism that

+       SSSD can/will/does support, *can* also be authenticated here,

+       though it would generally only be useful to do so if

+       authentication provided some sort of SSO credential for SSSD to

+       manage on the user's behalf.

+ 

+ -  A new method for performing password changes.

+ 

+    -  [STRIKEOUT:As above, conceptually similar to the application's

+       part of a PAM conversation, again including the concept of an

+       authentication domain.]

+    -  [STRIKEOUT:Add a flag to the existing password change method to

+       indicate that an unhashed password is being provided, and allow

+       password change to fail if the flag is not set.]

+    -  Calling signature is similar to the authentication API, except

+       that the caller is told when it will be supplying the new

+       password.

+ 

+ -  A new method for obtaining a list of groups to which a user belongs.

+    These wouldn't necessarily be POSIX groups, as the accounts service

+    is uninterested in groups in the general case (the main exception

+    being that it maps administrator status to membership in the *wheel*

+    group), but they'd be whatever the domain considered to be groups.

+ -  A new method or three for determining which users are in a group,

+    which groups are in a group, and which users are in a group by way of

+    being in other groups.

+ -  A new signal broadcast when a user's password or equivalent is about

+    to expire, along with how much time is left, if we can know that.

+ 

+    -  Would need something running in the user's session to catch them

+       and offer to initiate a password change via the above

+       password-changing method. Not provided by SSSD.

+ 

+ -  A new signal broadcast when a user's SSO credentials (e.g. Kerberos

+    TGT) are about to expire.

+ 

+    -  Would need something running in the user's session to catch them

+       and offer to reinitialize them by calling the above authentication

+       method. Not provided by SSSD.

+ 

+ -  A new signal broadcast when a user's SSO credentials are

+    reinitialized.

+ 

+    -  Would want something running in the user's session to catch them

+       and rescind offers to reinitialize them that aren't in-progress.

+       Not provided by SSSD.

+ 

+ -  The ability to fetch and manage more string attributes than the

+    current accountsService API offers.

+ 

+    -  This may just take the form of more properties, perhaps without

+       friendly get/set methods, particularly if the set of attributes

+       varies from one domain to another.

+    -  The set would be configured in SSSD on a per-domain basis.
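
+ The direct/indirect group-membership queries proposed above amount to a
+ transitive walk over nested groups. A hypothetical sketch (the data
+ layout is invented for illustration; it is not an SSSD structure):

```python
# Groups may contain users and other groups; a user is an indirect member
# of a group by way of membership in a nested group. This computes the
# full user set, guarding against membership cycles.

def users_in_group(group, group_members, user_members, _seen=None):
    """All users in `group`, directly or through nested groups."""
    seen = _seen if _seen is not None else set()
    if group in seen:           # cycle guard
        return set()
    seen.add(group)
    users = set(user_members.get(group, ()))
    for nested in group_members.get(group, ()):
        users |= users_in_group(nested, group_members, user_members, seen)
    return users
```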

+ 

+ Breaking It Down To The API Level

+ ---------------------------------

+ 

+ We're talking about providing a superset of the D-Bus API currently

+ offered by the *accountsservice* package.

+ 

+ The APIs themselves are advertised to clients via D-Bus introspection,

+ so they can be browsed using tools such as *d-feet*, and what follows is

+ heavily based on that information and the introspection information

+ included with the package.

+ 

+ The very, very, very short D-Bus Primer

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The D-Bus service model is a tree of objects. Each object has a path

+ name (an *object path*) which resembles a filesystem path, and can both

+ emit broadcast notifications referred to as *signals* and provide

+ callable functions referred to as *method call*\ s, as well as

+ possessing data members called *properties*. When a process connects to

+ a bus, it is given a connection-specific name (typically of the form

+ ":1.121") which is used to route replies back to it. A process which

+ intends to offer services typically also registers a name (of a form

+ such as "org.freedesktop.Accounts") which clients can use to specify the

+ destination for *method call*\ s that they intend to make use of. The

+ names of methods can be namespaced using *interface* names, but in many

+ cases, unless necessary for disambiguation, they are optional.
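
+ To make the naming conventions concrete, here is a small illustration
+ following the D-Bus specification's rules: unique (connection) names
+ such as ":1.121" begin with a colon, well-known service names are
+ dot-separated, and object paths look like absolute filesystem paths
+ whose elements contain only letters, digits, and underscores:

```python
import re

# Object paths are "/" or a sequence of /-prefixed elements drawn from
# [A-Za-z0-9_], per the D-Bus specification.
_PATH_RE = re.compile(r"^/$|^(/[A-Za-z0-9_]+)+$")

def is_unique_name(name):
    """Connection-specific names like ':1.121' start with a colon."""
    return name.startswith(":")

def is_valid_object_path(path):
    return bool(_PATH_RE.match(path))
```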

+ 

+ The Singleton Management Object

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The *accountsservice* package currently provides a service which can be

+ reached using the name *org.freedesktop.Accounts*, which provides one

+ singleton object of note: */org/freedesktop/Accounts*, which provides

+ five methods, two signals, and one property, all as part of an interface

+ named *org.freedesktop.Accounts*. Methods and properties that we add

+ that are specific to SSSD should be grouped as part of an SSSD-specific

+ interface name.

+ 

+ -  **method** CreateUser(String name, String fullname, Int accountType)

+ 

+    -  *name*: the user's login name

+    -  *fullname*: the user's real name

+    -  *accountType*: an enumerated value which flags the account as a

+       *Standard* or *Administrator* account

+    -  returns: Path *user*: the path for the user's object

+    -  Error *org.freedesktop.Accounts.Error.PermissionDenied*: caller

+       lacks appropriate PolicyKit authorization

+    -  Error *org.freedesktop.Accounts.Error.Failed*: generic operation

+       failure

+    -  Creates a user with a given name in the default local provider

+       domain. Note that the UID is not specified by the caller, as it is

+       allocated by the provider. The caller can retrieve it from the

+       user's entry if the call succeeds. The meaning of account types is

+       not specified, but in the current implementation the difference

+       between a *Standard* user and an *Administrator* is whether or not

+       the user is a member of the *wheel* group.

+ 

+ -  **ADD** **method** CreateUserInDomain(String domain, String name,

+    String fullname, Int accountType)

+ 

+    -  *domain*: the domain in which the caller wants the account to be

+       created, can be left empty or unspecified to implicitly select the

+       default local provider domain, to which the caller must already be

+       allowed access

+    -  *name*: the user's login name

+    -  *fullname*: the user's real name

+    -  *accountType*: an enumerated value which flags the account as a

+       *Standard* or *Administrator* account

+    -  returns: Path *user*: the path for the user's object

+    -  Error *org.freedesktop.Accounts.Error.PermissionDenied*: caller

+       lacks appropriate PolicyKit authorization

+    -  Error *org.freedesktop.Accounts.Error.Failed*: generic operation

+       failure

+    -  Creates a user with a given name. Note that the UID is not

+       specified by the caller, as it is allocated by the provider. The

+       caller can retrieve it from the user's entry if the call succeeds.

+       The meaning of account types is not specified, but in the current

+       implementation the difference between a *Standard* user and an

+       *Administrator* is whether or not the user is a member of the

+       *wheel* group.

+ 

+ -  **method** DeleteUser(Int64 user, Boolean removeFiles)

+ 

+    -  *user*: the user ID of the user to be removed

+    -  *removeFiles*: whether or not to remove the user's home directory

+    -  Error *org.freedesktop.Accounts.Error.PermissionDenied*: caller

+       lacks appropriate PolicyKit authorization

+    -  Error *org.freedesktop.Accounts.Error.Failed*: generic operation

+       failure

+    -  Deletes the user with the given UID.

+ 

+ -  **ADD** **method** DeleteUserInDomain(String domain, Int64 user,

+    Boolean removeFiles)

+ 

+    -  *domain*: the domain to which the user belongs

+    -  *user*: the user ID of the user to be removed

+    -  *removeFiles*: whether or not to remove the user's home directory

+    -  Error *org.freedesktop.Accounts.Error.PermissionDenied*: caller

+       lacks appropriate PolicyKit authorization

+    -  Error *org.freedesktop.Accounts.Error.Failed*: generic operation

+       failure

+    -  Deletes the user with the given UID if a matching user exists in

+       the named domain.

+ 

+ -  **method** FindUserById(Int64 id)

+ 

+    -  *id*: the user's UID

+    -  returns: Path *user*: the path for the user's object. All

+       configured domains are searched.

+    -  Error *org.freedesktop.Accounts.Error.Failed*: no such user exists

+ 

+ -  **ADD** **method** FindUserByIdInDomain(String domain, Int64 id)

+ 

+    -  *id*: the user's UID

+    -  *domain*: the name of the domain to search

+    -  returns: Path *user*: the path for the user's object, if a

+       matching user exists in the domain.

+    -  Error *org.freedesktop.Accounts.Error.Failed*: no such user exists

+ 

+ -  **method** FindUserByName(String name)

+ 

+    -  *name*: the user's login name

+    -  returns: Path *user*: the path for the user's object. The search

+       is performed over all configured domains.

+    -  Error *org.freedesktop.Accounts.Error.Failed*: no such user exists

+ 

+ -  **ADD** **method** FindUserByNameInDomain(String domain, String name)

+ 

+    -  *domain*: the name of the domain to search

+    -  *name*: the user's login name

+    -  returns: Path *user*: the path for the user's object

+    -  Error *org.freedesktop.Accounts.Error.Failed*: no such user exists

+ 

+ -  **method** ListCachedUsers()

+ 

+    -  returns: *users*: an array of paths for the user objects

+    -  Returns a subset of the users who exist, typically those who have

+       logged in recently, for populating chooser lists such as those

+       used by GDM's greeter.

+    -  Currently the *accountsservice* process scans /etc/passwd for

+       users, filters out those with UID values which are below a

+       threshold point to screen out system users, and sorts the rest by

+       the number of times the users in question appear in /var/log/wtmp.

+       Above a certain length, it's expected that the caller will

+       disregard the list and present only an entry field. The entry

+       field always needs to be available because we know that some

+       results may be missing from this list.

+ 

+ -  **ADD** **method** ListDomainUsers(String domain)

+ 

+    -  *domain*: the domain name in which the caller is interested

+    -  returns: *users*: an array of paths for all known user objects

+    -  Returns all of the objects for users about which SSSD is aware.

+       This may be a very large list, particularly if enumeration is

+       enabled for the domain.

+ 

+ -  **signal** UserAdded(Path user)

+ 

+    -  *path*: the path for the user's object

+    -  **MODIFY** this signal is emitted when a user is created or

+       appears in the cache for a remote domain

+ 

+ -  **signal** UserDeleted(Path user)

+ 

+    -  *path*: the path for the user's object

+    -  **MODIFY** this signal is emitted when a user is deleted or

+       disappears from the cache for a remote domain, though the latter

+       is not expected to happen often

+ 

+ -  **property** String DaemonVersion

+ 

+    -  The version of the running daemon.

+ 

+ -  **ADD** **method** CreateGroup(String name)

+ 

+    -  *name*: the group's name

+    -  returns: Path *group*: the path for the new group object

+    -  Error *org.freedesktop.Accounts.Error.PermissionDenied*: caller

+       lacks appropriate PolicyKit authorization

+    -  Error *org.freedesktop.Accounts.Error.Failed*: generic operation

+       failure

+    -  Creates a group with the given name in the default local provider

+       domain. As with users, the GID is allocated by the provider, and

+       the caller can retrieve it from the group's entry if the call

+       succeeds.

+ 

+ -  **ADD** **method** CreateGroupInDomain(String domain, String name)

+ 

+    -  *domain*: the domain name in which the caller wants the group to

+       be created

+    -  *name*: the group's name

+    -  returns: Path *group*: the path for the new group object

+    -  Error *org.freedesktop.Accounts.Error.PermissionDenied*: caller

+       lacks appropriate PolicyKit authorization

+    -  Error *org.freedesktop.Accounts.Error.Failed*: generic operation

+       failure

+    -  Creates a group with the given name in the given domain. As with

+       users, the GID is allocated by the provider, and the caller can

+       retrieve it from the group's entry if the call succeeds.

+ 

+ -  **ADD** **method** DeleteGroup(Int group)

+ 

+    -  *group*: the group ID of the group to be removed

+    -  Error *org.freedesktop.Accounts.Error.PermissionDenied*: caller

+       lacks appropriate PolicyKit authorization

+    -  Error *org.freedesktop.Accounts.Error.Failed*: generic operation

+       failure

+    -  Deletes the group with the given GID.

+ 

+ -  **ADD** **method** DeleteGroupInDomain(String domain, Int group)

+ 

+    -  *domain*: the name of the domain to which the group belongs

+    -  *group*: the group ID of the group to be removed

+    -  Error *org.freedesktop.Accounts.Error.PermissionDenied*: caller

+       lacks appropriate PolicyKit authorization

+    -  Error *org.freedesktop.Accounts.Error.Failed*: generic operation

+       failure

+    -  Deletes the group with the given GID.

+ 

+ -  **ADD** **method** ListCachedGroups()

+ 

+    -  returns: *groups*: a subset of the known group objects

+    -  Error *org.freedesktop.Accounts.Error.PermissionDenied*: caller

+       lacks appropriate PolicyKit authorization

+    -  Error *org.freedesktop.Accounts.Error.Failed*: generic operation

+       failure

+ 

+ -  **ADD** **method** ListDomainGroups(String domain)

+ 

+    -  *domain*: the domain name in which the caller is interested

+    -  returns: *groups*: an array of paths for the group objects

+       representing all of the groups of which SSSD is aware in the named

+       domain

+ 

+ -  **ADD** **method** FindGroupById(Int64 id)

+ 

+    -  *id*: the group's GID

+    -  returns: Path *group*: the path for the group object

+    -  Error *org.freedesktop.Accounts.Error.Failed*: no such group

+       exists

+ 

+ -  **ADD** **method** FindGroupByIdInDomain(String domain, Int64 id)

+ 

+    -  *id*: the group's GID

+    -  *domain*: the group's domain name

+    -  returns: Path *group*: the path for the group object

+    -  Error *org.freedesktop.Accounts.Error.Failed*: no such group

+       exists

+ 

+ -  **ADD** **method** FindGroupByName(String name)

+ 

+    -  *name*: the group's name

+    -  returns: Path *group*: the path for the group object

+    -  Error *org.freedesktop.Accounts.Error.Failed*: no such group

+       exists

+ 

+ -  **ADD** **method** FindGroupByNameInDomain(String domain, String

+    name)

+ 

+    -  *name*: the group's name

+    -  *domain*: the group's domain name.

+    -  returns: Path *group*: the path for the group object

+    -  Error *org.freedesktop.Accounts.Error.Failed*: no such group

+       exists

+ 

+ -  **ADD** **signal** GroupAdded(Path group)

+ 

+    -  *path*: the path for the group's object

+    -  this signal is emitted when a group is created or appears in the

+       cache for a remote domain

+ 

+ -  **ADD** **signal** GroupDeleted(Path group)

+ 

+    -  *path*: the path for the group's object

+    -  this signal is emitted when a group is deleted or disappears from

+       the cache for a remote domain, though the latter is not expected

+       to happen often

+ 

+ -  **ADD** **method** ListDomains()

+ 

+    -  returns: *domains*: a list of domain name strings
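
+ The relationship between the existing whole-service lookups and the
+ proposed *\*InDomain* variants can be sketched with a toy in-memory
+ model (purely illustrative, not SSSD's implementation; the domain and
+ user names are made up):

```python
# FindUserByName searches every configured domain in order, while the
# proposed FindUserByNameInDomain narrows the scope to one domain. Lookup
# failure maps to org.freedesktop.Accounts.Error.Failed on the bus.

class ToyAccountsService:
    def __init__(self, domains):
        # domains: {domain_name: {login_name: object_path}}
        self._domains = domains

    def find_user_by_name_in_domain(self, domain, name):
        path = self._domains.get(domain, {}).get(name)
        if path is None:
            raise KeyError("org.freedesktop.Accounts.Error.Failed")
        return path

    def find_user_by_name(self, name):
        # The search is performed over all configured domains.
        for domain in self._domains:
            try:
                return self.find_user_by_name_in_domain(domain, name)
            except KeyError:
                pass
        raise KeyError("org.freedesktop.Accounts.Error.Failed")

    def list_domains(self):
        return list(self._domains)
```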

+ 

+ User Objects

+ ~~~~~~~~~~~~

+ 

+ Users are represented by objects as well. The object path name used for

+ an object need not contain any identifying information about the user,

+ so no assumptions should be made about it. That all said, a typical user

+ object path is currently */org/freedesktop/Accounts/User500*.

+ 

+ User objects typically provide several properties, methods for setting

+ the properties which can be written to, and one signal, all grouped as

+ part of the *org.freedesktop.Accounts.User* interface:

+ 

+ -  **property** Boolean AutomaticLogin, **method**

+    SetAutomaticLogin(Boolean enabled)

+ 

+    -  Whether the user should be logged in automatically at boot.

+ 

+ -  **property** Boolean Locked, **method** SetLocked(Boolean locked)

+ 

+    -  Whether the user's account is locked.

+ 

+ -  **property** Int AccountType, **method** SetAccountType(Int type)

+ 

+    -  The type of the account. 0 is a *Standard* user, while 1 indicates

+       an *Administrator*.

+ 

+ -  **property** Int PasswordMode, **method** SetPasswordMode(Int mode)

+ 

+    -  Password flags. 0 is normal, 1 indicates that the password must be

+       changed at next login, and 2 indicates that no password is

+       necessary.

+ 

+ -  **property** Boolean SystemAccount

+ 

+    -  Whether or not the account is a system account, such as *adm*.

+       System accounts aren't returned by *ListCachedUsers* and should

+       generally be ignored.

+ 

+ -  **property** String Email, **method** SetEmail(String email)

+ 

+    -  The user's email address.

+ 

+ -  **property** String HomeDirectory, **method** SetHomeDirectory(String

+    homedir)

+ 

+    -  The user's home directory. If changed, the user's files are moved.

+ 

+ -  **property** String IconFile, **method** SetIconFile(String path)

+ 

+    -  The user's icon file. Its contents are copied from the specified

+       location to a location managed by the service, and when the value

+       is read, the location of the service's copy is returned.

+ 

+ -  **property** String Language, **method** SetLanguage(String locale)

+ 

+    -  The user's preferred language.

+ 

+ -  **property** String Location, **method** SetLocation(String where)

+ 

+    -  The user's location, as a free-form string.

+ 

+ -  **property** String RealName, **method** SetRealName(String fullname)

+ 

+    -  The user's real, full name.

+ 

+ -  **property** String Shell, **method** SetShell(String path)

+ 

+    -  The user's login shell.

+ 

+ -  **property** String UserName, **method** SetUserName(String name)

+ 

+    -  The user's login name.

+ 

+ -  **property** String XSession, **method** SetXSession(String session)

+ 

+    -  The user's preferred graphical session, e.g. *gnome-fallback*.

+ 

+ -  **property** Int64 Uid

+ 

+    -  The user's UID. Note that it is read-only.

+    -  **MODIFY** this is allowed to not be set.

+ 

+ -  **property** Int64 LoginFrequency

+ 

+    -  The user's login frequency. Currently this is the number of times

+       the user appears in lastlog (or maybe utmp).

+ 

+ -  **property** String PasswordHint

+ 

+    -  The user's password hint.

+ 

+ -  **ADD** **property** String Domain

+ 

+    -  The user's domain.

+ 

+ -  **ADD** **property** Int CredentialLifetime

+ 

+    -  The number of seconds left before the user's credentials expire,

+       if the service is managing and monitoring some on the user's

+       behalf.

+ 

+ -  **method** SetPassword(String crypted, String hint)

+ 

+    -  Resets the password mode to normal.

+    -  Unlocks the account.

+    -  Currently takes a **crypt** string as a parameter.

+    -  **ADD** Error

+       *org.fedorahosted.SSSD.Error.PasswordMustBePlaintext*: this user's

+       password must be set as plaintext by calling *SetAuthenticator*.

+ 

+ -  **ADD** **method** FindGroups(Boolean direct, Boolean indirect)

+ 

+    -  *direct*: return groups which explicitly list the user as a member

+    -  *indirect*: return groups which have the user as a member by

+       virtue of having, as a member, a group which lists the user as a

+       member

+    -  returns: *groups*: an array of paths for the matched group objects

+ 

+ -  **signal** Changed()

+ 

+    -  Emitted when the user's properties change.

+ 

+ -  Any attempt to change a property value can result in these errors:

+ 

+    -  Error *org.freedesktop.Accounts.Error.PermissionDenied*: caller

+       lacks appropriate PolicyKit authorization

+    -  Error *org.freedesktop.Accounts.Error.Failed*: generic operation

+       failure

+ 

+ -  **ADD** **method** Authenticate((Array of Bytes) handle, (Array of

+    String) types, (Array of Struct(Int, Array of Variant)) responses)

+ 

+    -  (Array of Bytes) *handle*: opaque *handle* returned by a previous

+       call to Authenticate(), an empty array or a previously-obtained

+       *info.session* value on the first call

+    -  (Array of String) *types*: a list of enumerated types of

+       information which the caller can supply (see below)

+    -  (Array of Struct(Int id, (Array of Variant) data)) *responses*:

+       array of responses to request for information returned by a

+       previous call to Authenticate()

+ 

+       -  *id*: an identifier specific to this request which identifies

+          information being provided in response to a particular item

+          from a request

+       -  *data*: the information being provided in response to a

+          particular item requested by a previous call to Authenticate()

+ 

+    -  returns (Array of Bytes) *handle*: an opaque handle used to track

+       an ongoing authentication request

+    -  returns Boolean *more*: true if more information is needed; false

+       if authentication has succeeded (N.B.: failure is indicated by a

+       D-Bus-level error)

+    -  returns Int *timer*: amount of time the service is willing to wait

+       for answers to its requests for information, in seconds

+    -  returns (Array of Struct(Int id, String type, Variant prompt,

+       Boolean sensitive, Signature format)) *requests*: information and

+       requests for information

+ 

+       -  *id*: an identifier specific to this request which should be

+          used to identify the corresponding response when it is later

+          provided

+       -  *type*: a label which attempts to catalogue the various kinds

+          of information which may be provided or requested

+ 

+          -  *info.user*: user object (*prompt* is a Path), no input

+             requested (*format* is ignored)

+          -  *info.text*: user visible feedback (*prompt* is a String),

+             no input requested (*format* is ignored)

+          -  *input.text*: interactively-obtained string (*prompt* is a

+             String, *format*\ =String)

+ 

+             -  the service attempts to use this as infrequently as

+                possible

+ 

+          -  *input.boolean*: interactively-obtained boolean (*prompt* is

+             a String, *format*\ =Boolean)

+ 

+             -  the service attempts to use this as infrequently as

+                possible

+ 

+          -  *input.password*: current password (*prompt* is a String,

+             *format*\ =String)

+          -  *input.new-password*: new password value (*prompt* is a

+             String, *format*\ =String)

+          -  *input.otp*: current OTP value (*prompt* is a String,

+             *format*\ =String)

+          -  *input.otp-secret*: new OTP secret (*prompt* is a String,

+             *format*\ =Array of Byte)

+          -  *input.otp-next*: next OTP value (*prompt* is a String,

+             *format*\ =String)

+          -  *input.otp-new*: new OTP value (*prompt* is a String,

+             *format*\ =String)

+          -  *authz-data. ...*: authorization data returned on success,

+             the portion of the name after *authz-data.* is namespaced

+             either as an OID in text form or as a reversed domain name

+             (resembling a D-Bus interface name)

+          -  *info.cacheable*: an indicator that the calling application

+             is willing to accept results based on non-live (i.e. cached)

+             data

+          -  *info.session*: a handle for any SSO credentials obtained

+             during authentication (*prompt* is an Array of Bytes),

+             returned only when authentication succeeds, no input

+             requested; if the caller doesn't specify that it can accept

+             a handle, any SSO credentials which are obtained as a

+             side-effect of the authentication process (think: Kerberos

+             TGTs) are discarded; if the caller receives a session

+             handle, it accepts responsibility for eventually cleaning it

+             up

+          -  ...

+ 

+       -  *prompt*: as indicated by and appropriate for *type*

+ 

+          -  When an (Array of Byte) is expected, the *prompt* is usually

+             empty or an (Array of Byte) and the application is expected

+             to respond as indicated based only on *type*.

+ 

+       -  *sensitive*: when the user is supplying the value, whether that

+          value is sensitive information.

+ 

+          -  For example, passwords are often considered to be sensitive.

+ 

+       -  *format*: the D-Bus type of the data which should be returned

+ 

+          -  usually Boolean, Int64, String, or Array of Byte

+ 

+       -  The overlap between *input.text* and various other input types

+          is intentional, as it should allow applications and the service

+          to share contextual information in cases where both support it,

+          and to still be able to function (though at a less convenient

+          level, programmatically) when one or the other is ignorant of

+          the specifics of a particular authentication exchange. If a

+          password is needed, for example, applications which advertise

+          that they can provide both *input.text* and *input.password*

+          will be prompted specifically for the password, while

+          applications which only claim to be able to handle *input.text*

+          will be prompted via that means. Hopefully this will provide

+          some level of compatibility, even if it is less than ideal, as

+          input types are added.

+       -  As a rule, multiple requests for *input.text* type should not

+          be assumed to be multiple requests for the same information,

+          and *input.text* values should not be considered appropriate

+          for being cached.

+       -  The input type mechanism is notionally related to Kerberos

+          preauthentication and authorization data, particularly in that

+          some *requests* are merely informational, and it attempts to

+          lay the groundwork for eventually passing through binary

+          methods such as GSSAPI.

+ 

+ -  **ADD** **method** CancelAuthentication((Array of Bytes) handle)

+ 

+    -  (Array of Bytes) *handle*: opaque *handle* returned by a previous

+       call to Authenticate()

+ 

+ -  **ADD** **method** ClearSession((Array of Bytes) handle)

+ 

+    -  (Array of Bytes) *handle*: opaque *info.session* value returned by

+       a previous call to Authenticate()

+    -  Cleans up any resources being used to maintain the session's

+       credentials

+ 

+ -  **ADD** **method** SelectSession((Array of Bytes) handle, (Array of

+    String) types)

+ 

+    -  (Array of Bytes) *handle*: opaque *info.session* value returned by

+       a previous call to Authenticate()

+    -  (Array of String) *types*: a list of types of returned information

+       which the caller is able to usefully consume

+    -  returns (Array of Struct(String type, Variant value)) *info*:

+       information which the caller will need

+    -  Makes previously-obtained SSO credentials available for use by the

+       caller. When using Kerberos, the returned array includes an

+       *environment* value of type *Array of String*, one of which is a

+       KRB5CCNAME value which will be valid until the next time either

+       *SelectSession* or *ClearSession* is called, or the

+       *SessionCleared* signal is emitted. At this time, the only SSO

+       credentials which SSSD "knows" how to obtain are Kerberos

+       credentials, so the returned array will typically only contain an

+       *environment* member, but this may grow to include other data

+       items as additional authentication providers are added to SSSD.

+ 

+ -  **ADD** **method** SetAuthenticator((Array of Bytes) handle, (Array

+    of String) types, (Array of Struct(Int, Array of Bytes)) responses)

+ 

+    -  Same calling setup as *Authenticate*.

+ 

+ -  **ADD** **signal** AuthenticationOperationSucceeded((Array of Bytes)

+    handle)

+ 

+    -  Emitted when authentication or authenticator change succeeds.

+ 

+ -  **ADD** **signal** AuthenticationOperationFailed((Array of Bytes)

+    handle)

+ 

+    -  Emitted when authentication or authenticator change fails.

+ 

+ -  **ADD** **signal** AuthenticationOperationCanceled((Array of Bytes)

+    handle)

+ 

+    -  Emitted when authentication or authenticator change is canceled or

+       times out.

+ 

+ -  **ADD** **signal** SessionExpiring((Array of Bytes) session, Int

+    soon)

+ 

+    -  Emitted when the user's SSO credentials will soon need to be

+       refreshed, if the service is holding and monitoring credentials

+       on the user's behalf.

+    -  (Array of Bytes) *session*: opaque *info.session* value returned

+       by a previous call to Authenticate()

+    -  Int *soon*: the amount of time left, in seconds, before the

+       credentials expire.

+ 

+ -  **ADD** **signal** SessionCleared((Array of Bytes) session)

+ 

+    -  (Array of Bytes) *session*: opaque *info.session* value returned

+       by a previous call to Authenticate()

+    -  Emitted when the user's credentials are either explicitly cleared

+       or expire.

+ 

+ -  **ADD** **signal** SessionRefreshed((Array of Bytes) session)

+ 

+    -  (Array of Bytes) *session*: opaque *info.session* value returned

+       by a previous call to Authenticate()

+    -  Emitted when the user's credentials are refreshed, if the service

+       is managing and monitoring credentials on the user's behalf.

+ 

+ Group Objects (All New)

+ ~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Going forward, groups, which were previously not exposed via this API,

+ will also be represented by objects. The object path name used for an

+ object need not contain any identifying information about the group, so

+ no assumptions should be made about it. That said, a typical group

+ object path will be */org/freedesktop/Accounts/Domain2/Group500*.

+ 

+ Group objects will typically need to provide a few properties, methods

+ for setting the properties which can be written to, and one signal, all

+ grouped as part of the *org.freedesktop.Accounts.Group*, or more likely

+ an SSSD-specific, interface:

+ 

+ -  **ADD** **property** Boolean SystemGroup

+ 

+    -  Whether or not the group is a system group, such as *adm*.

+       System groups aren't returned by *ListCachedGroups* and should

+       generally be ignored.

+ 

+ -  **ADD** **property** String IconFile, **method** SetIconFile(String

+    path)

+ 

+    -  The group's icon file. Its contents are copied from the specified

+       location to a location managed by the service, and when the value

+       is read, the location of the service's copy is returned.

+ 

+ -  **ADD** **property** String GroupName, **method** SetGroupName(String

+    name)

+ 

+    -  The group's name.

+ 

+ -  **ADD** **property** Int64 Gid

+ 

+    -  The group's GID. This is read-only, optional, and may be left

+       unset.

+ 

+ -  **ADD** **signal** Changed()

+ 

+    -  Emitted when the group's properties change.

+ 

+ -  **ADD** **property** String Domain

+ 

+    -  The group's domain.

+ 

+ -  **ADD** **property** (array of Paths) Users

+ 

+    -  A list of the group's member user objects.

+ 

+ -  **ADD** **property** (array of Paths) Groups

+ 

+    -  A list of the group's member group objects.

+ 

+ -  **ADD** **method** AddUser(Path user)

+ 

+    -  *user*: the object path of the user to add to the group's list of

+       users

+    -  If the user's domain and the group's domain are different, this is

+       allowed to fail.

+ 

+ -  **ADD** **method** RemoveUser(Path user)

+ 

+    -  *user*: the object path of the user to remove from the group's

+       list of users

+ 

+ -  **ADD** **method** AddGroup(Path group)

+ 

+    -  *group*: the object path of the group to add to the group's list

+       of groups

+    -  If the groups are not in the same domain, this is allowed to fail.

+    -  If the domain does not support groups being members of groups,

+       this will fail.

+ 

+ -  **ADD** **method** RemoveGroup(Path group)

+ 

+    -  *group*: the object path of the group to remove from the group's

+       list of groups

+ 

+ -  Any attempt to change a property value or alter membership can result

+    in these errors:

+ 

+    -  Error *org.freedesktop.Accounts.Error.PermissionDenied*: caller

+       lacks appropriate PolicyKit authorization

+    -  Error *org.freedesktop.Accounts.Error.Failed*: generic operation

+       failure

+ 
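+ As an illustration only, the Group interface sketched above might be
+ described by D-Bus introspection XML along these lines (the interface
+ name and the exact D-Bus types are assumptions, not a settled design): ::
+ 
+     <node>
+       <interface name="org.freedesktop.Accounts.Group">
+         <property name="SystemGroup" type="b" access="read"/>
+         <property name="IconFile" type="s" access="read"/>
+         <property name="GroupName" type="s" access="read"/>
+         <property name="Gid" type="x" access="read"/>
+         <property name="Domain" type="s" access="read"/>
+         <property name="Users" type="ao" access="read"/>
+         <property name="Groups" type="ao" access="read"/>
+         <method name="SetIconFile"><arg name="path" type="s" direction="in"/></method>
+         <method name="SetGroupName"><arg name="name" type="s" direction="in"/></method>
+         <method name="AddUser"><arg name="user" type="o" direction="in"/></method>
+         <method name="RemoveUser"><arg name="user" type="o" direction="in"/></method>
+         <method name="AddGroup"><arg name="group" type="o" direction="in"/></method>
+         <method name="RemoveGroup"><arg name="group" type="o" direction="in"/></method>
+         <signal name="Changed"/>
+       </interface>
+     </node>
+ 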

+ Deficiencies

+ ~~~~~~~~~~~~

+ 

+ -  No indication of primary group [mvo] (Current assumption: primary

+    groups are not exposed.)

+ -  What are the semantics of system groups [mvo] (Current assumption:

+    there is no concept of system groups.)

+ -  What are the semantics of cached groups [stefw]

+ -  Why are domains not first class DBus objects [stefw]

+ -  Does the local domain have a special value/identifier/path? [mvo]

+    (Current assumption: empty string, but there is also LocalGroup,

+    similar to LocalUser.)

+ -  We should have FindGroups also on Group objects. [mvo]

+ -  'direct' argument to FindGroups seems unmotivated. [mvo]

@@ -0,0 +1,285 @@ 

+ Active Directory client access control

+ --------------------------------------

+ 

+ Related ticket(s):

+ 

+ -  `RFE:Add a new option

+    ad\_access\_filter <https://pagure.io/SSSD/sssd/issue/2082>`__

+ -  `RFE:Change the default of

+    ldap\_access\_order <https://pagure.io/SSSD/sssd/issue/1975>`__

+ -  `issues when combining the AD provider and

+    ldap\_access\_filter <https://pagure.io/SSSD/sssd/issue/1977>`__

+ 

+ Somewhat related:

+ 

+ -  `Document the best practices for AD access

+    control <https://pagure.io/SSSD/sssd/issue/2083>`__

+ 

+ Problem Statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ The recommended way of connecting a Linux client to an Active Directory

+ domain is using the `AD

+ provider <http://jhrozek.fedorapeople.org/sssd/1.11.0/man/sssd-ad.5.html>`__.

+ However, in the default configuration of the Active Directory provider,

+ only account expiration is checked. Very often, the administrator needs

+ to restrict the access to the client machine further, limiting the

+ access to a certain user, group of users, or using some other custom

+ filtering mechanism. In order to do so, the administrator is required to

+ use an alternative access control provider. However, none of the

+ alternatives provide the full required functionality for all users

+ resolvable by the AD provider; moreover, they are hard to configure. This

+ design page proposes extension of the AD access provider to address

+ these concerns.

+ 

+ Current access control options

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ With the existing SSSD, the administrator has two basic means to

+ restrict access to the Linux client - using the `simple access

+ control

+ provider <http://jhrozek.fedorapeople.org/sssd/1.11.0/man/sssd-simple.5.html>`__

+ or configuring the LDAP access control provider. Each approach has its

+ pros and cons.

+ 

+ Using the simple access provider

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The simple access provider grants or denies access based on the contents

+ of allow and deny lists. There are separate lists for user and group

+ names as well as allowed and denied objects.

+ 

+ The following example shows a configuration that grants access to a user

+ named ``tux`` and a group called ``linuxadmins``. ::

+ 

+      access_provider = simple

+      simple_allow_users = tux

+      simple_allow_groups = linuxadmins

+ 

+ -  Pros:

+ 

+    -  Easy to configure

+    -  Realmd provides an interface to configure the simple access

+       provider using its CLI

+ 

+ -  Cons:

+ 

+    -  Account expiration is not checked

+    -  Limited expressiveness. No way to combine several clauses

+    -  Does not align with the LDAP structure that Active Directory uses

+ 

+ Using the LDAP access provider

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The LDAP access provider offers a way to configure the access control

+ decision based on whether the user matches a preconfigured filter.

+ Moreover, the LDAP access provider also offers chaining other LDAP based

+ checks. For a vanilla AD environment, only the account expiration check

+ applies.

+ 

+ The following example illustrates a configuration that allows access to

+ those users who are members of the group named ``admins`` AND have a

+ valid home directory set, using the ``ldap_access_filter`` directive. The

+ users who match the configured filter are also checked for expiration

+ (``ldap_access_order`` contains ``expire``). ::

+ 

+         access_provider = ldap

+         ldap_access_order = filter, expire

+         ldap_account_expire_policy = ad

+         ldap_access_filter = (&(memberOf=cn=admins,ou=groups,dc=example,dc=com)(unixHomeDirectory=*))

+         ldap_sasl_mech = GSSAPI

+         ldap_sasl_authid = CLIENT_SHORTNAME$@EXAMPLE.COM

+         ldap_schema = ad

+ 

+ -  Pros:

+ 

+    -  Allows the administrator to base access control on a custom LDAP

+       filter, making it possible to combine several conditions

+    -  Conditions are not limited to user names or group membership

+ 

+ -  Cons:

+ 

+    -  Nontrivial and clumsy configuration that must include several low

+       level LDAP settings, otherwise set automatically by the AD

+       provider. Defeats the whole purpose of the AD provider

+    -  The admin needs to combine AD and LDAP providers. Judging by

+       experience from triaging support cases with Red Hat support, this

+       is a problem for many admins.

+    -  Account expiration check must be configured separately, which is

+       not obvious

+    -  No support for users from trusted AD domains

+    -  No realmd integration

+ 

+ Proposed solution

+ ~~~~~~~~~~~~~~~~~

+ 

+ The proposal is to add a new access filter configuration option to the

+ existing AD access provider. Adding the option to the AD provider would

+ greatly simplify the configuration when compared to the LDAP access

+ control, while maintaining the full expressiveness of

+ ``ldap_access_filter``. The new option would be called

+ ``ad_access_filter``. If the new option was set, then the AD access

+ provider would first match the entry against the filter in that option.

+ If the entry matched, then the account would be checked for expiration.

+ 

+ The following example illustrates a configuration similar to the one

+ above, using the proposed AD options: ::

+ 

+         access_provider = ad

+         ad_access_filter = (&(memberOf=cn=admins,ou=groups,dc=example,dc=com)(unixHomeDirectory=*))

+ 

+ The main advantage is simplified configuration. The admin doesn't have

+ to know or understand what "SASL ID" is.

+ 

+ In comparison with the two legacy solutions explained above:

+ 

+ -  Pros

+ 

+    -  Easy and intuitive configuration. Only one provider type is

+       configured

+    -  Sane defaults - always checks for expiration, also checks access

+       filter if configured that way

+    -  Would support users and groups from trusted domains by leveraging

+       the existing AD provider infrastructure

+ 

+ -  Cons

+ 

+    -  No realmd integration

+ 

+ Realmd integration

+ ~~~~~~~~~~~~~~~~~~

+ 

+ After a short discussion with the realmd upstream maintainer, it was

+ decided that these options do not fit the realmd use-cases well. If the

+ user needs to use such advanced techniques as LDAP filters, chances are

+ that they don't need a tool like realmd to set them up in the config

+ file.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ #. The default value of the AD access\_provider option should be

+    changed

+ 

+    -  Currently, if ``access_provider`` is not set explicitly, the

+       default is ``permit``, thus allowing even expired accounts

+    -  The new default would be ``ad``, checking account expiration even

+       with a minimal configuration

+ 

+ #. A new option would be added. The new option would be called

+    ``ad_access_filter``

+ #. The LDAP access provider must be extended to allow connecting to a GC

+    and support subdomains in general

+ 

+    -  Pass in ``struct sdap_domain`` and ``id_conn`` instead of using

+       the connection from ``sdap_id_ctx`` directly

+    -  The code must not read the ``sss_domain_info`` from ``be_ctx`` but

+       only from ``sdap_domain`` in order to support subdomain users

+ 

+ #. The AD access provider must call the improved LDAP access provider

+    internally with the right connection

+ 

+    -  The default should be GC

+    -  If POSIX attributes are in use and GC lookup wouldn't match,

+       optionally fall back to LDAP. This fallback could be tried just

+       once to speed up subsequent access control

+ 

+ #. The default chain of LDAP access filter the AD provider sets

+    internally must be changed.

+ 

+    -  Currently AD provider sets ``ldap_access_order=expire``. If (and

+       only if) ``ad_access_filter`` was set, the LDAP chain would become

+       ``ldap_access_order=filter,expire``

+ 

+ Parsing the ``ad_access_filter`` option

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ #. The ``ad_access_filter`` option is a comma-separated list of filters

+    that apply globally, per-domain or per-forest. The most specific

+    match is used

+ #. If the ``ad_access_filter`` value starts with an opening bracket

+    ``(``, it is used as a filter for all entries from all domains and

+    forests

+ 

+    -  example:

+       ``(&(memberOf=cn=admins,ou=groups,dc=example,dc=com)(unixHomeDirectory=*))``

+ 

+ #. More advanced format can be used to restrict the filter to a specific

+    domain or a specific forest. This format is ``KEYWORD:NAME:FILTER``

+ 

+    -  KEYWORD can be one of ``DOM`` or ``FOREST``

+ 

+       -  KEYWORD can be missing

+ 

+    -  NAME is a label.

+ 

+       -  if KEYWORD equals ``DOM`` or is missing completely, the filter

+          is applied to users from the domain named NAME only

+       -  if KEYWORD equals ``FOREST``, the filter is applied to users

+          from the forest named NAME only

+ 

+    -  examples of valid filters are:

+ 

+       -  apply filter on domain called dom1 only:

+ 

+          -  ``dom1:(memberOf=cn=admins,ou=groups,dc=dom1,dc=com)``

+ 

+       -  apply filter on domain called dom2 only:

+ 

+          -  ``DOM:dom2:(memberOf=cn=admins,ou=groups,dc=dom2,dc=com)``

+ 

+       -  apply filter on forest called EXAMPLE.COM only:

+ 

+          -  ``FOREST:EXAMPLE.COM:(memberOf=cn=admins,ou=groups,dc=example,dc=com)``

+ 

+ #. If no filter matches the user's domain, access is denied

+ 

+    -  example

+       ``ad_access_filter = dom1:(memberOf=cn=admins,ou=groups,dc=dom1,dc=com), dom2:(memberOf=cn=admins,ou=groups,dc=dom2,dc=com)``,

+       user logs in from dom3

+ 
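+ The parsing rules above can be sketched in C. ``classify_filter`` is a
+ hypothetical helper (it is not existing SSSD code), and its naive split
+ on ``:`` would need refinement for LDAP filters that themselves contain
+ colons (e.g. extensible-match rules):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

enum filter_scope { SCOPE_GLOBAL, SCOPE_DOMAIN, SCOPE_FOREST };

/* Hypothetical helper: classify one element of the ad_access_filter
 * list.  For scoped filters, *name/*name_len point at the NAME label
 * and *filter at the LDAP filter itself. */
enum filter_scope
classify_filter(const char *spec, const char **name, size_t *name_len,
                const char **filter)
{
    const char *first, *second;

    *name = NULL;
    *name_len = 0;
    *filter = spec;

    /* Starts with an opening bracket: applies to all domains/forests */
    if (spec[0] == '(') {
        return SCOPE_GLOBAL;
    }

    first = strchr(spec, ':');
    second = first != NULL ? strchr(first + 1, ':') : NULL;
    if (first == NULL) {
        /* Malformed input; treated as a global filter for simplicity */
        return SCOPE_GLOBAL;
    }

    if (second != NULL && strncmp(spec, "FOREST:", 7) == 0) {
        *name = first + 1;
        *name_len = (size_t)(second - first - 1);
        *filter = second + 1;
        return SCOPE_FOREST;
    }

    if (second != NULL && strncmp(spec, "DOM:", 4) == 0) {
        *name = first + 1;
        *name_len = (size_t)(second - first - 1);
        *filter = second + 1;
        return SCOPE_DOMAIN;
    }

    /* KEYWORD missing: NAME:FILTER is a per-domain filter */
    *name = spec;
    *name_len = (size_t)(first - spec);
    *filter = first + 1;
    return SCOPE_DOMAIN;
}
```

+ A real implementation would also need to handle whitespace around the
+ comma separators of the list.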

+ Contingency plan

+ ~~~~~~~~~~~~~~~~

+ 

+ None needed. The existing options would still exist and function as they

+ do now.

+ 

+ How to test

+ ~~~~~~~~~~~

+ 

+ #. Check that ``access_provider=ad`` without any other options allows

+    non-expired users

+ #. Check that ``access_provider=ad`` without any other options denies

+    expired users

+ #. Test that setting ``ad_access_filter`` restricts access to users who

+    match the filter

+ 

+    -  test that an expired user, even though they match the filter, is

+       denied access

+    -  this test must include users from the primary domain as well as a

+       sub domain

+    -  Different filters should be tested to make sure the most specific

+       filter applies

+ 

+       -  example: add a restrictive filter for dom1 and permissive

+          filter without specifying the domain. A user from dom1 must be

+          denied access, while a user from other domain must be allowed

+          access

+ 

+ #. When access is denied, the SSSD PAM responder must return a

+    reasonable return code (6)

+ 
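+ For example, the most-specific-match scenario above could be exercised
+ with a configuration along these lines (the domain and group names are
+ illustrative only): ::
+ 
+         access_provider = ad
+         ad_access_filter = dom1:(memberOf=cn=linuxadmins,cn=users,dc=dom1,dc=com), (objectClass=user)
+ 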

+ Future and optional enhancements

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ In the future, we should extend the ``access_provider`` option itself

+ and allow chaining access providers. This enhancement would allow even

+ more flexibility and would allow the administrator to combine different

+ access providers, but is outside the scope of the change described by

+ this design page.

+ 

+ Author(s)

+ ~~~~~~~~~

+ 

+ -  Jakub Hrozek <`jhrozek@redhat.com <mailto:jhrozek@redhat.com>`__>

+ -  Sumit Bose <`sbose@redhat.com <mailto:sbose@redhat.com>`__>

@@ -0,0 +1,209 @@ 

+ Use Active Directory's DNS sites

+ --------------------------------

+ 

+ Related ticket(s):

+ 

+ -  `RFE sssd should support DNS

+    sites <https://pagure.io/SSSD/sssd/issue/1032>`__

+ 

+ Problem Statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ In larger Active Directory environments there is typically more than one

+ domain controller. Some of them are used for redundancy, others to build

+ different administrative domains. But in environments with multiple

+ physical locations, each location often has at least one local domain

+ controller to reduce latency and network load between the locations.

+ 

+ Now clients have to find the local or nearest domain controller. For

+ this, the concept of sites was introduced, where each physical location can

+ be seen as an individual site with a unique name. The naming scheme for

+ DNS service records was extended (see e.g.

+ `http://technet.microsoft.com/en-us/library/cc759550(v=ws.10).aspx <http://technet.microsoft.com/en-us/library/cc759550(v=ws.10).aspx>`__)

+ so that clients can first try to find the needed service in the local

+ site and can fall back to look in the whole domain if there is no local

+ service available.

+ 

+ Additionally, clients have to find out which site they belong to.

+ This must be done dynamically because clients might move from one

+ location to a different one on a regular basis (roaming users). For this, a

+ special LDAP request, the (C)LDAP ping

+ (`http://msdn.microsoft.com/en-us/library/cc223811.aspx <http://msdn.microsoft.com/en-us/library/cc223811.aspx>`__),

+ was introduced.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ General considerations

+ ^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The solution in SSSD should take into account that other types of

+ domains, e.g. a FreeIPA domain, want to implement their own scheme to

+ discover the nearest service of a certain type. A plugin interface where

+ the configured ID provider can implement methods to determine the

+ location of the client looks like the most flexible solution here.

+ 

+ Since the currently available (AD sites) or discussed schemes

+ (`http://www.freeipa.org/page/V3/DNS\_Location\_Mechanism <http://www.freeipa.org/page/V3/DNS_Location_Mechanism>`__)

+ use DNS SRV lookups, the plugin will be called in this code path. Since

+ network lookups will be needed the plugin interface must allow

+ asynchronous operations. SSSD prefers the tevent\_req style for

+ asynchronous operations where the plugin has to provide a \*\_send and a

+ \*\_recv method. Besides a list of server names which will be handled as

+ primary servers, like the servers currently returned by DNS SRV lookups,

+ the \*\_recv method can additionally return a list of fallback servers

+ to make full use of the current fallback infrastructure on SSSD.

+ 

+ Sites specific details

+ ^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The plugin of the AD provider will do the following steps:

+ 

+ #. do a DNS lookup to find any DC

+ #. send a CLDAP ping to the first DC returned to get the client's site

+ #. after a timeout send a CLDAP ping to the next DC on the list

+ #. if after an overall timeout no response is received the CLDAP lookups

+    will be terminated and the client's site is unknown

+ #. if the client's site is known, a DNS SRV lookup for

+    \_service.\_protocol.site-name.\_sites.domain.name for primary servers

+    and \_service.\_protocol.domain.name for backup servers is sent,

+    otherwise only one with \_service.\_protocol.domain.name is done

+ #. if primary and backup server lists are available all primary servers

+    are removed from the backup list

+ 
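+ As a sketch of how the two SRV names in step 5 could be constructed
+ (``build_srv_name`` is a hypothetical helper, not existing SSSD code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: build the SRV query name for a service.  When
 * the client's site is known, the site-specific name is produced;
 * otherwise the plain domain-wide name is used.  Returns the number of
 * characters written, as snprintf() does. */
int build_srv_name(char *buf, size_t len, const char *service,
                   const char *protocol, const char *site,
                   const char *domain)
{
    if (site != NULL) {
        /* _service._protocol.site-name._sites.domain.name */
        return snprintf(buf, len, "_%s._%s.%s._sites.%s",
                        service, protocol, site, domain);
    }
    /* _service._protocol.domain.name */
    return snprintf(buf, len, "_%s._%s.%s", service, protocol, domain);
}
```

+ The site-less form would be used both for the backup-server lookup and
+ as the fallback when the client's site is unknown.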

+ The results of the different steps should be available with one of the

+ debug levels reserved for tracing to make debugging easier and to allow

+ acceptance tests to validate the behavior with the help of the debug

+ logs.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ struct resolv\_ctx should get three new members: two function pointers

+ to hold the \*\_send and \*\_recv methods of the plugin and a pointer to

+ private data for the plugin. Since most of the structs related to the

+ fail-over and resolver code are private a setter method to add the

+ pointers should be added as well. This is more flexible than adding

+ additional arguments to resolv\_init().

+ 

+ Besides the service type, protocol, and domain, which are all

+ available in struct srv\_data, the plugin should get a tevent context

+ and its private data as arguments. With this the plugin interface might

+ look like: ::

+ 

+     typedef struct tevent_req *(*location_plugin_send_t)(TALLOC_CTX *mem_ctx, struct tevent_context *ev, const char *service, const char *protocol, const char *domain, void *private_data);

+     typedef int (*location_plugin_recv_t)(TALLOC_CTX *mem_ctx, struct tevent_req *req, int *status, int *timeouts, struct ares_srv_reply **primary_reply_list, struct ares_srv_reply **backup_reply_list);

+ 

+ If a plugin is defined, it can then be called in resolve\_srv\_cont()

+ instead of get\_srv\_query(). If it is not defined, either the result of

+ get\_srv\_query() can be used or a default request with the same

+ interface as the plugin can be used. I think the latter would make

+ the code flow easier to follow.

+ 

+ Additionally, if a backup server list is returned, the results must be

+ added to the server list in resolve\_srv\_done().

+ 

+ Finding a DC for the CLDAP ping

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ To find any DC in the domain, Samba looks for \_ldap.\_tcp.domain.name.

+ I would suggest using \_ldap.\_tcp.domain.name for the SSSD

+ implementation as well.

+ 

+ Sending the CLDAP ping

+ ^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The CLDAP ping is an LDAP search request with a filter like ::

+ 

+     (&(&(DnsDomain=ad.domain)(DomainSid=S-1-5-21-1111-2222-3333))(NtVer=0x01000016))

+ 

+ and the attribute "NetLogon". The flags given with the NtVer component

+ of the search filter will be different for a domain member (AD provider)

+ and an IPA server in an environment with trusts (IPA provider).

+ 

+ A domain member will belong to a site and the following flags from

+ /usr/include/samba-4.0/gen\_ndr/nbt.h should be used:

+ 'NETLOGON\_NT\_VERSION\_5 \| NETLOGON\_NT\_VERSION\_5EX \|

+ NETLOGON\_NT\_VERSION\_IP'. A trusted server does not belong to one of

+ the sites of the trusting domain, so it can only ask for the closest site

+ with 'NETLOGON\_NT\_VERSION\_5 \| NETLOGON\_NT\_VERSION\_5EX \|

+ NETLOGON\_NT\_VERSION\_WITH\_CLOSEST\_SITE'. Maybe

+ NETLOGON\_NT\_VERSION\_WITH\_CLOSEST\_SITE is useful for a domain member

+ as well if e.g. the services on the local site are not available.

+ 

+ Parsing the server response

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The server response is a single attribute "NetLogon" which is a binary

+ blob containing multiple NDR encoded values. This value can be decoded

+ with ndr\_pull\_netlogon\_samlogon\_response() from the Samba library

+ libndr-nbt.

+ 

+ Side note about struct resolv\_ctx and the usage of resolv\_init()

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ In previous discussions it was said that resolv\_init() should be only

+ called once during the initialization of a provider, preferably from the

+ common responder code. This means that there is only one instance of the

+ resolv\_ctx for the whole provider.

+ 

+ Currently resolv\_init() is called at two other places as well, in

+ ipa\_dyndns.c and sdap\_async\_sudo\_hostinfo.c. I think the only reason

+ for calling resolv\_init() at those two places is that both need to

+ call some low-level resolver routines which take a resolv\_ctx as a

+ parameter, and that there is no easy way to get the resolv\_ctx because

+ it is hidden in a private struct. Instead of adding an appropriate

+ getter method which returns the current resolv\_ctx, resolv\_init() was

+ called a second time.

+ 

+ If the resolv\_init() calls are removed from those two places with the

+ help of a getter method or similar, I think the prev and next members

+ can be removed from struct resolv\_ctx as well, because there will not

+ be a list of resolver contexts, but only one.

+ 

+ How to test

+ ~~~~~~~~~~~

+ 

+ When this feature is tested, the following scenarios can be considered:

+ 

+ AD domain only has a single site

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ -  site name might be 'Default-First-Site-Name' but it can be renamed or

+    localized as well

+ -  SSSD should be able to discover the site, e.g.

+    'Default-First-Site-Name'

+ -  SSSD should connect to any DC.

+ 

+ AD domain has sites but the local site of the SSSD client has no domain controller

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ -  SSSD should be able to discover the local site

+ -  SSSD should connect to any DC

+ 

+ AD domain has sites and the local site of the SSSD client has a domain controller

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ -  SSSD should be able to discover the local site

+ -  SSSD should connect to a DC from the local site

+ 

+ Besides inspecting the log files with a high debug level, the connection

+ to the domain controller can also be verified with the netstat or ss

+ utilities.

+ 

+ Useful links

+ ~~~~~~~~~~~~

+ 

+ -  `How DNS Support for Active Directory

+    Works <http://technet.microsoft.com/en-us/library/cc759550(v=ws.10).aspx>`__

+ -  `LDAP

+    Ping <http://msdn.microsoft.com/en-us/library/cc223811.aspx>`__

+ -  `Domain Controller Response to an LDAP

+    Ping <http://msdn.microsoft.com/en-us/library/cc223813.aspx>`__

+ -  `NETLOGON\_NT\_VERSION Options

+    Bits <http://msdn.microsoft.com/de-de/library/cc223801.aspx>`__

+ 

+ Author(s)

+ ~~~~~~~~~

+ 

+ Sumit Bose <`sbose@redhat.com <mailto:sbose@redhat.com>`__>

@@ -0,0 +1,320 @@ 

+ Active Directory client DNS updates

+ -----------------------------------

+ 

+ Related ticket(s):

+ 

+ -  `RFE AD dyndns

+    updates <https://pagure.io/SSSD/sssd/issue/1504>`__

+ 

+ Problem Statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ Clients enrolled to an Active Directory domain may be allowed to update

+ their DNS records stored in AD dynamically. At the same time, Active

+ Directory servers support DNS aging and scavenging, which means that

+ stale DNS records might be removed from AD after a period of inactivity.

+ 

+ While DNS scavenging is not enabled on Active Directory servers by

+ default, the SSSD should support this use case and refresh its DNS

+ records to simulate the behavior of Windows AD clients and keep their

+ address records from being removed if scavenging is used. The SSSD

+ should also enable the clients to update their DNS records if their IP

+ address changes.

+ 

+ Overview of Windows client side DNS updates

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ This section provides a brief overview of how Windows clients may update

+ their DNS records and how scavenging is configured and performed in a

+ Windows domain. For more complete information, please follow the links

+ at the bottom of this page.

+ 

+ Windows Resource Record information

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ To be able to detect if the resource record is stale, every dynamically

+ created RR in the Windows DNS has a timestamp that is updated with the

+ dynamic update if scavenging is enabled. Manually created DNS records do

+ not have a timestamp. In order to update the timestamp, the DNS records

+ are refreshed periodically even if they actually haven't changed, just

+ to bump the timestamp.

+ 

+ A special timestamp value of 0 can be set on a resource record,

+ indicating unlimited lifetime of the record. Such a record is never

+ scavenged.

+ 

+ Update and refresh

+ ^^^^^^^^^^^^^^^^^^

+ 

+ When a Windows client updates its DNS information, it may perform either

+ an update or a refresh.

+ 

+ -  an *update* is performed when the IP address of a client changes.

+    Involves a refresh and a change of the IP address(es).

+ -  a *refresh* does not change the IP addresses themselves, but rather

+    only updates the timestamp of an existing resource record, keeping it

+    from being scavenged.

+ 

+ In order to maintain a heartbeat on the resource records, the Windows

+ clients perform updates and/or refreshes under conditions outlined in

+ the next section.

+ 

+ Scavenging timeouts

+ ^^^^^^^^^^^^^^^^^^^

+ 

+ In the zone properties, there are two timeout settings that affect

+ scavenging:

+ 

+ -  No-refresh interval - the minimum time after the last refresh during

+    which the record cannot be refreshed again

+ -  Refresh interval - the time window during which refreshes are

+    allowed. After the refresh interval passes, the stale records can be

+    scavenged. In other words, the refresh interval starts at

+    ``timestamp + no_refresh_interval``.
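+ 

+ As a concrete example with assumed values: if both intervals are set to

+ 7 days, a record refreshed at time T cannot be refreshed again until

+ T + 7 days, may be refreshed between T + 7 and T + 14 days, and becomes

+ eligible for scavenging at T + 14 days if no refresh arrived: ::

+ 

+         T             T + 7 days          T + 14 days

+         |-- no-refresh --|---- refresh ----|--- can be scavenged --->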

+ 

+ Windows clients update and refresh intervals

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ For Windows clients, refreshes or updates generally occur for the

+ following reasons:

+ 

+ -  the computer is restarted

+ -  the DHCP lease is renewed

+ -  periodically, every 24 hours by default

+ 

+    -  this is configurable in the Windows registry using the

+       ``DefaultRegistrationRefreshInterval`` key under the

+       ``HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TcpIp\Parameters``

+       subkey

+ 

+ The SSSD updates should be modeled to be close to what the Windows

+ clients do.

+ 

+ SSSD clients refresh intervals

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The SSSD would perform the dynamic DNS update or refresh under the

+ following conditions:

+ 

+ -  the back end goes online

+ 

+    -  this would also cover the case where the computer is restarted.

+       For long-running deployments where the SSSD is almost never

+       offline, the back end would only ever go online after bootup

+ 

+ -  periodically based on a configuration option

+ 

+    -  the configuration option could be named

+       ``dyndns_refresh_interval`` or similar and it would default to 24

+       hours

+    -  the granularity will be seconds. The AD interface also allows

+       setting the refresh and no-refresh intervals in hours, so our

+       granularity should not be any coarser. Seconds also make it easier

+       to express other values, such as ones that map to DHCP leases.

+    -  the admin could, for example, set the option to the same value as

+       the DHCP lease to simulate the case where Windows workstations

+       refresh their address after the lease is renewed

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Because DNS record scavenging is not enabled by default on the server

+ side, the client-side DNS updates would be off by default as well. A new

+ boolean configuration option, called ``dyndns_update``, would control

+ whether the DNS update should be performed.

+ 

+ Addresses used during the update

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ We will reuse the mechanism from the IPA provider, where by default the

+ address used for the LDAP connection to the AD server is sent in the

+ update. Optionally, for machines that use IP aliasing, or setups that

+ wish to update both the IPv4 and IPv6 addresses of an interface at the

+ same time, there will be an option ``dyndns_iface``.

+ 

+ Unlike the IPA dynamic DNS update, where the PTR record is generated by

+ the bind dyndb plugin, AD does not update the PTR record on its own when

+ only the A/AAAA record is updated. To keep the forward and reverse zones

+ in sync, the AD dynamic update message would therefore also update the

+ PTR records. The PTR record update would not be on by default and could

+ be turned on by setting an option (perhaps ``dyndns_update_ptr``) to

+ true.
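+ 

+ As an illustration, the input fed to ``nsupdate`` for a client with a

+ single address might look like the following (host name, address and TTL

+ are example values); the PTR portion would only be generated when the

+ PTR update option is enabled: ::

+ 

+         update delete client1.ad.example.com. A

+         update add client1.ad.example.com. 3600 A 192.0.2.10

+         send

+         update delete 10.2.0.192.in-addr.arpa. PTR

+         update add 10.2.0.192.in-addr.arpa. 3600 PTR client1.ad.example.com.

+         send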

+ 

+ Future and optional enhancements

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ -  Currently the information on whether scavenging is enabled and how

+    often it is performed is stored in GPOs. When SSSD gains the ability

+    to process Group Policies, we would add a new special value to the

+    periodic update option that would tell the SSSD to simply honour

+    the Group Policies.

+ -  We could also integrate with netlink to perform IP address refresh on

+    DHCP lease renewals. This could be filed as a separate ticket and

+    implemented later.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ For the update itself, we can simply use the nsupdate utility the way we

+ use it in the IPA provider. The update code is already there; it is

+ mostly a matter of making the code IPA-agnostic.

+ 

+ One change compared to the IPA code would be that IPA only sends the

+ refresh when the addresses change, to avoid unnecessary zone transfers

+ on the IPA server. As stated above, Windows clients typically refresh

+ their address even if nothing changed, so our update code would run

+ unconditionally, too, based on timed events.

+ 

+ #. The use of ``resolv_init`` in the dynamic DNS update code should be

+    inspected. If it is not needed anymore and the resolver code could

+    already be told per-request to only go to DNS and ignore

+    ``/etc/hosts``, the initialization should be removed.

+ #. A new module shared between IPA and AD providers shall be created.

+    This module will contain generic functions related to dynamic DNS

+    update such as:

+ 

+    -  a variant of ``ipa_dyndns_add_ldap_iface`` decoupled from IPA

+       dependencies

+    -  function to gather all addresses of an interface

+    -  utility functions

+ 

+ #. The existing ``fork_nsupdate_send`` request would be split out to a

+    generic request that calls nsupdate with a specified message. This

+    request would be placed in the module created in the previous step.

+    The IPA provider would be converted to use this new generic request.

+    The interface might look like: ::

+ 

+            struct tevent_req *be_nsupdate_send(struct tevent_context *ev, const char *nsupdate_msg);

+            errno_t be_nsupdate_recv(struct tevent_req *req, int *child_retval);

+ 

+ #. In the AD provider, a variant of IPA dyndns code would be created,

+    using AD specific data structures and options. This interface would

+    consist of a tevent request that would wrap ``fork_nsupdate_send``

+    using ``struct ad_options`` and an initializer function called on

+    provider startup.

+ #. If the ``dyndns_update`` option was set to ``true``, then the AD

+    provider would:

+ 

+    -  set up a periodic task running every ``dyndns_refresh_interval``

+       seconds that updates the DNS records

+    -  set up an online callback to run the DNS update when the back end

+       goes online

+ 

+ List of all new configuration options

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ During the design discussion, it was decided that the new options should

+ not include a provider-specific prefix but rather be provider-agnostic,

+ to ease sharing the code and possibly allow other providers to use

+ dynamic DNS updates as well. The new options are:

+ 

+ #. ``dyndns_update`` ``(bool)`` - whether to perform the dynamic DNS

+    update. Defaults to false.

+ #. ``dyndns_refresh_interval`` ``(integer)`` - how often to run the

+    periodic task to refresh the resource record

+ #. ``dyndns_iface`` ``(string)`` - instead of updating the DNS with the

+    address used to connect to LDAP, which is the default, use all

+    addresses configured on a particular interface

+ #. ``dyndns_update_ptr`` ``(bool)`` - whether to also update the reverse

+    zone when updating the forward zone

+ #. ``dyndns_auth`` ``(string)`` - how the ``nsupdate`` utility should

+    authenticate to DNS. Supported values would be ``gss-tsig`` and

+    ``none``. The IPA and AD providers would default to ``gss-tsig``. In

+    1.10 this option would be undocumented, and the only providers that

+    would document the other options in their man pages would be IPA and

+    AD. Future expansion of this feature into other providers would be as

+    easy as hooking online callbacks into the dynamic DNS update handler.
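+ 

+ Put together, a hypothetical AD domain configuration using these options

+ could look like this (the domain name, interface and interval are

+ example values): ::

+ 

+         [domain/ad.example.com]

+         id_provider = ad

+         dyndns_update = true

+         dyndns_update_ptr = true

+         dyndns_iface = eth0

+         # refresh once a day, mirroring the Windows client default

+         dyndns_refresh_interval = 86400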

+ 

+ The existing ``ipa_dyndns_update``, ``ipa_dyndns_ttl`` and

+ ``ipa_dyndns_iface`` options would map to these new options. The

+ ``sssd-ipa`` manual page would be amended to list the new options

+ primarily and also list the old ones as a fallback, which would

+ eventually be removed.

+ 

+ How to test

+ ~~~~~~~~~~~

+ 

+ #. Test that forward and reverse zone updates work

+ 

+    -  Make sure DNS updates are enabled on the zone

+ 

+       -  Right-click the zone, select "Properties", and open the
+          "General" tab

+       -  There is a combo-box labeled "Dynamic updates". Toggle it to

+          "Secure only".

+       -  Click "Apply"

+ 

+    -  Prepare a client with dynamically updated DNS address

+ 

+       -  the easiest way is to join the client with realmd -

+          ``realm join ad.domain.example.com``

+ 

+    -  Test updates when the address has changed

+ 

+       -  Change the address of a client

+       -  Perform an action that would trigger an online callback such as

+          login

+       -  In the AD MMC check if the DNS address is the same as the new

+          address on the client

+       -  Depending on the settings of ``dyndns_iface`` or

+          ``dyndns_update_ptr`` also check if all expected addresses have

+          been updated in both forward and reverse zones.

+ 

+    -  Test periodic refresh

+ 

+       -  Set the periodic refresh (``dyndns_refresh_interval`` in this

+          document) to some low value

+       -  Wait until that value passes or modify the system time

+       -  The timestamp of the resource records should change after

+          SSSD runs its periodic task. The timestamp will be rounded down

+          to the nearest hour by AD.

+ 

+ #. Test DNS scavenging

+ 

+    -  Enroll two SSSD clients into AD

+ 

+       -  Turn one of them off after enrollment. This client will be

+          scavenged.

+       -  Leave the other one up and set its ``dyndns_refresh_interval`` to

+          a value shorter than the scavenging interval

+ 

+    -  Enable DNS scavenging on the server

+ 

+       -  In the DNS MMC console, right-click the DNS server in the tree

+          view, select Properties and navigate to the "Advanced" tab

+       -  Enable the "Enable automatic scavenging of stale records"

+          toggle and select a meaningful period

+       -  Click "Apply"

+ 

+    -  Enable DNS scavenging for the zone

+ 

+       -  Open the DNS administrative console

+       -  Right-click the zone, select "Properties", and open the
+          "General" tab.

+       -  Click the "Aging" button

+       -  Enable the "Scavenge stale resource records" toggle

+       -  Set the no refresh and refresh interval to a low value.

+       -  Check the "This zone can be scavenged after" text box. It

+          should list a date and time shortly in the future.

+ 

+    -  Let the scavenging interval pass

+ 

+       -  The client that was turned off after enrollment should be

+          scavenged. You should no longer be able to see its records in

+          the DNS zones on the server.

+       -  The other client's DNS records should remain intact in the DNS

+          MMC console

+ 

+ Links and resources

+ ~~~~~~~~~~~~~~~~~~~

+ 

+ -  `Understanding aging and

+    scavenging <http://technet.microsoft.com/en-us/library/cc759204%28v=ws.10%29.aspx>`__

+ -  `Using DNS Aging and

+    Scavenging <http://technet.microsoft.com/en-us/library/cc757041%28v=ws.10%29.aspx>`__

+ -  `Don't be afraid of DNS Scavenging. Just be patient. by MSFT

+    Networking

+    Team <http://blogs.technet.com/b/networking/archive/2008/03/19/don-t-be-afraid-of-dns-scavenging-just-be-patient.aspx>`__

+ 

+ Author(s)

+ ~~~~~~~~~

+ 

+ -  Jakub Hrozek <`jhrozek@redhat.com <mailto:jhrozek@redhat.com>`__>

@@ -0,0 +1,106 @@ 

+ Fix DNS site a client is using

+ ==============================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2486 <https://pagure.io/SSSD/sssd/issue/2486>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ Even though the Active Directory provider is able to leverage DNS sites,

+ the site discovery is always automatic. There is no way to ``pin`` a

+ particular client to a particular site. This design document describes

+ a way to do so.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ The site discovery relies on the client being part of a subnet. It is not

+ always practical or even possible to assign Linux machines to the right

+ subnet. Still, these clients should be able to leverage the nearest AD

+ site, even at the expense of manual configuration in ``sssd.conf``.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The SSSD will gain a new AD provider option that would, if AD sites are

+ enabled, override any dynamically discovered sites. This option would

+ pin the client to a particular site not only for primary domain but also

+ for subdomains. The discovery search would only be used to find the

+ forest we are enrolled with.

+ 

+ For the Global Catalog service, discovery of the primary and backup

+ domains would then be defined as follows:

+ 

+ -  primary domain - ``$HARDCODED_SITE._sites.$FOREST``

+ -  backup domain - ``$FOREST``

+ 

+ For pure LDAP searches, the domains would then be defined as:

+ 

+ -  primary domain - ``$HARDCODED_SITE._sites.$DOMAIN``

+ -  backup domain - ``$DOMAIN``

+ 

+ Above, $FOREST is auto-discovered and $DOMAIN is either the SSSD domain

+ name as defined in the config file (for the main SSSD domain) or

+ autodiscovered from object of class ``trustedDomain``.

+ 

+ In both cases, the full DNS search consists of

+ ``_$service._$protocol.$domain``.
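+ 

+ For example, with a hard-coded site name of ``FirstSite`` (an example

+ value) and an SSSD domain ``ad.example.com``, the LDAP SRV lookups would

+ be tried in this order: ::

+ 

+         _ldap._tcp.FirstSite._sites.ad.example.com    (pinned site)

+         _ldap._tcp.ad.example.com                     (backup)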

+ 

+ Especially for trusted domains, the overridden search domain might not

+ return anything, but the DNS resolver code is built such that it

+ iterates over search domains until the search yields some result.

+ 

+ Authentication against trusted domains

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ For trusted domains, we currently always talk to a local DC which gives

+ libkrb5 a referral to a trusted-domain specific DC that handles

+ authentication against a KDC from the trusted domain. This process is

+ completely opaque to SSSD, which means that Kerberos authentication

+ doesn't take the sites into account at all.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The SSSD AD provider would gain a new option called ``ad_site``. This

+ option would be unset by default.

+ 

+ The SRV initialization function ``ad_srv_plugin_ctx_init()`` needs to be

+ adjusted to accept a site override as a ``const char *site_override``

+ since the site name is just a string. In the default case, where the

+ option is unset, this parameter would be set to NULL. In any case, the

+ ``ad_get_client_site_send()/recv()`` request would run to completion

+ since we need to learn the forest name anyway. If the new option is set,

+ then the caller of ``ad_get_client_site_recv()`` would still read the

+ forest value, but ignore the site value and use the value of the

+ ``ad_site`` option instead.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ A new option, ``ad_site``, would be added as described above. The option

+ would be both described in the man pages and implemented in the

+ configAPI.
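+ 

+ A minimal configuration pinning a client to a site might look like this

+ (the domain and site names are example values): ::

+ 

+         [domain/ad.example.com]

+         id_provider = ad

+         ad_site = BranchOffice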

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ The best testing would be performed using an AD test environment

+ consisting of at least two servers in the same domain. To test, join

+ both DCs to the same domain. Create two sites such that the IP address

+ of your SSSD client would place it in one of them.

+ 

+ Make sure that, by default, SSSD creates the kdcinfo file using the DC

+ in the autodetected site and authenticates you against the DCs from the

+ autodetected site. The latter can be verified using e.g. tcpdump and

+ krb5\_child log files.

+ 

+ Set the ``ad_site`` option to a non-default site. Verify, using tcpdump,

+ kdcinfo file contents and SSSD debug logs that SSSD redirects

+ communication to DCs in the non-default site.

+ 

+ Authors

+ ~~~~~~~

+ 

+ -  Jakub Hrozek <`jhrozek@redhat.com <mailto:jhrozek@redhat.com>`__>

@@ -0,0 +1,483 @@ 

+ GPO-Based Access Control

+ ------------------------

+ 

+ Problem Statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ A common use case for managing computer-based access control in an AD

+ environment is through the use of GPO policy settings related to Windows

+ Logon Rights. This design page proposes adding support for this use case

+ by enhancing the SSSD AD provider to include the GPO support necessary

+ for this access control use case. We are not currently planning on

+ supporting other GPO-based use cases.

+ 

+ Use Cases

+ ~~~~~~~~~

+ 

+ An administrator who maintains a heterogeneous AD and RHEL network is

+ able to define login policies in one central place -- on the AD DC. The

+ same policies will then be honored by both the RHEL clients and the

+ Windows clients. The mapping between interactive or remote Windows logon

+ methods and RHEL PAM services has sensible defaults and can be

+ customized further.

+ 

+ Proposed Solution

+ ~~~~~~~~~~~~~~~~~

+ 

+ .. FIXME: GPO Overview hasn't been migrated yet.

+ .. This link should be fixed whenever it happens.

+ 

+ .. For a general overview of GPO technology, visit `GPO

+ .. Overview <link here>`__

+ 

+ For a general overview of Windows Logon Rights, visit

+ `http://technet.microsoft.com/en-us/library/cc976701.aspx <http://technet.microsoft.com/en-us/library/cc976701.aspx>`__

+ 

+ GPO policy settings can be used to centrally configure several sets of

+ Windows Logon Rights, with each set classified by its logon method (e.g.

+ interactive, remote interactive) and consisting of a whitelist [and

+ blacklist] of users and groups that are allowed [or denied] access to

+ the computer using the set's logon method. In order to integrate Windows

+ Logon Rights into a Linux environment, we allow pam service names to be

+ mapped to a specific Logon Right. We provide default mappings for all of

+ the commonly used pam service names, but we also allow the admin to

+ add/remove mappings as needed (to support custom pam service names, for

+ example). The latter is done by using a new set of config options of the

+ form "ad\_gpo\_map\_<logon\_right>" (e.g. ad\_gpo\_map\_interactive,

+ ad\_gpo\_map\_network, etc.), each of which consists of a comma-separated

+ list of entries beginning either with a '+' (for adding to the default

+ set) or a '-' (for removing from the default set). For example, since

+ the RemoteInteractive logon right maps to a single pam service name

+ ("sshd") by default, an admin could map their own pam service name

+ ("my\_pam\_service") and remove the "sshd" mapping with the following

+ sssd.conf line: "ad\_gpo\_map\_remote\_interactive = +my\_pam\_service,

+ -sshd"

+ 

+ For this project, the following options can be used to configure the

+ corresponding Logon Right (default values are also given):

+ 

+ -  ad\_gpo\_map\_interactive (default: login, su, su-l, gdm-fingerprint,

+    gdm-password, gdm-smartcard, kdm)

+ -  ad\_gpo\_map\_remote\_interactive (default: sshd)

+ -  ad\_gpo\_map\_network (default: ftp, samba)

+ -  ad\_gpo\_map\_batch (default: crond)

+ -  ad\_gpo\_map\_service (default: <not set>)

+ -  ad\_gpo\_map\_permit (default: sudo, sudo-i)

+ -  ad\_gpo\_map\_deny (default: <not set>)

+ -  ad\_gpo\_map\_default\_right (default: deny)
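+ 

+ As a sketch, the add/remove syntax described above would appear in

+ sssd.conf as follows (the custom service name is an example value): ::

+ 

+         [domain/ad.example.com]

+         access_provider = ad

+         ad_gpo_map_remote_interactive = +my_pam_service, -sshd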

+ 

+ The first five options are used to associate specific pam service names

+ with each logon right. The ad\_gpo\_map\_permit [and ad\_gpo\_map\_deny]

+ option is used to specify pam service names for which GPO-based access

+ is always [or never] granted. Unlike the other options,

+ ad\_gpo\_map\_default\_right does not specify pam service names. Rather,

+ it allows the admin to specify a default logon right (or the special

+ permit/deny values) for pam service names that are not explicitly mapped

+ to any of the logon rights. Note that, in many cases, we do not expect

+ the admin will need to specify any of these config options, because the

+ defaults have been chosen carefully to cover the most commonly used pam

+ service names (with deny as the default for unmapped service names).

+ 

+ The semantics of each whitelist and blacklist are as follows:

+ 

+ -  whitelist ("allow"): When this policy setting is not defined, any

+    user can log on to the computer. When it is defined, only the users

+    and groups specified in the whitelist are allowed to log on to the

+    computer. In other words, by defining this setting, the semantics go

+    from "everyone allowed access to this computer" to "no one allowed

+    access to this computer, except principals on the whitelist".

+ -  blacklist ("deny"): When this policy setting is not defined, it has

+    no effect. When it is defined, the users and groups specified in the

+    blacklist are blocked from performing logons. For a particular Logon

+    Right (e.g. Interactive), if a user/group is specified in both the

+    whitelist and the blacklist, then the blacklist takes precedence.

+ 

+ In summary, if a user is trying to log in to a computer (e.g.

+ pam\_service\_name = "login"), we first find which Logon Right the

+ "login" service maps to (i.e. Interactive, by default), and then process

+ only the corresponding policy settings found in GptTmpl.inf (which

+ contains policy settings for the "Security Settings" extension, of which

+ Logon Rights are a part). In the case of Interactive Logon Right, those

+ policy settings are named *SeInteractiveLogonRight* and

+ *SeDenyInteractiveLogonRight* in the GptTmpl.inf file.

+ 

+ A client-side implementation consists of the following components:

+ 

+ -  LDAP Engine: determines which GPOs are applicable to the computer

+    on which the user is attempting to log in, filters those on

+    various criteria, and ultimately produces a set of cse\_filtered GPOs

+    that contain the "Security Settings" CSE, which it feeds, one by one,

+    to the SMB/CIFS Engine

+ -  SMB/CIFS Engine: makes blocking libsmbclient calls to retrieve each

+    GPO's GPT.INI and GptTmpl.inf files, and stores the files in the gpo

+    cache (/var/lib/sss/gpo\_cache), from which the GPO Enforcement

+    Engine will retrieve them

+ -  GPO Enforcement Engine: enforces GPO-based access control by

+    retrieving each GPO's policy file (GptTmpl.inf) from the gpo cache,

+    parsing it, and making an access control decision by comparing the

+    user/groups against the whitelist/blacklist of the Logon Right of

+    interest (which is based on the pam service name)

+ 

+ For the sake of clarity, the above description ignores some features,

+ such as GPO version caching/comparing, and offline support.

+ 

+ Implementation Details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Packaging

+ ^^^^^^^^^

+ 

+ Since the GPO-based access control feature will only be used by the AD

+ provider, it will be included as part of the sssd-ad package. The source

+ files for the feature would be included as part of libsss\_ad.so. In

+ order to ensure that existing configurations do not see changes in

+ behavior when upgrading, this feature will not be enabled by default.

+ Rather, a new "ad\_gpo\_access\_control" config option is provided which

+ can be set to "disabled" (neither evaluated nor enforced), "enforcing"

+ (evaluated and enforced), or "permissive" (evaluated, but not

+ enforced). The "permissive" value is the default, primarily to

+ facilitate a smooth transition for administrators; it evaluates the

+ GPO-based access control rules and outputs a syslog message if access

+ would have been denied. By examining the logs, administrators can then

+ make the necessary changes before setting the mode to "enforcing".

+ 

+ In addition to the new ad\_gpo\_access\_control and ad\_gpo\_map\_\*

+ config options, there is also a new config option named

+ ad\_gpo\_cache\_timeout, which can be used to specify the interval

+ during which subsequent access control requests can re-use the files

+ stored in the gpo\_cache (rather than retrieving them from the DC).
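+ 

+ For instance, an administrator turning on enforcement with a one-day GPO

+ file cache might use something like the following (the timeout value,

+ and the assumption that it is expressed in seconds, are illustrative): ::

+ 

+         [domain/ad.example.com]

+         access_provider = ad

+         ad_gpo_access_control = enforcing

+         ad_gpo_cache_timeout = 86400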

+ 

+ GPO Retrieval

+ ^^^^^^^^^^^^^

+ 

+ -  LDAP Engine (running in backend): This component runs as part of the

+    AD access provider. It does the following:

+ 

+    -  Determines which GPOs are applicable to the computer on which

+       the user is attempting to log in. This is based on:

+ 

+       -  whether the GPO is linked to the site/domain/ou under which the

+          computer account is stored

+       -  whether the GPO is enabled or disabled

+       -  whether the GPO is enforced or unenforced

+       -  whether or not the GPO is allowed to be inherited from parent

+          containers

+       -  whether the user has the ApplyGroupPolicy permission on the

+          GPO's DACL

+ 

+    -  Retrieves relevant attributes of applicable GPOs (e.g. cse-guids,

+       file\_system\_paths, etc)

+    -  Extracts supported GPOs (i.e. those with "Security Settings" cse)

+       from the applicable GPOs

+    -  For each supported GPO

+ 

+       -  Retrieves the GPO's version and timeout from the sysdb cache

+          (from a previous transaction, if any)

+       -  If timeout is greater than current time, then skips to GPO

+          Enforcement

+       -  Else, sends to the gpo\_child the supported GPO, as well as the

+          cached GPO version (if any)

+ 

+ -  SMB/CIFS Engine (gpo\_child): This component is used to make blocking

+    SMB/CIFS calls. It does the following:

+ 

+    -  Retrieves the GPO's corresponding GPT.INI file (from which it

+       extracts the fresh version)

+    -  If the fresh version is greater than the cached version (or if

+       there is no cached version)

+ 

+       -  Retrieves the policy file corresponding to the GPO

+          (GptTmpl.inf) and saves it to the gpo cache

+          (/var/lib/sss/gpo\_cache)

+       -  Returns the fresh version to the backend, which stores it in

+          the cache

+ 

+ GPO Enforcement

+ ^^^^^^^^^^^^^^^

+ 

+ -  GPO Enforcement Engine: enforces GPO-based access control (note that

+    this will take place after existing AD access provider mechanisms,

+    such as account lockout, LDAP filter)

+ 

+    -  For each GPO

+ 

+       -  Retrieves GPO's corresponding policy file (i.e. GptTmpl.inf)

+          file from gpo cache

+       -  Parses policy file, extracting entries corresponding to the

+          Logon Right of interest (determined by the pam service name)

+       -  Enforces access control policy settings

+ 

+ Cache Schema

+ ^^^^^^^^^^^^

+ 

+ The Cache stores entries for individual GPOs in a new container

+ "cn=gpos, cn=ad, cn=custom, cn=<domain>, cn=sysdb" ::

+ 

+       // GPOs

+       dn: "name=<gpo-guid1>,cn=gpos,cn=ad,cn=custom,cn=<domain>,cn=sysdb"

+       gpoGUID: <gpo-guid1>            (string)

+       gpoVersion: <version>           (integer)

+       objectClass: "gpo"

+       gpoPolicyFileTimeout: <timeout> (integer)

+ 

+ Refresh Interval Configuration

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Microsoft specifies that there be separate configurable refresh

+ intervals (one for computer-based GPOs and one for user-based GPOs),

+ with each having a default of 90 minutes. If 0 minutes are specified,

+ Microsoft uses a 7-second refresh interval. Additionally, in order to

+ avoid performance degradation that could occur if several computers

+ perform a group policy refresh simultaneously, Microsoft also specifies

+ that a random offset interval be added to the refresh interval, with the

+ maximum offset interval having a default of 30 minutes. As such, there

+ are four settings (computer-based refresh interval, computer-based

+ maximum offset interval, user-based refresh interval, user-based maximum

+ offset interval). Additionally, Microsoft specifies a boolean

+ configuration setting that disables refresh altogether (in which case

+ none of the previous four configuration settings would be relevant). If

+ refresh is completely disabled, then GPOs would only be retrieved at

+ computer startup (or user login). One final note: the GPO mechanism

+ itself can be used to uniformly set these refresh configuration options

+ for a set of computers; namely, Microsoft specifies standard GPO policy

+ settings that can be used to centrally specify the various refresh

+ parameters. Of course, these would not apply until after they had been

+ retrieved.

+ 

+ Although we are only implementing a computer-based GPO in the first

+ implementation, we should keep in mind that user-based GPOs could have a

+ different refresh interval. As such, we would need to add a new

+ configuration option ("computer\_gpo\_refresh\_interval") to the

+ existing AD access provider that would specify the gpo retrieval refresh

+ interval in seconds. This would specify the period to use in the

+ periodic task API to determine how often to call the gpo retrieval code.

+ By default, Microsoft sets this value to 90 minutes. It is an open issue

+ as to whether we want to support the random offset interval or the

+ ability to disable refresh altogether.

+ 

+ Unresolved Issues

+ ~~~~~~~~~~~~~~~~~

+ 

+ When should GPO retrieval take place? It could happen at one or more of

+ the following times:

+ 

+ -  If we follow the Microsoft spec, since "Allow / Deny Logon Locally"

+    are computer-based policy settings, GPO retrieval should take place

+    when the system boots and at regular refresh intervals. If we assume

+    system boot effectively coincides with sssd initialization (for our

+    purposes), we can retrieve the policy settings during ad\_init and

+    kick off a periodic task (similar to what we do for enumeration).

+    However, this will likely have an adverse performance impact on

+    system startup.

+ -  Alternatively, we can perform GPO retrieval in the AD access provider

+    itself (just before enforcing the policy settings), meaning that

+    retrieval would take place at every user login. This would ensure

+    that the freshest policy settings were being applied at every logon.

+    If we only performed GPO retrieval at this point, then periodic

+    refresh would not be needed (at least for the "Allow / Deny Logon

+    Locally" policy settings) since we are getting fresh data every time.

+ -  Additionally, we could register an online callback such that GPO

+    retrieval takes place when returning to online mode from offline

+    mode. This really depends on what we decide about the first two

+    retrieval times above. If we aren't doing periodic refresh, and are

+    only retrieving gpo's at login time, then an online callback might

+    not be needed. If we are doing periodic refresh, then we can set the

+    "offline" parameter of be\_ptask\_create(...) to DISABLE (which means

+    the task is disabled immediately when back end goes offline and then

+    enabled again when back end goes back online). Or we can play it safe

+    and always use DISABLE semantics (regardless of when GPO retrieval

+    takes place).

+ 

+ Should we enforce GPO logon policy settings only at user login, or also

+ at periodic intervals?

+ 

+ -  After a user has logged on successfully using GPO-based access

+    control, if new policy settings are retrieved during refresh

+    indicating that the user is no longer allowed to log in to this host,

+    should sssd log out the user (or should we only enforce the access

+    control at login time)? What do our other access control mechanisms

+    do here? If we wanted to log out the user, do we have an existing

+    mechanism to do this?

+ 

+ If we implement gpo refresh, which of the refresh configuration options

+ should we implement and how?

+ 

+ -  sssd configuration options

+ 

+    -  computer\_gpo\_refresh\_interval? If we use sssd configuration, we

+       would definitely want this one (although maybe with a shorter

+       name).

+    -  computer\_gpo\_max\_offset (default 30 minutes)? Do we think this

+       random offset adds enough value to be a configurable option?

+    -  disable\_gpo\_refresh (default false)? Presumably, this would be

+       done so that performance would not be adversely affected during

+       the logon session. Alternatively, we could tell admins that wanted

+       to disable gpo refresh to set the

+       entry\_cache\_computer\_gpo\_timeout to zero (0), although this

+       would not be how Microsoft interprets a zero value. Does sssd

+       interpret '0' as "disable" elsewhere?

+ 

+ -  gpo refresh interval GPO

+ 

+    -  if we didn't want to clutter sssd's configuration namespace, we

+       could just use the standard Microsoft GPO that allows an admin to

+       specify the aforementioned refresh intervals (and distribute a

+       consistent configuration to a set of computers)

+ 

+ Options

+ ~~~~~~~

+ 

+ Option 1: The straightforward option is to only perform GPO retrieval in

+ the AD access provider itself.

+ 

+ -  Pros

+ 

+    -  provides just-in-time retrieval (yielding fresh data)

+    -  does away with need for periodic refresh and refresh configuration

+    -  no performance hit at system startup (and at periodic refresh)

+ 

+ -  Cons

+ 

+    -  suffers a performance hit on every user login

+    -  doesn't allow us to perform user logout (if policy settings no

+       longer allow access)

+ 

+ Option 2: The spec-compliant option is to perform GPO retrieval (and

+ take the performance hit) at system start and then at periodic

+ intervals.

+ 

+ -  Pros

+ 

+    -  complies with spec

+    -  no performance hit at every user login

+    -  allows us to perform user logout (if policy settings no longer

+       allow access)

+ 

+ -  Cons

+ 

+    -  suffers performance hit at initial startup and then periodically

+    -  policy data likely to be stale

+    -  requires implementation of periodic refresh, including refresh

+       configuration (for which we should probably use gpo refresh GPO)

+ 

+ Recommendation

+ ~~~~~~~~~~~~~~

+ 

+ In order to avoid premature optimization, the team's recommendation is

+ to start by implementing the straightforward approach (Option 1), and to

+ address potential performance concerns later (when we will be able to

+ make actual measurements).

+ 

+ Configuration Changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ The following new options are added to the AD access provider. See the

+ sssd-ad man page for a complete description.

+ 

+ -  ad\_gpo\_access\_control - describes the operation mode of access

+    control (enforcing/permissive/disabled)

+ -  ad\_gpo\_cache\_timeout - amount of time between lookups of GPO files

+    on the AD server

+ -  ad\_gpo\_map\_interactive - PAM services that map onto

+    InteractiveLogonRight and DenyInteractiveLogonRight

+    policy settings.

+ -  ad\_gpo\_map\_remote\_interactive - PAM services that map onto

+    RemoteInteractiveLogonRight and DenyRemoteInteractiveLogonRight

+    policy settings.

+ -  ad\_gpo\_map\_network - PAM services that map onto

+    NetworkLogonRight and DenyNetworkLogonRight policy settings.

+ -  ad\_gpo\_map\_batch - PAM services that map onto

+    BatchLogonRight and DenyBatchLogonRight policy settings.

+ -  ad\_gpo\_map\_service - PAM services that map onto

+    ServiceLogonRight and DenyServiceLogonRight

+    policy settings.

+ -  ad\_gpo\_map\_permit - PAM service names for which GPO-based access

+    is always granted

+ -  ad\_gpo\_map\_deny - PAM service names for which GPO-based access is

+    always denied

+ -  ad\_gpo\_map\_default\_right - defines how access control is

+    evaluated for PAM service names that are not explicitly listed in one

+    of the ad\_gpo\_map\_\* options.
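For illustration, the options above could be combined in a domain section along these lines (a hypothetical fragment: the option names are the ones listed on this page, the values are examples only):

```ini
[domain/ad.example.com]
access_provider = ad

# Enforce the retrieved policy settings (other modes: permissive, disabled)
ad_gpo_access_control = enforcing

# Re-check GPO files on the AD server at most every 10 minutes (example value)
ad_gpo_cache_timeout = 600

# Treat the "login" PAM service as an interactive logon, in addition
# to the built-in defaults (the '+' syntax is described under "How to test")
ad_gpo_map_interactive = +login
```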

+ 

+ How to test

+ ~~~~~~~~~~~

+ 

+ -  Perform the following tests for each set of Logon Rights (not just

+    for Interactive, as shown)

+ 

+    -  Setup

+ 

+       -  Create AD users named allowed\_user, denied\_user,

+          regular\_user, allowed\_group\_user, denied\_group\_user,

+          allowed\_denied\_group\_user

+       -  Create AD groups named allowed\_group, denied\_group

+       -  Set allowed\_group\_user and allowed\_denied\_group\_user as

+          members of allowed\_group

+       -  Set denied\_group\_user and allowed\_denied\_group\_user as

+          members of denied\_group

+       -  Create GPO with two policy settings (in this case, we are using

+          Interactive Logon Right as an example)

+ 

+          -  "Allow Logon Locally" is set to "allowed\_user",

+             "allowed\_group"

+          -  "Deny Logon Locally" is set to "denied\_user",

+             "denied\_group"

+ 

+    -  Link GPO to specific site, domain, or OU node (under which the

+       host computer resides in AD)

+ 

+ -  Perform the following "standard test" using each logon method

+    (corresponding to each Logon Right). For example, we can use "ssh" to

+    test the RemoteInteractive Logon Right on a single computer

+    (localhost)

+ 

+    -  [yelley] $ ssh allowed\_user@foo.com@localhost

+    -  Note that "allowed\_user" and "allowed\_group\_user" should be

+       granted access

+    -  Note that "regular\_user", "denied\_user", "denied\_group\_user",

+       and "allowed\_denied\_group\_user" should be denied access

+ 

+ -  Create a new computer account in a location which should have no

+    linked GPOs in the AD hierarchy (site, domain, ou)

+ 

+    -  (Alternatively, use the same computer account, but disable any

+       applicable GPOs using GPMC; make sure to re-enable them after this

+       step!!)

+    -  Perform standard test and make sure that all users are able to log

+       in to host (since no GPOs apply to this host)

+ 

+ -  Offline Mode

+ 

+    -  take the system offline with no files in the gpo\_cache directory;

+       perform standard test and make sure it grants access

+    -  perform the standard test while online (download some files); then

+       take the system offline and make sure it behaves as expected

+ 

+ -  Test ad\_gpo\_access\_control config option

+ 

+    -  perform standard tests when this option is "permissive" (or

+       unspecified), "enforcing", "disabled"

+ 

+ -  Test ad\_gpo\_cache\_timeout config option

+ 

+    -  perform standard test with a sysdb cache with no gpo entries (or a

+       clean sysdb cache)

+    -  make a change to a GPO policy setting so that the

+       sysvol\_gpt\_version is incremented

+ 

+       -  perform standard test and make sure that the timestamps on

+          GPT.INI and GptTmpl.inf have changed

+ 

+    -  using a large value for this option (300 seconds), perform

+       standard test again within the timeout period; make sure the

+       timestamps on GPT.INI and GptTmpl.inf have not changed

+    -  using the default value for this option (5 seconds), perform

+       standard test again after the timeout period; make sure the

+       timestamp on GPT.INI has changed, but not the timestamp on

+       GptTmpl.inf (since no policy change was made in AD)

+ 

+ -  Test ad\_gpo\_map\_\* config options

+ 

+    -  perform standard tests after adding pam service names to default

+       set using '+'

+    -  perform standard tests after removing pam service names from

+       default set using '-'

@@ -0,0 +1,115 @@ 

+ Asynchronous LDAP connections

+ =============================

+ 

+ Problem Statement

+ -----------------

+ 

+ Currently, connecting to an LDAP server by the openldap library is

+ blocking. This means that when SSSD attempts to connect to an

+ unresponsive server, it can block for up to 5 seconds (the current default

+ setting of ``ldap_network_timeout``). This is not ideal, as we would

+ like to be able to do something else while the connection is being made.

+ 

+ General Approach

+ ----------------

+ 

+ Recent versions of openldap have a new (and not very well documented)

+ option ``LDAP_OPT_CONNECT_ASYNC`` that can be set by

+ ``ldap_set_option()``. This option will cause all ldap functions that

+ create a new connection to only invoke ``connect()`` and not wait for

+ the socket to become ready for writing. If it is not, the function will

+ return ``LDAP_X_CONNECTING`` and it has to be executed again after the

+ socket becomes ready.

+ 

+ Implementation

+ --------------

+ 

+ Because a lot of ldap functions can cause the creation of a new socket,

+ every such function will be wrapped into a tevent\_req interface. Every

+ such wrapper will consist of a standard ``send`` and ``recv`` function,

+ as well as a so-called ``try`` function.

+ 

+ The ``send`` function will copy all the arguments into the state data

+ structure and invoke the ``try`` function, passing the ``tevent_req`` to

+ it. The try function will invoke the openldap function with all the

+ arguments from the state. The ``try`` function will then return ``0`` if

+ ``LDAP_X_CONNECTING`` was returned signaling that we will have to invoke

+ it one more time later. In case the connection was already available,

+ ``tevent_req_done()`` or ``tevent_req_error()`` is invoked and ``1`` is

+ returned.

+ 

+ Callbacks

+ ~~~~~~~~~

+ 

+ The implementation will make heavy use of both openldap and tevent

+ callbacks. Our goal is to be able to invoke the ``try`` function from

+ the tevent callback (which will be invoked by tevent after the socket is

+ ready for writing). The main problem is to make both the ``try``

+ function and the ``tevent_req`` available to this callback. The

+ callbacks that we need work as follows:

+ 

+ #. Openldap callbacks are set by ``ldap_set_option()`` with the

+    ``option`` argument set to ``LDAP_OPT_CONNECT_CB``. Additional data

+    structure (\`struct ldap\_cb\_data\`) is passed in as well. This data

+    structure will always be passed to the openldap callback. This is

+    currently done in ``setup_ldap_connection_callbacks()``.

+ #. Right after an openldap function creates a connection, it will call

+    the callback passing to it (among other things) the newly created

+    socket. In SSSD, this callback is

+    ``sdap_ldap_connect_callback_add()``.

+ #. The callback will then register a tevent callback

+    ``sdap_ldap_result()`` which is invoked when the socket is ready for

+    writing and is responsible for calling ``ldap_result()``.

+ 

+ What we need to do is to make sure that the tevent callback is called

+ not only when the socket is ready for reading, but also when it is ready

+ for writing (but only once, since it will always be ready for writing

+ after the connection is made). We also need to provide the tevent

+ callback with the ``try`` function and the associated ``tevent_req``

+ structure. After the socket is ready for writing, we call the ``try``

+ function and pass it the ``tevent_req`` so it can call the ldap function

+ again and mark the tevent request as finished.

+ 

+ To pass the tevent request to the tevent callback, we need to take a bit

+ of a detour. Every ``try`` function, before calling the ldap function

+ has to call ``set_fd_retry_cb()`` passing in a pointer to itself and the

+ tevent request. This function will save these to the data structure that

+ is available to the ldap callback. This callback is called after the

+ socket is created and in turn, the function pointer and the tevent

+ request are made available to the newly registered tevent callback,

+ which is what we wanted. The ``try`` function also has to call

+ ``set_fd_retry_cb()`` again after the ldap function is called and set

+ both the function pointer and the tevent request pointer to ``NULL``. So

+ now when it's all set, after the socket is ready for writing we can call

+ the ``try`` function from the tevent callback to finish the whole

+ transaction.

+ 

+ Spies

+ -----

+ 

+ One problem with the approach described in the previous section is with

+ storing the tevent request in a place out of the tevent chain. If the

+ request gets freed before there is a chance to call the ``try``

+ function, we will be left with a dangling pointer that might eventually

+ be dereferenced.

+ 

+ To solve this problem, we create a sort of a "spy" that will free the

+ ``fd_event_item`` associated with the tevent callback in case the

+ request is freed. The spy is created in the ldap callback. The following

+ diagram and code illustrate how the spy is used to make sure there are

+ no dangling pointers left: ::

+ 

+     /* talloc destructor of the request's spy: if the request is freed

+      * first, unlink the spy and free the fd event so it cannot fire */

+     static int request_spy_destructor(struct request_spy *spy)

+     {

+         if (spy->ptr) {

+             /* unlink first so freeing the fd event cannot free us back */

+             spy->ptr->spy = NULL;

+             talloc_free(spy->ptr);

+         }

+         return 0;

+     }

+ 

+     /* talloc destructor of the fd event: just unlink the spy */

+     static int fd_event_item_destructor(struct fd_event_item *fd_event_item)

+     {

+         if (fd_event_item->spy) {

+             fd_event_item->spy->ptr = NULL;

+         }

+         return 0;

+     }

@@ -0,0 +1,81 @@ 

+ .. FIXME: Add a link to WinBind internal documentation on all "WinBind"

+ ..        references in this document!

+ 

+ Async WinBind

+ =============

+ 

+ The WinBind provider uses the *libwbclient* library for communication with

+ WinBind to satisfy NSS and PAM requests. However, this library doesn't

+ provide an asynchronous interface. We had a choice between creating this

+ interface or using synchronous calls in auxiliary processes running in

+ parallel to the main provider process.

+ 

+ General Approach

+ ----------------

+ 

+ There should always be at least one auxiliary process running. This

+ process will receive requests from the main provider process, handle

+ them as they come in and send back responses. The communication protocol

+ used should be DBus, as it is used in other providers and therefore

+ doesn't require any extra dependencies or writing additional code. DBus

+ should also take care of request buffering.

+ 

+ Splitting the load

+ ------------------

+ 

+ If the host needs to process a huge number of NSS and PAM requests in

+ a short period of time, it should be possible to set up more than one

+ auxiliary process to handle them. One should always be available

+ beforehand, because starting it just before it's required adds extra

+ overhead and delay. A maximum (and maybe a minimum too) number of

+ auxiliary processes should be configurable, along with a threshold

+ expressing when a new process should be created or a spare process

+ killed. The main provider process needs to keep track of this and send

+ its requests to the least busy auxiliary process.

+ 

+ Implementation steps

+ --------------------

+ 

+ #. Have one auxiliary process started when the provider starts. It will

+    handle all requests.

+ 

+ #. Add the possibility to have a pre-configured number of processes

+    (maximum=minimum) and split requests between them.

+ 

+ #. Add the ability to spawn/kill processes based on load.\*

+ 

+ -  This needs more thinking: e.g. how long do we keep a spare process

+    alive? how is the threshold going to work?

+ 

+ 

+ Update

+ ------

+ 

+ .. FIXME: Do we have access to these diagrams?

+ ..        For now I'm commenting out this part.

+ 

+ .. Here are some diagrams that show how the solution is going to be

+ .. implemented. Inspiration has been taken from Apache process pool as

+ .. Jakub suggested in ticket discussion.

+ .. 

+ .. Sorry for the poor quality of diagrams, but Dia just sucks. :-/

+ 

+ The main process of the WinBind provider will send requests to spare

+ processes in the pool. These processes will be allocated automatically

+ based on the number of spare processes available at any given time.

+ 

+ Requests from NSS and PAM will be forwarded to spare processes in the

+ pool if there are any available. If not, a new process will be created

+ unless the maximum number of processes has been reached. After the

+ request has been forwarded, the number of available spare processes is

+ checked and a new process is created if there are not enough. Note that

+ the pool is first populated with a minimum number of processes (spare or

+ not) when the WinBind provider starts.

+ 

+ In other words, there will be 3 settings:

+ 

+ **Minimum** number of worker processes **running**.

+ 

+ **Maximum** number of worker processes **running**.

+ 

+ **Minimum** number of **spare** worker processes **running**.

@@ -0,0 +1,249 @@ 

+ SSSD and automounter integration

+ ================================

+ 

+ This design page describes integration of autofs and SSSD in a more

+ centralized manner. The discussion started on SSSD mailing list and then

+ in `Red Hat

+ Bugzilla <https://bugzilla.redhat.com/show_bug.cgi?id=683523>`__. This

+ page summarizes the discussions and design.

+ 

+ Autofs is able to look up maps stored in LDAP. However, autofs does all

+ the lookups on its own. Even though autofs uses the ``nsswitch.conf``

+ configuration file, there is no glibc interface such as those for

+ retrieving users and groups and, by extension, no nscd caching.

+ 

+ The benefits of the integration would be:

+ 

+ -  unified configuration of LDAP servers, timeout parameters, DNS SRV

+    lookups, ...

+ -  only one connection to the LDAP server open

+ -  caching of the data

+ -  offline access - even if the client cannot connect to the LDAP

+    server chances are that the NFS server is unreachable as well

+ -  back end abstraction - data may be stored in NIS or other databases

+    and accessed by the automounter transparently

+ 

+ The solution we selected is to provide a new automounter lookup module

+ that would communicate with SSSD.

+ 

+ autofs lookup modules

+ ---------------------

+ 

+ There are several internal interfaces within autofs implemented as

+ shared libraries, one is the lookup module.

+ 

+ A lookup module is implemented for each information source and they each

+ have a fixed interface. Upon loading, automount will get the library

+ entry points via dlopen(). There are several entry points such as:

+ 

+ -  ``lookup_init()`` and ``lookup_done()`` are called when the module is

+    first used and when the module is no longer needed.

+ -  ``lookup_read_master()`` is called at program start to read the

+    master map.

+ -  ``lookup_read_map()`` reads the entire map.

+ -  ``lookup_mount()`` looks up an automount map key.

+ 

+ The lookup module is passed autofs internal data structures and must

+ handle all the corner cases there can be - so the lookup module

+ shouldn't be exposed outside autofs and should be developed as part of

+ autofs.

+ 

+ The lookup modules are named ``<autofs library dir>/lookup_<source>.so``

+ where ``<source>`` is the source name from the "automount:" line of

+ ``/etc/nsswitch.conf``. So the SSSD lookup module would be named

+ ``lookup_sss.so`` and selected in nsswitch.conf with the directive

+ ``automount: files sss`` (to allow for local client overrides) or just

+ ``automount: sss``.

+ 

+ In particular, the lookup module calls an iterator to walk through the

+ <key, value> pairs in a map or to look up a key by name in a map.

+ 

+ The lookup\_sss module needs to connect to SSSD and request the data

+ from SSSD somehow. This would be done by adding a couple of functions

+ into the libnss\_sss.so module. The lookup\_sss.so module would dlopen()

+ libnss\_sss.so and dlsym() the functions needed.

+ 

+ The API provided by SSSD

+ ------------------------

+ 

+ The SSSD API would live in libnss\_sss.so. That means polluting the

+ library a little with functions that are not strictly

+ name-service-switch related, but would allow us to reuse a fair amount

+ of code and talk to the NSS responder socket easily.

+ 

+ The API itself would define the following functions:

+ 

+ -  iterator start that would allocate the private struct automtent and

+    pass it out as context

+ 

+         ``errno_t _sss_setautomntent(const char *mapname, void **context);``

+ 

+ -  iterator end that would free the private struct automtent

+ 

+         ``errno_t _sss_endautomntent(void **context);``

+ 

+ -  function that returns the next (key,value) pair given a context

+ 

+         ``errno_t _sss_getautomntent_r(const char **key, const char **value, void *context);``

+ 

+         The ``key`` and ``value`` strings are allocated with

+         ``malloc()`` and must be freed by the caller

+ 

+ -  function that looks up data for a given key

+ 

+         ``errno_t _sss_getautomntbyname_r(const char *key, const char **value, void *context);``

+ 

+         The ``value`` string is allocated with ``malloc()`` and must be

+         freed by the caller

+ 

+ The context parameter is a private structure defined in the libnss\_sss

+ library that would keep track of the iterator: ::

+ 

+     struct automtent {

+         const char *mapname;

+         size_t cursor;

+         /* Other data TBD as needed */

+     };

+ 

+ The iterator is passed as the last parameter of the functions which may

+ seem a bit odd, but it is an autofs convention. Because the sole

+ consumer of this interface would be autofs itself, I decided to keep it

+ the autofs way.

+ 

+ When the API functions are called, SSSD would send a request through the

+ NSS pipe to the responder, which would consult the back end similar to

+ how other name service switch requests are handled.

+ 

+ Storing the data in SSSD cache

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ In the first version, SSSD should just schedule a periodic task to

+ download automounter data, similar to how the user/group enumeration task is

+ scheduled. The automounter maps can potentially be huge, so we might

+ need to optimize the download task in later versions. One idea for

+ future enhancement is to use entryUSN number in deployments that support

+ it.

+ 

+ The LDAP schema used by autofs

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ There are three schemas that can be used for storing autofs data in

+ LDAP. They do not differ in semantics the way RFC2307 and RFC2307bis

+ schemas differ in the member/memberuid attribute. The difference in

+ schemas is mostly attribute and objectclasses naming and how the DNs are

+ constructed. The DNs are also not used by the client. SSSD should

+ convert the data into a cache-specific schema. The cache specific schema

+ will be based on the RFC2307bis automounter schema, which is by far the

+ most widely used schema.

+ 

+ Each of the schemas defines objectclass names for map and entry and

+ attribute names for map name (used by map) and key and value attribute

+ names (used by map entry). ::

+ 

+     +----------------------+----------------------+-------------+----------------------+

+     | *attribute*          | *RFC2307bis*         | *NIS*       | *RFC2307 extension*  |

+     +======================+======================+=============+======================+

+     | *map objectclass*    | automountMap         | nisMap      | automountMap         |

+     +----------------------+----------------------+-------------+----------------------+

+     | *entry objectclass*  | automount            | nisObject   | automount            |

+     +----------------------+----------------------+-------------+----------------------+

+     | *map attribute*      | automountMapName     | nisMapName  | ou                   |

+     +----------------------+----------------------+-------------+----------------------+

+     | *entry attribute*    | automountKey         | cn          | cn                   |

+     +----------------------+----------------------+-------------+----------------------+

+     | *value attribute*    | automountInformation | nisMapEntry | automountInformation |

+     +----------------------+----------------------+-------------+----------------------+

+ 

+ An example of the RFC2307bis schema showing an entry for /home/foo

+ included in the master map: ::

+ 

+     dn: automountMapName=auto.master,dc=example,dc=com

+     objectClass: top

+     objectClass: automountMap

+     automountMapName: auto.master

+ 

+     dn: automountKey=/home,automountMapName=auto.master,dc=example,dc=com

+     objectClass: automount

+     cn: /home

+     automountKey: /home

+     automountInformation: auto.home

+ 

+     dn: automountMapName=auto.home,dc=example,dc=com

+     objectClass: automountMap

+     automountMapName: auto.home

+ 

+     dn: automountKey=foo,automountMapName=auto.home,dc=example,dc=com

+     objectClass: automount

+     automountKey: foo

+     automountInformation: filer.example.com:/export/foo

+ 

+ Most, if not all, of the autofs documentation out there describes the

+ naming schema as per RFC2307bis, but it is technically possible to use

+ autofs objects created according to RFC2307bis and user/group objects

+ created according to plain RFC2307 in the same tree. Because the schemas

+ differ in attribute naming only, not semantically, it is trivial to

+ override the schema in the config file. We just need to pick the right

+ defaults and adjust according to user feedback.

+ 

+ One difference between filesystem entries and entries in LDAP is that

+ the "cn" attribute is case-insensitive, unlike key names which are

+ essentially directory names. This seems to be one of the reasons the

+ RFC2307bis schema was adopted.

+ 

+ SSSD Configuration

+ ~~~~~~~~~~~~~~~~~~

+ 

+ The autofs support would be turned on by specifying

+ ``autofs_provider = ldap`` in a domain section. A new search base

+ ``ldap_autofs_search_base`` option will be introduced as well. The

+ periodic download task will default to ``ldap_search_base``.

+ 

+ SSSD will also include new attribute overrides for the new autofs map in

+ order to support all the schemas users might have been using.
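As an illustration, overriding the defaults for the NIS schema from the table above might look like this (a hypothetical fragment: the ``ldap_autofs_*`` option names follow the proposed naming style and are not final):

```ini
[domain/example.com]
autofs_provider = ldap
ldap_autofs_search_base = dc=example,dc=com

# Attribute overrides matching the NIS column of the schema table
ldap_autofs_map_object_class = nisMap
ldap_autofs_entry_object_class = nisObject
ldap_autofs_map_name = nisMapName
ldap_autofs_entry_key = cn
ldap_autofs_entry_value = nisMapEntry
```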

+ 

+ This work is targeted at the same SSSD milestone as separating the cache

+ timeout parameters, so we might also need to include a new autofs cache

+ timeout.

+ 

+ We also need to create a migration document for users of the native

+ autofs LDAP support.

+ 

+ Fully Qualified Names

+ ^^^^^^^^^^^^^^^^^^^^^

+ 

+ With user/group lookups, the domain can be specified by using a

+ "fully-qualified-name", for example

+ ``getent passwd jhrozek@redhat.com``. We should support

+ something similar with autofs. However, maps can include any characters

+ that are valid for filesystem path names, including '@', so there's a

+ potential conflict.

+ 

+ -  if there are more LDAP domains with autofs on, they are searched

+    sequentially until a match is found. This is how user searches work,

+    too

+ -  FQDN requests will be allowed by default, but not required unless

+    ``use_fully_qualified_names`` is set to TRUE

+ -  The FQDN name-domain separator is @ by default, but SSSD allows it to

+    be configurable even in the current release using the ``re_expression``

+    parameter.

+ 

+ Future and miscellaneous work

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The first iteration will aim at providing a working autofs integration

+ for generic LDAP servers. There are a number of tasks that might not make

+ the first iteration but should be tracked and done in the future.

+ 

+ #. Native IPA automount schema

+ 

+    -  autofs client does not know the concept of "locations" but that

+       doesn't really matter. The locations objects in LDAP are of the

+       "nscontainer" class and are only part of the DN. The client does

+       not care about DNs, so we are safe storing the locations in cache

+       as-is.

+ 

+ #. A migration script

+ 

+    -  this can be lower priority with the migration documentation in

+       place

@@ -0,0 +1,91 @@ 

+ Backend DNS Helpers

+ ===================

+ 

+ Problem Statement

+ -----------------

+ 

+ In our back ends we need to be able to find out which server we are

+ supposed to connect to. We have various ways to define a server, such as

+ using lists of servers, or a Service type, and then using DNS SRV

+ records, or in some cases other ways (for example, CLDAP queries for AD

+ Sites, Location discovery for IPA, etc.). Because our back ends use

+ asynchronous calls, we also need to be able to resolve DNS domain names

+ asynchronously to avoid stalling other operations (such as Kerberos

+ authentication for a user while trying to resolve the LDAP identity

+ server name). We need to be able to handle fallback cases and have

+ blacklists of servers we know are not reachable. We also want to be able

+ to share this information between the authentication, identity and other

+ providers within the same back end/domain.

+ 

+ General Approach

+ ----------------

+ 

+ Given that most back ends need to configure servers to reach and need to

+ resolve their names and possibly allow for fallbacks to secondary

+ servers, a general mechanism should be provided for back ends so that we

+ have common basic helpers. Because some providers need the same

+ information (example: ldap id + Kerberos auth providers want to connect

+ to the same IPA server) it also makes sense to provide this functionality

+ as a back end function.

+ 

+ The idea is to init a common set of structures to hold data + methods

+ that are passed to the providers at initialization time. More advanced

+ providers (IPA, AD) that have special needs for DNS discovery will also

+ be able to override the default helpers, otherwise the providers will

+ simply use the default common facility.

+ 

+ The helpers will use the tevent\_req interface and will be completely

+ asynchronous.

+ 

+ Methods

+ -------

+ 

+ We need a few basic methods to start:

+ 

+ #. Initialization method, to which we pass a list of servers:service we

+    need to connect to from the specified provider. The first provider

+    that sets up the list will initialize defaults; if no other provider

+    adds any server:service item during initialization the default ones

+    will be used by all.

+ 

+ #. A secondary implementation method that provides a DNS domain and the

+    request to resolve SRV records instead of (or in addition to)

+    providing a list of servers:services. The helper will decide when it

+    is time to refresh the SRV list.

+ 

+ #. A simple method to ask for the first available server of type service

+    in the list for this provider.

+ 

+ #. A method to give feedback about a returned result. If the resolved

+    server is not reachable, it should be blacklisted for some time. If

+    all servers are blacklisted, we should consider putting the whole

+    domain offline.

+ 

+ State Information

+ -----------------

+ 

+ In the initial implementation the black and white lists of servers will

+ be kept in memory. This means that any status will be lost if the

+ process is restarted. In the future, we may decide to cache the lists on

+ persistent storage (the domain's LDB file) to avoid delays on quick

+ restarts.

+ 

+ Configuration

+ -------------

+ 

+ The first implementation step will focus on manually configured lists

+ and the default resolution mechanism. The list of servers can be

+ explicitly configured in sssd.conf.

+ 

+ The list can:

+ 

+ #. Include host names, host IP addresses in v4 format or host IP

+    addresses in v6 format, and optionally a port number

+ #. Have just one item or multiple items

+ #. Specify a domain name to be used to resolve SRV records

+ #. Be empty, in which case a default domain will be used (recovered

+    from the host name or the domain options in resolv.conf)

+ 

+ SRV records are not used if an explicit list is provided. This is the

+ behaviour of the default helpers; other providers can provide their own

+ resolution methods.
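For example, an explicitly configured failover list for the LDAP identity provider might look like this (illustrative values; with such an explicit list, SRV records are not consulted):

```ini
[domain/example.com]
id_provider = ldap

# Entries are tried in order; IPv4, IPv6, and optional ports are allowed
ldap_uri = ldap://ldap1.example.com, ldap://192.0.2.10:3890, ldap://[2001:db8::10]
```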

@@ -0,0 +1,172 @@ 

+ Authenticate against cache in SSSD

+ ==================================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/1807 <https://pagure.io/SSSD/sssd/issue/1807>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ SSSD should allow authentication against the cache instead of

+ authenticating directly against the network server every time.

+ Authenticating over the network many times can cause excessive

+ application latency.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ In environments with tens of thousands of users the login process may

+ become unacceptably long when servers are running under a high workload

+ (e.g. during classes, when many users log in simultaneously).

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Add a new domain option ``cached_auth_timeout`` describing how long

+ cached credentials can be used for cached authentication before online

+ authentication must be performed. Update the PAM responder's

+ request-forwarding logic to check whether a request can be served from

+ the cache and, if so, execute the same code branch as for offline

+ authentication instead of contacting the domain provider.

+ 

+ If the password is changed locally (via an SSSD client), SSSD should

+ use online authentication for the next login attempt.

+ 

+ SSSD should immediately try an online login if the password doesn't

+ match while processing cached authentication. This allows a user to log

+ in correctly if the password was changed through another client. (It

+ also makes the server's failed-authentication counter go up if it was a

+ password-guessing attempt, and it makes such attempts pay the full

+ round trip to the server, which helps deter local attacks even though

+ we do not do any other checks like we do for real offline

+ authentication.)

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ -  extend structure *pam\_auth\_req*

+ 

+    -  add new field ``use_cached_auth`` (default value is false)

+ 

+ -  extend structure *sss\_domain\_info*

+ 

+    -  add new field ``cached_auth_timeout`` which will hold value of

+       newly introduced domain option ``cached_auth_timeout``

+ 

+ -  introduce new sysdb attribute

+    ``SYSDB_LAST_ONLINE_AUTH_WITH_CURRENT_TOKEN``

+ 

+    -  this attribute would mostly behave in the same way as

+       ``SYSDB_LAST_ONLINE_AUTH`` but would be set to 0 when local

+       password change happens

+    -  this would guarantee that after local password change, next login

+       attempt won't be cached (if SSSD is in the online mode)

+ 

+ -  extend *pam\_dom\_forwarder()*

+ 

+    -  set local copy of ``cached_auth_timeout`` to use the smaller of

+       the domain's ``cached_auth_timeout`` (given in seconds) and the

+       ``offline_credentials_expiration`` (given in days, also must

+       handle the special value 0 for offline\_credentials\_expiration)

+    -  do not forward request to a domain provider if

+ 

+       -  domain uses cached credentials *AND*

+       -  (local) ``cached_auth_timeout`` is greater than 0 *AND*

+       -  last online login (resp. attribute

+          ``SYSDB_LAST_ONLINE_AUTH_WITH_CURRENT_TOKEN``) of user who is

+          being authenticated is not stale (> *now()* - (local)

+          ``cached_auth_timeout``) *AND*

+       -  PAM request can be handled from cache (PAM command is

+          ``SSS_PAM_AUTHENTICATE``)

+ 

+          -  then set ``use_cached_auth`` to true

+          -  call *pam\_reply()*

+ 

+ -  extend *pam\_reply()*

+ 

+    -  extend condition for entering into block processing case when

+       pam\_status is PAM\_AUTHINFO\_UNAVAIL even for ``use_cached_auth``

+       being true

+    -  while in this block and if PAM command is SSS\_PAM\_AUTHENTICATE

+       then set ``use_cached_auth`` to false to avoid cyclic recursion

+       call of *pam\_reply()* which is subsequently called from

+       *pam\_handle\_cached\_login()*

+ 

+ -  extend *pam\_handle\_cached\_login()*

+ 

+    -  if permission is denied and cached authentication was used

+       then return to *pam\_dom\_forwarder()* and perform online

+       authentication

+ 

+ -  introduce function

+    *sysdb\_get\_user\_lastlogin\_with\_current\_token()*

+ 

+    -  used to obtain value of attribute

+       ``SYSDB_LAST_ONLINE_AUTH_WITH_CURRENT_TOKEN`` for given user from

+       sysdb while deciding in pam\_dom\_forwarder() whether

+       authentication can be served from cache or domain provider must be

+       contacted (no output to console should happen here)

+ 

+ -  when password is being changed make sure that value of

+    ``SYSDB_LAST_ONLINE_AUTH_WITH_CURRENT_TOKEN`` is set to 0

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ A new domain option ``cached_auth_timeout`` will be added. The value of

+ this option is a time period in seconds for which cached authentication

+ can be used. After this period is exceeded, online authentication must

+ be performed. The default value is 0, which means that this feature is

+ disabled by default.

+ 
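
+ For example, enabling the feature for a two-minute window might look

+ like this (the domain name is illustrative):

```ini
[domain/example.com]
# Serve authentication from the cache for up to 120 seconds after the
# last successful online authentication with the current token.
cached_auth_timeout = 120
```

+ 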

+ How To Test

+ ~~~~~~~~~~~

+ 

+ #. set ``cached_auth_timeout`` in sssd.conf to some non-zero value (e.g.

+    120)

+ #. erase SSSD caches and restart SSSD

+ #. test with correct password

+ 

+    #. log in as user from domain which stores credentials and then log

+       out and log in again. The second login should use cached

+       credentials. Output should be similar to this; especially note the

+       line starting with: **Authenticated with cached credentials**

+       (Please note that to see this console output *pam\_verbosity = 2*

+       must be set in the [pam] section of sssd.conf.) ::

+ 

+           devel@dev $ su john

+           Password: 

+           john@dev $ exit

+           devel@dev $ su john

+           Password: 

+           Authenticated with cached credentials, your cached password will expire at: Wed 22 Apr 2015 08:47:29 AM EDT.

+           john@dev $ 

+ 

+    #. for ``cached_auth_timeout`` seconds after the first login, all

+       subsequent login attempts (for the same user) should be served

+       from the cache and the domain provider should not be contacted;

+       this can be verified by changing the password at the server.

+    #. after more than ``cached_auth_timeout`` seconds have passed since

+       the first login, an online login should be performed.

+ 

+ #. test with wrong password to check if:

+ 

+    #. *offline\_failed\_login\_attempts* is respected

+    #. *offline\_failed\_login\_delay* is respected

+ 

+ #. change the password locally

+ 

+    -  verify that subsequent login attempt is processed online and that

+       new password is accepted and old one is denied

+ 

+ #. change the password directly on the server or via a client other than SSSD

+ 

+    -  verify that new password is accepted and that logs inform that

+       cached authentication failed and online authentication had to be

+       performed (please note that old password would be accepted as SSSD

+       client has no knowledge that it was changed)

+ 

+ Authors

+ ~~~~~~~

+ 

+ -  Pavel Reichl <`preichl@redhat.com <mailto:preichl@redhat.com>`__>

@@ -0,0 +1,105 @@ 

+ DEPRECATED (moved to sssctl)

+ ============================

+ 

+ This tool is no longer developed and its functionality was moved to

+ the sssctl tool.

+ 

+ sss\_confcheck tool

+ ===================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2269 <https://pagure.io/SSSD/sssd/issue/2269>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ There is no easy way to debug the SSSD configuration without having to

+ look into the debug logs. Moreover, the debug logs can be difficult to

+ understand for people outside the SSSD development team. Some common issues

+ can be identified during static offline analysis of the config files. To

+ find these issues early we need a tool that performs this analysis and

+ provides a human-readable report.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ -  performing ad-hoc static analysis of the installed SSSD configuration

+ -  performing ad-hoc static analysis of SSSD configuration files

+    retrieved from a user experiencing SSSD problems

+ 

+ Overview of the solution - sss\_confcheck tool

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ A new tool will be added to sss\_\* tools that will perform static

+ analysis of SSSD configuration files. This tool can be run without any

+ parameters in which case it will print a report to the standard output

+ in the following or similar format: ::

+ 

+     $ sss_confcheck

+     Number of identified issues: 1

+     [rule/allowed_nss_options]: Attribute 'foo' is not allowed in section 'nss'. Check for typos.

+ 

+     Used configuration file:

+     <Here will be the contents of sssd.conf file>

+ 

+     Number of used configuration override snippets: 2

+     List of configuration override snippets in order of priority (lowest priority first):

+     snippet_name_1.conf

+     snippet_name_2.conf

+ 

+     Content of configuration override snippets:

+     snippet_name_1.conf:

+     <content of snippet_name_1.conf>

+ 

+     snippet_name_2.conf:

+     <content of snippet_name_2.conf>

+ 

+     Merged configuration:

+     <content of merged configuration (sssd.conf merged with snippets)>

+ 

+ Available options: ::

+ 

+       ?, --help

+       --config-file PATH_CONFIG_FILE                Path to config file that will be checked.

+       --snippets-dir PATH_TO_SNIPPETS_DIRECTORY     Path to snippets directory.

+       --no-validators                               Do not use validators (no analysis will be made).

+       --no-file-content                             Do not print config file or snippet contents.

+       --no-snippets                                 Ignore the snippets.

+       --silent                                      If no errors are detected, do not print anything.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The tool will use ding-libs validators feature described `in this design

+ document <https://docs.pagure.org/SSSD.sssd/design_pages/libini_config_file_checks.html>`__.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ No configuration changes.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ Depending on the capabilities of validators used by SSSD, make an error

+ in configuration and run sss\_confcheck to see if it was detected.

+ 

+ Planned features in version 1 - initial version

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Validator capabilities:

+ 

+ -  able to detect typo in option name

+ -  able to detect typo in section name

+ -  able to detect option in wrong section

+ 

+ Limitations:

+ 

+ -  printing the resulting configuration (after merging with snippets)

+    may not be in the initial version.

+ 

+ Authors

+ ~~~~~~~

+ 

+ Michal Židek `mzidek@redhat.com <mailto:mzidek@redhat.com>`__

@@ -0,0 +1,95 @@ 

+ Improve config validation

+ =========================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2247 <https://pagure.io/SSSD/sssd/issue/2247>`__

+ -  `https://pagure.io/SSSD/sssd/issue/2028 <https://pagure.io/SSSD/sssd/issue/2028>`__

+ -  `https://pagure.io/SSSD/sssd/issue/1308 <https://pagure.io/SSSD/sssd/issue/1308>`__

+ -  `https://pagure.io/SSSD/sssd/issue/2249 <https://pagure.io/SSSD/sssd/issue/2249>`__

+ -  `https://pagure.io/SSSD/sssd/issue/2269 <https://pagure.io/SSSD/sssd/issue/2269>`__

+ -  `https://pagure.io/SSSD/sssd/issue/2465 <https://pagure.io/SSSD/sssd/issue/2465>`__

+ -  `https://pagure.io/SSSD/sssd/issue/2687 <https://pagure.io/SSSD/sssd/issue/2687>`__

+ -  ...and more

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ Admins should be notified if their configuration is not valid. Admins

+ should also have an option to still log in to the system if they make

+ an error in the configuration.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ -  Fallback config

+ 

+    -  With responders, we can use defaults, they are usually paranoid

+       enough

+    -  With domains, we probably can only fall back to last known good

+       (except local domain)

+    -  Could we start only responders so that if cached data is

+       available, the responders can be used?

+ 

+ -  Last known good (First known good)

+ 

+    -  For domains

+    -  Use-case: admin changes something and wants to still log in

+ 

+ -  Config merging

+ 

+    -  Deprecate "services" line

+    -  Be able to drop domain into /etc/sssd/sssd.conf.d/

+ 

+ -  Config validation

+ 

+    -  prerequisite: have a common definition of options and autogenerate

+       the rest

+ 

+       -  Autogenerate dp\_opts, man pages and configAPI sources from a

+          common location

+       -  Look at Samba

+ 

+    -  ...for that we need to use dp\_opts everywhere

+ 

+ To do

+ ~~~~~

+ 

+ -  Does ding-libs support config validation?

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Describe, without going too low into technical details, what changes

+ need to happen in SSSD during implementation of this feature. This

+ section should be understood by a person with understanding of how SSSD

+ works internally but doesn't have an in-depth understanding of the code.

+ For example, it's fine to say that we implement a new option ``foo``

+ with a default value ``bar``, but don't talk about how ``foo`` is

+ processed internally or which structure stores the value of ``foo``. In

+ some cases (internal APIs, refactoring, ...) this section might blend

+ with the next one.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ A more technical extension of the previous section. Might include

+ low-level details, such as C structures, function synopsis etc. In case

+ of very trivial features (e.g a new option), this section can be merged

+ with the previous one.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ Does your feature involve changes to configuration, like new options or

+ options changing values? Summarize them here. There's no need to go into

+ too many details, that's what man pages are for.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ This section should explain to a person with admin-level of SSSD

+ understanding how this change affects run time behaviour of SSSD and how

+ can an SSSD user test this change. If the feature is internal-only,

+ please list what areas of SSSD are affected so that testers know where

+ to focus.

@@ -0,0 +1,252 @@ 

+ LDAP provider integration tests

+ ===============================

+ 

+ Related tickets:

+ 

+ -  `#2541 <https://pagure.io/SSSD/sssd/issue/2541>`__

+ -  `#2545 <https://pagure.io/SSSD/sssd/issue/2545>`__

+ 

+ Problem statement

+ -----------------

+ 

+ We'd like to run some sssd/LDAP integration tests during day-to-day

+ development. They should be low-overhead, completing in under 5 minutes,

+ and run as part of "make check" and "contrib/ci/run", under a

+ non-privileged user. They may require special configure options to be

+ executable, and be skipped if the options are not provided.

+ 

+ Use cases

+ ---------

+ 

+ A developer modifies a part of LDAP-involved data path and wishes to

+ quickly check sanity of the change. He/she then runs "make check" or

+ "contrib/ci/run", which include the LDAP integration tests.

+ 

+ A developer submits a change (possibly) affecting the LDAP-involved data

+ paths and a reviewer wishes to check the sanity of the change before

+ ACK'ing it. The reviewer then requests a CI job run, which includes the

+ LDAP integration tests.

+ 

+ Overview of the solution

+ ------------------------

+ 

+ The suite should use the pytest test framework.

+ 

+ Tests are executed as part of "make check", which is also included into

+ "contrib/ci/run". As our Makefiles use Automake's parallel test

+ execution harness and sssd data and socket directories are compiled-in

+ currently and cannot be shared, there can only be one Automake-level

+ integration test suite. Any possible parallelization should be

+ implemented within.

+ 

+ Because "make check" and "contrib/ci/run" are supposed to be executable

+ in largely arbitrary environments and under regular users, the sssd

+ needs to be tricked into believing it is running under a root account

+ and tests need to be tricked into using libnss\_sss and pam\_sss from

+ the build instead of the NSS and PAM services specified for the system.

+ The first two can be done with the help of "cwrap" wrappers. The latter

+ would require cwrap support for the PAM library, which isn't implemented

+ yet, but might be in the future. As of now, only libnss\_sss can be

+ tested.

+ 

+ Because default, compiled-in sssd data and socket locations are not

+ accessible to regular users, and there is currently no way to change

+ them after the build, running the tests will require configuring the

+ build with user-writeable locations. Otherwise the tests will be skipped

+ during the "make check" run and Automake will report them as such. It is

+ possible that in the future a way to change them after the build will be

+ implemented and this requirement will be lifted.

+ 

+ Implementation details

+ ----------------------

+ 

+ All tests are invoked with src/tests/cwrap/cwrap\_test\_setup.sh sourced

+ into the shell, which sets up NSS and UID wrappers to make tests assume

+ they're running under root and use libnss\_sss from the build tree.

+ 

+ At the moment, running the tests requires configuring the build to have

+ data and sockets located in user-writeable directories. The specific

+ locations might be communicated to the test suite via a

+ configure-generated Python or Bash module, or a C program outputting

+ them when invoked. If at least one of these locations is non-writeable,

+ the test suite will exit to Automake with code 77, indicating SKIPPED

+ status.

+ 

+ However, a way to change these at startup time might be implemented

+ later, removing this requirement. E.g. data and socket directories might

+ be specified in the configuration file for the sssd daemons, and the

+ socket location might be specified to libnss\_sss and pam\_sss via an

+ environment variable. See

+ `#2545 <https://pagure.io/SSSD/sssd/issue/2545>`__.

+ 

+ The OpenLDAP server can be executed with configuration and databases

+ located under arbitrary (temporary) directories which will be created

+ during testing. It is not yet known how to make 389-ds do the same.

+ 

+ The communication with the LDAP server can be left unencrypted at least

+ for the start, simplifying setup and debugging.

+ 

+ The LDAP server setup/teardown (for either of the servers) will be done

+ in Bash to simplify initial development and later possibly converted to

+ (a bit more robust) Python, when all the details are clear. The

+ setup/teardown scripts will be executed from a pytest fixture

+ setup/teardown.

+ 

+ The pytest suite will do further setup itself according to specific test

+ requirements, including: directory population/cleanup, generating sssd

+ configuration, starting/stopping sssd.

+ 

+ The tests themselves might include listing/retrieving rfc2307(bis) user

+ and group information, including nested groups, perhaps using the

+ standard "pwd" and "grp" modules. Some of the tests that can be

+ implemented initially follow, most useful first.

+ 

+ Sanity

+ ~~~~~~

+ 

+ ::

+ 

+     Fixture rfc2307:

+         enumerate = true / false

+         ldap_schema = rfc2307

+         3 users

+         3 user groups

+         1 empty group

+         1 two-user group

+ 

+     Fixture rfc2307bis:

+         Fixture rfc2307

+         ldap_schema = rfc2307bis

+         1 group with empty group inside

+         1 group with two empty groups inside

+         1 group with a single-user group inside

+         1 group with a two-user group inside

+         1 group with two single-user groups inside

+         A basic group membership loop: A->B->A

+         A branched group membership loop: A->B, A->D, A->C->A

+ 

+     Tests:

+         List all users/groups with pwd.getpwall/grall()

+         Retrieve a user/group by UID/GID with pwd.getpwuid/grgid()

+         Retrieve a non-existent user/group by UID/GID with pwd.getpwuid/grgid()

+         Retrieve a user/group by name with pwd.getpwnam/grnam()

+         Retrieve a non-existent user/group by name with pwd.getpwnam/grnam()

+ 

+ Cache

+ ~~~~~

+ 

+  ::

+ 

+     Fixture:

+         enumerate = true / false

+         enum_cache_timeout = 4s

+         ldap_enumeration_refresh_timeout = 0

+         3 users

+         3 user groups

+ 

+     Tests:

+         Cache refresh

+         1. Enumerate users/groups with pwd.getpwall/grall()

+         2. Within enum_cache_timeout:

+             2.1 Add/remove user/group

+             2.2 Enumerate users/groups with pwd.getpwall/grall(),

+                 check for change absence

+         3. After enum_cache_timeout passed from step 1:

+            enumerate users/groups with pwd.getpwall/grall(), check for change

+         No-wait percentage

+         ...

+         Negative timeout

+         ...

+ 

+ Filter users/groups

+ ~~~~~~~~~~~~~~~~~~~

+ 

+  ::

+ 

+     Fixture:

+         3 users

+         3 user groups

+         filter_users/groups: none/one/two

+ 

+     Tests:

+         Enumerate users/groups with pwd.getpwall/grall()

+         Retrieve a filtered user/group by UID/GID with pwd.getpwuid/grgid()

+         Retrieve a non-filtered user/group by UID/GID with pwd.getpwuid/grgid()

+ 

+ Override homedir

+ ~~~~~~~~~~~~~~~~

+ 

+  ::

+ 

+     Fixture:

+         1 user with homedir A

+         1 user without homedir

+         override_homedir = B

+ 

+     Tests:

+         Retrieve the users with pwd.getpwuid/nam/all()

+ 

+ Fallback homedir

+ ~~~~~~~~~~~~~~~~

+ 

+  ::

+ 

+     Fixture:

+         1 user with homedir A

+         1 user without homedir

+         fallback_homedir = B

+ 

+     Tests:

+         Retrieve the users using pwd.getpwuid/nam/all()

+ 

+ Override shell

+ ~~~~~~~~~~~~~~

+ 

+  ::

+ 

+     Fixture:

+         1 user with shell A

+         1 user without shell

+         override_shell = B

+ 

+     Tests:

+         Retrieve the users using pwd.getpwuid/nam/all()

+ 

+ Vetoed shells / shell fallback

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+  ::

+ 

+     Fixture:

+         1 user with shell A

+         1 user with shell B

+         1 user without shell

+         override_shell = C

+ 

+     Tests:

+         Retrieve the users using pwd.getpwuid/nam/all()

+ 

+ Default shell

+ ~~~~~~~~~~~~~

+ 

+  ::

+ 

+     Fixture:

+         1 user with shell A

+         1 user without shell

+         default_shell = B

+ 

+     Tests:

+         Retrieve the users using pwd.getpwuid/nam/all()

+ 

+ Configuration changes

+ ---------------------

+ 

+ Sssd, libnss\_sss and pam\_sss might require changes allowing

+ configuration of data and socket locations.

+ 

+ Authors

+ -------

+ 

+ Nikolai Kondrashov with help from Martin Kosek, Jakub Hrozek, Lukas

+ Slebodnik and Simo Sorce.

@@ -0,0 +1,411 @@ 

+ Data Provider Refactoring

+ =========================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/385 <https://pagure.io/SSSD/sssd/issue/385>`__

+ 

+ Problem statement

+ -----------------

+ 

+ The current state of the data provider interface is not extensible

+ enough to fulfil the needs of planned SSSD features such as the `SSSD Status Tool

+ <https://docs.pagure.org/SSSD.sssd/design_pages/sssctl.html>`__.

+ Our main aim is to make it simple to add new methods,

+ properties and possibly signals using our *sbus* interface. As a side

+ effect we will also solve the following issues present in the current code:

+ 

+ -  encapsulate data provider from the rest of the code

+ -  fix poor memory hierarchy which creates occasional race condition on

+    shutdown

+ -  convert method handlers to *simple* tevent requests that are not

+    aware of data provider

+ -  handle D-Bus message reply automatically in data provider code

+ 

+ Terminology

+ ~~~~~~~~~~~

+ 

+ This section clarifies the terminology that is used in this document.

+ 

+ -  **Backend**: Implementation of a domain (periodic tasks,

+    online/offline callbacks, online check, ...)

+ -  **Data Provider**: Interface between backend and responders

+ -  **Module**: library implementing data provider interface (LDAP, IPA,

+    AD, KRB5, PROXY)

+ -  **Target**: functionality implemented in modules (id, auth, chpass,

+    selinux, autofs, sudo, hostid)

+ 

+ A general overview of the communication process is as follows.

+ 

+ #. Responder issues a method call with Data Provider through DP D-Bus

+    API

+ #. Data Provider calls a method handler registered by configured module

+ #. Method handler is finished

+ #. Reply is sent to responder

+ 

+ Current state

+ -------------

+ 

+ This is just a brief summarization, please refer to the code to get the

+ whole picture.

+ 

+ At the moment each target can have only one method specified. The

+ method is defined by providing bet\_ops data in the

+ sssm\_$module\_$target\_init function. The bet\_ops structure contains

+ *handler*, which defines a method handler, together with

+ *check\_online*, which defines a method called when SSSD checks whether

+ it can reestablish a connection; it is used only with the ID provider.

+ The field *finalize* was probably introduced as a clean-up function,

+ but it is not used at all at the moment.

+ 

+ Even though it is not possible with the current code to have different

+ private data for different methods, this structure could be extended to

+ allow more methods. However, it would be better to have this in a more

+ automated and controlled way, and even then we still couldn't use

+ properties and signals. ::

+ 

+     struct bet_ops {

+         be_req_fn_t check_online;

+         be_req_fn_t handler;

+         be_req_fn_t finalize;

+     };

+ 

+ Each target is defined in *struct bet\_data*. ::

+ 

+     static struct bet_data bet_data[] = {

+         {BET_NULL, NULL, NULL},

+         {BET_ID, CONFDB_DOMAIN_ID_PROVIDER, "sssm_%s_id_init"},

+         [...]

+         {BET_MAX, NULL, NULL}

+     };

+ 

+ The initialization function assigns the bet\_ops structure together

+ with private data. The private data are attached to *be\_ctx* in the

+ talloc memory hierarchy, which results in race conditions during the

+ shutdown process. This is currently worked around by *be\_spy*, which

+ basically forces the desired order of freeing data; however, we have

+ seen some crashes on shutdown which we have been unable to figure out

+ so far even with spies. ::

+ 

+     /* Auth Handler */

+     struct bet_ops sdap_auth_ops = {

+         .handler = sdap_pam_auth_handler,

+         .finalize = sdap_shutdown

+     };

+ 

+     int sssm_ldap_auth_init(struct be_ctx *bectx,

+                             struct bet_ops **ops,

+                             void **pvt_data)

+     {

+         struct sdap_auth_ctx *ctx;

+         int ret;

+ 

+         [...]

+ 

+             *ops = &sdap_auth_ops;

+             *pvt_data = ctx;

+         }

+ 

+         return ret;

+     }

+ 

+ Goals to achieve

+ ~~~~~~~~~~~~~~~~

+ 

+ -  make adding a new client automated and error-proof

+ -  make adding a new target automated and error-proof

+ -  make adding a new method automated and error-proof

+ -  create a proper talloc hierarchy so we can control the clean-up

+    process

+ -  support a module constructor and private data shared across the

+    targets' initialization functions

+ -  make method handlers pure tevent requests that return a single

+    error code

+ -  make method handlers unaware of the reply process

+ -  improve debugging capabilities

+ 

+    -  keep track of active requests

+    -  make each request clearly visible in logs

+ 

+ -  allow methods with different output parameters

+ -  allow D-Bus objects, properties and signals

+ -  properly terminate all requests on clean up

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ A responder sends a *D-Bus method* to the data provider which is handled

+ by a D-Bus method handler. Depending on the introspection file this handler

+ may be called directly with *automatically parsed parameters or the

+ parsing may be left to handler implementation*. In the handler, we

+ process parameters and *create a data provider request*. This request

+ will call a data provider method handler which is a basic **tevent

+ request**. When the request is finished, the data provider tevent

+ callback is invoked and it sends a reply back to the responder.

+ Depending on the request result the reply message is either an error,

+ sending an error code and message, or a success, where a default or

+ *custom \_recv* function may be called to obtain and send additional

+ attributes.

+ 

+ The whole data provider lifetime is controlled by a tevent request.

+ There is only one way in *(\_send)* and one way out *(\_recv)* from the

+ request. The data provider method handler has no knowledge about D-Bus

+ or data provider at all. The data flow looks like this: ::

+ 

+     Responder -> (dbus) -> DP D-Bus method handler -> DP Request -> (tevent) -> DP method handler

+ 

+     ... asynchronous processing ...

+ 

+     (tevent done) -> (dp request done) -> (error detected) -> (dbus error) -> Responder

+                                        -> (success)        -> (receive callback) -> (dbus) -> Responder

+ 

+ Data Provider Initialization

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ This section describes what is needed to initialize the data provider.

+ It covers only the parts that may change in the future in order to

+ extend SSSD's functionality; it does not describe how things work under

+ the hood. The initialization basically consists of these steps:

+ 

+ **1. Initialization of data provider modules and targets**

+ 

+ Each module and target needs to be initialized through its initializer

+ functions in **src/providers/$modname/$modname\_init.c**. The whole

+ module can contain a constructor that may create data shared across the

+ targets' initialization functions, though it is not required. The

+ function names are generated as follows:

+ 

+ A constructor is named **sssm\_$modname\_init** and has this prototype: ::

+ 

+     errno_t sssm_$modname_init(TALLOC_CTX *mem_ctx, struct be_ctx *be_ctx, void **shared_data);

+ 

+ A target initializer is named **sssm\_$modname\_$target\_init** and has

+ this prototype: ::

+ 

+     errno_t sssm_$modname_$target_init(TALLOC_CTX *mem_ctx, struct be_ctx *be_ctx, void *shared_data, struct dp_method *dp_methods);

+ 

+ At the end, the target initializer sets all methods implemented by

+ this target via dp\_set\_method(), for example: ::

+ 

+     errno_t sssm_ipa_sudo_init(TALLOC_CTX *mem_ctx,

+                                struct be_ctx *be_ctx,

+                                void *module_data,

+                                struct dp_method *dp_methods)

+     {

+         struct ipa_sudo_ctx *sudo_ctx;

+ 

+         /* ... */

+ 

+         dp_set_method(dp_methods, DPM_SUDO_FULL_REFRESH, dp_ipa_sudo_full_refresh_send, dp_ipa_sudo_full_refresh_recv, sudo_ctx);

+         dp_set_method(dp_methods, DPM_SUDO_SMART_REFRESH, dp_ipa_sudo_smart_refresh_send, dp_ipa_sudo_smart_refresh_recv, sudo_ctx);

+         dp_set_method(dp_methods, DPM_SUDO_RULES_REFRESH, dp_ipa_sudo_rules_refresh_send, dp_ipa_sudo_rules_refresh_recv, sudo_ctx);

+     }

+ 

+ **2. Registering a data provider client -- responders**

+ 

+ When a responder wants to establish a D-Bus connection with the data

+ provider, it needs to call the Register method to handshake with the

+ provider. Here we check that the client is known and set up D-Bus method

+ handlers. Each client is monitored, and when its connection is dropped

+ we remove the client's active requests. Internally, we actually only

+ remove the sbus connection from the request but otherwise try to finish

+ it, so that data which were already downloaded can be saved into the

+ sysdb for further use.

+ 

+ To add a new well-known client, add it into **enum dp\_clients** in

+ *dp\_private.h* and update **dp\_client\_to\_string()** in

+ *dp\_client.c*.

+ 

+ **3. Registering D-Bus methods**

+ 

+ When the D-Bus service is created, D-Bus method handlers need to be

+ registered. The following steps are needed to add a new method or

+ interface to the data provider.

+ 

+ #. Add the new method (or interface) to the data provider introspection

+    file **dp\_iface.xml**

+ #. Register this interface or method in **dp\_iface.c** by providing the

+    interface structure generated from the introspection file and

+    amending the **dp\_map** array

+ #. (optional, if needed) Add a new data provider method and/or target

+    into **enum dp\_methods** and **enum dp\_targets**, respectively

+ #. Implement the method handler

+ 

+ D-Bus method handlers

+ ^^^^^^^^^^^^^^^^^^^^^

+ 

+ The purpose of a D-Bus method handler is to parse parameters from a

+ D-Bus message (if they are not parsed automatically) and to create data

+ specific to the method called. Then the handler issues a new data

+ provider request through dp\_file\_request(). For example: ::

+ 

+     int dp_sudo_full_refresh(struct sbus_request *sbus_req,

+                              void *dp_cli,

+                              uint32_t dp_flags)

+     {

+         dp_file_request(dp_cli, "SUDO Full Refresh", sbus_req,

+                         dp_req_reply_default,

+                         DPT_SUDO, DPM_SUDO_FULL_REFRESH, dp_flags, NULL);

+ 

+         return EOK;

+     }

+ 

+ The current handler, rewritten for the new data provider interface, may

+ look like this: ::

+ 

+     int dp_sudo_handler(struct sbus_request *sbus_req, void *dp_cli)

+     {

+         struct dp_sudo_data *data;

+         uint32_t dp_flags;

+         errno_t ret;

+ 

+         data = talloc_zero(sbus_req, struct dp_sudo_data);

+         if (data == NULL) {

+             return ENOMEM;

+         }

+ 

+         ret = dp_sudo_parse_message(data, sbus_req->message, &dp_flags,

+                                     &data->type, &data->rules);

+         if (ret != EOK) {

+             return ret;

+         }

+ 

+         dp_file_request(dp_cli, "sudo", sbus_req, dp_req_reply_std,

+                         DPT_SUDO, DPM_SUDO_HANDLER, dp_flags, data);

+ 

+         return EOK;

+     }

+ 

+ If dp\_flags are provided, the data provider will check the flags and

+ act accordingly. Currently only DP\_FAST\_REPLY is available; if it is

+ set and the back end is offline,

+ *org.freedesktop.sssd.Error.DataProvider.Offline*

+ is sent immediately without calling the request handler.

+ 

+ Data Provider Request Handlers

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ A data provider request handler is a tevent request implementing the

+ following signatures: ::

+ 

+     struct dp_req_params {

+         struct tevent_context *ev;

+         struct be_ctx *be_ctx;

+         struct sss_domain_info *domain;

+         enum dp_methods method;

+         void *method_data;

+         void *req_data;

+     };

+ 

+     typedef struct tevent_req *

+     (*dp_req_send_fn)(TALLOC_CTX *mem_ctx, struct dp_req_params *params);

+ 

+     typedef errno_t

+     (*dp_req_recv_fn)(TALLOC_CTX *mem_ctx, struct tevent_req *req, void *data);

+ 

+ All parameters except the memory context are combined into one structure to

+ simplify possible future extensions (thus when a new parameter needs to

+ be added, we don't have to modify existing handlers). The *data* in the

+ receive function may be used to pass output parameters into the D-Bus

+ reply. For example, the following reply callback simulates the current reply

+ message, which returns major and minor errors together with an error message. ::

+ 

+     void dp_req_reply_std(const char *req_name,

+                           struct sbus_request *sbus_req,

+                           struct tevent_req *handler_req,

+                           dp_req_recv_fn recv_fn,

+                           void *pvt)

+     {

+         struct dp_reply_data reply;

+         const char *safe_err_msg;

+         errno_t ret;

+ 

+         ret = recv_fn(sbus_req, handler_req, &reply);

+         if (ret != EOK) {

+             DEBUG(SSSDBG_CRIT_FAILURE, "Bug: !EOK code returned?\n");

+             talloc_free(sbus_req);

+             return;

+         }

+ 

+         safe_err_msg = safe_be_req_err_msg(reply.message, reply.dp_error);

+ 

+         DP_REQ_DEBUG(SSSDBG_TRACE_LIBS, req_name, "Returning [%s]: %d,%d,%s",

+                      dp_err_to_string(reply.dp_error), reply.dp_error,

+                      reply.error, reply.message);

+ 

+         sbus_request_return_and_finish(sbus_req,

+                                        DBUS_TYPE_UINT16, &reply.dp_error,

+                                        DBUS_TYPE_UINT32, &reply.error,

+                                        DBUS_TYPE_STRING, &safe_err_msg,

+                                        DBUS_TYPE_INVALID);

+     }

+ 

+ On memory hierarchy

+ ~~~~~~~~~~~~~~~~~~~

+ 

+ The memory hierarchy is strictly specified and should not be

+ broken. It gives us the ability to cleanly free all data provider

+ data on SSSD exit. ::

+ 

+                                    struct be_ctx

+                                          |

+                                 struct data_provider

+                             /            |              \

+           struct dp_module[]      struct dp_target[]     struct dp_req [...]

+                            |             |                |

+                   module_data     struct dp_methods[]    req_data,tevent_req state,...

+                                          |

+                                     method_data

+ 

+ A destructor on data\_provider is set to ensure that all DP requests are

+ correctly terminated (sending a proper error message back to the responder)

+ before its private data is freed.

+ 

+ Implementation steps

+ ~~~~~~~~~~~~~~~~~~~~

+ 

+ #. (done) Implement the new data provider interface

+ #. (wip) Convert modules init functions

+ #. (wip) Convert existing handlers into tevent requests

+ #. Switch to the new interface

+ #. Add new methods and interfaces as needed

+ 

+ Responders

+ ~~~~~~~~~~

+ 

+ In the first stage, no changes to the responders need to be made. All

+ existing data provider methods will always succeed and return three

+ output parameters (major error, minor error, error message), as the

+ current code does. New methods that return an error or some output

+ parameters may be added without affecting the current responder data

+ provider code. When the new code is thoroughly tested, we can change the

+ existing methods to return either error or success, but this also

+ requires changes in the responders. I would like to write something

+ similar to cache\_req but I don't have any specific plan so far.

+ 

+ Questions

+ ~~~~~~~~~

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ No configuration changes.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ All existing tests must pass and no functionality may be broken.

+ 

+ How To Debug

+ ~~~~~~~~~~~~

+ 

+ The life cycle of each data provider request can be tracked in the debug

+ logs through a special message prefix: **DP Request [$name #$index]**. The

+ $name is the name of the request (i.e. which method was called); $index is

+ a cyclic number assigned to the request. When we run out of numbers, we

+ simply start from 1 again.

+ 

+ In the debugger, we can monitor active data provider requests, clients,

+ modules, and targets in **be\_ctx->provider**.

+ 

+ Authors

+ ~~~~~~~

+ 

+ Pavel Březina <`pbrezina@redhat.com <mailto:pbrezina@redhat.com>`__>

@@ -0,0 +1,99 @@ 

+ D-Bus Interface: Cached Objects

+ ===============================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2338 <https://pagure.io/SSSD/sssd/issue/2338>`__

+ 

+ Related design page(s):

+ 

+ - `DBus Responder <https://docs.pagure.org/SSSD.sssd/design_pages/dbus_responder.html>`__

+ 

+ Problem statement

+ -----------------

+ 

+ This design document describes how objects can be marked as cached.

+ 

+ Use cases

+ ---------

+ 

+ -  Allow tools like graphical user management to display a list of users

+    who recently logged in.

+ 

+ D-Bus Interface

+ ---------------

+ 

+ org.freedesktop.sssd.infopipe.Cache

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Object paths implementing this interface

+ """"""""""""""""""""""""""""""""""""""""

+ 

+ -  /org/freedesktop/sssd/infopipe/Users

+ -  /org/freedesktop/sssd/infopipe/Groups

+ 

+ Methods

+ """""""

+ 

+ -  ao List()

+ -  ao ListByDomain(s:domain\_name)

+ 

+    -  Returns a list of objects that contain the *cached* attribute.

+ 

+ Signals

+ """""""

+ 

+ None.

+ 

+ Properties

+ """"""""""

+ 

+ None.

+ 

+ org.freedesktop.sssd.infopipe.Cache.Object

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Object paths implementing this interface

+ """"""""""""""""""""""""""""""""""""""""

+ 

+ -  /org/freedesktop/sssd/infopipe/Users/\*

+ -  /org/freedesktop/sssd/infopipe/Groups/\*

+ 

+ Methods

+ """""""

+ 

+ -  b Store()

+ -  b Remove()

+ 

+    -  These methods add/remove the *cached* attribute to/from the object

+       under the *path* implementing this interface.

+ 

+ Signals

+ """""""

+ 

+ None.

+ 

+ Properties

+ """"""""""

+ 

+ None.

+ 

+ Overview of the solution

+ ------------------------

+ 

+ A new sysdb attribute, *ifp\_cached*, is created for user and group

+ objects. If this attribute is present, the object is considered to be

+ cached on the IFP D-Bus. The introspection of an object path */obj/path*

+ will report all cached objects in the subtree */obj/path/\**.

+ 

+ How To Test

+ -----------

+ 

+ Call the D-Bus methods and properties, for example with the **dbus-send**

+ tool. A cached object is supposed to appear in the introspection.

+ 

+ Authors

+ -------

+ 

+ -  Jakub Hrozek <`jhrozek@redhat.com <mailto:jhrozek@redhat.com>`__>

+ -  Pavel Březina <`pbrezina@redhat.com <mailto:pbrezina@redhat.com>`__>

@@ -0,0 +1,137 @@ 

+ D-Bus Interface: Domains

+ ========================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2187 <https://pagure.io/SSSD/sssd/issue/2187>`__

+ 

+ Related design page(s):

+ 

+ -  `DBus Responder <https://docs.pagure.org/SSSD.sssd/design_pages/dbus_responder.html>`__

+ 

+ Problem statement

+ -----------------

+ 

+ This design document describes how domain objects are exposed on the

+ bus.

+ 

+ D-Bus Interface

+ ---------------

+ 

+ org.freedesktop.sssd.infopipe.Domains

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Object paths implementing this interface

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ -  /org/freedesktop/sssd/infopipe/Domains

+ 

+ Methods

+ ^^^^^^^

+ 

+ -  ao List()

+ 

+    -  Returns a list of domains.

+ 

+ -  ao FindByName(s:domain\_name)

+ 

+    -  Returns the object path of *domain\_name*.

+ 

+ Signals

+ ^^^^^^^

+ 

+ None.

+ 

+ Properties

+ ^^^^^^^^^^

+ 

+ None.

+ 

+ org.freedesktop.sssd.infopipe.Domains.Domain

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Object paths implementing this interface

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ -  /org/freedesktop/sssd/infopipe/Domains/\*

+ 

+ Methods

+ ^^^^^^^

+ 

+ -  ao ListSubdomains()

+ 

+    -  Returns all subdomains associated with this domain.

+ 

+ Signals

+ ^^^^^^^

+ 

+ None.

+ 

+ Properties

+ ^^^^^^^^^^

+ 

+ -  **property** String name

+ 

+    -  The name of this domain. Same as the domain stanza in the

+       sssd.conf

+ 

+ -  **property** String[] primary\_servers

+ 

+    -  Array of primary servers associated with this domain

+ 

+ -  **property** String[] backup\_servers

+ 

+    -  Array of backup servers associated with this domain

+ 

+ -  **property** Uint32 min\_id

+ 

+    -  Minimum uid and gid value for this domain

+ 

+ -  **property** Uint32 max\_id

+ 

+    -  Maximum uid and gid value for this domain

+ 

+ -  **property** String realm

+ 

+    -  The Kerberos realm this domain is configured with

+ 

+ -  **property** String forest

+ 

+    -  The domain forest this domain belongs to

+ 

+ -  **property** String login\_format

+ 

+    -  The login format this domain expects.

+ 

+ -  **property** String fully\_qualified\_name\_format

+ 

+    -  The format of fully qualified names this domain uses

+ 

+ -  **property** Boolean enumerable

+ 

+    -  Whether this domain can be enumerated or not

+ 

+ -  **property** Boolean use\_fully\_qualified\_names

+ 

+    -  Whether this domain requires fully qualified names

+ 

+ -  **property** Boolean subdomain

+ 

+    -  Whether the domain is an autodiscovered subdomain or a

+       user-defined domain

+ 

+ -  **property** ObjectPath parent\_domain

+ 

+    -  Object path of a parent domain or empty string if this is a root

+       domain

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ Call the D-Bus methods and properties, for example with the **dbus-send**

+ tool.

+ 

+ Authors

+ ~~~~~~~

+ 

+ -  Pavel Březina <`pbrezina@redhat.com <mailto:pbrezina@redhat.com>`__>

@@ -0,0 +1,73 @@ 

+ Support for multiple D-Bus interfaces on single object path

+ ===========================================================

+ 

+ Related ticket(s):

+ 

+ -  `IFP: support multiple interfaces for

+    object <https://pagure.io/SSSD/sssd/issue/2339>`__

+ 

+ Problem Statement

+ -----------------

+ 

+ Currently our D-Bus implementation supports having only one interface

+ registered per object path prefix. This is a rather limiting

+ restriction, since it is often useful to separate shared methods and

+ properties into one interface that can even have the same

+ implementation for different object types. This will soon become a

+ serious drawback when the InfoPipe starts supporting signals and new

+ objects. SSSD already supports a few specialized interfaces, such as

+ DBus.Properties and DBus.Introspectable, but those are currently

+ hardcoded into the S-Bus dispatching logic, which can be avoided with

+ direct support for multiple interfaces.

+ 

+ Current state

+ -------------

+ 

+ The D-Bus interface in the InfoPipe is defined by an sbus\_vtable

+ structure and associated with a single object path or an object path

+ prefix followed by the '\*' wildcard. For example, the current list of

+ InfoPipe interfaces looks as follows: ::

+ 

+     static struct sysbus_iface ifp_ifaces[] = {

+         { "/org/freedesktop/sssd/infopipe", &ifp_iface.vtable },

+         { "/org/freedesktop/sssd/infopipe/Domains*", &ifp_domain.vtable },

+         { "/org/freedesktop/sssd/infopipe/Components*", &ifp_component.vtable },

+         { NULL, NULL },

+     };

+ 

+ There is only one allowed wildcard, '\*', and if present it has to be the

+ last character of the object path.

+ 

+ The dispatch logic is:

+ 

+ #. Look up the vtable by the object path from the message

+ #. Check that the interface name is the same as the interface defined in

+    the message; if not, check whether the interface in the message is one

+    of the hardcoded interfaces

+ #. Fail if the interface is invalid

+ #. Find and execute the method handler

+ 

+ Proposed solution

+ -----------------

+ 

+ The structure sysbus\_iface combines an object path (pattern) and a list

+ of supported interfaces on this object path.

+ 

+ The only supported wildcard is still '\*', but it can now be used

+ anywhere, and multiple times, in one object path.

+ 

+ The dispatch logic is:

+ 

+ #. Acquire the interface and object path from the D-Bus message

+ #. Find an object path pattern that matches the object path from the

+    message; if more than one such pattern exists, the first one

+    defined is used

+ #. Find a list of interfaces associated with this object path pattern;

+    if none is found, continue with the next matching pattern

+ #. Execute a message handler that is associated with both object path

+    and interface

+ 

+ Authors

+ -------

+ 

+ -  Pavel Březina <`pbrezina@redhat.com <mailto:pbrezina@redhat.com>`__>

@@ -0,0 +1,354 @@ 

+ DBus responder design

+ =====================

+ 

+ Related ticket(s):

+ 

+ -  `Provide an experimental DBus responder to retrieve custom

+    attributes from SSSD

+    cache <https://pagure.io/SSSD/sssd/issue/2072>`__

+ -  `Extend the LDAP backend to retrieve extended set of

+    attributes <https://pagure.io/SSSD/sssd/issue/2073>`__

+ 

+ Problem Statement

+ -----------------

+ 

+ The contemporary centralized user databases such as IPA or Active

+ Directory store many attributes that describe the user. Apart from

+ attributes that are related to a "computer user" entry such as user

+ name, login shell or an ID, the databases often store data about the

+ physical user represented by the entry, such as telephone number. Since

+ the SSSD already has means of connecting to the remote directory,

+ including advanced features like offline support or fail over, it would

+ appear as a natural choice for retrieving these attributes. However, the

+ only interface the SSSD provides towards the system at the moment is the

+ standard `POSIX

+ interface <https://www.gnu.org/software/libc/manual/html_node/Name-Service-Switch.html>`__

+ and a couple of ad-hoc application specific responders (sudo, ssh,

+ autofs). The purpose of this document is to describe a design of a new

+ responder, that would listen on the system bus and allow third party

+ consumers to retrieve custom attributes stored in a centralized database

+ via a DBus call.

+ 

+ The DBus interface design

+ -------------------------

+ 

+ This section gathers feedback expressed in mailing lists, private e-mail

+ conversations and IRC discussions and summarizes feature requests and

+ areas that need improvement into a design proposal of both the DBus API

+ and several required changes in the core SSSD daemon.

+ 

+ Cached objects

+ ~~~~~~~~~~~~~~

+ 

+ `D-Bus Interface: Cached

+ Objects <https://docs.pagure.org/SSSD.sssd/design_pages/dbus_cached_objects.html>`__

+ 

+ Object exposed on the bus

+ ~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Instead of a single interface returning object attributes in an

+ LDAP-like way, the interface would be built in an object-oriented

+ fashion. Each object (i.e. a user or a group) would be identified by an

+ object path, and methods would be available to the interface user to make

+ it possible to retrieve either a single object or a set of objects.

+ 

+ The interface will support users, groups and domains.

+ 

+ Representing users and groups on the bus

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ `D-Bus Interface: Users and

+ Groups <https://docs.pagure.org/SSSD.sssd/design_pages/dbus_users_and_groups.html>`__

+ 

+ Representing SSSD processes on the bus

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ -  **object path**: /org/freedesktop/sssd/infopipe/Components/monitor

+ -  **object path**:

+    /org/freedesktop/sssd/infopipe/Components/Responders/$responder\_name

+ -  **object path**:

+    /org/freedesktop/sssd/infopipe/Components/Backends/$sssd\_domain\_name

+ 

+ -  **method** org.freedesktop.sssd.infopipe.ListComponents()

+ 

+    -  returns: Array of object paths representing component objects

+ 

+ -  **method** org.freedesktop.sssd.infopipe.ListResponders()

+ 

+    -  returns: Array of object paths representing component objects

+ 

+ -  **method** org.freedesktop.sssd.infopipe.ListBackends()

+ 

+    -  returns: Array of object paths representing component objects

+ 

+ -  **method** org.freedesktop.sssd.infopipe.FindMonitor()

+ 

+    -  returns: Object path representing the monitor object

+ 

+ -  **method** org.freedesktop.sssd.infopipe.FindResponderByName(String

+    name)

+ 

+    -  *name*: The name of the responder to retrieve

+    -  returns: Object path representing the responder object

+ 

+ -  **method** org.freedesktop.sssd.infopipe.FindBackendByName(String

+    name)

+ 

+    -  *name*: The name of the backend to retrieve

+    -  returns: Object path representing the backend object

+ 

+ The name "Components" is chosen so as not to imply any particular

+ implementation on the SSSD side.

+ 

+ The component objects implement the

+ org.freedesktop.sssd.infopipe.Components interface, which is defined as:

+ 

+ -  **method** org.freedesktop.sssd.infopipe.Components.Enable()

+ 

+    -  returns: nothing

+    -  note: changes will be visible after SSSD is restarted

+ 

+ -  **method** org.freedesktop.sssd.infopipe.Components.Disable()

+ 

+    -  returns: nothing

+    -  note: changes will be visible after SSSD is restarted

+ 

+ -  **method**

+    org.freedesktop.sssd.infopipe.Components.ChangeDebugLevel(Uint32

+    debug\_level)

+ 

+    -  *debug\_level*: Debug level to set

+    -  returns: nothing

+    -  note: changes will be permanent but do not require a restart of the

+       daemon

+ 

+ -  **property** String name

+ 

+    -  The name of this service.

+ 

+ -  **property** Uint32 debug\_level

+ 

+    -  The debug level of this service.

+ 

+ -  **property** Boolean enabled

+ 

+    -  Whether the service is enabled or not

+ 

+ -  **property** string type

+ 

+    -  Type of the component. One of "monitor", "responder", "backend".

+ 

+ This approach will completely distinguish SSSD processes from services

+ and domains, which are logical units that should not contain any

+ information about SSSD architecture.

+ 

+ Representing service objects on the bus

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ This API should include methods to represent service object(s) and

+ provide basic information and configuration abilities.

+ 

+ -  **object path**: /org/freedesktop/sssd/infopipe/Services/$service

+ 

+ -  **method** org.freedesktop.sssd.infopipe.ListServices()

+ 

+    -  returns: Array of object paths representing Service objects

+ 

+ -  **method** org.freedesktop.sssd.infopipe.FindServiceByName(String

+    name)

+ 

+    -  *name*: The name of the service to retrieve

+    -  returns: Object path representing the service object

+ 

+ The service object will in the first iteration include several

+ properties describing the service. As this iteration doesn't allow any

+ modification, only properties and not methods are considered:

+ 

+ -  **property** String name

+ 

+    -  The name of this service.

+ 

+ -  service dependent properties

+ 

+ Other properties might be added upon request.

+ 

+ Representing domain objects on the bus

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ For some consumers (such as realmd), it's important to also know the

+ properties of a domain. The API should include methods to retrieve

+ active domain object(s) and to represent the domains as objects on the

+ bus as well.

+ 

+ -  **object path**: /org/freedesktop/sssd/infopipe/Domains/$domain

+ 

+ The synopsis of these calls would look like:

+ 

+ -  **method** org.freedesktop.sssd.infopipe.ListDomains()

+ 

+    -  returns: Array of object paths representing Domain objects

+ 

+ -  **method**

+    org.freedesktop.sssd.infopipe.ListSubdomainsByDomain(String name)

+ 

+    -  returns: Array of object paths representing Domain objects

+       associated with domain $name

+ 

+ -  **method** org.freedesktop.sssd.infopipe.FindDomainByName(String

+    name)

+ 

+    -  *name*: The name of the domain to retrieve

+    -  returns: Object path representing the domain object

+ 

+ The domain object will in the first iteration include several properties

+ describing the domain. As this iteration doesn't allow any modification,

+ only properties and not methods are considered:

+ 

+ -  **property** String name

+ 

+    -  The name of this domain. Same as the domain stanza in the

+       sssd.conf

+ 

+ -  **property** String[] primary\_servers

+ 

+    -  Array of primary servers associated with this domain

+ 

+ -  **property** String[] backup\_servers

+ 

+    -  Array of backup servers associated with this domain

+ 

+ -  **property** Uint32 min\_id

+ 

+    -  Minimum uid and gid value for this domain

+ 

+ -  **property** Uint32 max\_id

+ 

+    -  Maximum uid and gid value for this domain

+ 

+ -  **property** String realm

+ 

+    -  The Kerberos realm this domain is configured with

+ 

+ -  **property** String forest

+ 

+    -  The domain forest this domain belongs to

+ 

+ -  **property** String login\_format

+ 

+    -  The login format this domain expects.

+ 

+ -  **property** String fully\_qualified\_name\_format

+ 

+    -  The format of fully qualified names this domain uses

+ 

+ -  **property** Boolean enumerable

+ 

+    -  Whether this domain can be enumerated or not

+ 

+ -  **property** Boolean use\_fully\_qualified\_names

+ 

+    -  Whether this domain requires fully qualified names

+ 

+ -  **property** Boolean subdomain

+ 

+    -  Whether the domain is an autodiscovered subdomain or a

+       user-defined domain

+ 

+ -  **property** ObjectPath parent\_domain

+ 

+    -  Object path of a parent domain or empty string if this is a root

+       domain

+ 

+ Other properties such as provider type or case sensitivity might be

+ added upon request. Right now, we need something other developers can

+ experiment with.

+ 

+ Synchronous getter behaviour

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Retrieving a property with a getter will always be synchronous and return

+ the value currently cached. The getter might schedule an out-of-band

+ update depending on the state of the cached object. The primary reason

+ for the getter being synchronous is composability: being able to call N

+ getters in a loop and construct a reply message containing N properties

+ without resorting to asynchronous updates

+ of the properties.

+ 

+ Callers that wish to have an up-to-date view of the properties should

+ update the object by calling a special ``update`` (not included atm)

+ method or subscribe to the PropertiesChanged signal.

+ 

+ SSSD daemon features

+ --------------------

+ 

+ Apart from features that will directly benefit the new interface, the

+ SSSD itself must adapt to some requirements as well.

+ 

+ Access control

+ ~~~~~~~~~~~~~~

+ 

+ The DBus responder needs to limit who can request information at all and

+ what attributes can be returned.

+ 

+ Limiting access to the responder

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The DBus responder will re-use the same mechanism the PAC responder uses

+ where UIDs of clients that can contact the responder will be enumerated

+ in the "allowed\_uids" parameter of the responder configuration.

+ 

+ In a future enhancement, we might add a "self" mechanism, where a client

+ will be allowed to read its own attributes. As limiting attribute access

+ might be different for this use-case, the first iteration of the

+ responder will not include the "self" mechanism.

+ 

+ Limiting access to attributes

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The responder will have a whitelist of attributes that the client can

+ query. No other attributes will be returned. Requesting an attribute

+ that is not permitted will yield an empty response, the same as if the

+ attribute didn't exist. The whitelist will by default include the standard

+ set of POSIX attributes as returned by e.g. ``getpwnam``.

+ 

+ The administrator will be allowed to extend the whitelist in sssd.conf

+ using a configuration directive either in the ``[ifp]`` section itself

+ or per-domain. The configuration directive shall allow either explicitly

+ adding attributes to the whitelist (using ``+attrname``) or explicitly

+ removing them (using ``-attrname``).

+ 

+ The following example illustrates explicitly allowing the

+ telephoneNumber attribute to be queried and removing the gecos attribute

+ from the whitelist. ::

+ 

+         [ifp]

+         user_attributes = +telephoneNumber, -gecos

+ 

+ Support for non-POSIX users and groups

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Currently the SSSD supports looking up POSIX users and groups, mostly

+ due to the fact that primary consumers are POSIX interfaces such as the

+ Name Service Switch. For instance, the search filters in back ends

+ require the presence of ID attributes.

+ 

+ In contrast, users and groups that consumers of this new interface

+ require often lack the POSIX attributes. The SSSD must be extended so

+ that even non-POSIX users and groups are handled well.

+ 

+ Do not require enumeration to be enabled to retrieve set of users

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ At the moment, the SSSD can either fetch a single user (using getpwnam

+ for example) or all available users (using getpwent). As a result, all

+ proposed DBus calls require enumeration to be switched on in order to be

+ able to retrieve sets of users. The SSSD needs to either grow a way to

+ retrieve several entries at once without enumerating or needs to make

+ enumeration much faster.

+ 

+ Authors

+ -------

+ 

+ -  Jakub Hrozek <`jhrozek@redhat.com <mailto:jhrozek@redhat.com>`__>

+ -  Pavel Březina <`pbrezina@redhat.com <mailto:pbrezina@redhat.com>`__>

+ -  Stef Walter <`stefw@redhat.com <mailto:stefw@redhat.com>`__>

@@ -0,0 +1,117 @@ 

+ D-Bus Signal: Notify Property Changed

+ =====================================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2233 <https://pagure.io/SSSD/sssd/issue/2233>`__

+ 

+ Related design page(s):

+ 

+ -  `DBus Responder <https://docs.pagure.org/SSSD.sssd/design_pages/dbus_responder.html>`__

+ 

+ Problem statement

+ -----------------

+ 

+ This design document describes how to implement

+ org.freedesktop.DBus.Properties.PropertiesChanged signal for SSSD

+ objects exported in the IFP responder.

+ 

+ D-Bus Interface

+ ---------------

+ 

+ org.freedesktop.DBus.Properties

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Signals

+ ^^^^^^^

+ 

+ -  PropertiesChanged(s interface\_name, {sv} changed\_properties, as

+    invalidated\_properties)

+ 

+    -  interface\_name: name of the interface on which the properties are

+       defined

+    -  changed\_properties: changed properties with new values

+    -  invalidated\_properties: changed properties whose new values are

+       not sent with them

+    -  this signal is emitted for every property annotated with

+       org.freedesktop.DBus.Property.EmitsChangedSignal; this annotation

+       may also be used for the whole interface, meaning that every

+       property within the interface emits the signal

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Changes in properties are detected in a new LDB plugin inside a *mod*

+ hook. The plugin writes the list of changed properties into a TDB-based

+ changelog which is periodically consumed by the IFP responder. IFP then

+ emits a PropertiesChanged signal for each modified object.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ TDB Format

+ ^^^^^^^^^^

+ 

+ -  **TDB Name**: *ifp\_changelog.tdb*

+ -  **Key**: the DN of the modified object

+ -  **Value**: a chained list of modified properties in the form

+    *total\_num\\0prop1\\0prop2\\0...\\0*

+ 

+ IFP Side

+ ^^^^^^^^

+ 

+ #. TDB database is created on IFP start and deleted on IFP termination.

+ 

+    -  on IFP start:

+ 

+       -  if the TDB file does not exist, it is created

+       -  if the TDB file exists (unexpected termination of IFP), it is

+          flushed; we do not care about the data inside

+ 

+    -  on correct IFP termination

+ 

+       -  the TDB file is deleted

+ 

+ #. A periodic task, *IFP: notify properties changed*, is created; it is
 
+    responsible for emitting the *PropertiesChanged* signal

+ 

+    -  Periodic task flow:

+ 

+       #. Lock TDB for read-only access

+       #. Traverse the TDB and remember dn and properties for all

+          modified objects

+       #. Flush TDB

+       #. Release the lock

+       #. Create and emit a D-Bus signal for each object that is
 
+          exported on the IFP bus and supports the *PropertiesChanged*
 
+          signal

+ 

+ LDB Plugin Side

+ ^^^^^^^^^^^^^^^

+ 

+ #. If the TDB file does not exist, just quit
 
+ #. If the modified object supports the signal, store it in the TDB

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ In IFP section:

+ 

+ -  **ifp\_notification\_interval**: period (in seconds) of *IFP: notify
 
+    properties changed*; disabled if 0, default 300 (5 minutes)

+ 
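+ For example (the interval value is illustrative):

```ini
[ifp]
# emit PropertiesChanged signals every 60 seconds; 0 disables the task
ifp_notification_interval = 60
```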

+ How To Test

+ ~~~~~~~~~~~

+ 

+ #. Hook onto the *PropertiesChanged* signal, e.g. with *dbus-monitor*

+ #. Trigger change of user/group

+ #. The signal should be received

+ 
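+ Step 1 can be done, for instance, with a match rule such as the
 
+ following (standard dbus-monitor usage; the bus choice depends on where
 
+ IFP is exposed):

```
dbus-monitor --system \
  "type='signal',interface='org.freedesktop.DBus.Properties',member='PropertiesChanged'"
```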

+ Questions

+ ~~~~~~~~~

+ 

+ #. Do we want to use *changed\_properties* or *invalidated\_properties*?

+ 

+ Authors

+ ~~~~~~~

+ 

+ -  Pavel Březina <`pbrezina@redhat.com <mailto:pbrezina@redhat.com>`__>

@@ -0,0 +1,636 @@ 

+ Simple D-Bus API design

+ =======================

+ 

+ Related ticket(s):

+ 

+ -  `Create a library to simplify usage of D-Bus

+    responder <https://pagure.io/SSSD/sssd/issue/2254>`__

+ 

+ Problem Statement

+ -----------------

+ 

+ Using D-Bus requires a significant amount of knowledge of D-Bus and its
 
+ underlying library API. Libraries like libdbus or libdbus\_glib are
 
+ quite complex and require a lot of code to do even the simplest things.
 
+ The purpose of this document is to describe a new public API to access
 
+ the most fundamental parts of SSSD's D-Bus responder in a simple way so
 
+ that a user doesn't have to deal with D-Bus at all.

+ 

+ Prerequisites

+ -------------

+ 

+ -  Each attribute of every D-Bus object accessible via this API is
 
+    represented as a string.

+ -  Naming convention of D-Bus methods:

+ 

+    -  **List<class><condition>(arg1, ...)** returning array of object

+       paths

+ 

+       -  ListUsers()

+       -  ListDomains()

+       -  ListUsersByName(filter)

+       -  ListGroupsByName(filter)

+ 

+    -  **Find<class><condition>(arg1, ...)** returning single object path

+ 

+       -  FindUserById(id)

+       -  FindDomainByName(name)

+ 
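+ The convention can be captured in a trivial helper (illustrative only,
 
+ not part of the proposed API):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Compose a method name as "<verb><class><condition>", per the naming
 * convention above (e.g. List + Users + ByName -> "ListUsersByName"). */
static void ifp_method_name(char *buf, size_t size, const char *verb,
                            const char *cls, const char *condition)
{
    snprintf(buf, size, "%s%s%s", verb, cls, condition ? condition : "");
}
```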

+ The API

+ -------

+ 

+  ::

+ 

+     /**

+      * @defgroup sss_dbus Simple interface to SSSD InfoPipe responder.

+      * Libsss_dbus provides a synchronous interface to simplify basic communication

+      * with SSSD InfoPipe responder.

+      * @{

+      */

+ 

+     /** SSSD InfoPipe bus address */

+     #define SSS_DBUS_IFP "org.freedesktop.sssd.infopipe"

+ 

+     /** SSSD InfoPipe interface */

+     #define SSS_DBUS_IFACE_IFP SSS_DBUS_IFP

+     #define SSS_DBUS_IFACE_COMPONENTS "org.freedesktop.sssd.infopipe.Components"

+     #define SSS_DBUS_IFACE_SERVICES "org.freedesktop.sssd.infopipe.Services"

+     #define SSS_DBUS_IFACE_DOMAINS "org.freedesktop.sssd.infopipe.Domains"

+     #define SSS_DBUS_IFACE_USERS "org.freedesktop.sssd.infopipe.Users"

+     #define SSS_DBUS_IFACE_GROUPS "org.freedesktop.sssd.infopipe.Groups"

+ 

+     /**

+      * Opaque libsss_dbus context. One context shall not be used by multiple

+      * threads. Each thread needs to create and use its own context.

+      *

+      * @see sss_dbus_init

+      * @see sss_dbus_init_ex

+      */

+     typedef struct sss_dbus_ctx sss_dbus_ctx;

+ 

+     /**

+      * Typedef for memory allocation functions

+      */

+     typedef void (sss_dbus_free_func)(void *ptr, void *pvt);

+     typedef void *(sss_dbus_alloc_func)(size_t size, void *pvt);

+ 

+     /**

+      * Error codes used by libsss_dbus

+      */

+     typedef enum sss_dbus_error {

+         /** Success */

+         SSS_DBUS_OK = 0,

+ 

+         /** Ran out of memory during processing */

+         SSS_DBUS_OUT_OF_MEMORY,

+ 

+         /** Invalid argument */

+         SSS_DBUS_INVALID_ARGUMENT,

+ 

+         /**

+          * Input/output error

+          *

+          * @see sss_dbus_get_last_io_error() to get more information

+          */

+         SSS_DBUS_IO_ERROR,

+ 

+         /** Internal error */

+         SSS_DBUS_INTERNAL_ERROR,

+ 

+         /** Operation not supported */

+         SSS_DBUS_NOT_SUPPORTED,

+ 

+         /** Attribute does not exist */

+         SSS_DBUS_ATTR_MISSING,

+ 

+         /** Attribute does not have any value set */

+         SSS_DBUS_ATTR_NULL,

+ 

+         /** Incorrect attribute type */

+         SSS_DBUS_INCORRECT_TYPE,

+ 

+         /** Always last */

+         SSS_DBUS_ERROR_SENTINEL

+     } sss_dbus_error;

+ 

+     /**

+      * Boolean type

+      */

+     typedef uint32_t sss_dbus_bool;

+ 

+     /**

+      * D-Bus object attribute

+      */

+     typedef struct sss_dbus_attr sss_dbus_attr;

+ 

+     /**

+      * String dictionary

+      */

+     typedef struct {

+         char *key;

+         char **values;

+         unsigned int num_values;

+     } sss_dbus_str_dict;

+ 

+     /**

+      * D-Bus object

+      */

+     typedef struct sss_dbus_object {

+         char *name;

+         char *object_path;

+         char *interface;

+         sss_dbus_attr **attrs;

+     } sss_dbus_object;

+ 

+     /**

+      * @brief Initialize sss_dbus context using default allocator (malloc)

+      *

+      * @param[out] _ctx sss_dbus context

+      */

+     sss_dbus_error

+     sss_dbus_init(sss_dbus_ctx **_ctx);

+ 

+     /**

+      * @brief Initialize sss_dbus context

+      *

+      * @param[in] alloc_pvt  Private data for allocation routine

+      * @param[in] alloc_func Function to allocate memory for the context, if

+      *                       NULL malloc() is used

+      * @param[in] free_func  Function to free the memory of the context, if

+      *                       NULL free() is used

+      * @param[out] _ctx      sss_dbus context

+      */

+     sss_dbus_error

+     sss_dbus_init_ex(void *alloc_pvt,

+                      sss_dbus_alloc_func *alloc_func,

+                      sss_dbus_free_func *free_func,

+                      sss_dbus_ctx **_ctx);

+ 

+     /**

+      * @brief Return last error message from underlying D-Bus communication

+      *

+      * @param[in] ctx sss_dbus context

+      * @return Error message or NULL if no error occurred during last D-Bus call.

+      */

+     const char *

+     sss_dbus_get_last_io_error(sss_dbus_ctx *ctx);

+ 

+     /**

+      * @brief Return default interface for object with path @object_path.

+      *

+      * @param[in] object_path D-Bus object path

+      * @return Interface name or NULL if the object path is unknown.

+      */

+     const char *

+     sss_dbus_get_iface_for_object(const char *object_path);

+ 

+     /**

+      * @brief Create message for SSSD InfoPipe bus.

+      *

+      * @param[in] object_path D-Bus object path

+      * @param[in] interface   D-Bus interface

+      * @param[in] method      D-Bus method

+      *

+      * @return D-Bus message.

+      */

+     DBusMessage *

+     sss_dbus_create_message(const char *object_path,

+                             const char *interface,

+                             const char *method);

+ 

+     /**

+      * @brief Send D-Bus message to SSSD InfoPipe bus.

+      *

+      * @param[in] ctx     sss_dbus context
 
+      * @param[in] msg     D-Bus message to send
 
+      * @param[out] _reply D-Bus reply message

+      */

+     sss_dbus_error

+     sss_dbus_send_message(sss_dbus_ctx *ctx,

+                           DBusMessage *msg,

+                           DBusMessage **_reply);

+ 

+     /**

+      * @brief Fetch selected attributes of given object.

+      *

+      * @param[in] ctx         sss_dbus context

+      * @param[in] object_path D-Bus object path

+      * @param[in] interface   D-Bus interface

+      * @param[in] name        Name of desired attribute

+      * @param[out] _attrs     List of acquired attributes

+      */

+     sss_dbus_error

+     sss_dbus_fetch_attr(sss_dbus_ctx *ctx,

+                         const char *object_path,

+                         const char *name,

+                         const char *interface,

+                         sss_dbus_attr ***_attrs);

+ 

+     /**

+      * @brief Fetch all attributes of given object.

+      *

+      * @param[in] ctx         sss_dbus context

+      * @param[in] object_path D-Bus object path

+      * @param[in] interface   D-Bus interface

+      * @param[out] _attrs     Acquired attributes

+      */

+     sss_dbus_error

+     sss_dbus_fetch_all_attrs(sss_dbus_ctx *ctx,

+                              const char *object_path,

+                              const char *interface,

+                              sss_dbus_attr ***_attrs);

+ 

+     /**

+      * @brief Fetch D-Bus object.

+      *

+      * @param[in] ctx         sss_dbus context

+      * @param[in] object_path D-Bus object path

+      * @param[in] interface   D-Bus interface

+      * @param[out] _object    Object and its attributes

+      */

+     sss_dbus_error

+     sss_dbus_fetch_object(sss_dbus_ctx *ctx,

+                           const char *object_path,

+                           const char *interface,

+                           sss_dbus_object **_object);

+ 

+     /**

+      * @brief List objects that satisfy given conditions. This routine will

+      * invoke List<method> D-Bus method on SSSD InfoPipe interface. Arguments

+      * to this method are given as standard variadic D-Bus arguments.

+      *

+      * @param[in] ctx            sss_dbus context

+      * @param[in] method         D-Bus method to call without the 'List' prefix

+      * @param[out] _object_paths List of object paths

+      * @param[in] first_arg_type Type of the first D-Bus argument

+      * @param[in] ...            D-Bus arguments

+      */

+     sss_dbus_error

+     sss_dbus_invoke_list(sss_dbus_ctx *ctx,

+                          const char *method,

+                          char ***_object_paths,

+                          int first_arg_type,

+                          ...);

+ 

+     /**

+      * @brief Find single object that satisfies given conditions. This routine will

+      * invoke Find<method> D-Bus method on SSSD InfoPipe interface. Arguments

+      * to this method are given as standard variadic D-Bus arguments.

+      *

+      * @param[in] ctx            sss_dbus context

+      * @param[in] method         D-Bus method to call without the 'Find' prefix

+      * @param[out] _object_path Object path

+      * @param[in] first_arg_type Type of the first D-Bus argument

+      * @param[in] ...            D-Bus arguments

+      */

+     sss_dbus_error

+     sss_dbus_invoke_find(sss_dbus_ctx *ctx,

+                          const char *method,

+                          char **_object_path,

+                          int first_arg_type,

+                          ...);

+ 

+     /**

+      * @brief Find attribute in list and return its value.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _value Output value

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_bool(sss_dbus_attr **attrs,

+                                const char *name,

+                                sss_dbus_bool *_value);

+ 

+     /**

+      * @brief Find attribute in list and return its value.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _value Output value

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_int16(sss_dbus_attr **attrs,

+                                 const char *name,

+                                 int16_t *_value);

+ 

+     /**

+      * @brief Find attribute in list and return its value.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _value Output value

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_uint16(sss_dbus_attr **attrs,

+                                  const char *name,

+                                  uint16_t *_value);

+ 

+     /**

+      * @brief Find attribute in list and return its value.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _value Output value

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_int32(sss_dbus_attr **attrs,

+                                 const char *name,

+                                 int32_t *_value);

+ 

+     /**

+      * @brief Find attribute in list and return its value.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _value Output value

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_uint32(sss_dbus_attr **attrs,

+                                  const char *name,

+                                  uint32_t *_value);

+ 

+     /**

+      * @brief Find attribute in list and return its value.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _value Output value

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_int64(sss_dbus_attr **attrs,

+                                 const char *name,

+                                 int64_t *_value);

+ 

+     /**

+      * @brief Find attribute in list and return its value.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _value Output value

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_uint64(sss_dbus_attr **attrs,

+                                  const char *name,

+                                  uint64_t *_value);

+ 

+     /**

+      * @brief Find attribute in list and return its value.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _value Output value

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_string(sss_dbus_attr **attrs,

+                                  const char *name,

+                                  const char **_value);

+ 

+     /**

+      * @brief Find attribute in list and return its value.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _value Output value

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_string_dict(sss_dbus_attr **attrs,

+                                       const char *name,

+                                       sss_dbus_str_dict *_value);

+ 

+     /**

+      * @brief Find attribute in list and return its values.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _num_values Number of values in the array

+      * @param[out] _values Output array

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_bool_array(sss_dbus_attr **attrs,

+                                      const char *name,

+                                      unsigned int *_num_values,

+                                      sss_dbus_bool **_value);

+ 

+     /**

+      * @brief Find attribute in list and return its values.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _num_values Number of values in the array

+      * @param[out] _values Output array

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_int16_array(sss_dbus_attr **attrs,

+                                       const char *name,

+                                       unsigned int *_num_values,

+                                       int16_t **_value);

+ 

+     /**

+      * @brief Find attribute in list and return its values.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _num_values Number of values in the array

+      * @param[out] _values Output array

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_uint16_array(sss_dbus_attr **attrs,

+                                        const char *name,

+                                        unsigned int *_num_values,

+                                        uint16_t **_value);

+ 

+     /**

+      * @brief Find attribute in list and return its values.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _num_values Number of values in the array

+      * @param[out] _values Output array

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_int32_array(sss_dbus_attr **attrs,

+                                       const char *name,

+                                       unsigned int *_num_values,

+                                       int32_t **_value);

+ 

+     /**

+      * @brief Find attribute in list and return its values.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _num_values Number of values in the array

+      * @param[out] _values Output array

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_uint32_array(sss_dbus_attr **attrs,

+                                        const char *name,

+                                        unsigned int *_num_values,

+                                        uint32_t **_value);

+ 

+     /**

+      * @brief Find attribute in list and return its values.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _num_values Number of values in the array

+      * @param[out] _values Output array

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_int64_array(sss_dbus_attr **attrs,

+                                       const char *name,

+                                       unsigned int *_num_values,

+                                       int64_t **_value);

+ 

+     /**

+      * @brief Find attribute in list and return its values.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _num_values Number of values in the array

+      * @param[out] _values Output array

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_uint64_array(sss_dbus_attr **attrs,

+                                        const char *name,

+                                        unsigned int *_num_values,

+                                        uint64_t **_value);

+ 

+     /**

+      * @brief Find attribute in list and return its values.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _num_values Number of values in the array

+      * @param[out] _values Output array

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_string_array(sss_dbus_attr **attrs,

+                                        const char *name,

+                                        unsigned int *_num_values,

+                                        const char * const **_value);

+ 

+     /**

+      * @brief Find attribute in list and return its values.

+      *

+      * @param[in] attrs Attributes

+      * @param[in] name Name of the attribute to find

+      * @param[out] _num_values Number of values in the array

+      * @param[out] _values Output array

+      */

+     sss_dbus_error

+     sss_dbus_find_attr_as_string_dict_array(sss_dbus_attr **attrs,

+                                             const char *name,

+                                             unsigned int *_num_values,

+                                             sss_dbus_str_dict **_value);

+ 

+     /**

+      * @brief Free sss_dbus context and set it to NULL.

+      *

+      * @param[in,out] _ctx sss_dbus context

+      */

+     void

+     sss_dbus_free(sss_dbus_ctx **_ctx);

+ 

+     /**

+      * @brief Free attribute list and set it to NULL.

+      *

+      * @param[in,out] _attrs Attributes

+      */

+     void

+     sss_dbus_free_attrs(sss_dbus_ctx *ctx,

+                         sss_dbus_attr ***_attrs);

+ 

+     /**

+      * @brief Free sss_dbus object and set it to NULL.

+      *

+      * @param[in,out] _object Object

+      */

+     void

+     sss_dbus_free_object(sss_dbus_ctx *ctx,

+                          sss_dbus_object **_object);

+ 

+     /**

+      * @brief Free string and set it to NULL.

+      *

+      * @param[in,out] _str String

+      */

+     void

+     sss_dbus_free_string(sss_dbus_ctx *ctx,

+                          char **_str);

+ 

+     /**

+      * @brief Free array of strings and set it to NULL.

+      *

+      * @param[in,out] _str_array Array of strings

+      */

+     void

+     sss_dbus_free_string_array(sss_dbus_ctx *ctx,

+                                char ***_str_array);

+ 

+     /**

+      * @}

+      */

+ 

+     /**

+      * @defgroup common Most common use cases of SSSD InfoPipe responder.

+      * @{

+      */

+ 

+     /**

+      * @brief List names of available domains.

+      *

+      * @param[in] ctx       sss_dbus context

+      * @param[out] _domains List of domain names

+      */

+     sss_dbus_error

+     sss_dbus_list_domains(sss_dbus_ctx *ctx,

+                           char ***_domains);

+ 

+     /**

+      * @brief Fetch all information about domain by name.

+      *

+      * @param[in] ctx      sss_dbus context

+      * @param[in] name     Domain name

+      * @param[out] _domain Domain object

+      */

+     sss_dbus_error

+     sss_dbus_fetch_domain_by_name(sss_dbus_ctx *ctx,

+                                   const char *name,

+                                   sss_dbus_object **_domain);

+ 

+     /**

+      * @brief Fetch all information about user by uid.

+      *

+      * @param[in] ctx    sss_dbus context

+      * @param[in] uid    User ID

+      * @param[out] _user User object

+      */

+     sss_dbus_error

+     sss_dbus_fetch_user_by_uid(sss_dbus_ctx *ctx,

+                                uid_t uid,

+                                sss_dbus_object **_user);

+ 

+     /**

+      * @brief Fetch all information about user by name.

+      *

+      * @param[in] ctx    sss_dbus context

+      * @param[in] name   User name

+      * @param[out] _user User object

+      */

+     sss_dbus_error

+     sss_dbus_fetch_user_by_name(sss_dbus_ctx *ctx,

+                                 const char *name,

+                                 sss_dbus_object **_user);

+ 

+     /**

+      * @}

+      */

+ 
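+ As a small illustration of consuming the proposed types, the
 
+ *sss_dbus_str_dict* structure is fully public; a caller could walk one
 
+ entry of an a{sas} dictionary like this (standalone sketch, the
 
+ accessor name is hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Layout copied verbatim from the proposed API above. */
typedef struct {
    char *key;
    char **values;
    unsigned int num_values;
} sss_dbus_str_dict;

/* Hypothetical accessor: return the i-th value or NULL if out of range. */
static const char *str_dict_value(const sss_dbus_str_dict *d, unsigned int i)
{
    return i < d->num_values ? d->values[i] : NULL;
}
```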

+ Authors

+ -------

+ 

+ -  Pavel Březina <`pbrezina@redhat.com <mailto:pbrezina@redhat.com>`__>

@@ -0,0 +1,245 @@ 

+ D-Bus Interface: Users and Groups

+ =================================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2150 <https://pagure.io/SSSD/sssd/issue/2150>`__

+ 

+ Related design page(s):

+ 

+ -  `DBus Responder <https://docs.pagure.org/SSSD.sssd/design_pages/dbus_responder.html>`__

+ 

+ Problem statement

+ -----------------

+ 

+ This design document describes how users and groups are represented on

+ SSSD D-Bus interface.

+ 

+ Use cases

+ ---------

+ 

+ -  Listing users and groups in access control GUI

+ -  Obtaining extra information about a user that is not available
 
+    through standard APIs

+ 

+ D-Bus Interface

+ ---------------

+ 

+ org.freedesktop.sssd.infopipe.Users

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Object paths implementing this interface

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ -  /org/freedesktop/sssd/infopipe/Users

+ 

+ Methods

+ ^^^^^^^

+ 

+ -  o FindByName(s:name)

+ -  o FindByID(u:id)

+ -  ao ListByName(s:filter, u:limit)

+ 

+    -  filter: may contain an asterisk as a wildcard character; a
 
+       minimum filter length is required
 
+    -  limit: maximum number of entries returned; 0 means unlimited, up
 
+       to the maximum allowed number

+ 

+ -  ao ListByDomainAndName(s:domain\_name, s:filter, u:limit)

+ 

+    -  filter: may contain an asterisk as a wildcard character; a
 
+       minimum filter length is required
 
+    -  limit: maximum number of entries returned; 0 means unlimited, up
 
+       to the maximum allowed number

+ 

+ Signals

+ ^^^^^^^

+ 

+ None.

+ 

+ Properties

+ ^^^^^^^^^^

+ 

+ None.

+ 

+ org.freedesktop.sssd.infopipe.Users.User

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Object paths implementing this interface

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ -  /org/freedesktop/sssd/infopipe/Users/$DOMAIN/$UID

+ 
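+ For illustration, the per-user object path can be composed like this
 
+ (a sketch; real SSSD additionally escapes characters, such as dots in
 
+ domain names, that are not valid in D-Bus object paths):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build /org/freedesktop/sssd/infopipe/Users/$DOMAIN/$UID, assuming
 * the domain string is already object-path safe. */
static void user_object_path(char *buf, size_t size,
                             const char *domain, unsigned uid)
{
    snprintf(buf, size, "/org/freedesktop/sssd/infopipe/Users/%s/%u",
             domain, uid);
}
```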

+ Methods

+ ^^^^^^^

+ 

+ -  void UpdateGroupsList()

+ 

+    -  Performs initgroups on the user.

+ 

+ Signals

+ ^^^^^^^

+ 

+ None.

+ 

+ Properties

+ ^^^^^^^^^^

+ 

+ -  s name

+ 

+    -  The user's login name.

+ 

+ -  u uidNumber

+ 

+    -  The user's UID.

+ 

+ -  u gidNumber

+ 

+    -  The user's primary GID.

+ 

+ -  s gecos

+ 

+    -  The user's real name.

+ 

+ -  s homeDirectory

+ 

+    -  The user's home directory

+ 

+ -  s loginShell

+ 

+    -  The user's login shell

+ 

+ -  a{sas} extraAttributes

+ 

+    -  Extra attributes as configured in SSSD. The key is the attribute
 
+       name; the value is an array of strings containing the attribute
 
+       values.

+ 

+ -  ao groups

+ 

+    -  An array of object paths representing the groups the user is a

+       member of.

+ 

+ org.freedesktop.sssd.infopipe.Groups

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Object paths implementing this interface

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ -  /org/freedesktop/sssd/infopipe/Groups

+ 

+ Methods

+ ^^^^^^^

+ 

+ -  o FindByName(s:name)

+ -  o FindByID(u:id)

+ -  ao ListByName(s:filter, u:limit)

+ 

+    -  filter: may contain an asterisk as a wildcard character; a
 
+       minimum filter length is required
 
+    -  limit: maximum number of entries returned; 0 means unlimited, up
 
+       to the maximum allowed number

+ 

+ -  ao ListByDomainAndName(s:domain\_name, s:filter, u:limit)

+ 

+    -  filter: may contain an asterisk as a wildcard character; a
 
+       minimum filter length is required
 
+    -  limit: maximum number of entries returned; 0 means unlimited, up
 
+       to the maximum allowed number

+ 

+ Signals

+ ^^^^^^^

+ 

+ None.

+ 

+ Properties

+ ^^^^^^^^^^

+ 

+ org.freedesktop.sssd.infopipe.Groups.Group

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Object paths implementing this interface

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ -  /org/freedesktop/sssd/infopipe/Groups/$DOMAIN/$GID

+ 

+ Methods

+ ^^^^^^^

+ 

+ None.

+ 

+ Signals

+ ^^^^^^^

+ 

+ None.

+ 

+ Properties

+ ^^^^^^^^^^

+ 

+ -  s name

+ 

+    -  The group's name.

+ 

+ -  u gidNumber

+ 

+    -  The group's primary GID.

+ 

+ -  ao users

+ 

+    -  A list of the group's member user objects.

+ 

+ -  ao groups

+ 

+    -  A list of the group's member group objects.

+ 

+ Overview of the solution

+ ------------------------

+ 

+ New D-Bus interfaces will be implemented in the IFP responder.

+ 

+ Find methods perform online lookup if the entry is missing or expired.

+ 

+ Listing methods always perform online lookup to ensure that even

+ recently added entries are in the list.

+ 

+ Listing methods can return only a limited number of entries. The number
 
+ of entries returned can be controlled by the **limit** parameter, with a
 
+ hard limit set in sssd.conf via a new configuration option,
 
+ **filter\_limit**. This option can be present in the [ifp] and [domain]
 
+ sections to set the limit for data provider filter searches ([domain]
 
+ section) and also a global hard limit for the listing methods themselves
 
+ ([ifp] section). The limit is meant to improve performance with large
 
+ databases by processing only a small number of records. If the option is
 
+ set to 0, the limit is disabled.

+ 
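+ The intended interaction between the caller-supplied limit and the hard
 
+ limit can be sketched as follows (an illustration of the semantics
 
+ described above, not SSSD code):

```c
#include <assert.h>

/* Sketch of the limit semantics: the caller-supplied limit is capped by
 * the hard limit from sssd.conf, and 0 means "no limit requested" on
 * either side. */
static unsigned effective_limit(unsigned requested, unsigned hard_limit)
{
    if (hard_limit == 0) return requested;            /* no hard cap */
    if (requested == 0 || requested > hard_limit)
        return hard_limit;                            /* clamp to cap */
    return requested;
}
```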

+ The filter may contain only the '\*' asterisk as a wildcard character;
 
+ it does not support the complete set of regular expressions. The
 
+ asterisk can be present at the beginning of the filter ('\*filter'), at
 
+ its end ('filter\*'), on both sides ('\*filter\*') or even in the middle
 
+ ('\*fil\*ter\*'), since all of these are supported by both LDAP and LDB.
 
+ However, only a prefix filter ('filter\*') can take a performance boost
 
+ from indices, so other filters may not perform as well with huge
 
+ databases.

+ 
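+ The asterisk-only semantics can be pinned down with a tiny matcher
 
+ (SSSD itself translates the filter to LDAP/LDB queries rather than
 
+ matching in C; this is only an illustration):

```c
#include <assert.h>

/* Minimal matcher for the filter syntax above: '*' matches any
 * substring, everything else matches literally. */
static int wc_match(const char *pat, const char *str)
{
    if (*pat == '\0') return *str == '\0';
    if (*pat == '*') {
        for (; ; str++) {                 /* try every possible tail */
            if (wc_match(pat + 1, str)) return 1;
            if (*str == '\0') return 0;
        }
    }
    return *str != '\0' && *pat == *str && wc_match(pat + 1, str + 1);
}
```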

+ Configuration changes

+ ---------------------

+ 

+ The following options will be created in the [ifp] and [domain]

+ sections:

+ 

+ -  wildcard\_search\_limit (uint32)

+ 

+ See the `wildcard refresh design page

+ <https://docs.pagure.org/SSSD.sssd/design_pages/wildcard_refresh.html>`__

+ for more details.

+ 

+ How To Test

+ -----------

+ 

+ Call the D-Bus methods and properties, for example with the
 
+ **dbus-send** tool.

+ 
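+ For example, a user lookup could look like this ('testuser' is a
 
+ placeholder):

```
dbus-send --system --print-reply \
    --dest=org.freedesktop.sssd.infopipe \
    /org/freedesktop/sssd/infopipe/Users \
    org.freedesktop.sssd.infopipe.Users.FindByName string:testuser
```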

+ Authors

+ -------

+ 

+ -  Jakub Hrozek <`jhrozek@redhat.com <mailto:jhrozek@redhat.com>`__>

+ -  Pavel Březina <`pbrezina@redhat.com <mailto:pbrezina@redhat.com>`__>

@@ -0,0 +1,99 @@ 

+ DDNS - update quality of input for nsupdate

+ ===========================================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2495 <https://pagure.io/SSSD/sssd/issue/2495>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ SSSD provides input to nsupdate that is redundant and might actually
 
+ impair proper resolution using DNS. Further, the messages passed to
 
+ nsupdate are grouped by command type: deleting and adding addresses are
 
+ two separate batches, which is not optimal because the server might
 
+ prohibit updates of one of the address families.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ If the DNS system is broken, using the dyndns\_server option might be a
 
+ workaround.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ -  Add a new SSSD option, 'dyndns\_server', that specifies which server
 
+    to send DNS updates to

+ -  In first attempt to perform DDNS **do not specify zone/server/realm**

+    commands in input for nsupdate (nsupdate will then utilize DNS)

+ -  As fallback (if first attempt fails) specify realm command and if

+    'dyndns\_server' option is specified then also add server command

+    with value of 'dyndns\_server'

+ -  Group the deleting and adding of addresses from the same address
 
+    family into one batch, rather than grouping by command type

+ 
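+ The proposed per-family batching can be sketched as follows (a
 
+ standalone illustration; the hostname and TTL come from the example
 
+ messages in the How To Test section, and the function names are
 
+ hypothetical):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Emit one delete/add/send block for a single record type. */
static void append_batch(char *out, size_t size, const char *rtype,
                         const char *addr)
{
    size_t len = strlen(out);
    snprintf(out + len, size - len,
             "update delete husker.human.bs. in %s\n"
             "update add husker.human.bs. 1200 in %s %s\nsend\n",
             rtype, rtype, addr);
}

/* Group addresses by family (':' marks IPv6): one batch per family
 * instead of one batch per command type. */
static void build_msg(char *out, size_t size,
                      const char **addrs, unsigned count)
{
    out[0] = '\0';
    for (unsigned i = 0; i < count; i++)  /* A records first */
        if (strchr(addrs[i], ':') == NULL)
            append_batch(out, size, "A", addrs[i]);
    for (unsigned i = 0; i < count; i++)  /* then AAAA records */
        if (strchr(addrs[i], ':') != NULL)
            append_batch(out, size, "AAAA", addrs[i]);
}
```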

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ -  Remove servername and dns\_zone parameters from

+    sdap\_dyndns\_update\_send() as they are no longer used. Remove code

+    from AD and IPA provider which was passing this data to

+    sdap\_dyndns\_update\_send().

+ -  Remove dns\_zone field from struct sdap\_dyndns\_update\_state and

+    remove all code relating to it.

+ -  In sdap\_dyndns\_update\_done(), make setting the realm command
 
+    conditional in the same way as the server command.

+ -  Update nsupdate\_msg\_add\_fwd() to group commands by the address
 
+    family the processed IP belongs to.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ A new option, **dyndns\_server**, is added.

+ 
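+ For example (the domain section name and address are illustrative):

```ini
[domain/ipa.work]
# used only in the fallback attempt, together with the realm command
dyndns_server = 192.168.122.20
```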

+ How To Test

+ ~~~~~~~~~~~

+ 

+ Check that IP addresses get changed in IPA and on AD. Break DNS

+ resolution to force the first attempt of DDNS to fail. Check that the

+ messages generated as input for nsupdate in the domain logs match the

+ expected format.

+ 

+ Example of expected format of messages

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ **First attempt**

+ 

+  ::

+ 

+     update delete husker.human.bs. in A

+     update add husker.human.bs. 1200 in A 192.168.122.180

+     send

+     update delete husker.human.bs. in AAAA

+     update add husker.human.bs. 1200 in AAAA 2001:cdba::666

+     send

+ 

+ **Fallback attempt**

+ 

+  ::

+ 

+     ;server is present only if option dyndns_server is set in sssd.conf

+     server 192.168.122.20

+     ;realm is used always in fallback message

+     realm IPA.WORK

+     update delete husker.human.bs. in A

+     update add husker.human.bs. 1200 in A 192.168.122.180

+     send

+     ;server is present only if option dyndns_server is set in sssd.conf

+     server 192.168.122.20

+     ;realm is used always in fallback message

+     realm IPA.WORK

+     update delete husker.human.bs. in AAAA

+     update add husker.human.bs. 1200 in AAAA 2001:cdba::666

+     send

+ 

+ Authors

+ ~~~~~~~

+ 

+ `preichl@redhat.com <mailto:preichl@redhat.com>`__

@@ -0,0 +1,97 @@ 

+ FastNSSCache

+ ============

+ 

+ Problem Statement

+ -----------------

+ 

+ Currently in SSSD user and group lookups are performed by using a

+ connection to a named socket. This means each client has to talk to the

+ sssd\_nss daemon for every lookup it needs to perform.

+ 

+ Although this behaviour works well for normal machines, it has

+ scalability limits on busy machines with many processes that need to

+ query the user/group database frequently.

+ 

+ There are 2 factors that cause scalability issues:

+ 

+ #. context switches

+ #. the responder process is single-threaded (although asynchronous), so

+    the amount of processing it can do is limited by the speed of one CPU

+ 

+ Each request suffers from at least 2 context switches (and a few copies

+ of the data in memory): write() the request, wait until it is processed

+ by sssd\_nss, and read() the reply back. Because sssd\_nss may be busy

+ answering many requests, a queue may build up and replies may be

+ delayed.

+ 

+ Allowing the clients to directly access SSSD caches is not possible for

+ various reasons, including:

+ 

+ -  sssd uses LDB as its caching backend, and LDB depends on byte-range

+    locks. Giving a client read access to the cache would allow a DoS if

+    the client locks a record and never unlocks it.

+ -  sssd stores data that not all clients are allowed to access

+    (password hashes, for example), and partitioning access to this data

+    within the LDB cache is not feasible.

+ 

+ A method to avoid context switches and the sssd\_nss bottleneck without

+ compromising the security of the system is therefore desirable.

+ 

+ Overview of FastNSSCache solution

+ ---------------------------------

+ 

+ The FastNSSCache feature addresses both issues.

+ 

+ This is done by creating a specialized cache that has a few properties:

+ 

+ -  Contains only public data (the same data available in the public

+    passwd or group files)

+ -  Read-only for clients

+ -  Does not use locking, yet prevents access to inconsistent data

+ -  Has a fixed size and uses a FIFO (for now) method to decide which

+    entries to purge

+ -  Falls back to the named socket if an entry is not found in the fast

+    cache

+ 

+ Implementation details

+ ----------------------

+ 

+ The cache files are opened on the client at the first query and mmapped

+ into the process memory; all accesses to data are therefore direct

+ memory accesses and do not suffer from any context switch. They also

+ happen in parallel within each process, with synchronization (in order

+ to allow updates) performed using memory barriers.

+ 

+ Cache files can only be used for direct lookups (by name or by

+ uid/gid); enumerations are \_never\_ handled via fast cache lookups by

+ design - they always fall back to socket communication.

+ 

+ The "maps" currently available are the \_passwd\_ and \_group\_ maps,

+ each map has a file associated in the /var/lib/sss/mc directory which is

+ accessible read-only by clients.

+ 

+ Configuration

+ -------------

+ 

+ At the moment we plan to provide 3 parameters per map to control the

+ caches.

+ 

+ -  A per-map enablement parameter that allows activating/deactivating

+    maps individually.

+ -  Per map cache size to fine tune the cache sizes in case space is at a

+    premium or the dataset does not fit the default cache.

+ -  Expiration time for entries.

+ 

+ Cache entries warrant a shorter expiration time than the current LDB

+ caches, because access to these entries is undetectable by sssd\_nss,

+ which cannot decide how much an entry is needed and whether a midway

+ refresh is required. By shortening the lifetime of FastNSSCache

+ entries, we incur the penalty of using the pipe from time to time, but

+ in turn we allow sssd\_nss to decide whether it needs to refresh the

+ entry or not.

+ 

+ Future Improvements

+ -------------------

+ 

+ -  Better garbage collection on the server side, at the moment a FIFO is

+    used.

+ -  Handle caching other nsswitch.conf database plugins in order to avoid

+    slow access to the files db

@@ -0,0 +1,260 @@ 

+ "Files" data provider

+ =====================

+ 

+ Related ticket(s):

+ 

+ -  The umbrella tracking ticket:

+    `https://pagure.io/SSSD/sssd/issue/2228 <https://pagure.io/SSSD/sssd/issue/2228>`__

+ 

+ which includes the following sub-tasks:

+ 

+ -  Ship an immutable recovery mode config for local accounts -

+    `https://pagure.io/SSSD/sssd/issue/2229 <https://pagure.io/SSSD/sssd/issue/2229>`__

+ -  [RFE] Support UID/GID changes -

+    `https://pagure.io/SSSD/sssd/issue/2244 <https://pagure.io/SSSD/sssd/issue/2244>`__

+ -  Provide a "writable" D-Bus management API for local users -

+    `https://pagure.io/SSSD/sssd/issue/3242 <https://pagure.io/SSSD/sssd/issue/3242>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ SSSD does not behave well with nscd, so we recommend that it be

+ disabled. However, this comes with a price in the form of every

+ nameservice lookup hitting the disk for ``/etc/passwd`` and friends

+ every time. SSSD should be able to read and monitor these files and

+ serve them from its cache, allowing ``sss`` to sort before ``files`` in

+ ``/etc/nsswitch.conf``.

+ 

+ In addition, SSSD provides some useful interfaces, such as `the dbus

+ interface <https://docs.pagure.org/SSSD.sssd/design_pages/dbus_users_and_groups.html>`__

+ which only work for users and groups SSSD knows about.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ Use Case: Default Configuration

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ SSSD (and its useful APIs) should always be available. This means that

+ SSSD must ship with a default configuration that works (and requires no

+ manual configuration or joining a domain). This default configuration

+ should provide a fast in-memory cache for all user and group information

+ that SSSD can support, including those traditionally stored in

+ ``/etc/passwd`` and friends.

+ 

+ Use Case: Programmatically managing POSIX attributes of a user or a group

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Currently, the available ways to manage users and groups are either

+ spawning and calling shadow-utils binaries like ``useradd``, or using

+ libuser. SSSD already has a D-Bus API used to provide custom attributes

+ of domain users. This interface should be extended to provide

+ 'writable' methods to manage users and groups from files. This is

+ tracked by `ticket #3242 <https://pagure.io/SSSD/sssd/issue/3242>`__

+ 

+ Use Case: Manage extended attributes of users and groups

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Some applications (such as desktop environments) need additional

+ attributes (such as keyboard layout) to be stored along with the user.

+ Since the passwd file has only a fixed number of fields, it might make

+ sense to allow additional attributes to be stored in the SSSD database

+ and retrieved with SSSD's D-Bus interface. Again, this is tracked by

+ `ticket #3242 <https://pagure.io/SSSD/sssd/issue/3242>`__

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ .. FIXME: Add a link to the INI config merge design page in line 78

+ 

+ SSSD should ship a ``files`` provider as part of its required minimal

+ package. Absent any user modifications, SSSD should be configured to

+ start at boot and use this provider to serve local identity information.

+ 

+ This provider may or may not be optional. For example, we might decide

+ that it always exists as the first domain in the list, even if not

+ explicitly specified. Alternatively, distributions that wish to always

+ include the files provider will be able (starting with SSSD 1.14 and

+ its config-merging feature) to drop a definition of the files provider

+ into ``/etc/sssd/conf.d``. In order for this functionality to work, we

+ would have to deprecate the ``domains`` line and instead load all

+ ``[domain/XXXX]`` sections from all available sources, unless the

+ ``domains`` line is specified for backwards-compatibility.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Upon SSSD startup, the ``files`` provider will always run a complete

+ enumeration pass on the ``/etc/passwd``, ``/etc/group`` and other files

+ as appropriate. The provider will then configure an appropriate set of

+ file monitors (using ``inotify()``) and will re-run the enumeration if

+ any of those files are modified or replaced. The implementation of

+ enumeration would use the ``nss_files`` module interface - we would

+ ``dlopen`` the module and ``dlsym`` the appropriate functions like

+ ``__nss_files_getpwent``.

+ 

+ The fast-cache must also be flushed any time the enumeration is run, to

+ ensure that stale data is cleaned up. We should also consider turning

+ off the fast memory cache while we are performing the update.

+ 

+ In addition, the nscd cache (if applicable) should also be flushed

+ during an update. Updates to the files should be rare enough that the

+ performance impact would be negligible.

+ 

+ The ``files`` provider in its first incarnation is expected to be a

+ read-only tool, making no direct modifications to local passwords. In

+ future enhancements, the Infopipe may grow the capability to serve the

+ AccountsServices API and make changes.

+ 

+ When a change in the files is detected, we should also flush the

+ negative cache - either only the changes or just flush it whole. This

+ would prevent scenarios like: ::

+ 

+         getent passwd foo # see that there is no user foo

+         useradd foo       # OK, let's add it then

+         getent passwd foo # still no user returned until the negative cache expires

+ 

+ from confusing admins.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ We may need the ability to choose non-default locations for files. This

+ can be a hidden (undocumented) option in the first version and if there

+ is a need to actually configure a non-default location, we can later

+ expose these configuration options.

+ 

+ We may also need a configurable number of seconds between detecting a

+ change and running the enumeration. This could be implemented by

+ waiting a short time (2-3 seconds, perhaps?) after detecting a change

+ before running the enumeration, to avoid excessive enumerations and

+ invalidating the fast cache during subsequent shadow-utils invocations.

+ 

+ Performance impact

+ ~~~~~~~~~~~~~~~~~~

+ 

+ For measuring the performance impact, we have developed a simple

+ project called `nssbench <https://github.com/jhrozek/nssbench>`__ which

+ measures the time spent in NSS with systemtap. For each case, results

+ are included for a single lookup, which simulates the simplest case of

+ an application that is spawned and exits, and for a case where an

+ application performs several lookups and is able to benefit from the

+ memory cache, which is opened once per application. For single lookups,

+ we ran the tests 10 times and averaged the results. Below are test

+ results from different scenarios:

+ 

+ #. Base-line: Looking up a local user directly from ``nss_files``

+ 

+    -  Single lookup ::

+ 

+           nss operation getpwnam(jhrozek) took 226 us

+           _nss_files_getpwnam cnt:1 avg:30 min:30 max:30 sum:30 us

+           _nss_sss_getpwnam cnt:0 avg:0 min:0 max:0 sum:0 us

+ 

+    -  100 lookups ::

+ 

+           nss operation getpwnam(jhrozek) took 2717 us

+           _nss_files_getpwnam cnt:100 avg:21 min:14 max:524 sum:2159 us

+           _nss_sss_getpwnam cnt:0 avg:0 min:0 max:0 sum:0 us

+ 

+ #. Failover from ``sss`` to ``files`` when SSSD is not running - this is

+    the 'worst' case, where ``sss`` is enabled in ``nsswitch.conf`` but

+    the daemon is not running at all, so the system falls back from

+    ``sss`` to ``files`` for user lookups.

+ 

+    -  Single lookup ::

+ 

+           nss operation getpwnam(jhrozek) took 549 us

+           _nss_files_getpwnam cnt:1 avg:32 min:32 max:32 sum:32 us

+           _nss_sss_getpwnam cnt:1 avg:72 min:72 max:72 sum:72 us

+ 

+    -  100 lookups ::

+ 

+           nss operation getpwnam(jhrozek) took 6078 us

+           _nss_files_getpwnam cnt:100 avg:19 min:16 max:42 sum:1907 us

+           _nss_sss_getpwnam cnt:100 avg:22 min:19 max:74 sum:2248 us

+ 

+ #. Round-trip between the SSSD daemon's populated cache and the OS when

+    the memory cache is not used or not populated

+ 

+    -  Single lookup ::

+ 

+           nss operation getpwnam(jhrozek) took 755 us

+           _nss_files_getpwnam cnt:0 avg:0 min:0 max:0 sum:0 us

+           _nss_sss_getpwnam cnt:1 avg:384 min:384 max:384 sum:384 us

+ 

+    -  100 lookups ::

+ 

+           nss operation getpwnam(jhrozek) took 97831 us

+           _nss_files_getpwnam cnt:0 avg:0 min:0 max:0 sum:0 us

+           _nss_sss_getpwnam cnt:100 avg:968 min:115 max:22153 sum:96812 us

+ 

+ #. Performance benefit from using the memory cache

+ 

+    -  Single lookup ::

+ 

+           nss operation getpwnam(jhrozek) took 373 us

+           _nss_files_getpwnam cnt:0 avg:0 min:0 max:0 sum:0 us

+           _nss_sss_getpwnam cnt:1 avg:37 min:37 max:37 sum:37 us

+ 

+    -  100 lookups ::

+ 

+           nss operation getpwnam(jhrozek) took 1355 us

+           _nss_files_getpwnam cnt:0 avg:0 min:0 max:0 sum:0 us

+           _nss_sss_getpwnam cnt:100 avg:4 min:3 max:42 sum:408 us

+ 

+ The testing shows a substantial benefit from the SSSD cache for

+ applications that perform several lookups. The first lookup, which

+ opens the memory cache file, takes about as much time as a lookup

+ against files. However, subsequent lookups are almost an order of

+ magnitude faster.

+ 

+ For setups that do not run SSSD by default, there is a performance hit

+ caused by the failover from ``sss`` to ``files``. During testing, the

+ failover took up to 300 us: roughly 70 us was spent in the ``sss``

+ module, and roughly 200 us seems to be the failover in libc itself.

+ 

+ Compatibility issues

+ ~~~~~~~~~~~~~~~~~~~~

+ 

+ Unless the ordering is specified, the files provider should be loaded

+ first.

+ 

+ Other distributions should be involved as well - for example, we

+ should work with Ubuntu.

+ 

+ abrt and coredumpd must be run with ``SSS_LOOPS=no`` in order to avoid

+ looping when analyzing a crash. We need to test this by reverting the

+ order of modules, attaching a debugger and crashing SSSD on purpose.

+ 

+ Packaging issues

+ ~~~~~~~~~~~~~~~~

+ 

+ We need to add conflicts between glibc and an sssd version that doesn't

+ provide the files provider.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ When properly configured, SSSD should be able to serve local users and

+ groups. Testing this could be as simple as ::

+ 

+     getent -s sss passwd localuser

+ 

+ Of course, testing on the distribution level could be more involved. For

+ the first phase of just adding the files provider, nothing should break,

+ and the only thing the user should notice is improved performance.

+ Corner cases like running ``sssd_nss`` under gdb, or corefile generation

+ with a setup where ``sss`` is listed first in nsswitch.conf, must be

+ tested as well.

+ 

+ How To Debug

+ ~~~~~~~~~~~~

+ 

+ A simple way of checking whether an issue is caused by this new setup

+ is to revert the order of NSS modules back to ``files sss``.

+ 

+ Authors

+ ~~~~~~~

+ 

+ -  Stephen Gallagher <`sgallagh@redhat.com <mailto:sgallagh@redhat.com>`__>

+ -  Jakub Hrozek <`jhrozek@redhat.com <mailto:jhrozek@redhat.com>`__>

@@ -0,0 +1,135 @@ 

+ Global Catalog Lookups in SSSD

+ ------------------------------

+ 

+ Related tickets:

+ 

+ -  `RFE Use the Global Catalog in SSSD for the AD

+    provider <https://pagure.io/SSSD/sssd/issue/1557>`__

+ -  `RFE sssd should support DNS

+    sites <https://pagure.io/SSSD/sssd/issue/1032>`__

+ 

+ Problem Statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ Currently SSSD uses the standard LDAP interface of Active Directory to

+ look up users and groups when joined to an Active Directory domain. But

+ the LDAP interface only offers information for users and groups of the

+ local domain and not from the whole forest. This information is

+ available in the Global Catalog of an Active Directory domain.

+ 

+ To make lookups of users and groups from the whole forest easier, SSSD

+ should use the Global Catalog instead of the standard LDAP interface

+ for the lookups.

+ 

+ Additionally, SSSD should provide an interface to allow other

+ applications to do Global Catalog lookups. Initially it is sufficient to

+ offer SID-to-Name and Name-to-SID lookups if SSSD is running on an IPA

+ server.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ General components

+ ^^^^^^^^^^^^^^^^^^

+ 

+ Service discovery

+ '''''''''''''''''

+ 

+ #. DNS lookup for any AD DC

+ #. CLDAP query to find the site of the client

+ #. DNS lookup for a Global Catalog server from the local site, falling

+    back to any Global Catalog server

+ 

+ Authentication against the Global Catalog

+ '''''''''''''''''''''''''''''''''''''''''

+ 

+ GSSAPI with keytabs will be used for authentication.

+ 

+ If the SSSD client is joined to an AD domain, the keytab is created

+ during the join process.

+ 

+ If SSSD is running on an IPA server with a trust configured, the keytab

+ will be generated by Samba. It has to be created and updated when the

+ trust is established or the trust password is changed. Additionally,

+ there should be a method to generate the keytab if it does not exist,

+ even if there is no change in the trust state.

+ 

+ New NSS-Responder calls for SID-to-Name and Name-to-SID lookups

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Two new calls should be added to the NSS responder to give other

+ applications (the first user would be the FreeIPA directory server) a

+ simple interface for SID-to-Name and Name-to-SID lookups. It has to be

+ sorted out if and how those two new calls will interact with the

+ lookup-by-SID feature described in

+ `#1559 <https://pagure.io/SSSD/sssd/issue/1559>`__ "Use the

+ getpwnam()/getgrnam() interface as a gateway to resolve SID to Names".

+ 

+ Memory cache for the new lookups

+ ''''''''''''''''''''''''''''''''

+ 

+ To speed up lookups, the new calls should be able to use the memory cache.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Since the Global Catalog is just an LDAP server running on a

+ non-standard port, the general LDAP lookup code would not need much

+ change, if any. But resolving the Global Catalog server would be a bit

+ different, because the site has to be determined before the actual DNS

+ service record lookup.

+ 

+ I think it is sufficient to determine the site on startup and when SSSD

+ switches from offline to online.

+ 

+ To decode the blob returned by the CLDAP request libndr-nbt from the

+ samba4-libs package would be useful. Since libndr-krb5pac is already

+ used in the PAC responder I think it is ok to add this new dependency

+ here.

+ 

+ Environments with trusts

+ ^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ In an environment with trusts it must be possible to handle multiple

+ open connections to the Global Catalogs of different forests.

+ 

+ According to

+ `http://technet.microsoft.com/en-us/library/cc772808%28v=ws.10%29.aspx <http://technet.microsoft.com/en-us/library/cc772808%28v=ws.10%29.aspx>`__

+ referrals are used during Kerberos requests to guide the client to the

+ right KDC. I have not found a similar document for LDAP requests to the

+ Global Catalog. I have to find out if it is possible to work with

+ referrals here too, or if it is necessary to read the trusted domain

+ objects

+ (`http://msdn.microsoft.com/en-us/library/cc223754.aspx <http://msdn.microsoft.com/en-us/library/cc223754.aspx>`__)

+ from the Global Catalog.

+ 

+ New NSS responder calls

+ ^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ How to test

+ ~~~~~~~~~~~

+ 

+ Since the Global Catalog lookups just replace the current lookup

+ methods, SSSD should just behave as before (regression testing).

+ 

+ Additionally, the following changes might be visible at the user or

+ admin level.

+ 

+ AD provider

+ ^^^^^^^^^^^

+ 

+ If the SSSD client is joined to a Windows domain which is part of a

+ forest, Global Catalog lookups should be able to resolve all users and

+ groups in the forest and not only the ones from the joined domain.

+ 

+ IPA provider running on a FreeIPA server

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ If the FreeIPA server is able to use SSSD for SID-to-Name and

+ Name-to-SID lookups, running winbind on the FreeIPA server is no longer

+ needed.

+ 

+ Author(s)

+ ~~~~~~~~~

+ 

+ Sumit Bose <`sbose@redhat.com <mailto:sbose@redhat.com>`__>

@@ -0,0 +1,127 @@ 

+ ID mapping - Automatically assign new slices for any AD domain

+ ==============================================================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2188 <https://pagure.io/SSSD/sssd/issue/2188>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ When a domain's RIDs grow beyond the slice size, it may make sense to

+ have SSSD allocate a new slice automatically instead of relying on the

+ admin to find the fault, increase the slice size in sssd.conf, remove

+ the SYSDB cache, and restart SSSD.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ If the RID part of an Active Directory user's SID (e.g.

+ S-1-5-21-2153326666-2176343378-3404031434-300000) is greater than the

+ value of the ldap\_idmap\_range\_size option (default: 200000), then ID

+ mapping will not work properly for that user. To resolve this

+ situation:

+ 

+ -  The administrator has to notice this happening.

+ -  Fix the ID mapping configuration - increase the value of the

+    ldap\_idmap\_range\_size option.

+ -  Stop SSSD, remove the SYSDB cache, and start SSSD.

+ 

+ The downside of such a configuration change is that the mapping

+ function will change: SIDs can be mapped to different UIDs, and UIDs

+ might be mapped to different SIDs or to no SID at all.

+ 

+ For example, Active Directory users might not be able to access their

+ files on UNIX hosts any more, as the files would belong to their former

+ UNIX IDs, not the current ones.

+ 

+ After this feature is implemented, the administrator's action will not

+ be required in most cases. Also, restarting SSSD and removing the SYSDB

+ cache will not be necessary, so user ownership of resources such as

+ files will not be lost.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ After the primary range for a domain is generated, helper ranges are

+ generated as well. The number of helper ranges is adjustable (new

+ option ldap\_idmap\_helper\_table\_size). The special value 0 of this

+ option disables this feature.

+ 

+ -  The unique identifier for a helper range is a string

+    *domain\_sid-$first\_rid*, where $first\_rid is the value of the

+    first RID for this helper range.

+ -  This unique identifier is later passed to

+    *sss\_idmap\_calculate\_range()*, where it is used as input for the

+    murmur hash.

+ 

+ Update algorithm for mapping SID to UNIX ID:

+ 

+ -  If mapping using the primary slice fails, generate a list of all

+    domains that the SID belongs to.

+ -  If the list is not empty, check whether the SID matches against the

+    secondary ranges of these domains and perform a similar computation

+    of the UNIX ID as is done for primary slices. If the SID is not in

+    the helper ranges, a new range is generated whose identifier string

+    is *domain\_sid-$first\_rid*, where $first\_rid is

+    **((int)(RIDofSID / range\_size)) \* range\_size**.

+ 

+ Update algorithm for mapping UNIX ID to SID:

+ 

+ -  If mapping using the primary slice fails, iterate through the whole

+    list of domains.

+ -  For each domain, check all helper ranges for a match with the ID.

+ -  If a match is found, compute the SID in the same manner as is done

+    when the match is in the primary slice.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Introduce new struct **idmap\_range\_params** that holds all relevant

+ information for slice ranges, such as:

+ 

+ -  uint32\_t min\_id - minimal UNIX ID in the given slice

+ -  uint32\_t max\_id - maximal UNIX ID in the given slice

+ -  char \*range\_id

+ -  uint32\_t first\_rid

+ 

+ These fields should be replaced in struct **idmap\_domain\_info** by new

+ field struct idmap\_range\_params **range\_params** that would describe

+ primary slice assigned to this domain.

+ 

+ Add a linked list of struct idmap\_range\_params **helpers** into

+ idmap\_domain\_info. These helpers will hold information for secondary

+ slices assigned to this domain.

+ 

+ Update **sss\_idmap\_calculate\_range()** to check for collision even

+ with secondary slices.

+ 

+ Update **sss\_idmap\_sid\_to\_unix()** and

+ **sss\_idmap\_unix\_to\_sid()**.

+ 

+ Add new function **sss\_idmap\_add\_auto\_domain\_ex()** which is

+ similar to sss\_idmap\_add\_domain\_ex() but generates helper ranges for

+ domains and also takes callbacks which can be used to store domains

+ generated for helper ranges.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ ldap\_idmap\_helper\_table\_size (integer) - the maximal number of

+ secondary slices that is tried when performing a mapping from UNIX ID

+ to SID. If the value of ldap\_idmap\_helper\_table\_size is 0, then no

+ additional secondary slices are generated.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ Create a user in Active Directory whose RID part of the SID is beyond

+ the range size (the value of the ldap\_idmap\_range\_size option).

+ Without this feature, ID mapping fails for such a user: a warning is

+ generated in the logs and, most importantly, the lookup query does not

+ return the user. After this feature is implemented, the lookup should

+ work.

+ 

+ Authors

+ ~~~~~~~

+ 

+ Sumit Bose <`sbose@redhat.com <mailto:sbose@redhat.com>`__> Pavel

+ Reichl <`preichl@redhat.com <mailto:preichl@redhat.com>`__>

file modified
+85 -3
@@ -11,11 +11,93 @@ 

  .. toctree::

     :maxdepth: 1

  

-    matching_and_mapping_certificates

-    subdomain_configuration

+    accounts_service

+    active_directory_access_control

+    active_directory_dns_sites

+    active_directory_dns_updates

+    active_directory_fixed_dns_site

+    active_directory_gpo_integration

+    async_ldap_connections

+    async_winbind

+    autofs_integration

+    backend_dns_helpers

+    cached_authentication

+    config_check_tool

+    config_enhancements

+    cwrap_ldap

+    data_provider

+    dbus_cached_objects

+    dbus_domains

+    dbus_multiplier_interfaces

+    dbus_responder

+    dbus_signal_property_changed

+    dbus_simple_api

+    dbus_users_and_groups

+    ddns_messages_update

+    fast_nss_cache

+    files_provider

+    fleet_commander_integration

+    global_catalog_lookups

+    idmap_auto_assign_new_slices

+    integrate_sssd_with_cifs_client

+    ipa_server_mode

+    ipc

     kcm

+    kerberos_locator

+    kerberos_principal_mapping_to_proxy_users

+    ldap_referrals

+    libini_config_file_checks

+    local_group_members_for_rfc2307

+    lookup_users_by_certificate

+    lookup_users_by_certificate_part2

+    matching_and_mapping_certificates

+    member_of_v1

+    member_of_v2

+    multiple_search_bases

+    netgroups

     non_posix_support

+    not_root_sssd

+    nss_responder_id_mapping_calls

+    nss_with_kerberos_principal

+    one_fifteen_code_refactoring

+    one_fourteen_performance_improvements

+    one_way_trusts

+    open_lmi_provider

+    otp_related_improvements

+    pam_conversation_for_otp

+    periodical_refresh_of_expired_entries

+    periodic_tasks

+    prompting_for_multiple_authentication_types

+    recognize_trusted_domains_in_ad_provider

+    restrict_domains_in_pam

+    rpc_idmapd_plugin

+    secrets_service

     shortnames

+    sigchld

+    smartcard_authentication_pkinit

+    smartcard_authentication_step1

+    smartcard_authentication_testing_with_ad

+    smartcards_and_multiple_identities

+    smartcards

+    socket_activatable_responders

+    sockets_for_domains

+    sssctl

+    sssd_two_point_oh

+    subdomains

+    subdomain_configuration

+    sudo_caching_rules

+    sudo_caching_rules_invalidate

+    sudo_integration

+    sudo_integration_new_approach

+    sudo_ipa_schema

+    sudo_responder_cache_behaviour

+    sudo_support

+    sudo_support_plugin_wire_protocol

+    sudo_support_sample_sudo_rules_ldif

+    sysdb_fully_qualified_names

     systemd_activatable_responders

-    fleet_commander_integration

+    test_coverage

+    use_ad_homedir

+    usr_account_mgmt_consolidation

+    wildcard_refresh

     blank_template

@@ -0,0 +1,285 @@ 

+ Integrate SSSD with CIFS Client

+ -------------------------------

+ 

+ Related tickets:

+ 

+ -  `RFE Integrate SSSD with CIFS

+    client <https://pagure.io/SSSD/sssd/issue/1534>`__

+ -  `RFE Allow SSSD to be used with smbd

+    shares <https://pagure.io/SSSD/sssd/issue/1588>`__

+ 

+ Designs and tickets this design (might) depend on:

+ 

+ -  `ID Mapping calls for the NSS

+    responder <https://docs.pagure.org/SSSD.sssd/design_pages/nss_responder_id_mapping_calls.html>`__

+ -  `Global Catalog Lookups in

+    SSSD <https://docs.pagure.org/SSSD.sssd/design_pages/global_catalog_lookups.html>`__

+ -  `RFE Use the getpwnam()/getgrnam() interface as a gateway to resolve

+    SID to Names <https://pagure.io/SSSD/sssd/issue/1559>`__

+ 

+ Problem Statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ Although mapping of POSIX UIDs and SIDs is not needed for mounting a

+ CIFS share, it might become necessary when working with files on the

+ share, e.g. when modifying ACLs. Up to version 5.8, the cifs-utils

+ package used Winbind for this exclusively, and the following binaries

+ were linked against libwbclient:

+ 

+ -  /usr/bin/getcifsacl

+ -  /usr/bin/setcifsacl

+ -  /usr/sbin/cifs.idmap

+ 

+ With version 5.9 of cifs-utils, a plugin interface was introduced by

+ Jeff Layton (thank you very much, Jeff) to allow services other than

+ Winbind to handle the mapping of POSIX UIDs and SIDs. SSSD will provide

+ a plugin to allow cifs-utils to ask SSSD to map the IDs. With this

+ plugin, an SSSD client can access a CIFS share with the same

+ functionality as a client running Winbind.

+ 

+ Use Case

+ ~~~~~~~~

+ 

+ An environment where FreeIPA and AD trusts are already used, but a

+ Samba file server should be used as well. It is important that UNIX IDs

+ are mapped the same way in all utilities, so that all IDs are

+ consistent.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ There are two parts of this feature - a plugin for cifs-utils and a

+ library implementing the winbind API, but with SSSD calls. Both these

+ parts are fairly self-contained and do not touch the SSSD internals. See

+ the next section for the implementation details.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The plugin interface is defined in cifsidmap.h which can be found in the

+ cifs-utils-devel package. For easier reference a copy of the relevant

+ section is included below.

+ 

+ -  Of the 6 expected functions, cifs\_idmap\_init\_plugin() and
+    cifs\_idmap\_exit\_plugin() are self-explanatory.

+ 

+ -  cifs\_idmap\_sid\_to\_str() and cifs\_idmap\_str\_to\_sid() are

+    SID-to-Name and Name-to-SID mappings as discussed in

+    NSSResponderIDMappingCalls. I think the new libsss\_nss\_idmap.so

+    mentioned there can be used here, too.

+ 

+ -  cifs\_idmap\_sids\_to\_ids() and cifs\_idmap\_ids\_to\_sids() are the
+    ID mapping calls. Although it might be possible to map IDs
+    algorithmically without talking to SSSD, I think those calls should
+    also reach out to SSSD to do the mapping. The main reason is to allow
+    other kinds of mappings (e.g. using Posix attributes if available in
+    AD). ::

+ 

+       /*
+        * Plugins should implement the following functions:
+        */
+ 
+       /**
+        * cifs_idmap_init_plugin - Initialize the plugin interface
+        * @handle - return pointer for an opaque handle
+        * @errmsg - pointer to error message pointer
+        *
+        * This function should do whatever is required to establish a context
+        * for later ID mapping operations. The "handle" is an opaque context
+        * cookie that will be passed in on subsequent ID mapping operations.
+        * The errmsg is used to pass back an error string both during the init
+        * and in subsequent idmapping functions. On any error, the plugin
+        * should point *errmsg at a string describing that error. Returns 0
+        * on success and non-zero on error.
+        *
+        * int cifs_idmap_init_plugin(void **handle, const char **errmsg);
+        */
+ 
+       /**
+        * cifs_idmap_exit_plugin - Destroy an idmapping context
+        * @handle - context handle that should be destroyed
+        *
+        * When programs are finished with the idmapping plugin, they'll call
+        * this function to destroy any context that was created during the
+        * init_plugin. The handle passed back in was the one given by the init
+        * routine.
+        *
+        * void cifs_idmap_exit_plugin(void *handle);
+        */
+ 
+       /**
+        * cifs_idmap_sid_to_str - convert cifs_sid to a string
+        * @handle - context handle
+        * @sid    - pointer to a cifs_sid
+        * @name   - return pointer for the name
+        *
+        * This function should convert the given cifs_sid to a string
+        * representation or mapped name in a heap-allocated buffer. The caller
+        * of this function is expected to free "name" on success. Returns 0 on
+        * success and non-zero on error. On error, the errmsg pointer passed
+        * in to the init_plugin function should point to an error string. The
+        * caller will not free the error string.
+        *
+        * int cifs_idmap_sid_to_str(void *handle, const struct cifs_sid *sid,
+        *                              char **name);
+        */
+ 
+       /**
+        * cifs_idmap_str_to_sid - convert string to struct cifs_sid
+        * @handle - context handle
+        * @name   - pointer to name string to be converted
+        * @sid    - pointer to struct cifs_sid where result should go
+        *
+        * This function converts a name string or string representation of
+        * a SID to a struct cifs_sid. The cifs_sid should already be
+        * allocated. Returns 0 on success and non-zero on error. On error, the
+        * plugin should reset the errmsg pointer passed to the init_plugin
+        * function to an error string. The caller will not free the error string.
+        *
+        * int cifs_idmap_str_to_sid(void *handle, const char *name,
+        *                              struct cifs_sid *sid);
+        */
+ 
+       /**
+        * cifs_idmap_sids_to_ids - convert struct cifs_sids to struct cifs_uxids
+        * @handle - context handle
+        * @sid    - pointer to array of struct cifs_sids to be converted
+        * @num    - number of sids to be converted
+        * @cuxid  - pointer to preallocated array of struct cifs_uxids for return
+        *
+        * This function should map an array of struct cifs_sids to an array of
+        * struct cifs_uxids.
+        *
+        * Returns 0 if at least one conversion was successful and non-zero on error.
+        * Any that were not successfully converted will have a cuxid->type of
+        * CIFS_UXID_TYPE_UNKNOWN.
+        *
+        * On any error, the plugin should reset the errmsg pointer passed to the
+        * init_plugin function to an error string. The caller will not free the error
+        * string.
+        *
+        * int cifs_idmap_sids_to_ids(void *handle, const struct cifs_sid *sid,
+        *                              const size_t num, struct cifs_uxid *cuxid);
+        */
+ 
+       /**
+        * cifs_idmap_ids_to_sids - convert uid to struct cifs_sid
+        * @handle - context handle
+        * @cuxid  - pointer to array of struct cifs_uxid to be converted to SIDs
+        * @num    - number of cifs_uxids to be converted to SIDs
+        * @sid    - pointer to preallocated array of struct cifs_sid where results
+        *           should be stored
+        *
+        * This function should map an array of cifs_uxids an array of struct cifs_sids.
+        * Returns 0 if at least one conversion was successful and non-zero on error.
+        * Any sids that were not successfully converted should have their revision
+        * number set to 0.
+        *
+        * On any error, the plugin should reset the errmsg pointer passed to the
+        * init_plugin function to an error string. The caller will not free the error
+        * string.
+        *
+        * int cifs_idmap_ids_to_sids(void *handle, const struct cifs_uxid *cuxid,
+        *                              const size_t num, struct cifs_sid *sid);
+        */

+ 

+ SSSD will provide a plugin which will basically act as a wrapper for the

+ calls in libsss\_nss\_idmap.so.

+ 

+ The libwbclient plugin will include implementations of the following
+ functions that call into SSSD: ::

+ 

+     wbcLookupName

+     wbcLookupSid

+     wbcLookupRids

+     wbcSidToUid

+     wbcUidToSid

+     wbcSidToGid

+     wbcGidToSid

+     wbcGetpwnam

+     wbcGetpwuid

+     wbcGetpwsid

+     wbcGetgrnam

+     wbcGetgrgid

+ 

+ How to test

+ ~~~~~~~~~~~

+ 

+ Testing with getcifsacl

+ ^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ If there is no plugin for the CIFS client utilities, or the plugin
+ cannot resolve the SIDs to names, getcifsacl will only show the SID
+ strings in the output: ::

+ 

+     # getcifsacl /tmp/bla/Users/Administrator/Desktop/putty.exe

+     REVISION:0x1

+     CONTROL:0x8004

+     OWNER:S-1-5-32-544

+     GROUP:S-1-5-21-3090815309-2627318493-3395719201-513

+     ACL:S-1-5-18:ALLOWED/0x0/FULL

+     ACL:S-1-5-32-544:ALLOWED/0x0/FULL

+     ACL:S-1-5-21-3090815309-2627318493-3395719201-500:ALLOWED/0x0/FULL

+ 

+ otherwise the output might look like ::

+ 

+     # getcifsacl /tmp/bla/Users/Administrator/Desktop/putty.exe

+     REVISION:0x1

+     CONTROL:0x8004

+     OWNER:BUILTIN\Administrators

+     GROUP:AD18\Domain Users

+     ACL:S-1-5-18:ALLOWED/0x0/FULL

+     ACL:BUILTIN\Administrators:ALLOWED/0x0/FULL

+     ACL:AD18\Administrator:ALLOWED/0x0/FULL

+ 

+ Testing with cifsacl option to mount.cifs

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ If the cifsacl mount option is used, the cifs kernel module will call
+ cifs.idmap to translate the Windows SIDs into the corresponding
+ UIDs/GIDs of the client system, so that the ownership of the files in
+ the mounted file system is not mapped to the user who mounted the file
+ system, but corresponds to the owning user and group in the Windows
+ domain.

+ 

+ Testing the libwbclient API

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Switching between the Winbind implementation and the SSSD implementation

+ can be done using alternatives: ::

+ 

+         alternatives --set libwbclient.so.11 /usr/lib64/sssd/modules/libwbclient.so.0.11.0

+         alternatives --list

+ 

+ When SSSD is set as the libwbclient implementation, you can test the

+ calls using wbinfo: ::

+ 

+     $ /usr/bin/wbinfo -n 'AD18\Administrator'

+     S-1-5-21-3090815309-2627318493-3395719201-500 SID_USER (1)

+     $ /usr/bin/wbinfo -S S-1-5-21-3090815309-2627318493-3395719201-500

+     1670800500

+ 

+ The following switches can be used to test the functions mentioned in

+ the implementation section: ::

+ 

+       -n, --name-to-sid=NAME                Converts name to sid

+       -s, --sid-to-name=SID                 Converts sid to name

+       -U, --uid-to-sid=UID                  Converts uid to sid

+       -G, --gid-to-sid=GID                  Converts gid to sid

+       -S, --sid-to-uid=SID                  Converts sid to uid

+       -Y, --sid-to-gid=SID                  Converts sid to gid

+       -i, --user-info=USER                  Get user info

+           --uid-info=UID                    Get user info from uid

+           --group-info=GROUP                Get group info

+           --user-sidinfo=SID                Get user info from sid

+           --gid-info=GID                    Get group info from gid

+       -r, --user-groups=USER                Get user groups

+ 

+ Additional links

+ ~~~~~~~~~~~~~~~~

+ 

+ `https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Windows_Integration_Guide/sssd-ad-integration.html#CIFS-SSSD <https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Windows_Integration_Guide/sssd-ad-integration.html#CIFS-SSSD>`__

+ 

+ Author(s)

+ ~~~~~~~~~

+ 

+ Sumit Bose <`sbose@redhat.com <mailto:sbose@redhat.com>`__>

@@ -0,0 +1,278 @@ 

+ IPA Server Mode

+ ---------------

+ 

+ Related tickets:

+ 

+ -  `RFE Allow using UIDs and GIDs from AD in trust

+    case <https://pagure.io/SSSD/sssd/issue/1821>`__

+ -  `RFE Determine how to map SID to UID/GID based on IdM server

+    configuration <https://pagure.io/SSSD/sssd/issue/1881>`__

+ -  more to come

+ 

+ Problem Statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ FreeIPA is planning to make users and groups from trusted domains

+ available to legacy systems, e.g. systems where only nss\_ldap and

+ pam\_ldap are available. For this a new directory server plugin

+ (`https://pagure.io/freeipa/issue/3567 <https://pagure.io/freeipa/issue/3567>`__)

+ will accept the LDAP search requests from the legacy systems for the
+ trusted users and groups, resolve the requested objects and send the
+ results back to the legacy clients.

+ 

+ Since all trusted users and groups are resolvable on the IPA server via
+ the SSSD IPA provider, the idea is that the new plugin will just run
+ getpwnam\_r(), getgrnam\_r() and related calls. The SSSD disk and memory
+ caches will help to answer those requests quickly without the need for
+ additional caching inside the directory server.

+ 

+ To offer reliable group lookups to legacy systems, it must be possible
+ to look up all the members of a group from a trusted domain and not only
+ show members who have already logged in once on the FreeIPA server,
+ which is the current status on IPA clients with a recent version of SSSD.

+ Additionally legacy systems tend to rely on user and group enumerations.

+ Both requirements force an enumeration and caching of all trusted users

+ and groups on the FreeIPA server.

+ 

+ If the legacy systems used an algorithmic mapping scheme based on the

+ RID of the AD object and an offset to find a POSIX ID for the trusted

+ user or group the *--base-id* of the *ipa trust-add* command can be used

+ to get the same ID mapping. For legacy systems which read the POSIX IDs
+ directly from AD, a new idrange type must be introduced on the FreeIPA
+ server
+ (`https://pagure.io/freeipa/issue/3647 <https://pagure.io/freeipa/issue/3647>`__)
+ to indicate that for those trusted users and groups the POSIX ID must be
+ read from AD.

+ 

+ All of the above can basically be solved with the current layout of the

+ FreeIPA server where winbind is doing the lookups against AD and SSSD is

+ using the extdom LDAP plugin to read this data via the directory server.

+ But it was decided to enhance SSSD to do the lookup. Some of the reasons

+ are:

+ 

+ -  resources, since SSSD has to run anyway on the FreeIPA server and is

+    capable of the AD user and group lookups, winbind does not have to

+    run anymore

+ -  avoid double caching, to work efficiently winbind has to do some

+    caching on its own and as a result users and groups are cached twice

+    on the FreeIPA server

+ -  configuration, winbind uses a separate configuration file while the
+    IPA provider of SSSD can read e.g. the idranges directly from the
+    FreeIPA server; this minimizes the configuration effort and avoids
+    conflicting configurations of different components

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ First SSSD needs to know that it is running on an IPA server and should
+ not look up trusted users and groups with the help of the extdom plugin,
+ but do the lookups on its own. For this a new boolean configuration
+ option, e.g. ipa\_server\_mode, should be introduced (SSSD ticket

+ `#1993 <https://pagure.io/SSSD/sssd/issue/1993>`__) which defaults to

+ *false* but is set to *true* during ipa-server-install or during updates

+ of the FreeIPA server

+ (`https://pagure.io/freeipa/issue/3652 <https://pagure.io/freeipa/issue/3652>`__)

+ if it is not already set.

+ 

+ Since AD by default requires an authenticated LDAP bind to do searches,
+ SSSD needs credentials which are accepted by a trusted AD server.
+ Because of the trust relationship, these can even be credentials from
+ the FreeIPA domain if Kerberos is used for authentication. So the
+ easiest way is just to use the local keytab, which requires no changes
+ on the SSSD side, because the generic LDAP provider already knows how to
+ handle a SASL bind with the local keytab. But currently the AD LDAP
+ server does not accept the Kerberos ticket from a FreeIPA host, because
+ the FreeIPA KDC does not attach a PAC to the TGTs of host/ principals
+ (`https://pagure.io/freeipa/issue/3651 <https://pagure.io/freeipa/issue/3651>`__;
+ until this is fixed some dummy credentials, e.g. a keytab for a dummy
+ user, can be used).

+ 

+ Now the AD provider code can be used to look up the users and groups of
+ the trusted AD domain. Only the ID-mapping logic should be refactored so
+ that the same code can be used in the standalone AD provider, where the
+ configuration is read from sssd.conf, and as part of the IPA provider,
+ where the idrange objects read from the IPA server dictate the mapping.
+ Maybe libsss\_idmap can be extended to handle idranges for mappings in
+ AD as well, e.g. a specific error code can be used to indicate to the
+ caller that for this domain no algorithmic mapping is available and the
+ value from the corresponding AD attribute should be used (**SSSD
+ ticket?**).

+ 

+ A task (or a separate process) must be created to handle enumerations
+ efficiently without having too much impact on parallel running requests
+ (**SSSD ticket#**). Maybe we can find a scheme which allows reading only
+ a limited (configurable) number of users with their group memberships at
+ a time. This way the cache might not be complete at once, but always
+ consistent with respect to the group memberships of the cached users.
+ Once all users have been read, the task will periodically look for new
+ users and update old entries.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Add ipa\_server\_mode option

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ A new boolean option ipa\_server\_mode which defaults to false should be

+ added to the IPA provider. ipa\_get\_subdom\_acct\_send() should only be

+ called if ipa\_server\_mode is false. If ipa\_server\_mode is true

+ ipa\_account\_info\_handler() should return ENOSYS for subdomain

+ requests. A suitable tevent request will be handled in a different

+ ticket.

+ 

+ Enhance libsss\_idmap

+ ^^^^^^^^^^^^^^^^^^^^^

+ 

+ #. Allow algorithmic mapping where the first RID is not 0. Currently it
+    is implicitly assumed that the first POSIX ID of a range is mapped to
+    the RID 0. To support multiple ranges for a single domain a different
+    first RID must be handled as well.
+    Ticket: `#1938 <https://pagure.io/SSSD/sssd/issue/1938>`__

+ #. Add a range type to handle mappings in AD. The idea is that ranges
+    for IDs from AD can be used in libsss\_idmap as well, but whenever a
+    mapping is requested for this range a specific error code like
+    IDMAP\_ASK\_AD\_FOR\_MAPPING is returned to tell SSSD to do an AD
+    lookup. This way SSSD does not need to inspect the ranges itself;
+    everything is done inside of libsss\_idmap. Additionally a new call
+    is needed to check whether the returned externally managed ID belongs
+    to a configured range; if not, the ID cannot be mapped in the given
+    configuration and the related object should be ignored.
+    Ticket: `#1960 <https://pagure.io/SSSD/sssd/issue/1960>`__

+ #. Add an optional unique range identifier. To be able to detect
+    configuration changes in idranges managed by FreeIPA, an identifier
+    should be stored on the client together with the other idrange
+    related data. For simplicity the DN of the related LDAP object on the
+    FreeIPA server can be used here. The identifier should be optional,
+    but if it is missing the range cannot be updated or deleted at
+    runtime.

+ #. Allow updates and removal of ranges. To support configuration changes
+    at runtime, it must be possible to update and remove ranges. As a
+    first step I would recommend that the changes only affect new
+    requests and not the cached data, because in general changes to
+    centrally managed ranges should be done with care to avoid conflicts.
+    In a later release we can decide if we just want to invalidate all
+    cached entries of the domain whose idrange was modified or if a
+    smarter check is needed to invalidate only objects which are affected
+    by the change.

+ 

+ Add plugin to LDAP provider to find new ranges

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Currently the range management code is in the generic LDAP provider and

+ can be used by the LDAP and AD provider. New ranges are allocated with

+ the help of a hash value of the domain SID.

+ 

+ If the IPA provider cannot find a range for a given domain it cannot

+ allocate a new range on its own but has to look up the idrange objects

+ on the FreeIPA server and use them accordingly. To allow the LDAP, AD

+ and IPA provider to use as much common code as possible I think a plugin

+ interface, similar to the one used to find the DNS site, to find a

+ missing range would be useful. The default plugin will be used by the

+ LDAP and the AD provider and the IPA provider will implement a plugin to

+ read the data from the server.

+ 

+ Remove assumption that subdomain users always have a primary user-private-group (UPG)

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Currently the PAC responder assumes that subdomain users always have a
+ UPG as primary group. This will only be true for domains with
+ algorithmic mappings, because here the POSIX IDs are managed by the
+ FreeIPA server and we are free to choose. But if the POSIX IDs are
+ managed externally, we have to use what we get from the external
+ sources. E.g. in the case where the POSIX IDs are managed by AD, UIDs
+ and GIDs are separate namespaces and assuming that UPGs can be used
+ would most certainly lead to GID conflicts. The PAC responder has to
+ respect the idrange type or the mpg flag of the sss\_domain\_info struct
+ and act accordingly.

+ 

+ Additionally the code paths where new subdomains are created must be
+ reviewed, and wherever the mpg flag is set, code must be added so that
+ it is set according to the range type.

+ 

+ Although I think that the code path where an IPA client (i.e.
+ ipa\_server\_mode = false) looks up a trusted domain user adds the user
+ to the cache with the data it receives from the extdom plugin, it should
+ be verified that UPGs are not implicitly assumed here as well.

+ 

+ Integrate AD provider lookup code into IPA subdomain user lookup

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ If ipa\_server\_mode is selected, IPA subdomain user and group
+ lookups should not be done with the help of the extdom plugin, but
+ directly against AD with the help of LDAP or GC lookups. For this the
+ IPA provider must be able to call the related functions from the AD
+ provider. Since by default the POSIX attributes are not replicated to
+ the global catalog and supporting them is a requirement, I think it
+ would be sufficient to make sure LDAP lookups are working as expected.
+ Additionally, FreeIPA currently supports only one trusted domain; global
+ catalog lookups for users and groups from the forest or from different
+ forests can be added later.

+ 

+ Since the Kerberos host keys from the host keytab should be used as
+ credentials to access AD, no changes are expected here.

+ 

+ Care should be taken that the AD SRV plugin is not loaded accidentally;
+ see the next section as well.

+ 

+ Enhance IPA SRV plugin to do AD site lookups as well

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ From the AD point of view, trusted domains do not belong to a specific
+ site. But recent versions of AD return the next\_closest\_site for hosts
+ which do not belong to a site. To make sure that SSSD is communicating
+ with an AD server which is network-wise reasonably near, it would be
+ useful if the IPA SRV plugin could be enhanced to do CLDAP pings and AD
+ site lookups as well. Additionally the plugin must know when to use IPA
+ style and when to use AD style lookups.

+ 

+ This is a nice to have feature.

+ 

+ Implement or Improve enumeration

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ If enumeration is enabled, SSSD tries to update all users and groups at
+ startup. As a result the startup time, where SSSD is basically blocked
+ and cannot serve requests even for data in the cache, can be quite long.
+ A new tevent\_req task should be created which can read users and groups
+ from the AD domain in smaller chunks so that other requests can always
+ slip in between. Ticket
+ `#1829 <https://pagure.io/SSSD/sssd/issue/1829>`__ contains a similar
+ request for general use in SSSD. If we find a good scheme here, it
+ might be used for the general enumerations as well.

+ 

+ The task should make sure all users and groups are read after a while
+ without reading objects twice in a single run. Maybe it is possible to
+ add a special paged-search tevent request which returns to the caller
+ after the first page is read (instead of doing the paging behind the
+ scenes) with the results and a handle which would allow continuing the
+ search with the next page? If this is the way to go, creating this new
+ request would be another development subtask.

+ 

+ Additionally it has to be considered how to handle large groups. But
+ since we have to read all users as well, it might be possible to just
+ read the group memberships of the users, build up the groups in the SSSD
+ cache, and let the getgr\*() calls only return entries from the cache
+ and never go to the server directly.

+ 

+ This new enumeration task will work independently of the NSS responder

+ in the IPA provider. It should be started at startup but should

+ terminate if there are no trusted domains. If later during a sub-domain

+ lookup trusted domains are found it should be started again.

+ 

+ How to test

+ ~~~~~~~~~~~

+ 

+ If ipa\_server\_mode is enabled on a FreeIPA server which trusts an
+ AD server, *getent passwd AD\\username* or *id AD\\username* should
+ return the expected results for users and groups.

+ 

+ *getent group AD\\groupname* should return results depending on the
+ state of enumeration. Immediately after startup with an empty cache,
+ e.g. the 'Domain Users' group should only have a few members, if any.
+ After some time more and more members should be displayed until the
+ enumeration is complete and all users and groups are in the SSSD cache.

+ 

+ Author(s)

+ ~~~~~~~~~

+ 

+ Sumit Bose <`sbose@redhat.com <mailto:sbose@redhat.com>`__>

@@ -0,0 +1,253 @@ 

+ Inter-process communication between SSSD processes

+ ==================================================

+ 

+ This document describes how the different SSSD processes communicate
+ with one another, with a special emphasis on the SBus protocol. In
+ addition, the document describes the interfaces SSSD might use to
+ communicate with external libraries or programs, such as libkrb5, and
+ how the requests from the NSS and PAM modules are received.

+ 

+ An overview of the SSSD architecture

+ ------------------------------------

+ 

+ The SSSD consists of several processes, each of which has its own
+ function. The SSSD processes can be one of the following:

+ 

+ #. the *monitor* - The purpose of the monitor process is to spawn the
+    other processes, periodically ping them to check if they are still
+    running, and respawn them if not. There is only one instance of the
+    monitor process at a given time.

+ #. a *data provider* - The data provider process communicates with the
+    remote server (i.e. queries the remote server for a user) and updates
+    the cache (i.e. writes the user entry). There is one Data Provider
+    process per remote server.

+ #. a *responder* - The system libraries (such as the Name Service Switch

+    module or the PAM module) communicate with the corresponding

+    responder process. When the responder process receives a query, it

+    checks the cache first and attempts to return the requested data from

+    cache. If the data is not cached (or is expired), the responder sends

+    a message to the Data Provider requesting the cache to be updated.

+    When the Data Provider is done updating the cache, the responder

+    process checks the cache again and returns the updated data. It is

+    important to note that the responder process never returns the data
+    directly from the server; the data is always written to the cache by
+    the Data Provider process and returned to the calling library by the
+    responder process.

+ #. a *helper process* - The SSSD performs some operations that would
+    otherwise block, such as kinit, in a special helper sub-process. The
+    sub-processes are forked from the Data Provider processes anew for
+    each operation; there is no preforked pool of helper processes. The
+    SSSD establishes pipes to the processes' standard input and output to
+    communicate with the child using an ad-hoc wire protocol.

+ 

+ DBus and SBUS

+ -------------

+ 

+ The SSSD uses the DBus protocol to pass messages between the SSSD

+ processes. It should be noted that the core SSSD does NOT listen on or

+ use the system bus. The SSSD only uses the DBus protocol to pass

+ messages between the processes. There is a public SSSD DBus interface,

+ starting with the 1.12 upstream release, but that resides in a separate

+ sub-package and is not related to the interprocess communication between

+ the different components.

+ 

+ A quick overview of the DBus concepts

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The DBus protocol consists of several primary components:

+ 

+ #. The *D-BUS Server* - Rather than accepting connections and listening

+    for requests on that connection, the D-BUS server is instead used

+    only for establishing connections. A DBus server is identified by its

+    address. Server addresses consist of a transport name followed by a

+    colon, and then an optional, comma-separated list of keys and values

+    in the form key=value, for example ``unix:path=/tmp/dbus-test``.

+ #. The *D-BUS Connection* - Once a D-BUS connection has been made to a

+    D-BUS server, it becomes a peer-to-peer connection. Either end of the

+    connection can listen for method calls or signals, and either end can

+    initiate them.

+ #. The *D-BUS Message* - D-BUS messages come in three primary forms.

+    These are D-BUS method calls, D-BUS Signals and D-BUS errors. D-BUS

+    signals and D-BUS errors are one-way messages from one end of a D-BUS

+    connection to the other, intended to carry a brief message (such as a

+    signal to start or stop a service, or a notification that an error

+    has occurred in the connection). D-BUS errors are usually generated

+    by the internal D-BUS API itself, though they can be generated by

+    your own code as well. D-BUS method calls are the bread-and-butter of

+    the D-BUS protocol. The purpose of these calls is to essentially run

+    a method on a remote process and treat it as if it had been run

+    locally. These calls may (or may not) receive replies from the other

+    end of the connection.

+ #. The *D-BUS System Bus* - The system bus is a special implementation

+    of the D-BUS protocol. It was designed by the freedesktop project to

+    handle communication between the many system daemons. We DO NOT use

+    the system bus in the SSSD.

+ 

+ DBus and S-Bus

+ ~~~~~~~~~~~~~~

+ 

+ For performance reasons, the SSSD works in a completely non-blocking way

+ using the tevent event loop library from the Samba project. To integrate

+ the DBus API with the event loop and provide a level of abstraction, the

+ SSSD uses a wrapper around the D-Bus library called the S-Bus. The S-Bus

+ code can be found in the

+ `src/sbus <https://pagure.io/SSSD/sssd/blob/master/f/src/sbus>`__

+ subdirectory. In particular, the wrappers and tevent integration can be

+ found in

+ `sssd\_dbus\_common.c <https://pagure.io/SSSD/sssd/blob/master/f/src/sbus/sssd_dbus_common.c>`__

+ and

+ `sssd\_dbus\_connection.c <https://pagure.io/SSSD/sssd/blob/master/f/src/sbus/sssd_dbus_connection.c>`__

+ files. When a message is received, the tevent loop invokes the
+ ``sbus_message_handler`` function, which is located in
+ `sssd\_dbus\_connection.c <https://pagure.io/SSSD/sssd/blob/master/f/src/sbus/sssd_dbus_connection.c>`__.

+ The handler selects the SSSD internal interface that receives the

+ message and invokes the appropriate handler.

+ 

+ Describing the SBUS and public DBus interfaces

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Starting with upstream version 1.12, when the SSSD implemented its

+ public DBus interface, the SSSD switched from hardcoding interface

+ names, methods etc. in the source files directly to only describing the

+ interfaces in XML files using the `introspection

+ format <http://dbus.freedesktop.org/doc/dbus-specification.html#introspection-format>`__,

+ which are then used to autogenerate message handlers, property getters

+ and similar. While using generated code might sound odd at first, using

+ a code generator removes a large amount of code duplication, packing and

+ unpacking from DBus types to C types or vice versa, or unpacking DBus

+ message properties (if needed).

+ 

+ The code generator and the generated code are currently used for both

+ the DBus public interface (which is outside the scope of this page) and

+ the internal SBUS communication. The internal SBUS code, however, mostly

+ uses the generated code in a 'raw' mode and still does the

+ packing/unpacking of the parameters on its own. The reason is that the

+ 'raw' code in SSSD predates the code generator, is quite stable and

+ tested, and converting it to the simpler handlers with unpacked parameters

+ might cause functional regressions.

+ 

+ One example of the canonical XML code might be found in the `unit

+ tests <https://pagure.io/SSSD/sssd/blob/master/f/src/tests/sbus_codegen_tests.xml>`__,

+ along with the `corresponding autogenerated

+ code <https://pagure.io/SSSD/sssd/blob/master/f/src/tests/sbus_codegen_tests_generated.c>`__.

+ The XML files for the internal interfaces, such as the `Data

+ Provider <https://pagure.io/SSSD/sssd/blob/master/f/src/providers/data_provider_iface.xml>`__

+ can also be inspected. Since all the internal interfaces use the raw

+ approach, the autogenerated code is `quite

+ terse <https://pagure.io/SSSD/sssd/blob/master/f/src/providers/data_provider_iface_generated.c>`__

+ and the `interface

+ handlers <https://pagure.io/SSSD/sssd/blob/master/f/src/providers/data_provider_be.c>`__

+ do the packing and unpacking on their own.

+ 

+ An SBus server

+ ~~~~~~~~~~~~~~

+ 

+ An S-Bus server is an abstraction of the DBus server. An S-Bus server is

+ always identified by a UNIX socket located in the directory

+ ``/var/lib/sss/pipes/private``. Two processes act as an S-Bus server in

+ the SSSD:

+ 

+ #. The monitor - Both the responders and the Data providers establish

+    a connection to the monitor after startup. The monitor then

+    periodically sends "pings" to the worker processes to check if they

+    are still up and running. The other S-Bus methods the monitor can

+    invoke include "rotateLogs" to force log rotation or "resetOffline"

+    to force the next operation to attempt to contact the remote

+    server regardless of the connection status. The complete list of

+    methods is in

+    `src/monitor/monitor\_interfaces.h <https://pagure.io/SSSD/sssd/blob/master/f/src/monitor/monitor_interfaces.h#n30>`__

+ 

+    -  listens on ``/var/lib/sss/pipes/private/sbus-monitor``.

+ 

+ #. The Data Providers - The Responder processes connect to the Data

+    Provider processes with a cache update request. The Data Provider

+    then communicates with the remote server, updates the cache and sends

+    a message back to the responder, indicating that the cache was

+    updated. Each responder calls a different DBus method depending on

+    the data type the cache should be updated with. For example, the

+    ``getAccountInfo`` method is called from the NSS responder.

+ 

+    -  listens on ``/var/lib/sss/pipes/private/sbus-dp_$domain_name``

+ 

+ Two kinds of messages

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ The SSSD sends an SBUS message between two of its components under two

+ different circumstances:

+ 

+ #. When a request is received, completing the request might require

+    communicating with another subprocess. An example of this is when a

+    ``getpwnam()`` call triggers an LDAP search - the NSS responder sends

+    an SBUS message to the Data Provider to update the cache.

+ #. Control messages sent by the monitor. The monitor process (aka the

+    sssd process) sends periodical ``ping`` messages to all subprocesses

+    it controls. If a subprocess doesn't respond with a ``pong`` message

+    in time, it gets killed and restarted.

+ 

+ This means there is ongoing SBUS communication even when the sssd is

+ otherwise idle.
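The ping/pong keepalive can be sketched as follows (an illustrative Python model, not SSSD code; ``ping`` is a hypothetical stand-in for the SBUS round trip):

```python
# Illustrative model of the monitor's keepalive logic: ping each worker
# and collect the names of workers that fail to answer in time; those
# would then receive SIGTERM and be restarted.
def check_workers(workers, ping):
    """workers: list of names; ping: callable returning True on 'pong'."""
    to_restart = []
    for name in workers:
        if not ping(name):
            to_restart.append(name)  # would be terminated and restarted
    return to_restart

# Simulated worker state: one responsive worker, one unresponsive.
alive = {"sssd_nss": True, "sssd_be[LDAP]": False}
print(check_workers(list(alive), lambda name: alive[name]))
```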

+ 

+ Unix signals

+ ------------

+ 

+ Apart from the internal SBUS communication, SSSD also uses UNIX signals

+ for certain functionality - either for communication with external

+ utilities or for cases where the SBUS communication might not work, such

+ as an unresponsive worker process. Below is an overview of the supported

+ signals and their use. The signal handlers are typically integrated with

+ the tevent event loop using its ``tevent_add_signal`` call.

+ 

+ SIGTERM

+     If a responder or a provider process fails to send a ``pong``

+     message to the monitor process after receiving the ``ping`` message,

+     the monitor terminates the unresponsive process with a SIGTERM. Also

+     used to terminate helper processes (such as the krb5\_child process)

+     in case of a timeout.

+ SIGKILL

+     In cases where an unresponsive worker process does not terminate

+     after receiving SIGTERM, the monitor forcibly kills it with SIGKILL.

+ SIGUSR1

+     Can be handled by a sssd\_be process individually or by the monitor

+     process (in that case, the monitor re-sends the signal to all

+     sssd\_be processes it handles). Upon receiving this signal, the

+     sssd\_be process transitions into the 'offline' state. This signal

+     is mostly useful for testing.

+ SIGUSR2

+     Similar to the SIGUSR1 signal, SIGUSR2 causes an sssd\_be

+     process to reset the offline status and retry the next request it

+     receives against a remote server.

+ SIGHUP

+     Can be delivered to the sssd process. After receiving SIGHUP, the

+     monitor rotates its logfile and sends a ``reset`` method to the

+     managed processes. The managed processes also rotate logfiles. In

+     addition, the sssd\_be processes re-read resolv.conf and the

+     sssd\_nss process clears the fast in-memory cache.
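Integrating a signal with an event loop, as ``tevent_add_signal`` does, can be modeled with Python's ``signal`` module. This is only an illustrative sketch; the event names are hypothetical:

```python
import os
import signal

# Illustrative sketch (not SSSD code): on SIGHUP, a daemon might rotate
# its logfile and propagate the reset to the processes it manages.
events = []

def on_sighup(signum, frame):
    events.append("rotate_logs")
    events.append("send_reset_to_children")

signal.signal(signal.SIGHUP, on_sighup)

# Deliver SIGHUP to ourselves, as an admin's `kill -HUP` would.
os.kill(os.getpid(), signal.SIGHUP)
print(events)
```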

+ 

+ Local sockets

+ -------------

+ 

+ After startup, the SSSD also creates several local (AF\_UNIX) sockets to

+ listen on. These sockets are used by the NSS and PAM modules and also

+ the external programs SSSD integrates with, such as sudo, OpenSSH or

+ autofs. All consumers use the sockets in a similar fashion, so they can

+ be commonly called SSS clients.

+ 

+ The clients all employ a request/response protocol using its own

+ TLV encoding. Note that the SSS clients only support synchronous I/O, so

+ they may block (e.g. while waiting for a response). On the other hand, the

+ responders support asynchronous I/O using the tevent main loop, so they

+ will not block (e.g. while waiting to read from a client).

+ 

+ KDCInfo files

+ -------------

+ 

+ The SSSD might discover additional KDC or Kadmin servers that are not

+ defined in krb5.conf. However, it would still be prudent if tools like

+ kinit or kpasswd could talk to the same servers the SSSD talks to. To

+ this end, the SSSD implements a plugin for libkrb5, located in the

+ `sssd\_krb5\_locator\_plugin.c <https://pagure.io/SSSD/sssd/blob/master/f/src/krb5_plugin/sssd_krb5_locator_plugin.c>`__

+ file. When a new KDC is discovered, the sssd\_be process writes the IP

+ address of this KDC into a file under the /var/lib/sss/pubconf

+ directory. With the help of the locator plugin, libkrb5 is able to read

+ these files in the pubconf directory and use the KDC servers discovered

+ by the SSSD.
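The kdcinfo mechanism boils down to one side writing a small file and the other side reading it back. A hedged Python sketch, using a temporary directory in place of /var/lib/sss/pubconf:

```python
import os
import tempfile

# Illustrative sketch (not SSSD code): sssd_be writes the discovered KDC
# address to kdcinfo.REALM; the libkrb5 locator plugin later reads it
# back.  A temporary directory stands in for /var/lib/sss/pubconf.
pubconf = tempfile.mkdtemp()

def write_kdcinfo(realm, address):
    with open(os.path.join(pubconf, f"kdcinfo.{realm}"), "w") as f:
        f.write(address + "\n")

def read_kdcinfo(realm):
    with open(os.path.join(pubconf, f"kdcinfo.{realm}")) as f:
        return f.read().strip()

write_kdcinfo("EXAMPLE.COM", "192.0.2.10")
print(read_kdcinfo("EXAMPLE.COM"))
```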

@@ -0,0 +1,146 @@ 

+ Kerberos Locator Plugin

+ =======================

+ 

+ Old Design

+ ----------

+ 

+ The current Kerberos locator plugin functions as follows:

+ 

+ #. The krb5 provider would write a file,

+    ``/var/lib/sss/pubconf/kdcinfo.REALM`` consisting of a single line,

+    the IP address and port of the KDC being used.

+ #. The kerberos locator plugin would read in this file and return the IP

+    address and port to the requesting application.

+ 

+ Limitations

+ ~~~~~~~~~~~

+ 

+ -  The locator provides only a single address and is only updated when

+    the krb5 provider in the SSSD service connects to a new KDC.

+ -  If the KDC we last authenticated against becomes unreachable, but

+    other KDCs in a failover configuration remain up, applications

+    relying on libkrb5 will fail to access those failover servers.

+ 

+ New Design

+ ----------

+ 

+ Locator Plugin

+ ~~~~~~~~~~~~~~

+ 

+ The Kerberos locator plugin will be reinvented as an sss\_client object.

+ It will retain minimal logic in itself except as follows:

+ 

+ #. It will communicate across a socket to the Locator Responder

+    (sssd\_locator).

+ #. We will optionally implement an in-memory cache similar to that of

+    the NSS clients.

+ #. This communication and caching will support retrieving multiple

+    address/port pairs in a single transaction.

+ 

+ Locator Responder

+ ~~~~~~~~~~~~~~~~~

+ 

+ Locator Responder-Client Wire Protocol

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Request

+ '''''''

+ 

+ +---------+-----------------------+

+ | Bytes   | Function              |

+ +---------+-----------------------+

+ | 0-3     | Message Length        |

+ +---------+-----------------------+

+ | 4-7     | Locate Service Type   |

+ +---------+-----------------------+

+ | 8-11    | Socket Type           |

+ +---------+-----------------------+

+ | 12-15   | Protocol Family       |

+ +---------+-----------------------+

+ | 16-19   | Length of realm       |

+ +---------+-----------------------+

+ | 20-XX   | Realm                 |

+ +---------+-----------------------+

+ 

+ Replies

+ '''''''

+ 

+ +---------+-------------------------+

+ | Bytes   | Function                |

+ +---------+-------------------------+

+ | 0-3     | Length of the message   |

+ +---------+-------------------------+

+ | 4-7     | Number of addresses     |

+ +---------+-------------------------+

+ | 8-X     | Address Data            |

+ +---------+-------------------------+

+ 

+ A reply with zero addresses should be interpreted by the client as SSSD

+ not being aware of the requested realm. This should be treated as

+ KRB5\_PLUGIN\_NO\_HANDLE.

+ 

+ Reply Address Data

+ 

+ +---------+------------------------------------+

+ | Bytes   | Function                           |

+ +---------+------------------------------------+

+ | 0-3     | Port                               |

+ +---------+------------------------------------+

+ | 4-7     | Size of address                    |

+ +---------+------------------------------------+

+ | 8-X     | String representation of address   |

+ +---------+------------------------------------+

+ 

+ Reply Address Data String representation

+ 

+ -  IPv4: numbers-and-dots notation as supported by inet\_aton(3)

+ -  IPv6: hexadecimal string format as supported by inet\_pton(3)

+ -  This is done so the string can be passed directly to getaddrinfo() in

+    the locator plugin.
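Assuming fixed 32-bit little-endian fields (the page does not specify byte order, so the ``<`` format here is an assumption), the request layout above could be packed and unpacked like this illustrative sketch:

```python
import struct

# Sketch of the proposed request encoding, following the field order of
# the request table: message length, locate service type, socket type,
# protocol family, realm length, realm.
def pack_request(service_type, socket_type, family, realm):
    realm_b = realm.encode()
    body = struct.pack("<IIII", service_type, socket_type, family,
                       len(realm_b)) + realm_b
    # total message length, including the 4-byte length field itself
    return struct.pack("<I", 4 + len(body)) + body

def unpack_request(msg):
    length, svc, sock, fam, realm_len = struct.unpack_from("<IIIII", msg)
    assert length == len(msg)          # sanity check on the length field
    realm = msg[20:20 + realm_len].decode()
    return svc, sock, fam, realm

msg = pack_request(1, 1, 2, "EXAMPLE.COM")
print(unpack_request(msg))
```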

+ 

+ Responder Behavior

+ ^^^^^^^^^^^^^^^^^^

+ 

+ The responder will receive requests from the locator plugin. It will

+ parse the realm from the wire protocol and use that to determine which

+ (if any) KRB5 auth or chpass provider is available for that realm. It

+ will then query that provider via the SBUS to get an appropriate list of

+ addresses. Once the provider replies, the responder will answer the

+ locator plugin client.

+ 

+ The responder MUST enqueue multiple requests for the same realm

+ together, so that fewer trips to the provider will be required.

+ 

+ Once the responder returns, the results SHOULD be added to an in-memory

+ cache similar to that offered by the NSS responder.

+ 

+ For compatibility with applications that have older versions of the

+ locator still loaded, the responder MUST write out a

+ ``/var/lib/sss/pubconf/kdcinfo.REALM`` file containing the first address

+ in the set returned from the provider.

+ 

+ Kerberos Provider Extensions

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ SBUS Protocol

+ ^^^^^^^^^^^^^

+ 

+ TBD

+ 

+ Address Resolution

+ ^^^^^^^^^^^^^^^^^^

+ 

+ The provider SHOULD reply with potential addresses even if the provider

+ is in offline mode. This is to ensure that the locator plugin can

+ accurately return data if the KDCs become available again before the

+ SSSD notices (by performing an online authentication or password-change).

+ 

+ If the provider is online, it MUST return the KDC it last communicated

+ with as the first address in the response list. The provider MUST return

+ up to N-1 (configurable) total addresses from its available pool. For

+ example, if the configuration specifies ``krb5_locator_addresses = 3``

+ and the KDC is online, it MUST return the KDC it is connected to,

+ followed by the next two servers in the failover list, or fewer if only

+ one or two exist in total.

+ 

+ TBD

@@ -0,0 +1,83 @@ 

+ Mapping ID provider names to Kerberos principals

+ ================================================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2509 <https://pagure.io/SSSD/sssd/issue/2509>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ Some users are migrating to SSSD from a legacy configuration that

+ consisted of a traditional UNIX user stored in ``/etc/passwd``, with

+ their Kerberos tickets managed either with some GUI tool or

+ just command-line ``kinit``. While these users can use SSSD by

+ configuring the ``id_provider`` proxy, very often the name of their UNIX

+ user is different from the name of their company-wide Kerberos

+ credentials.

+ 

+ This feature helps the above use-case by mapping their UNIX user name to

+ the Kerberos principal name.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ Joe User has a company laptop where his UNIX user has been traditionally

+ named ``joe``. At the same time, his company Kerberos principal is

+ called ``juser@EXAMPLE.COM``. Joe would like to start using SSSD to

+ leverage features like offline kinit without having to rename his UNIX

+ user and chown all his local files to the corporate user ID.

+ 

+ While most of this design page describes setup using the proxy provider,

+ which would be the typical case, this option can be used along with any

+ ``id_provider``.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The Kerberos provider will acquire a new option that describes how

+ the user names from the ID provider are mapped onto the user part of the

+ Kerberos principal. The user would then add the appropriate mapping to

+ the ``domain`` section of ``sssd.conf``.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ A new option ``krb5_map_user`` would be added to the Kerberos auth code.

+ This option would have a form similar to how we map the LDAP extra

+ attributes, that is ``local_name:krb5_name``. When a mapping exists for

+ the user who is authenticating, the krb5\_auth module would use that

+ user name for calls like ``find_or_guess_upn`` instead of ``pd->name``.

+ We should consider whether to keep using ``pd->name`` or introduce

+ another attribute to the ``krb5_child_req`` structure.
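Assuming the option takes one or more colon-separated ``local_name:krb5_name`` pairs (the comma-separated multi-pair form is an assumption of this sketch, not something the page specifies), the mapping lookup might look like:

```python
# Illustrative sketch of parsing the proposed krb5_map_user option and
# deriving the principal to use for authentication; not SSSD code.
def parse_krb5_map_user(value):
    mapping = {}
    for pair in value.split(","):          # multi-pair form is assumed
        local, _, krb5 = pair.strip().partition(":")
        if local and krb5:
            mapping[local] = krb5
    return mapping

def principal_for(mapping, local_name, realm):
    # Fall back to the unmapped name, as when the option is unset.
    return f"{mapping.get(local_name, local_name)}@{realm}"

mapping = parse_krb5_map_user("joe:juser")
print(principal_for(mapping, "joe", "EXAMPLE.COM"))
```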

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ A new configuration option tentatively called ``krb5_map_user`` would be

+ added. This option is unset by default, which means whatever user name

+ the ID provider stores will be used.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ #. Prepare a Kerberos KDC, add a user principal (``juser@EXAMPLE.COM``)

+ #. Add a local user using ``useradd`` with a name that differs from the

+    Kerberos principal in the name portion. (``joe``)

+ #. Configure SSSD with ``id_provider=proxy`` with

+    ``proxy_lib_name=files`` and ``auth_provider=krb5`` pointing to our

+    test KDC

+ #. Attempt to authenticate using a PAM service. The authentication

+    should fail and the logs would show authentication as

+    ``joe@EXAMPLE.COM``

+ #. Set ``krb5_map_user`` to ``joe:juser`` and restart SSSD.

+ #. Authenticate again. This time, authentication should succeed and the

+    user's klist output should list ``juser@EXAMPLE.COM``. The ``id(1)``

+    output should still list ``joe``, though.

+ #. Test that Kerberos ticket renewals still work

+ #. Test that delayed kinit still works.

+ 

+ Authors

+ ~~~~~~~

+ 

+ -  Jakub Hrozek <`jhrozek@redhat.com <mailto:jhrozek@redhat.com>`__>

@@ -0,0 +1,129 @@ 

+ LDAP Referrals

+ --------------

+ 

+ Pre-requisites

+ ~~~~~~~~~~~~~~

+ 

+ sdap\_id\_op enhancements

+ ^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ #. Disentangle sdap\_id\_op setup from failover configuration

+ #. Handle async resolver needs for referrals. We need to look up

+    referred servers and take the first IP returned by DNS.

+ #. Add idle disconnection timer for connections (see Ticket

+    `#1036 <https://pagure.io/SSSD/sssd/issue/1036>`__, needs to have

+    its priority bumped up). We don't want to be hanging onto referred

+    servers forever.

+ 

+ Single-entry lookup

+ ~~~~~~~~~~~~~~~~~~~

+ 

+ #. Perform lookup on standard server connection

+ #. Get referral reply

+ #. Acquire sdap connection to the referred server

+ #. Perform lookup on referred server

+ #. Repeat as needed until referral depth limit is reached
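The single-entry flow can be sketched as a loop with a depth limit. This is an illustrative Python model, not SSSD code; ``search()`` is a hypothetical stand-in for the LDAP operation (including acquiring the sdap connection to the referred server):

```python
# Illustrative sketch of chasing entry referrals with a depth limit.
# `search(server, dn)` returns either ("entry", data) or
# ("referral", next_server, next_dn).
def chase(search, server, dn, max_depth=3):
    for _ in range(max_depth + 1):     # initial lookup + up to max_depth hops
        result = search(server, dn)
        if result[0] == "entry":
            return result[1]
        _, server, dn = result         # follow the referral
    raise RuntimeError("referral depth limit reached")

# Simulated directory: server "a" refers the entry to server "b".
directory = {
    ("a", "uid=jdoe"): ("referral", "b", "uid=jdoe"),
    ("b", "uid=jdoe"): ("entry", {"uid": "jdoe"}),
}
print(chase(lambda s, d: directory[(s, d)], "a", "uid=jdoe"))
```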

+ 

+ Multiple-entry lookup

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ First approximation: just process each referral as a series of

+ single-entry lookups, gathering all results at the end.

+ 

+ Optimizations

+ ~~~~~~~~~~~~~

+ 

+ #. Keep lookup cache/hashtable of entries pointing to the same referred

+    entry (I suspect the value is low here, as multiple

+    replies referring to the same entry are unlikely).

+ #. In the case of multiple referred entries to the same LDAP server, can

+    we bundle them into single requests? (Probably not. Referrals will

+    end up requiring BASE searches. Most LDAP servers don't support

+    subtree searches on DN)

+ #. Keep a hash/lookup table of sdap\_id\_op links. Don't reconnect

+    unless we have to (such as when performing auth via LDAP simple

+    bind).

+ 

+    #. Keep separate sdap\_id\_op links for ID and AUTH. ID always uses

+       the default bind credentials, AUTH can drop the bind and

+       reconnect.

+ 

+ Relationship to multiple search bases

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Only the primary server will need multiple bases. The referrals will end

+ up as base searches, thereby ignoring the multiple search base values.

+ 

+ Referrals should *ignore* the base filtering of ticket

+ `#960 <https://pagure.io/SSSD/sssd/issue/960>`__.

+ 

+ How do we handle originalDN? I think we need to save originalDN as it

+ would have appeared on the primary server, not the referred server.

+ 

+ Research: how are we doing this now? I remember that we hit this before

+ when dealing with referrals. Did we solve it for all referral types or

+ only some?

+ 

+ Finally, related to the search filtering, ticket

+ `#960 <https://pagure.io/SSSD/sssd/issue/960>`__ should do its

+ filtering based on the originalDN value, not the referred DN.

+ 

+ Questions needing research

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ #. Do all referrals give a complete answer? (i.e. If they refer locally,

+    is it relative?)

+ 

+    -  `http://www.ietf.org/rfc/rfc3296.txt <http://www.ietf.org/rfc/rfc3296.txt>`__

+       says that "The ref attribute values SHOULD NOT be used as a

+       relative name-component of an entry's DN [RFC2253]."

+ 

+ #. Can we keep a connection open to rebind? i.e. If we're performing

+    AUTH, do we have to open a new socket connection to perform a new

+    simple bind, or can we drop and bind again?

+ #. How do we treat unreachable referral servers?

+ 

+    #. As missing entries. This might cause cache issues with flaky

+       networks, as we always treat missing entries as definitive

+       deletion of the entry for our cache. I believe this is how things

+       are handled now with the openldap internal referral chasing, but I

+       need to research this.

+    #. Any unreachable referral server results in SSSD going offline.

+       This is potentially chaotic, as it introduces multiple points of

+       failure resulting in offline operation.

+    #. Flag unreachable entries as "complete", thereby having SSSD rely

+       on their presence or absence in the cache. While this sounds nice

+       in theory, I think this would probably be very difficult to get

+       right, especially with enumeration. I recommend deferring this as

+       a future optimization and going with one of the other approaches

+       (or possibly make the other approaches into an sssd.conf option).

+ 

+ #. How do we handle nested referrals?

+ 

+    -  Option: Handle all referrals at a particular depth before

+       descending further. This can help avoid attempts to create

+       duplicate sdap\_id\_ops. The downside to this approach is that

+       situations where entries are coming from multiple servers will

+       only ever function as quickly as the slowest server in the set.

+    -  Option: Track nestings as additional subreq levels. Add careful

+       sdap\_id\_op acquisition locking and proceed into nestings as

+       quickly as they are available. This is more complicated to get

+       right, but probably will provide a noticeable gain in complex

+       setups.

+ 

+ Stuff to Test

+ ~~~~~~~~~~~~~

+ 

+ #. Entry referrals

+ 

+    #. Same server different DN

+    #. Different server same DN

+    #. Different server different DN

+ 

+ #. Subtree referrals

+ 

+    #. Same server different DN

+    #. Different server different DN

+ 

+ #. Referral on bind attempt (referred AUTH)

+ #. Referred password change

@@ -0,0 +1,182 @@ 

+ Config file validation

+ ======================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2269 <https://pagure.io/SSSD/sssd/issue/2269>`__

+ -  `https://pagure.io/SSSD/sssd/issue/133 <https://pagure.io/SSSD/sssd/issue/133>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ Typos in option names are not detected. If conflicting options are used

+ or required options are missing, SSSD should produce an easy to understand

+ error/debug message so that administrators can fix the problem more easily.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Application developers that use libini from Ding libs will be able to

+ specify constraints that the configuration must respect. These

+ constraints will be written using the INI format in the form of rules. Each rule

+ will use one validator. The validator can be internal (provided by

+ libini) or external (provided by applications). The rules will be

+ written in a separate file and their usage will be optional. Validators

+ will generate errors in the form of strings that should be readable for

+ users. Applications that use libini will then be able to write these

+ strings somewhere appropriate (for example log files or stderr).

+ 

+ Format of rules

+ ~~~~~~~~~~~~~~~

+ 

+ ::

+ 

+     [rule/NAME]

+     validator = validator_name

+     validator_specific_parameter1 = ...

+     validator_specific_parameter2 = ...

+     .

+     .

+     .

+     validator_specific_parameterN = ...

+ 

+ Each rule needs to specify the validator that will be used; other parameters

+ depend on the validator. Some validators may not require any additional

+ parameters.

+ 

+ Using the rules

+ ~~~~~~~~~~~~~~~

+ 

+ The rules are used from applications in the following way:

+ 

+ #. rules are loaded from file using the function

+    ``int ini_read_rules_from_file(const char *filename, struct ini_cfgobj **_rules_obj);``

+ #. rules and configuration are passed as the first two parameters to the

+    function ::

+ 

+        int ini_rules_check(struct ini_cfgobj *rules_obj,

+                               struct ini_cfgobj *config_obj,

+                               struct ini_validator *extra_validators,

+                               int num_extra_validators,

+                               struct ini_errobj *errobj);

+ 

+ The last parameter is a special structure used to hold all errors

+ generated by the validators. libini will provide API to create, destroy,

+ read errors from and write errors into this structure. The

+ extra\_validators and num\_extra\_validators parameters are used to specify external

+ validators (see section 'External validators' below).

+ 

+ Internal validators

+ ~~~~~~~~~~~~~~~~~~~

+ 

+ Internal validators will be simple validators that may be used by

+ projects outside SSSD. More complicated and application specific

+ validators will be written as external validators. The first two internal

+ validators will be ini\_allowed\_options and ini\_allowed\_sections.

+ 

+ Validator ini\_allowed\_options

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Example: ::

+ 

+     [rule/allowed_options_for_section_foo]

+     validator = ini_allowed_options

+     section_re = ^foo$

+     option = foo

+     option = bar

+     option = baz

+ 

+ The rule above uses the ini\_allowed\_options validator and enumerates

+ all allowed options for sections with names that match the regular

+ expression ``^foo$``. The options allowed here are foo, bar and baz.

+ A config file like this: ::

+ 

+     [foo]

+     bar = 1

+     baz = 1

+     foo = 1

+ 

+ will generate no errors, because all options in section foo are allowed.

+ Config file ::

+ 

+     [foo]

+     baaaar = 1

+     baz = 1

+     foo = 1

+ 

+ will result in errors being generated because there is an unknown option

+ baaaar used in section foo.

+ 

+ The ini\_allowed\_options validator controls only sections that match

+ regular expression specified in section\_re. Other sections are ignored

+ by the validator.
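To illustrate, the ini\_allowed\_options semantics described above can be re-implemented in a few lines of Python. The real validator is C code in libini; this is only a model using a simplified rule representation:

```python
import configparser
import re

# Model of the ini_allowed_options validator: for each section matching
# section_re, flag any option outside the allowed list; other sections
# are ignored.
def ini_allowed_options(rule, config):
    """rule: {'section_re': str, 'options': [str]} -> list of error strings."""
    errors = []
    section_re = re.compile(rule["section_re"])
    for section in config.sections():
        if not section_re.search(section):
            continue                       # other sections are ignored
        for option in config[section]:
            if option not in rule["options"]:
                errors.append(
                    f"Unknown option '{option}' in section [{section}]")
    return errors

cfg = configparser.ConfigParser()
cfg.read_string("[foo]\nbaaaar = 1\nbaz = 1\nfoo = 1\n")
rule = {"section_re": "^foo$", "options": ["foo", "bar", "baz"]}
print(ini_allowed_options(rule, cfg))
```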

+ 

+ Validator ini\_allowed\_sections

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ This validator is used to enumerate all allowed sections. The format is

+ as follows: ::

+ 

+     [rule/enumerate_sections]

+     validator = ini_allowed_sections

+     section_re = regex1

+     section_re = regex2

+     .

+     .

+     .

+ 

+     section_re = regexN

+ 

+ The validator will generate an error if the config file contains a section

+ that is not matched by any of the regular expressions specified by one

+ of the section\_re parameters.

+ 

+ External validators

+ ~~~~~~~~~~~~~~~~~~~

+ 

+ External validators are specified using the following structure: ::

+ 

+     struct ini_validator {

+          const char *name;

+          ini_validator_func *func;

+     };

+ 

+ The name attribute is a string that is used inside rules in the validator

+ parameter. The func attribute is a pointer to a function of type

+ ini\_validator\_func, which is defined using a typedef as follows: ::

+ 

+     typedef int (ini_validator_func)(const char *rule_name,

+                                      struct ini_cfgobj *rules_obj,

+                                      struct ini_cfgobj *config_obj,

+                                      struct ini_errobj *errobj);

+ 

+ This function has the following parameters:

+ 

+ -  rule\_name - the name of the rule that uses this validator (for

+    example "rule/myrule")

+ -  rules\_obj - the config object with all the rules

+ -  config\_obj - the config object with the actual configuration

+    that is being checked by the rules

+ -  errobj - the ini\_errobj structure used to propagate errors

+ 

+ Users of libini can specify an array of struct ini\_validator structures

+ and pass it to the ini\_rules\_check() function. After this, they can be

+ used in the same way as internal validators.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ In order to take advantage of this feature in SSSD, a constraint file

+ will have to be created.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ Unit tests in Ding libs. Integration and unit tests for SSSD.

+ 

+ Authors

+ ~~~~~~~

+ 

+ Michal Židek `mzidek@redhat.com <mailto:mzidek@redhat.com>`__

@@ -0,0 +1,62 @@ 

+ Supporting Local Users as members of LDAP Groups for RFC 2307 servers

+ ---------------------------------------------------------------------

+ 

+ Related Tickets:

+ 

+ -  `SSSD does not list local user's group membership defined in

+    LDAP <https://pagure.io/SSSD/sssd/issue/1020>`__

+ 

+ Problem Statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ SSSD has been built around the concept of self-contained Identity

+ Domains. Because of this all users of a domain must be present in the

+ domain itself to be available as members of the domain groups.

+ 

+ Historically, identity providers like nss\_ldap have allowed including

+ local users in remote LDAP servers that use the RFC2307 (not bis)

+ schema. With that schema group members are identified by the simple user

+ name. So if a user by the same name happened to exist on the local

+ workstation, the LDAP group would end up being assigned to the user

+ during operations like initgroups.

+ 

+ This is technically a violation of the Identity domain and works mostly

+ by accident. However, in order to keep compatibility with existing

+ deployments, it has been requested to allow sssd to honor initgroups

+ requests for local users that happen to be referenced in RFC2307 LDAP

+ servers.

+ 

+ Solution

+ ~~~~~~~~

+ 

+ New Option

+ ^^^^^^^^^^

+ 

+ We introduce a new boolean option named

+ ldap\_rfc2307\_fallback\_to\_local\_users. This option enables or

+ disables the compatibility behavior. The option is set to 'false' by

+ default.

+ 

+ Behavior

+ ^^^^^^^^

+ 

+ When the above option is enabled, the LDAP provider will perform

+ additional local lookups for users only if the schema in use is RFC2307.

+ A simple getpwnam() or getpwuid() call is performed when looking up

+ users if the LDAP server returns no entry. If a local user by the

+ same name or id exists, it is stored in the cache as if it were an LDAP

+ user. The same is done for initgroups calls.

+ 

+ Details

+ ^^^^^^^

+ 

+ Calls like initgroups will no longer fail if the user is not found in

+ LDAP, as they normally would, and the groups this user 'belongs to' are

+ returned. The groups returned are the ones found in LDAP that have this

+ user's name in the memberUid attribute.

+ 

+ By default, SSSD backends disable recursion from nsswitch calls into SSSD

+ itself. It is therefore safe to call functions like getpwnam() or

+ getpwuid() from within a backend. These functions will not enter the nss

+ client and will return all users from any other backend listed in

+ nsswitch.conf for the 'passwd' database.
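The fallback behaviour can be sketched in Python using the standard ``getpwnam()`` lookup. This is illustrative only; ``ldap_search`` is a hypothetical stand-in for the LDAP user search:

```python
import pwd

# Illustrative sketch (not SSSD code) of the fallback: try LDAP first,
# and only when the server returns no entry and the compatibility option
# is enabled, fall back to the local passwd database via getpwnam().
def lookup_user(name, ldap_search, fallback_to_local=True):
    entry = ldap_search(name)          # stand-in for the LDAP user search
    if entry is not None:
        return entry
    if not fallback_to_local:          # ldap_rfc2307_fallback_to_local_users
        return None
    try:
        local = pwd.getpwnam(name)
    except KeyError:
        return None                    # user exists neither in LDAP nor locally
    return {"name": local.pw_name, "uid": local.pw_uid, "local": True}

# root exists locally but not in our (empty) simulated LDAP.
print(lookup_user("root", lambda n: None))
```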

@@ -0,0 +1,209 @@ 

+ Lookup Users by Certificate

+ ===========================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2596 <https://pagure.io/SSSD/sssd/issue/2596>`__

+ -  `https://pagure.io/SSSD/sssd/issue/546 <https://pagure.io/SSSD/sssd/issue/546>`__

+ -  `https://pagure.io/freeipa/issue/4238 <https://pagure.io/freeipa/issue/4238>`__

+    (design page:

+    `http://www.freeipa.org/page/V4/User\_Certificates <http://www.freeipa.org/page/V4/User_Certificates>`__)

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ As stated in ticket

+ `#2596 <https://pagure.io/SSSD/sssd/issue/2596>`__, applications doing

+ user authentication based on certificates, e.g. web servers, need a way

+ to map the certificate presented by the client to a specific user.

+ Although there are various ways to derive a user name from special

+ entries in the certificate so far there is no generally accepted scheme.

+ The most general and in some cases the only possible way is to look up

+ the certificate directly in the LDAP server. This requires that the

+ certificate is stored in the LDAP server, which we will assume for this

+ initial design. (In a second part, user lookups based on the certificate

+ content will be added; this requires that the syntax for the mapping is

+ specified in

+ `http://www.freeipa.org/page/V4/User\_Certificates#Certificate\_Identity\_Mapping <http://www.freeipa.org/page/V4/User_Certificates#Certificate_Identity_Mapping>`__)

+ 

+ The primary interface to look up users by certificate would be D-Bus.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ The primary use case is described in ticket

+ `#2596 <https://pagure.io/SSSD/sssd/issue/2596>`__. If Apache is

+ configured to use certificate based client authentication, modules like

+ mod\_lookup\_identity have access to the PEM encoded certificate via

+ environment variables. With this data as input, mod\_lookup\_identity

+ should call a D-Bus method like

+ *org.freedesktop.sssd.infopipe.GetUserAttrByCert* which will return the

+ data of the user the certificate belongs to, similar to the

+ *GetUserAttr* method.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Besides adding the D-Bus method to the InfoPipe responder the generic

+ LDAP backend should be able to search and read the certificate data if

+ available from an LDAP server and store it in the cache. The internal

+ sysdb interface must be extended to search cached entries with the

+ certificate as input.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ LDAP backend

+ ^^^^^^^^^^^^

+ 

+ Reading certificate data if available just requires adding a new user

+ attribute which will be requested during LDAP searches for a user. In

+ general the certificate is stored as a DER encoded binary on the LDAP

+ server. **Question: should we add an option like

+ ldap\_user\_cert\_encoding to support other encodings a server might

+ send to us, or shall we add it only when there is a real use case?**

+ Internally the certificate should be stored DER encoded in the cache as

+ well, because this encoding is the least ambiguous (e.g. with

+ PEM encoding it is not clear if the base64 blob should have line breaks

+ or not, if the enclosing '-----BEGIN CERTIFICATE-----' and '-----END

+ CERTIFICATE-----' markers should be stored as well, and if line breaks

+ should be added there or not).

+ 

+ To search for a user with the help of the certificate, the DER encoded

+ binary certificate must be transformed into a search filter. In this case

+ it would be something like 'userCertificate=\\23\\a5\\3e......' where each

+ byte from the certificate is represented by a hex value prepended by a

+ '\\'. The filter should be generated in a subroutine which accepts the

+ DER encoded certificate with base64 ascii armor and returns the search

+ filter. This way the subroutine can later be extended to accept

+ configuration options for the identity mapping and can return different

+ search filters for those cases. Since the requirements for LDAP and sysdb

+ search filters are the same, there should be an option indicating whether

+ an LDAP or a sysdb filter is needed, because the attribute names might be

+ different.
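+ A minimal sketch of such a subroutine, in Python for illustration (the

+ real implementation would live in SSSD's C code, and the function name

+ here is made up):

```python
import base64

def cert_to_ldap_filter(b64_der, attr="userCertificate"):
    """Build a search filter matching a binary certificate value.

    b64_der is the DER certificate with base64 ascii armor (no PEM
    headers); each raw byte is escaped as a '\\' plus two hex digits.
    The attribute name can be swapped out for sysdb searches.
    """
    raw = base64.b64decode(b64_der)
    escaped = "".join("\\%02x" % b for b in raw)
    return "(%s=%s)" % (attr, escaped)
```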

+ 

+ Although it would be possible to handle the binary DER data directly, I

+ think using a base64 ascii armor to handle the data as a string is

+ useful to avoid adding code for handling binaries e.g. in the S-BUS

+ requests to the backends.

+ 

+ SYSDB API

+ ^^^^^^^^^

+ 

+ A new call sysdb\_search\_user\_by\_cert() should be added which gets the

+ DER encoded certificate with base64 ascii armor as input and uses the

+ function described above to get a proper search filter. Currently this

+ will only be the search filter for the binary certificate. Other than

+ that, the new call will act like the other sysdb\_search\_user\_by\_\*()

+ calls.

+ 

+ InfoPipe

+ ^^^^^^^^

+ 

+ A new method GetUserAttrByCert() must be implemented which expects the

+ PEM encoded certificate and an array of attribute names. **Question:

+ Should we only support PEM here or other formats as well? In that case

+ we need a third parameter indicating the encoding of the certificate

+ data.**

+ InfoPipe will convert the certificate into DER encoding with base64 ascii

+ armor, search the cache and, if needed, forward the request to the backend.

+ The request to the backend is processed similar to a request by name,

+ only that a new filter name, e.g. DP\_CERT\_ID "cert", is needed.
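+ The conversion from PEM to DER with base64 ascii armor amounts to

+ stripping the armor lines; a minimal Python sketch (a real

+ implementation should use a crypto library and validate the input):

```python
def pem_to_b64_der(pem):
    """Reduce a PEM certificate to its raw base64 payload, i.e. the
    DER data with base64 ascii armor used internally by the cache."""
    body = []
    for line in pem.splitlines():
        line = line.strip()
        # Drop the '-----BEGIN/END CERTIFICATE-----' markers and
        # keep only the base64 payload lines.
        if line and not line.startswith("-----"):
            body.append(line)
    return "".join(body)
```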

+ 

+ Since it is in general not obvious to which domain a certificate

+ belongs, the search must iterate over all domains until a matching

+ certificate is found. For the cases where there is a strong 1:1

+ relationship between the issuer of a certificate and a domain,

+ configuration options for this can be added later.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ A new user attribute option 'ldap\_user\_certificate' will be added to the

+ LDAP provider. By default only the IPA provider will set a value for it

+ to avoid reading about 1k of data which is not needed in the other

+ providers. **Question: Does this make sense or shall we enable it for

+ other providers as well?**

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ First a certificate must be loaded into an IPA user entry; it can be any

+ kind of certificate as long as it is valid and DER or PEM encoded. Until

+ IPA has some import utilities, ldapmodify should be used. An LDIF file

+ might look like this: ::

+ 

+     dn: uid=cert_user,cn=users,cn=accounts,dc=ipa,dc=devel

+     changetype: modify

+     add: userCertificate;binary

+     userCertificate;binary::MII...=

+ 

+ where MII...= indicates the base64 encoded certificate data. If you have

+ a PEM encoded certificate you can just use the base64 part here. If the

+ certificate is DER encoded it can be transformed to base64 with ::

+ 

+     base64 < ./certificate_file.der | tr -d '\n'

+ 

+ Testing can be done with the help of the dbus-send utility: ::

+ 

+     # dbus-send --system --print-reply  --dest=org.freedesktop.sssd.infopipe \

+                                              /org/freedesktop/sssd/infopipe/Users \

+                                              org.freedesktop.sssd.infopipe.Users.FindByCertificate \

+                                              string:"-----BEGIN CERTIFICATE-----.......-----END CERTIFICATE-----"

+     method return sender=:1.1479 -> dest=:1.1498 reply_serial=2

+        object path "/org/freedesktop/sssd/infopipe/Users/ipa_2edevel/240600004"

+ 

+     # dbus-send --system --print-reply --dest=org.freedesktop.sssd.infopipe /org/freedesktop/sssd/infopipe/Users/ipa_2edevel/240600004 org.freedesktop.DBus.Properties.Get string:"org.freedesktop.sssd.infopipe.Users.User" string:"name"

+     method return sender=:1.1479 -> dest=:1.1529 reply_serial=2

+        variant       string "cert_user"

+ 

+     # dbus-send --system --print-reply --dest=org.freedesktop.sssd.infopipe /org/freedesktop/sssd/infopipe/Users/ipa_2edevel/240600004 org.freedesktop.DBus.Properties.GetAll string:"org.freedesktop.sssd.infopipe.Users.User"

+     method return sender=:1.1479 -> dest=:1.1530 reply_serial=2

+        array [

+           dict entry(

+              string "name"

+              variant             string "cert_user"

+           )

+           dict entry(

+              string "uidNumber"

+              variant             uint32 240600004

+           )

+           dict entry(

+              string "gidNumber"

+              variant             uint32 240600004

+           )

+           dict entry(

+              string "gecos"

+              variant             string "ipa u1"

+           )

+           dict entry(

+              string "homeDirectory"

+              variant             string "/home/cert_user"

+           )

+           dict entry(

+              string "loginShell"

+              variant             string "/bin/sh"

+           )

+           dict entry(

+              string "groups"

+              variant             array [

+                    object path "/org/freedesktop/sssd/infopipe/Groups/ipa_2edevel/240600004"

+                    object path "/org/freedesktop/sssd/infopipe/Groups/ipa_2edevel/240600005"

+                    object path "/org/freedesktop/sssd/infopipe/Groups/ipa_2edevel/240600006"

+                 ]

+           )

+           dict entry(

+              string "extraAttributes"

+              variant             array [

+                 ]

+           )

+        ]

+ 

+ The first dbus-send command shows the lookup by certificate, the

+ following two just illustrate how a single property or all can be

+ requested from the returned object path.

+ 

+ Authors

+ ~~~~~~~

+ 

+ -  Sumit Bose <`sbose@redhat.com <mailto:sbose@redhat.com>`__>

@@ -0,0 +1,138 @@ 

+ Lookup Users by Certificate - Active Directory

+ ==============================================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2897 <https://pagure.io/SSSD/sssd/issue/2897>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ So far the main focus of the certificate and Smartcard

+ authentication support in SSSD has been on FreeIPA. Although it is possible

+ to use it with the AD provider as well (see

+ `SmartcardAuthenticationTestingWithAD <https://docs.pagure.org/SSSD.sssd/design_pages/smartcard-authentication-testing-with-ad.html>`__

+ for details) it requires some manual configuration.

+ 

+ On this page we describe the enhanced support for certificates in AD and

+ in override data for the direct (AD provider) and indirect (IPA with

+ trust to AD) integration.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ Apache

+ ^^^^^^

+ 

+ Apache uses *mod\_lookup\_identity* to look up a user who authenticated

+ with a certificate, using that certificate as the lookup key.

+ Previously, without additional configuration, only IPA users were

+ supported. Now users from AD which have the certificate stored in the

+ user entry are supported as well, for both direct and indirect

+ integration. Additionally, certificates can be stored in local overrides

+ for the direct integration and in IPA server-side overrides for the

+ indirect integration.

+ 

+ Smartcard authentication

+ ^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ If the certificate of the user is stored in the user's entry in AD or in

+ an IPA or local override, the user can authenticate with a Smartcard which

+ holds the certificate and the matching private key.

+ 

+ Since both use cases rely on the same common code, only the user lookup is

+ discussed later on because it is easier to test and validate.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ General

+ ^^^^^^^

+ 

+ The common override lookup code must be enhanced to allow lookups by

+ certificates as well.

+ 

+ AD provider (direct integration)

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ To support the direct integration

+ 

+ -  the attribute containing the certificate must be read by default

+ -  sss\_override must be enhanced to store certificates in local

+    overrides as well

+ 

+ IPA provider (indirect integration)

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ To support the indirect integration

+ 

+ -  the IPA override lookup code must be enhanced to read certificate

+    overrides from the server and store them in the cache

+ -  the IPA client code to look up AD users via the extdom plugin must be

+    enhanced to allow lookups by certificates

+ 

+ Support for the IPA extdom plugin

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Currently it is only possible to look up users by certificate with the

+ InfoPipe, which uses D-Bus. To avoid adding a D-Bus requirement to the

+ extdom plugin and the directory server, a call similar to

+ sss\_nss\_getnamebysid() should be added to allow easy lookups by

+ certificate via the NSS responder.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Most of the changes are related to adding the new attribute to the

+ various lookup requests.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ For the AD provider the currently unset option *ldap\_user\_certificate*

+ will be set to *userCertificate;binary*. This means that if a

+ certificate is available in the user entry, it will be downloaded and

+ written to the cache by default. To avoid this, *ldap\_user\_certificate*

+ must be set to a non-existing attribute name like e.g. ::

+ 

+     ldap_user_certificate = nonExistingAttributeName

+ 

+ The *sss\_override user-add* utility has a new option *--certificate*

+ (*-x*) which expects the base64-encoded certificate as an argument.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ Testing can be done with *dbus-send* as described in

+ `LookupUsersByCertificate <https://docs.pagure.org/SSSD.sssd/design_pages/lookup_users_by_certificate.html#how-to-test>`__.

+ Instead of storing the certificate in the user object of an IPA user it

+ should now be stored in the user object of an AD user as e.g. described

+ in

+ `WritingthecertificatetoAD <https://docs.pagure.org/SSSD.sssd/design_pages/smartcard_authentication_testing_with_ad.html#writing-the-certificate-to-AD>`__.

+ Additionally certificates overrides can be written with the

+ *sss\_override* utility for the direct integration or the *ipa

+ idoverrideuser\_add\_cert* command for the indirect integration.

+ 

+ If multiple certificates are added, it should be noted that a user may

+ have multiple different certificates, but a single certificate should

+ only be assigned to a single user. If a certificate is assigned to

+ multiple users, no matter whether in the user object or in the override,

+ the lookup will fail sooner or later.
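+ This uniqueness constraint can be checked up front; a small Python

+ sketch over a hypothetical mapping of user names to their

+ base64-encoded certificates:

```python
from collections import defaultdict

def duplicate_cert_assignments(user_certs):
    """Return certificates assigned to more than one user, mapped to
    the set of users sharing them.  Any non-empty result indicates a
    configuration that will break certificate lookups."""
    owners = defaultdict(set)
    for user, certs in user_certs.items():
        for cert in certs:
            owners[cert].add(user)
    return {c: u for c, u in owners.items() if len(u) > 1}
```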

+ 

+ For the indirect integration the different lookups should be tested

+ independently on the IPA master and an IPA client because different code

+ paths are used since SSSD is running in the ipa-server-mode on the

+ master.

+ 

+ How To Debug

+ ~~~~~~~~~~~~

+ 

+ Explain how to debug this feature if something goes wrong. This section

+ might include examples of additional commands the user might run (such

+ as keytab or certificate sanity checks) or explain what message to look

+ for.

+ 

+ Authors

+ ~~~~~~~

+ 

+ -  Sumit Bose <`sbose@redhat.com <mailto:sbose@redhat.com>`__>

@@ -0,0 +1,115 @@ 

+ Proposal to redesign the memberOf plugin

+ ----------------------------------------

+ 

+ Let us start with the following setup:

+ 

+ .. FIXME: This page is missing a image representing nestedgroups

+ 

+ ::

+ 

+     dn: name=Group A, cn=Groups, cn=default, cn=sysdb

+     objectClass: group

+     member: name=Group D, cn=Groups, cn=default, cn=sysdb

+     member: name=User 1, cn=Users, cn=default, cn=sysdb

+     member: name=User 2, cn=Users, cn=default, cn=sysdb

+     member: name=User 3, cn=Users, cn=default, cn=sysdb

+     member: name=User 4, cn=Users, cn=default, cn=sysdb

+     member: name=User 5, cn=Users, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+ 

+     dn: name=Group B, cn=Groups, cn=default, cn=sysdb

+     objectClass: group

+     member: name=Group D, cn=Groups, cn=default, cn=sysdb

+     member: name=User 1, cn=Users, cn=default, cn=sysdb

+     member: name=User 2, cn=Users, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+ 

+     dn: name=Group C, cn=Groups, cn=default, cn=sysdb

+     objectClass: group

+     member: name=Group A, cn=Groups, cn=default, cn=sysdb

+     member: name=Group B, cn=Groups, cn=default, cn=sysdb

+     member: name=Group F, cn=Groups, cn=default, cn=sysdb

+     member: name=User 3, cn=Users, cn=default, cn=sysdb

+ 

+     dn: name=Group D, cn=Groups, cn=default, cn=sysdb

+     objectClass: group

+     member: name=User 4, cn=Users, cn=default, cn=sysdb

+     memberOf: name=Group A, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group B, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+ 

+     dn: name=Group E, cn=Groups, cn=default, cn=sysdb

+     objectClass: group

+     member: name=User 5, cn=Users, cn=default, cn=sysdb

+     memberOf: name=Group B, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group F, cn=Groups, cn=default, cn=sysdb

+ 

+     dn: name=Group F, cn=Groups, cn=default, cn=sysdb

+     objectClass: group

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+ 

+ 

+     dn: name=User 1, cn=Users, cn=default, cn=sysdb

+     objectClass: user

+     memberOf: name=Group A, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group B, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+ 

+     dn: name=User 2, cn=Users, cn=default, cn=sysdb

+     objectClass: user

+     memberOf: name=Group A, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group B, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+ 

+     dn: name=User 3, cn=Users, cn=default, cn=sysdb

+     objectClass: user

+     memberOf: name=Group A, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+ 

+     dn: name=User 4, cn=Users, cn=default, cn=sysdb

+     objectClass: user

+     memberOf: name=Group A, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group B, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group D, cn=Groups, cn=default, cn=sysdb

+ 

+     dn: name=User 5, cn=Users, cn=default, cn=sysdb

+     objectClass: user

+     memberOf: name=Group A, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group B, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group E, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group F, cn=Groups, cn=default, cn=sysdb

+ 

+ Actions

+ -------

+ 

+ Add new member to a group with no parents

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ We send an ldb message to add "User 4" to "Group C"

+ 

+ #. Check whether the member attribute matches the DN of Group C (it does

+    not)

+ #. Examine "Group C" for memberOf attributes.

+ #. No memberOf attributes exist

+ #. Add memberOf(Group C) to "User 4"

+ 

+ Add new member to a group with parents

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ We send an ldb message to add "User 5" to "Group B"

+ 

+ #. Check whether the member attribute matches the DN of Group B (it does

+    not)

+ #. Examine "Group B" for memberOf attributes.

+ #. "Group B" has memberOf attributes: "Group C"

+ #. Check whether any of these memberOf values match "User 5" (none do)

+ #. Add memberOf(Group B) and memberOf(Group C) to "User 5" and return
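+ In both actions, the memberOf values to add are the target group plus

+ all of its transitive parents. A Python sketch over a hypothetical map

+ from each group to its own memberOf set:

```python
def memberof_to_add(parents, group):
    """Return the memberOf values a new direct member of 'group'
    should receive: the group itself plus all transitive parents.

    'parents' maps a group to the set of groups it is memberOf."""
    result, stack = set(), [group]
    while stack:
        g = stack.pop()
        if g in result:
            continue  # already visited; also guards against loops
        result.add(g)
        stack.extend(parents.get(g, ()))
    return result
```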

+ 

+ .. Add new group to a group with no parents (no loops)

+ .. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ .. 

+ .. .. |image0| image:: https://fedorahosted.org/sssd/raw-attachment/wiki/DesignDocs/MemberOfv2/nestedgroups.png

+ ..    :target: https://fedorahosted.org/sssd/attachment/wiki/DesignDocs/MemberOfv2/nestedgroups.png

@@ -0,0 +1,81 @@ 

+ Proposal to redesign the memberOf plugin

+ ----------------------------------------

+ 

+ Let us start with the following setup:

+ 

+ ::

+ 

+     dn: name=Group A, cn=Groups, cn=default, cn=sysdb

+     objectClass: group

+     member: name=Group D, cn=Groups, cn=default, cn=sysdb

+     member: name=User 1, cn=Users, cn=default, cn=sysdb

+     member: name=User 2, cn=Users, cn=default, cn=sysdb

+     member: name=User 3, cn=Users, cn=default, cn=sysdb

+     member: name=User 4, cn=Users, cn=default, cn=sysdb

+     member: name=User 5, cn=Users, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+ 

+     dn: name=Group B, cn=Groups, cn=default, cn=sysdb

+     objectClass: group

+     member: name=Group D, cn=Groups, cn=default, cn=sysdb

+     member: name=User 1, cn=Users, cn=default, cn=sysdb

+     member: name=User 2, cn=Users, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+ 

+     dn: name=Group C, cn=Groups, cn=default, cn=sysdb

+     objectClass: group

+     member: name=Group A, cn=Groups, cn=default, cn=sysdb

+     member: name=Group B, cn=Groups, cn=default, cn=sysdb

+     member: name=Group F, cn=Groups, cn=default, cn=sysdb

+     member: name=User 3, cn=Users, cn=default, cn=sysdb

+ 

+     dn: name=Group D, cn=Groups, cn=default, cn=sysdb

+     objectClass: group

+     member: name=User 4, cn=Users, cn=default, cn=sysdb

+     memberOf: name=Group A, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group B, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+ 

+     dn: name=Group E, cn=Groups, cn=default, cn=sysdb

+     objectClass: group

+     member: name=User 5, cn=Users, cn=default, cn=sysdb

+     memberOf: name=Group B, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group F, cn=Groups, cn=default, cn=sysdb

+ 

+     dn: name=Group F, cn=Groups, cn=default, cn=sysdb

+     objectClass: group

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+ 

+ 

+     dn: name=User 1, cn=Users, cn=default, cn=sysdb

+     objectClass: user

+     memberOf: name=Group A, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group B, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+ 

+     dn: name=User 2, cn=Users, cn=default, cn=sysdb

+     objectClass: user

+     memberOf: name=Group A, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group B, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+ 

+     dn: name=User 3, cn=Users, cn=default, cn=sysdb

+     objectClass: user

+     memberOf: name=Group A, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+ 

+     dn: name=User 4, cn=Users, cn=default, cn=sysdb

+     objectClass: user

+     memberOf: name=Group A, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group B, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group D, cn=Groups, cn=default, cn=sysdb

+ 

+     dn: name=User 5, cn=Users, cn=default, cn=sysdb

+     objectClass: user

+     memberOf: name=Group A, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group B, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group C, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group E, cn=Groups, cn=default, cn=sysdb

+     memberOf: name=Group F, cn=Groups, cn=default, cn=sysdb

@@ -0,0 +1,63 @@ 

+ Purpose

+ -------

+ 

+ Some deployments use search bases to limit or extend the set of users

+ and groups visible to a system.

+ 

+ One common example is an application granting access only to users in

+ a hard-coded group. In this case, the group search base would

+ generally be set differently for each machine running this application.

+ Other machines running the same application providing access to other

+ users would receive a different "view" of LDAP through the use of search

+ bases.

+ 

+ Expected Behaviour

+ ------------------

+ 

+ Individual Lookups

+ ~~~~~~~~~~~~~~~~~~

+ 

+ For targeted lookups (e.g. ``getpwuid()``, ``getgrnam()``) we should try

+ each of the search bases in order until one of them returns the entry we

+ are looking for, or we have exhausted all of the search bases. Each

+ search will be performed with the search scope provided.

+ 

+ Enumeration

+ ~~~~~~~~~~~

+ 

+ For enumeration, we will need to iterate through ALL search bases to

+ retrieve users, groups, etc. For each search base, we need to examine

+ each entry retrieved and compare it against the entries received from

+ earlier search bases. If there are conflicts, we will discard the

+ conflicting value from the later search base. (Therefore the entry in

+ the earlier search base will always win.)

+ 

+ Implementation

+ --------------

+ 

+ We will extend the ``ldap_*_search_base`` options to support behavior

+ similar to that of ``nss_base_passwd`` and ``nss_base_group`` from

+ nss-ldapd.

+ 

+ The standard search base (``ldap_search_base``) will be left alone as a

+ single value with scope "subtree".

+ 

+ The new ``ldap_*_search_base`` options will include a new delimiter,

+ '``?``'. If this is present, we will divide the string up into triples

+ as follows: ::

+ 

+     search_base?scope?filter[?search_base?scope?filter...]

+ 

+ Parsing

+ ~~~~~~~

+ 

+ We will split the input string on the '?' delimiter. If the resulting

+ array has exactly one element, or a multiple of three, we will continue.

+ Otherwise it will fail validation.

+ 

+ The scope must be one of 'subtree', 'onelevel' or 'base'

+ (case-insensitive).

+ 

+ The filter will be optional and may be a zero-length string. The filter

+ must be pre-sanitized and must pass filter validation with

+ ``ldb_parse_tree()``.
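+ The parsing rules above can be sketched in Python (illustrative only;

+ the real parser is part of SSSD's C option handling, and the filter

+ validation with ``ldb_parse_tree()`` is omitted):

```python
def parse_search_base(value):
    """Split an ldap_*_search_base value into (base, scope, filter)
    triples.  A plain value without '?' becomes a single subtree
    search with an empty filter; otherwise the number of components
    must be a multiple of three."""
    parts = value.split("?")
    if len(parts) == 1:
        return [(parts[0], "subtree", "")]
    if len(parts) % 3 != 0:
        raise ValueError("expected base?scope?filter triples")
    triples = []
    for i in range(0, len(parts), 3):
        base, scope, flt = parts[i], parts[i + 1].lower(), parts[i + 2]
        if scope not in ("subtree", "onelevel", "base"):
            raise ValueError("invalid scope: %s" % scope)
        triples.append((base, scope, flt))
    return triples
```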

@@ -0,0 +1,177 @@ 

+ Netgroups

+ ---------

+ 

+ Overview of Netgroups

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ Netgroups define network-wide groups used for permission checking when

+ fielding requests for remote mounts, remote logins, and remote shells.

+ For remote mounts, the information in netgroups is used to classify

+ machines; for remote logins and remote shells, it is used to classify

+ users.

+ 

+ Netgroups have a name, and contain one or more of the following members:

+ 

+ -  The name of another netgroup (supporting nested netgroups)

+ -  A three-tuple of (hostname,username,domainname) (parentheses

+    included)

+ 

+ Overview of Netgroups in Name-Service Switch

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The interface and behavior of netgroups in libc is a multi-step

+ procedural interface as follows:

+ 

+ #. The user calls ``setnetgrent(netgroupname)``

+ 

+    -  This sets an internal, global iterator to the start of the list of

+       members for the netgroup specified by netgroupname

+ 

+ #. The user calls ``getnetgrent()`` repeatedly until it returns failure

+ 

+    -  This returns one (hostname, username, domainname) triple for each

+       call, until there are no more associated with the netgroupname

+ 

+ #. The user calls ``endnetgrent()``

+ 

+    -  This cleans up after itself

+ 

+ Internally, libraries providing netgroups in libc must unroll the nested

+ netgroups so that all results are returned by ``getnetgrent()`` without

+ additional explicit calls.

+ 

+ Overview of Netgroups in LDAP

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Netgroups in LDAP are entries containing the objectClass

+ ``nisNetgroup``. This objectClass specifies two options:

+ 

+ nisNetgroupTriple

+     A netgroup triple, specified as a literal string. So it would be

+     ``(hostname,username,domainname)``

+ memberNisNetgroup

+     The name of another netgroup whose contents need to be rolled into

+     this entry.

+ 

+ Complete example (taken from

+ `http://directory.fedoraproject.org/wiki/Howto:Netgroups <http://directory.fedoraproject.org/wiki/Howto:Netgroups>`__):

+ 

+ ::

+ 

+     dn: cn=LinuxTeam,ou=Netgroup,dc=example,dc=com

+     objectClass: nisNetgroup

+     objectClass: top

+     cn: LinuxTeam

+     nisNetgroupTriple: (,frank,example.com)

+     nisNetgroupTriple: (,jill,example.com)

+     memberNisNetgroup: QA

+     memberNisNetgroup: Development

+     memberNisNetgroup: Operations

+     description: The Linux Team

+ 

+ SSSD

+ ----

+ 

+ Overview of approach

+ ~~~~~~~~~~~~~~~~~~~~

+ 

+ Netgroups will be processed similarly to how we handle enumerations in

+ SSSD.

+ 

+ High level

+ ^^^^^^^^^^

+ 

+ #. When a ``setnetgrent()`` request arrives, we will first check the LDB

+    cache and then we will go to the backends to update the cache.

+ #. Once the cache is readied, we will then construct a result object

+    that we can iterate through to return the result set.

+ #. Once the result object is ready, we will reply to the

+    ``setnetgrent()`` request to notify the calling application that it

+    can start calling ``getnetgrent()``

+ #. The calling application will issue ``getnetgrent()`` calls until

+    there are no more members available.

+ #. The calling application will call ``endnetgrent()``

+ 

+ Lower-level - setnetgrent

+ ^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ #. Incoming requests to the SSSD will behave similarly to the user and

+    group enumeration code, except that the individual result objects for

+    different netgroup names will be stored in a hash table keyed on the

+    netgroup name.

+ #. During processing, if a netgroup contains nested netgroups, we will

+    need to issue a recursive internal ``setnetgrent()`` request. This

+    means we will need to have a nesting limit (and ideally,

+    loop-detection)

+ #. The response object must contain the complete unrolled results of all

+    of its child netgroups, so that we do not need to maintain multiple

+    iterators for reading through the children.

+ #. The acknowledgement response to the initial ``setnetgrent()`` request

+    will need to happen only after all nested netgroups have been cached.

+ 

+ Handling nested netgroups

+ ^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ During ``setnetgrent()`` processing, we will convert the results into a

+ collection object (see libcollection). For each nested group, we will

+ recurse into ``setnetgrent()`` and create a new collection object that

+ can be added to the parent collection. In this way, we will be able to

+ unroll the groups easily.

+ 

+ Later, in ``getnetgrent()`` processing, we will construct the response

+ from the stored collection object, rather than directly from the

+ ldb\_result object as we do with user and group enumerations.

+ 

+ Public interfaces:

+ 

+ ::

+ 

+     struct tevent_req *setnetgrent_send(char *netgroupname, hash_table_t *nesting)

+ 

+ ::

+ 

+     errno_t setnetgrent_recv(struct tevent_req *req, struct collection **entries)

+ 

+ Internally, the processing for ``setnetgrent_send()`` is expected to

+ recurse into nested netgroups and add the resulting ``entries`` to its

+ own list using the ``col_add_collection_to_collection()`` interface with

+ the ``col_add_mode_clone`` mode.

+ 

+ Tracking nesting limits

+ ^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The biggest danger in nesting is the risk of loops in the memberships.

+ To resolve this, I propose that we keep track of subrequests in a dhash

+ table. This would behave as follows:

+ 

+ #. In ``setnetgrent_send()`` we would first check whether the

+    hash\_count of the hash table is equal to the nesting limit. If it

+    is, we will return completion immediately.

+ #. Next we will check whether netgroupname already exists in the hash

+    table. If it does, then we know we have looped and will simply return

+    completion immediately.

+ #. At this point, we will add the current netgroup name to the hash

+    table (with a NULL associated value) and continue processing this

+    request.

+ #. In ``setnetgrent_recv()`` we will remove the requested netgroupname

+    from the hash table and amend the result collection.

+ 

+ This will allow us to protect against both loops and excessive nesting

+ all at once.
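+ The combined protection can be sketched as follows (Python for

+ illustration; ``lookup`` is a hypothetical stand-in for the backend

+ request, and the ``seen`` set models the dhash table of in-flight

+ requests):

```python
def unroll_netgroup(name, lookup, seen=None, limit=10):
    """Flatten a netgroup into its (host, user, domain) triples,
    guarding against membership loops and excessive nesting.

    lookup(name) returns (triples, nested_names)."""
    if seen is None:
        seen = set()
    if name in seen or len(seen) >= limit:
        return []  # loop detected or nesting limit reached
    seen.add(name)
    triples, nested = lookup(name)
    result = list(triples)
    for child in nested:
        result.extend(unroll_netgroup(child, lookup, seen, limit))
    seen.discard(name)  # mirrors the removal in setnetgrent_recv()
    return result
```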

+ 

+ Dangling Questions

+ ------------------

+ 

+ #. Is it permissible for a single client to request multiple different

+    netgroups concurrently?

+ 

+    -  My reading of the documentation for [set\|get\|end]netgrent leads

+       me to believe that this is not permitted by libc.

+ 

+ #. Maybe this is too low-level at this time, but is a cleanup task

+    planned?

+ 

+    -  Netgroups should be handled in the same way that users and groups

+       are handled, so I will probably have to extend the existing

+       cleanup task to also address the netgroups entries in the cache -

+       sgallagh

@@ -0,0 +1,547 @@ 

+ Running SSSD as a non-root user

+ ===============================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2370 <https://pagure.io/SSSD/sssd/issue/2370>`__

+ 

+ Problem statement

+ -----------------

+ 

+ Currently, all SSSD processes run as the root user. However, if one of

+ the processes was compromised, this might lead to compromising the whole

+ system, especially if additional measures like SELinux were not enabled.

+ It would improve security if instead SSSD was running as its own private

+ user. This design page summarizes what would be needed to run sssd as a

+ non-privileged user and all the cases that currently require a root

+ user.

+ 

+ Use case

+ --------

+ 

+ This is a general use-case, following the principle of least privilege.

+ The processes should not run as root unless they really need the root

+ privileges.

+ 

+ Implementation details

+ ----------------------

+ 

+ At a higher level, the changes would amount to:

+ 

+ -  A new system user would be created. This user must be added in

+    sssd.spec during the ``%pre`` section.

+ -  Files that were used by sssd and previously owned by root should now

+    be owned by the sssd user. This includes the LDB databases.

+ -  Responders and back ends would drop privileges and become the sssd

+    user as soon as possible, ideally as the first action after startup.

+ -  Short-lived processes that are spawned by ``sssd_be`` but might still

+    require elevated privileges would be setuid root.

+ 

+ The changes to individual binaries and files are described in more

+ detail below. After the changes are implemented, the code that runs as

+ root will be reduced to the monitor process and the setuid helpers.

+ 

+ A new system user

+ -----------------

+ 

+ SSSD will run as a new system user called simply ``sssd``. We do not

+ need to have the UID fixed across systems as no files owned by SSSD are

+ shared among different systems. The user will be simply added during the

+ ``%pre`` phase: ::

+ 

+     %pre

+     getent group sssd >/dev/null || groupadd -r sssd

+     getent passwd sssd >/dev/null || useradd -r -g sssd -d / -s /sbin/nologin -c "User for sssd" sssd

+ 

+ As is common practice for system users, the shell will be

+ ``/sbin/nologin`` so the user cannot log in into the system.

+ 

+ Configuration options

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ To be on the safe side, sssd will allow configuring the user to run as.

+ This option will also accept root, so that users can keep the old

+ behaviour around in case they hit a bug with the unprivileged process.

+ As a first step, we will include these options, but leave the default as

+ 'root'. When we're certain the non-root sssd works for most users as a

+ non-privileged user, we will switch the default to the sssd user.
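
+ 

+ As a sketch, a minimal sssd.conf using the proposed option could look as

+ follows (the option name ``user`` matches the "Configuration changes"

+ section of this page; the domain name is just an example): ::

+ 

+     [sssd]

+     # Run the SSSD processes as this user; 'root' keeps the old behaviour.

+     user = sssd

+     services = nss, pam

+     domains = example.com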

+ 

+ Dropping privileges of the SSSD processes

+ -----------------------------------------

+ 

+ The goal is for the "worker" processes (that is, both responders and

+ providers) to drop the root privileges as soon as possible - typically

+ right after startup, or alternatively after completing any work that

+ requires root privileges such as opening a file. Because the processes

+ might have to keep the root privileges after startup, the monitor

+ process would still be running as root.
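
+ 

+ Such a privilege drop could be sketched as follows. This is a simplified

+ stand-in for the helper SSSD would use, not its actual implementation;

+ note that the supplementary groups and the GID must be changed while

+ still root, and the UID last, because setuid() is irrevocable for an

+ unprivileged process: ::

+ 

+     #include <errno.h>

+     #include <grp.h>

+     #include <stdio.h>

+     #include <sys/types.h>

+     #include <unistd.h>

+ 

+     static int drop_privileges(uid_t uid, gid_t gid)

+     {

+         if (getuid() == uid && getgid() == gid) {

+             return 0;   /* already running as the target user */

+         }

+         if (setgroups(1, &gid) != 0) return errno;  /* extra groups first */

+         if (setgid(gid) != 0) return errno;         /* GID before UID */

+         if (setuid(uid) != 0) return errno;         /* irrevocable drop */

+         return 0;

+     }

+ 

+     int main(void)

+     {

+         /* When not started as root, this is a no-op for the current user. */

+         int ret = drop_privileges(getuid(), getgid());

+ 

+         printf("drop_privileges returned %d\n", ret);

+         return ret;

+     }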

+ 

+ Monitor (sssd)

+ ~~~~~~~~~~~~~~

+ 

+ The monitor process would keep running as root. This is in order to be

+ able to fork and exec processes that are initially privileged without

+ making them all setuid. As a future enhancement, the process management

+ functionality of the monitor will be delegated to systemd (see ticket

+ `#2243 <https://pagure.io/SSSD/sssd/issue/2243>`__).

+ 

+ Responders

+ ~~~~~~~~~~

+ 

+ The responder processes are by nature 'readers' that mostly read data

+ from cache and request cache updates from the back end processes.

+ 

+ NSS responder

+ ^^^^^^^^^^^^^

+ 

+ The NSS responder can drop privileges after startup. The files that the

+ NSS responder reads (sysdb, confdb, NSS pipe) and writes (memory cache,

+ debug logs, NSS pipe) will be owned by the sssd user.

+ 

+ PAM responder

+ ^^^^^^^^^^^^^

+ 

+ The PAM responder can drop privileges after startup. The files that the

+ PAM responder reads (sysdb, confdb, PAM public pipe) and writes (debug

+ logs, PAM pipe) will be owned by the sssd user.

+ 

+ In order to keep the privileged pipe only owned by the root user, we

+ would open the pipe prior to becoming user and pass the file descriptor.

+ 

+ InfoPipe responder

+ ^^^^^^^^^^^^^^^^^^

+ 

+ The InfoPipe responder can drop privileges after startup. The files that

+ the InfoPipe responder reads (sysdb, confdb) and writes (debug logs, PAM

+ pipe) will be owned by the sssd user.

+ 

+ Contrary to other responders, the InfoPipe responder doesn't have a

+ public pipe. The InfoPipe responder also binds to the system bus, so we

+ must also convert the bus policy file to allow the sssd user to bind to

+ the bus.

+ 

+ Currently there is also functionality to modify sssd.conf from the

+ InfoPipe. During the feature design review, it was suggested that a

+ configuration interface doesn't belong to the InfoPipe code at all and

+ should be moved to a separate helper. Until that is done, the InfoPipe

+ responder must keep running as root. The work on splitting the InfoPipe

+ is tracked by

+ `https://pagure.io/SSSD/sssd/issue/2395 <https://pagure.io/SSSD/sssd/issue/2395>`__

+ 

+ Autofs, SUDO and SSH responders

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The Autofs, SUDO and SSH responders only read from the sysdb, confdb and

+ their respective UNIX public pipes. These responders also only write to

+ the debug logs and the public pipe, all of which would be owned by the

+ sssd user. This means the Autofs, SUDO and SSH responders can drop

+ privileges right after startup.

+ 

+ Providers

+ ~~~~~~~~~

+ 

+ The providers are dynamically loadable libraries that are loaded by the

+ ``sssd_be`` process. After startup, the sssd\_be process dlopens the

+ provider library and dlsyms the handlers. During sssd operation, the

+ ``sssd_be`` process mostly unpacks requests arriving on the SBUS and

+ calls the provider-specific handlers.

+ 

+ Several areas of the initialization still require elevated privileges:

+ 

+ -  Checking for principals in the keytab

+ -  Checking for user TGTs to be renewed

+ 

+ Therefore, the initialization is still performed with root privileges

+ and sssd\_be drops to a non-root user post initialization. See the

+ "Future Enhancements" section for ideas on reducing the code that runs

+ as root during initialization even further.

+ 

+ Short-lived processes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ The purpose of the short-lived processes is to avoid blocking calls by

+ performing an otherwise blocking action in a completely separate

+ process.

+ 

+ ldap\_child

+ ^^^^^^^^^^^

+ 

+ The ldap\_child subprocess primes the credential cache used to establish

+ GSSAPI-encrypted connection. In order to do so, the ldap\_child process

+ needs to be able to read the keytab, which is readable by root only.

+ Therefore, the ldap\_child process is setuid root, with permissions set

+ to 4750 to make sure only the sssd user can run the ldap\_child process.

+ As soon as the credentials are obtained, the ldap\_child drops

+ privileges and continues running as the sssd user -- hence also the

+ resulting ccache is owned by the sssd user.
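
+ 

+ To illustrate what the 4750 mode means, the install-time commands (run

+ as root by the package scripts; the binary path is an assumption) are

+ shown as comments and the mode itself is demonstrated on a scratch

+ file: ::

+ 

+     # At install time, as root, the packaging would do something like:

+     #   chown root:sssd /usr/libexec/sssd/ldap_child

+     #   chmod 4750      /usr/libexec/sssd/ldap_child

+     # The effect of mode 4750, shown on a scratch file:

+     f=$(mktemp)

+     chmod 4750 "$f"

+     stat -c '%A' "$f"   # -rwsr-x---: setuid bit set, no access for "other"

+     rm -f "$f"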

+ 

+ krb5\_child

+ ^^^^^^^^^^^

+ 

+ The user krb5\_child runs as depends on how the SSSD back end is set up.

+ In the simplest case, where neither validation nor FAST are used, the

+ krb5\_child can drop privileges to the user who is logging in after

+ startup and runs unprivileged except for the initialization part.

+ 

+ In case either validation or FAST are used, part of the krb5\_child runs

+ as root. Once the resulting ccache is validated using the keytab, the

+ krb5\_child process drops privileges to the user who is logging in.

+ 

+ See the "Future Enhancements" section for discussion of using the MEMORY

+ ccache to reduce the time krb5\_child runs as root.

+ 

+ proxy\_child

+ ^^^^^^^^^^^^

+ 

+ In general, we can't make assumptions about what the PAM module we wrap

+ using the proxy backend requires, so at least the part of proxy child

+ that runs the PAM conversation should run as root. During development,

+ we should consider splitting the proxy\_child into a small setuid helper

+ that would still run privileged and only wrap the PAM module and the

+ rest of the proxy\_child that would run unprivileged.

+ 

+ gpo\_child

+ ^^^^^^^^^^

+ 

+ The gpo\_child process connects to a SMB share, downloads a GPO policy

+ file and stores it locally, by default in ``/var/lib/sss/gpo_cache``.

+ The gpo\_child authenticates to the SMB share using Kerberos; the

+ ccache, as created by ldap\_child, is already accessible to the sssd

+ user. Since that directory would be owned by the sssd user, the

+ gpo\_child could run unprivileged.

+ 

+ ssh helpers

+ ^^^^^^^^^^^

+ 

+ The SSH helpers already run non-privileged. ``sss_ssh_knownhostsproxy``

+ runs as the user who initiated the SSH session.

+ ``sss_ssh_authorizedkeys`` runs as the user specified with the

+ AuthorizedKeysCommandUser directive in sshd\_config.

+ 

+ Command line tools

+ ~~~~~~~~~~~~~~~~~~

+ 

+ There are two general kinds of command line tools we ship with the SSSD

+ - tools that manage accounts in the local backend and SSSD management

+ tools. All tools currently check if they are executed by root. I think

+ this check makes sense and should stay because all the tools are intended

+ for administrative purposes only.

+ 

+ Some of the tools can be changed to drop privileges. However, the attack

+ surface of these tools is small, so changing them is not a priority.

+ This effort is tracked in the Future Enhancements section instead.

+ 

+ Local back end tools

+ ^^^^^^^^^^^^^^^^^^^^

+ 

+ The tools either write (sss\_useradd, sss\_userdel, sss\_usermod,

+ sss\_groupadd, sss\_groupdel, sss\_groupmod) or read (sss\_groupshow)

+ the sssd.ldb file.

+ But additionally, these tools also set the SELinux context of the user.

+ Since there is no capability to call semanage, setting the context still

+ requires root privileges.

+ 

+ sss\_seed and sss\_cache

+ ^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ These two tools function similarly to the local backend management

+ tools, except they manipulate the domain cache. The cache is also owned

+ and writable by the sssd user, so it would be safe to drop privileges here,

+ too.

+ 

+ sss\_debuglevel

+ ^^^^^^^^^^^^^^^

+ 

+ The sss\_debuglevel tool changes the debug level of sssd on the fly. The

+ tool writes new debug level values to the confdb (owned by sssd) and

+ touches sssd.conf (ownership tbd). The tool can drop privileges to sssd

+ after startup.

+ 

+ sss\_obfuscate

+ ^^^^^^^^^^^^^^

+ 

+ The sss\_obfuscate tool is written in Python and manipulates the

+ sssd.conf file by obfuscating the input and using it as a value of the

+ ``ldap_default_authtok`` configuration option. For dropping privileges

+ of the sss\_obfuscate tool, we can use the python bindings of libcap-ng.

+ Again, making this tool non-privileged is not a priority.

+ 

+ External resources currently requiring root access

+ --------------------------------------------------

+ 

+ This part of the design page summarizes which external resources,

+ typically file system objects, currently require SSSD to have elevated

+ privileges.

+ 

+ For filesystem objects, we can either change their owner to the sssd

+ local user, add an ACL or open them as the privileged process and pass

+ the file descriptor.
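
+ 

+ The descriptor-passing option can be sketched with SCM\_RIGHTS ancillary

+ data. In this self-contained example both "processes" live in one

+ program connected by socketpair(), and a pipe end stands in for a

+ root-only file: ::

+ 

+     #include <assert.h>

+     #include <stdio.h>

+     #include <string.h>

+     #include <sys/socket.h>

+     #include <sys/uio.h>

+     #include <unistd.h>

+ 

+     /* Send one open descriptor over a UNIX socket as SCM_RIGHTS data. */

+     static int send_fd(int sock, int fd)

+     {

+         char dummy = 'F';

+         struct iovec iov = { &dummy, 1 };

+         union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } u;

+         struct msghdr msg = { 0 };

+         struct cmsghdr *cmsg;

+ 

+         msg.msg_iov = &iov;

+         msg.msg_iovlen = 1;

+         msg.msg_control = u.buf;

+         msg.msg_controllen = sizeof(u.buf);

+         cmsg = CMSG_FIRSTHDR(&msg);

+         cmsg->cmsg_level = SOL_SOCKET;

+         cmsg->cmsg_type = SCM_RIGHTS;

+         cmsg->cmsg_len = CMSG_LEN(sizeof(int));

+         memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

+         return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;

+     }

+ 

+     /* Receive a descriptor sent by send_fd(); returns -1 on failure. */

+     static int recv_fd(int sock)

+     {

+         char dummy;

+         struct iovec iov = { &dummy, 1 };

+         union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } u;

+         struct msghdr msg = { 0 };

+         struct cmsghdr *cmsg;

+         int fd = -1;

+ 

+         msg.msg_iov = &iov;

+         msg.msg_iovlen = 1;

+         msg.msg_control = u.buf;

+         msg.msg_controllen = sizeof(u.buf);

+         if (recvmsg(sock, &msg, 0) != 1) return -1;

+         cmsg = CMSG_FIRSTHDR(&msg);

+         if (cmsg != NULL && cmsg->cmsg_type == SCM_RIGHTS) {

+             memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));

+         }

+         return fd;

+     }

+ 

+     int main(void)

+     {

+         int sv[2], p[2], received;

+         char buf[6] = { 0 };

+ 

+         assert(socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == 0);

+         assert(pipe(p) == 0);               /* stands in for a root-only file */

+         assert(send_fd(sv[0], p[0]) == 0);  /* "privileged" side */

+         received = recv_fd(sv[1]);          /* "unprivileged" side */

+         assert(received >= 0);

+         assert(write(p[1], "hello", 5) == 5);

+         assert(read(received, buf, 5) == 5);

+         printf("read through passed fd: %s\n", buf);

+         return 0;

+     }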

+ 

+ SSSD configuration file

+ ~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ -  Filesystem path: ``/etc/sssd/sssd.conf``

+ -  Current owner and permissions: root.root 0600

+ -  Read by: The monitor process

+ -  Written to by: The InfoPipe responder and users of the configAPI,

+    such as sss\_obfuscate or authconfig

+ -  *Change: Currently the permissions will stay the same as the monitor

+    process and the InfoPipe still run as root*

+ 

+ Debug logs

+ ~~~~~~~~~~

+ 

+ -  Filesystem path: ``/var/log/sssd/*.log``

+ -  Current owner and permissions: root.root 0600

+ -  Read by: N/A, only externally by admin

+ -  Written to by: monitor, providers, responders, child processes

+ -  *New owner and permissions: sssd.sssd 0600*

+ 

+ The configuration database

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ -  Filesystem path: ``/var/lib/sss/db/config.ldb``

+ -  Current owner and permissions: root.root 0600

+ -  Read by: responders, providers, monitor, command-line tools

+ -  Written to by: The monitor process, sssd-ad (a single confdb\_set

+    call), sss\_debuglevel, sssd\_ifp

+ -  *New owner and permissions: sssd.sssd 0600*

+ 

+ The on-disk cache (sysdb)

+ ~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ -  Filesystem path: ``/var/lib/sss/db/cache_$domain.ldb``

+ -  Current owner and permissions: root.root 0600

+ -  Read by: responders, providers, command-line tools

+ -  Written to by: sssd\_be, the CLI tools

+ -  *New owner and permissions: sssd.sssd 0600*

+ 

+ Memory Cache

+ ~~~~~~~~~~~~

+ 

+ -  Filesystem path: ``/var/lib/sss/mc/{passwd,group}``

+ -  Current owner and permissions: root.root 0644

+ -  Read by: The SSS NSS module

+ -  Written to by: The NSS responder

+ -  *New owner: sssd.sssd, permissions will stay the same*

+ 

+ Kerberos keytab

+ ~~~~~~~~~~~~~~~

+ 

+ -  Filesystem path: configurable, ``/etc/krb5.keytab`` by default

+ -  Current owner and permissions: root.root 0600

+ -  Read by: LDAP, KRB5, IPA, AD providers, krb5\_child, ldap\_child

+ -  Written to by: sssd\_be, the CLI tools

+ -  *Change: No change at the moment. The keytab will be kept readable by

+    the root user only*

+ 

+ Kerberos user credential cache

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ -  Filesystem path: Configurable, only if FILE or DIR based cache is

+    used, which is not the default anymore

+ -  Current owner and permissions: the user who logged in, 0600

+ -  Read by: KRB5, AD, IPA, krb5\_child, libkrb5 externally

+ -  Written to by: krb5\_child

+ -  *Change: No change, the credential cache will still be written as the

+    user in question*

+ 

+ Kerberos LDAP credential cache

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ -  Filesystem path: ``/var/lib/sss/db/ccache_$domain``

+ -  Current owner and permissions: root.root 0600

+ -  Read by: AD, IPA and LDAP providers (coded up in LDAP provider tree)

+ -  Written to by: ldap\_child

+ -  No change needed since ldap\_child will run as the sssd user in the

+    new design

+ -  *New owner and permissions: sssd.sssd 0600*

+ 

+ Kerberos kdcinfo files

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ -  Filesystem path: ``/var/lib/sss/pubconf/*``

+ -  Current owner and permissions: root.root. The directory has

+    permissions of 0755, the files 0644

+ -  Read by: libkrb5

+ -  Written to by: LDAP, KRB5, IPA, AD providers, krb5\_child,

+    ldap\_child

+ -  *New owner and permissions: Both directory and files will be owned by

+    sssd.sssd, the permissions will stay the same*

+ 

+ SELinux user mappings

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ -  Filesystem path: ``/etc/selinux/targeted/logins``

+ -  Current owner and permissions: root.root. The directory has

+    permissions of 0755, the files 0644

+ -  Read by: pam\_selinux

+ -  Written to by: IPA provider

+ -  *Change: libsemanage will be used to set the labels instead. Since

+    setting the label is a privileged operation and sssd\_be runs

+    unprivileged, setting the label was moved to a separate child

+    process, selinux\_child*

+ 

+ UNIX pipes

+ ~~~~~~~~~~

+ 

+ -  Filesystem path: ``/var/lib/sss/pipes/``

+ -  Current owner and permissions: root.root. The directory has

+    permissions of 0755, the files 0666. There is one pipe per responder.

+ -  Read by: client modules, all responders except InfoPipe

+ -  Written to by: client modules, responders

+ -  *New owner and permissions: Both directory and files will be owned by

+    sssd.sssd, the permissions will stay the same*

+ 

+ UNIX PAM private pipe

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ -  Filesystem path: ``/var/lib/sss/pipes/private/pam``

+ -  Current owner and permissions: root.root. The directory has

+    permissions of 0700, the files 0600. Only the PAM responder uses the

+    private pipe.

+ -  Read by: PAM responder

+ -  Written to by: PAM client module

+ -  *New owner and permissions: The directory will be owned by sssd.sssd,

+    the file will stay the same*

+ 

+ Data Provider private pipes

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ -  Filesystem path: ``/var/lib/sss/pipes/private/sbus-dp_$domain.$PID``

+ -  Current owner and permissions: root.root. The directory has

+    permissions of 0700, the files 0600.

+ -  Read by: Responders

+ -  Written to by: Data Provider

+ -  *New owner and permissions: Both directory and files will be owned by

+    sssd.sssd, the permissions will stay the same*

+ 

+ Kerberos configuration file

+ ---------------------------

+ 

+ -  Filesystem path: ``/etc/krb5.conf``

+ -  Read by: libkrb5

+ -  Written to by: The IPA and AD providers "touch" the file in order to

+    make libkrb5 re-read it

+ -  *Change: The file can be opened before dropping privileges and we can

+    keep the fd around. Alternatively, the modification can be performed

+    with a setuid helper*
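
+ 

+ The keep-the-descriptor approach can be sketched as follows; a temporary

+ file stands in for ``/etc/krb5.conf``, which the real process would open

+ while still running as root: ::

+ 

+     #include <assert.h>

+     #include <stdio.h>

+     #include <stdlib.h>

+     #include <sys/stat.h>

+     #include <unistd.h>

+ 

+     int main(void)

+     {

+         char path[] = "/tmp/krb5.conf.XXXXXX";

+         int fd = mkstemp(path);

+         struct stat st;

+ 

+         assert(fd >= 0);

+ 

+         /* ... privileges are dropped here; the descriptor stays valid ... */

+ 

+         /* "touch" the file through the kept descriptor so that libkrb5

+          * re-reads it: set both timestamps to the current time. */

+         assert(futimens(fd, NULL) == 0);

+         assert(fstat(fd, &st) == 0);

+         printf("timestamps updated\n");

+ 

+         close(fd);

+         unlink(path);

+         return 0;

+     }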

+ 

+ Configuration changes

+ ---------------------

+ 

+ There is a new option called ``user`` that allows the administrator to

+ configure the user sssd runs as. Please note that it makes sense to only

+ use either root or the user sssd was configured with.

+ 

+ How to test

+ -----------

+ 

+ Test ordinary SSSD operations. Everything must work as it used to

+ before. Pay special attention to operations that involve the short-lived

+ processes, like GSSAPI LDAP provider authentication or Kerberos user

+ authentication.

+ 

+ Upgrade testing must be performed as well.

+ 

+ Future Enhancements

+ -------------------

+ 

+ During the design or implementation, we identified several ideas for

+ improvement. Even though we don't need to implement these now, it makes

+ sense to keep the description in this design page for future reference.

+ 

+ Allow the InfoPipe responder to run as sssd user

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ In 1.12.3, sssd.conf is still owned by root, mostly because there are a

+ number of programs like authconfig that generate sssd.conf as root.

+ Moreover, in enterprise setups, the sssd.conf would be pushed to the

+ client with a tool such as puppet that would still use the same

+ privileges.

+ 

+ Therefore, even rootless sssd needs to handle sssd.conf owned by root,

+ at least for the time being. We can even chown the file to the sssd user

+ after startup or move the write operation in

+ `InfoPipe <https://docs.pagure.org/sssd-test2/InfoPipe.html>`__ to a

+ privileged helper.

+ 

+ -  Tracked by:

+    `https://pagure.io/SSSD/sssd/issue/2395 <https://pagure.io/SSSD/sssd/issue/2395>`__

+ 

+ Allow the command line tools to run unprivileged

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Some command line tools can be run unprivileged - see the section called

+ "Command Line Tools". However, changing them is not a priority as they

+ are short-lived and in general only accept switches, not free-form

+ input.

+ 

+ -  Tracked by:

+    `https://pagure.io/SSSD/sssd/issue/2500 <https://pagure.io/SSSD/sssd/issue/2500>`__

+ 

+ Splitting the back end initialization into privileged and unprivileged part

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ It was proposed on the sssd-devel list that the initialization of the

+ sssd\_be process be split into a privileged and a non-privileged function.

+ The back end would open all providers, call the privileged

+ initialization functions and then drop privileges. Currently all

+ initialization is done as root, which is not strictly required in many

+ setups.

+ 

+ -  Tracked by:

+    `https://pagure.io/SSSD/sssd/issue/2504 <https://pagure.io/SSSD/sssd/issue/2504>`__

+ 

+ Using the Kerberos MEMORY cache to further restrict running Kerberos helpers as root

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Sumit proposed that the keytab is read into a MEMORY type after child

+ process startup so krb5\_child and ldap\_child can drop root privileges

+ sooner. There are even some proof-of-concept patches `on sssd-devel

+ <https://lists.fedorahosted.org/archives/list/sssd-devel@lists.fedorahosted.org/message/XREVGCOZ4OP4VM337M5TUQYHUUPS54HH/>`__

+ 

+ -  Tracked by:

+    `https://pagure.io/SSSD/sssd/issue/2503 <https://pagure.io/SSSD/sssd/issue/2503>`__

+ 

+ Using libcap-ng to drop the privileges

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Once we need to not only drop privileges but also retain some capability

+ (CAP\_AUDIT comes to mind), we'll need to use something like

+ `libcap-ng <https://people.redhat.com/sgrubb/libcap-ng/>`__ instead of

+ handling capabilities ourselves with prctl.

+ 

+ The downside is obviously the extra dependency, but libcap-ng has a

+ small footprint and is already used by packages that are present on

+ most, if not all, modern Linux installations, such as dbus.

+ 

+ We would keep the existing code around as a fallback for environments

+ that don't have the libcap-ng library available, such as non-Linux

+ systems or embedded systems. Because the code wouldn't be enabled by

+ default, it's important to have unit tests for the privilege drop. For

+ unit testing both options (libcap-ng and our own code),

+ `uid\_wrapper <http://cwrap.org/uid_wrapper.html>`__ and

+ `nss\_wrapper <http://cwrap.org/nss_wrapper.html>`__ are the best

+ choice.

+ 

+ -  Tracked by:

+    `https://pagure.io/SSSD/sssd/issue/2482 <https://pagure.io/SSSD/sssd/issue/2482>`__

+ 

+ Merge the ldap\_child and krb5\_child processes

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ During design review, it was also proposed to look into merging the

+ ldap\_child and krb5\_child, as the code performs similar tasks. The new

+ krb5\_child would act as an ldap\_child based on a command line option

+ value.

+ 

+ -  Tracked by:

+    `https://pagure.io/SSSD/sssd/issue/2502 <https://pagure.io/SSSD/sssd/issue/2502>`__

+ 

+ Authors

+ -------

+ 

+ -  Sumit Bose <`sbose@redhat.com <mailto:sbose@redhat.com>`__>

+ -  Jakub Hrozek <`jhrozek@redhat.com <mailto:jhrozek@redhat.com>`__>

+ -  Simo Sorce <`simo@redhat.com <mailto:simo@redhat.com>`__>

@@ -0,0 +1,232 @@ 

+ ID Mapping calls for the NSS responder

+ --------------------------------------

+ 

+ Related tickets:

+ 

+ -  `RFE Integrate SSSD with CIFS

+    client <https://pagure.io/SSSD/sssd/issue/1534>`__

+ -  `RFE Use the Global Catalog in SSSD for the AD

+    provider <https://pagure.io/SSSD/sssd/issue/1557>`__

+ -  `RFE Use the getpwnam()/getgrnam() interface as a gateway to resolve

+    SID to Names <https://pagure.io/SSSD/sssd/issue/1559>`__

+ 

+ Related design documents:

+ 

+ -  `Integrate SSSD with CIFS

+    Client <https://docs.pagure.org/SSSD.sssd/design_pages/integrate_sssd_with_cifs_client.html>`__

+ -  `Global Catalog Lookups in

+    SSSD <https://docs.pagure.org/SSSD.sssd/design_pages/global_catalog_lookups.html>`__

+ 

+ Problem Statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ When SSSD is used in environments with AD, either as a member of the AD

+ domain or as a member of a trusted IPA domain, it has to map users and

+ groups managed by AD to a POSIX ID. The AD user and groups are

+ identified by

+ 

+ -  a name which may change

+ -  a unique SID which never changes, i.e. new SID == new object

+ 

+ Applications interacting tightly with users and groups from AD domains,

+ e.g.

+ 

+ -  samba

+ -  cifs-client

+ -  FreeIPA

+ 

+ need to know which SID relates to which POSIX ID or name.

+ 

+ Mapping a SID to a user or group would be possible with the current

+ interfaces as described in ticket

+ `#1559 <https://pagure.io/SSSD/sssd/issue/1559>`__. But getting a SID

+ for a user or a group would be at least hard and ugly if not impossible.

+ I think the solution proposed in ticket

+ `#1559 <https://pagure.io/SSSD/sssd/issue/1559>`__ is a good and

+ useful shortcut. But by making the \*bySID lookups also available via

+ new calls, applications will have a more reliable interface, e.g. by

+ allowing more specific error codes (see details below).

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The NSS responder will be extended with four new calls: ::

+ 

+     SSS_NSS_GETSIDBYNAME = 0x0111, /**< Takes a zero terminated fully qualified name

+                                         and returns the zero terminated string representation

+                                         of the SID of the object with the given name.

+                                     */

+     SSS_NSS_GETSIDBYID   = 0x0112, /**< Takes an unsigned 32bit integer (POSIX ID)

+                                         and returns the zero terminated string representation

+                                         of the SID of the object with the given ID.

+                                     */

+     SSS_NSS_GETNAMEBYSID = 0x0113, /**< Takes the zero terminated string representation

+                                         of a SID and returns the zero terminated fully

+                                         qualified name of the related object.

+                                     */

+     SSS_NSS_GETIDBYSID   = 0x0114, /**< Takes the zero terminated string representation

+                                         of a SID and returns the POSIX ID of

+                                         the related object as unsigned 32bit integer value

+                                         and another unsigned 32bit integer value indicating

+                                         the type (unknown, user, group, both) of the object.

+                                     */

+ 

+ Alternatively SSS\_NSS\_GETSIDBYID and SSS\_NSS\_GETIDBYSID could be

+ implemented to take and return arrays of SIDs and POSIX IDs, respectively, as

+ the related cifs\_idmap API calls (see `Integrate SSSD with CIFS

+ Client <https://docs.pagure.org/SSSD.sssd/design_pages/integrate_sssd_with_cifs_client.html>`__).

+ I took the approach mentioned above because it better matches the other

+ NSS responder calls and additionally I do not like the implicit required

+ ordering in the input and output array.

+ 

+ After receiving the request the NSS responder will first check if it can

+ create the response from cached data. If not the request is forwarded to

+ the providers. For the \*byName and \*byID calls the corresponding

+ existing interface can be used. For the \*bySID call a new call must be

+ added to the providers. Currently only the IPA and AD provider will

+ support the new calls. If the provider cannot handle the requests, it

+ will return an appropriate error code which is relayed to the client via

+ the NSS responder.

+ 

+ Additionally on the provider side it must be ensured that the string

+ representation of the SID is saved in the cache. It looks like this is

+ already the case for the AD provider. But I think this is currently not

+ the case for the IPA provider for both IPA and trusted users. Also the

+ PAC responder should add the SID to the cache object.

+ 

+ The sid2name extended operation on the FreeIPA server should get a new

+ request type and corresponding response types so that the SID is

+ returned with the user and group data. (A ticket for this should be

+ opened for FreeIPA if this design is approved.)

+ 

+ C-API

+ ^^^^^

+ 

+ The C-API for those calls will be made available in a new library

+ libsss\_nss\_idmap (other names are welcome). Like the other client

+ libraries this library will just offer an easy way to send the requests

+ to SSSD and receive the responses, all processing is done by SSSD not by

+ the library. In contrast to libnss\_sss loaded via glibc into client

+ programs the new library can be linked with other programs. The new

+ calls will be declared in a header sss\_nss\_idmap.h and can have

+ reasonable return values to make error detection and reporting easier for

+ the clients using the new API.

+ 

+ ::

+ 

+     /** 

+      * @brief Find SID by fully qualified name

+      *

+      * @param[in] fq_name Fully qualified name of a user or a group

+      * @param[out] sid    String representation of the SID of the requested user or group,

+                           must be freed by the caller

+      *

+      * @return

+      *  - 0 (EOK): success, sid contains the requested SID

+      *  - ENOENT: requested object was not found in the domain extracted from the given name

+      *  - ENETUNREACH: SSSD does not know how to handle the domain extracted from the given name

+      *  - ENOSYS: this call is not supported by the configured provider

+      *  - EINVAL: input cannot be parsed

+      *  - EIO: remote servers cannot be reached

+      *  - EFAULT: any other error 

+      */

+     int sss_nss_getsidbyname(const char *fq_name, char **sid);

+ 

+     /** 

+      * @brief Find SID by a POSIX UID or GID

+      *

+      * @param[in] id   POSIX UID or GID

+      * @param[out] sid String representation of the SID of the requested user or group,

+                        must be freed by the caller

+      *

+      * @return

+      *  - see #sss_nss_getsidbyname

+      */

+     int sss_nss_getsidbyid(uint32_t id, char **sid);

+ 

+     /** 

+      * @brief Return the fully qualified name for the given SID

+      *

+      * @param[in] sid      String representation of the SID

+      * @param[out] fq_name Fully qualified name of a user or a group,

+                            must be freed by the caller

+      *

+      * @return

+      *  - see #sss_nss_getsidbyname

+      */

+     int sss_nss_getnamebysid(const char *sid, char **fq_name);

+ 

+     /** 

+      * @brief Return the POSIX ID for the given SID

+      *

+      * @param[in] sid      String representation of the SID

+      * @param[out] id      POSIX ID related to the SID

+      * @param[out] id_type Type of the object related to the ID

+      *

+      * @return

+      *  - see #sss_nss_getsidbyname

+      */

+     int sss_nss_getidbysid(const char *sid, uint32_t *id, enum id_type *id_type);

+ 

+ Currently I do not see a strong requirement to allow different kinds of

+ memory allocators (e.g. talloc).

+ 

+ I think it is not necessary to add a special set of return values/error

+ codes but the standard ones are sufficient. Maybe ENETUNREACH, which

+ indicates that SSSD could not find the right domain for the request,

+ could be replaced by a better one. (EDOM would be a candidate, but imo it should

+ be reserved for mathematical operations.)

+ 

+ Python bindings

+ ^^^^^^^^^^^^^^^

+ 

+ To allow easy usage of the new calls by the FreeIPA python framework,

+ python bindings would be useful.

+ 

+ Lookup utility

+ ^^^^^^^^^^^^^^

+ 

+ A small utility sss\_idmap (other names are welcome) will be added which

+ offers access to the new calls via libsss\_nss\_idmap. This utility will

+ make testing easier and might help users and administrators as well.

+ 

+ ::

+ 

+     # sss_idmap --help

+     Usage: sss_idmap [OPTION...]

+       -n, --name-to-sid=NAME      Converts name to sid

+       -s, --sid-to-name=SID       Converts sid to name

+       -S, --sid-to-id=SID         Converts sid to POSIX ID

+       -i, --id-to-sid=ID          Converts POSIX ID to sid

+ 

+     Help options:

+       -?, --help                  Show this help message

+       --usage                     Display brief usage message

+ 

+ The following diagram illustrates the data flow and the communication

+ between the different components (Dmitri Pal kindly provided the

+ diagram).

+ 

+ How to test

+ ~~~~~~~~~~~

+ 

+ The sss\_idmap utility or the python bindings can be used to create

+ tests, e.g.

+ 

+ ::

+ 

+     # sss_idmap --sid-to-name=S-1-5-21-111111-222222-333333-1234

+     DOM\user

+     # sss_idmap --name-to-sid=DOM\user

+     S-1-5-21-111111-222222-333333-1234

+     # sss_idmap --sid-to-name=abcdefg

+     Usage: sss_idmap .......

+     Invalid SID
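The "Invalid SID" rejection shown above can be sketched with a small textual check. This is an illustrative assumption about what such a utility might do before contacting SSSD; the regular expression below is not taken from the SSSD sources:

```python
import re

# Hypothetical sketch of a SID sanity check a tool like sss_idmap might
# perform: a SID string looks like "S-1-<authority>-<subauth>[-<subauth>...]".
SID_RE = re.compile(r"^S-1-\d+(-\d+)+$")

def looks_like_sid(value):
    """Return True if value has the textual form of a Windows SID."""
    return bool(SID_RE.match(value))
```

With the examples above, ``S-1-5-21-111111-222222-333333-1234`` passes the check while ``abcdefg`` is rejected before any request is sent.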

+ 

+ Author(s)

+ ~~~~~~~~~

+ 

+ Sumit Bose <`sbose@redhat.com <mailto:sbose@redhat.com>`__>

@@ -0,0 +1,188 @@ 

+ Allow Kerberos Principals in getpwnam() calls

+ =============================================

+ 

+ Related ticket(s):

+ 

+ -  `RFE Implement localauth plugin for MIT krb5

+    1.12 <https://pagure.io/SSSD/sssd/issue/1835>`__

+ -  `RFE Allow email-address in

+    ldap\_user\_principal <https://pagure.io/SSSD/sssd/issue/1749>`__

+ -  `RFE: Add a configuration option to specify where a snippet with

+    sssd\_krb5\_localauth\_plugin.so is

+    generated <https://pagure.io/SSSD/sssd/issue/2473>`__

+ 

+ Problem Statement

+ -----------------

+ 

+ When using Kerberos/GSSAPI authentication for a service running on a

+ Linux host, strictly speaking it is not a POSIX user of the Linux

+ system that is authenticated, but a Kerberos principal. I.e.

+ authentication is successful for every Kerberos principal with a valid

+ ticket for the service running on the Linux host. This is intentional,

+ to keep Kerberos independent of the operating system of the host. But

+ it creates the problem of mapping Kerberos principals to POSIX users.

+ 

+ Basic mappings are integrated in the MIT Kerberos library. By default

+ the realm part of the Kerberos principal is stripped off and what

+ remains is considered as a POSIX user name. The administrator can

+ enhance this by adding some minimal regular-expression operations in

+ /etc/krb5.conf. Additionally the user can create a .k5login file in

+ his home directory and add all Kerberos principals which should be

+ allowed to log in with his identity. None of those methods scale in

+ environments with multiple realms and cross-realm trust.
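The default mapping described above can be sketched as follows. This is an illustrative re-implementation, not the actual MIT krb5 code, and it deliberately ignores ``auth_to_local`` rules and ``.k5login`` handling:

```python
# Sketch of the default MIT krb5 principal-to-user mapping: strip the
# realm part of the principal and treat the remainder as a POSIX user
# name.
def default_principal_to_user(principal):
    name, sep, _realm = principal.rpartition("@")
    return name if sep else principal
```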

+ 

+ To allow more advanced mapping schemes a plugin interface was

+ introduced in MIT Kerberos version 1.12.

+ 

+ If the Kerberos principal is available SSSD will store it in its cache

+ in the related user object. The Kerberos principal can be retrieved by

+ looking it up in the central IdM system (LDAP/IPA/AD). If the Kerberos

+ principal is not available but Kerberos authentication is configured

+ SSSD will guess it by adding the configured realm or domain name to the

+ POSIX user name. If authentication is successful with this principal it

+ is stored in the cache as well.

+ 

+ A plugin which looks up the Kerberos principal in SSSD and gets the

+ POSIX user entry back would provide a reliable mapping and would scale

+ across multiple realms and trusts, because SSSD already supports such

+ setups.

+ 

+ Use case

+ --------

+ 

+ In Windows environments, the user typically logs in using his UPN.

+ Implementing this feature would reach parity with how Windows users are

+ used to log in.

+ 

+ Implementing the localauth plugin will not only help the feature of

+ looking up UPNs, but will also make it easier for administrators to

+ configure client machines in a trust scenario, as the mapping will be

+ done inside SSSD instead of via an ``auth_to_local`` rule in

+ ``krb5.conf`` or a ``.k5login`` file.

+ 

+ Overview of the Solution

+ ========================

+ 

+ Implementation details

+ ----------------------

+ 

+ Currently the NSS responder expects that the argument of the

+ getpwnam() call is a user name, either fully qualified or the short

+ version without a domain name. The name is evaluated with the help of

+ a regular expression and split into a user and a domain component. By

+ default the '@' character is one of the characters that separate the

+ user and the domain component in a fully qualified user name. This

+ collides with the usage of the '@' character in Kerberos principals,

+ where it is used to separate the user and the realm component.

+ 

+ One way to solve this is to introduce a special prefix tag, e.g.

+ ':princ:', to indicate that the remainder of the string should be

+ considered a Kerberos principal and not be split like a fully

+ qualified user name. While this would work for the localauth plugin

+ use-case, there are other potential use-cases where this would not be

+ possible. E.g. if SSSD should allow AD users to use their UPN (see

+ `http://technet.microsoft.com/en-us/library/cc739093(v=WS.10).aspx <http://technet.microsoft.com/en-us/library/cc739093(v=WS.10).aspx>`__

+ for details). This UPN is built by joining the user logon name and a

+ UPN suffix with an '@' character. AD users cannot be expected to add a

+ prefix to this name, and pam\_sss cannot do it either because it does

+ not know whether the input is a fully qualified name or a UPN.

+ 

+ Especially with the second use-case in mind, we should process the

+ argument of getpwnam() like a fully qualified user name first. If no

+ matching user was found during this pass, SSSD can take the original

+ input, check if it contains an '@' character and search the configured

+ backends for a matching Kerberos principal or UPN. It has to be noted

+ that in theory there might be a user with the fully qualified name

+ ``abc@domain.name`` and the Kerberos principal ``def@domain.name`` and

+ a user with the fully qualified name ``def@domain.name`` and the

+ Kerberos principal ``abc@domain.name``. In this case

+ getpwnam("``abc@domain.name``") will always return the entry for the

+ user with the fully qualified name ``abc@domain.name``, even if the

+ input was meant as a Kerberos principal. This is even possible with

+ Active Directory, i.e. the pre-Windows 2000 name and the user logon

+ name of different users can be the same. I would say that those cases

+ are rare and can be considered a broken configuration.
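The two-pass resolution described above can be sketched like this; plain dictionaries stand in for the cache and backend searches, so this is illustrative only, not SSSD code:

```python
# Pass 1: treat the input as a (fully qualified) user name.
# Pass 2: only if nothing matched and the input contains '@', retry it
#         as a Kerberos principal / UPN.
def resolve(name, users_by_fqname, users_by_principal):
    user = users_by_fqname.get(name)
    if user is not None:
        return user  # a fully qualified name match always wins
    if "@" in name:
        return users_by_principal.get(name)
    return None
```

With the ambiguous data from the example above, ``resolve("abc@domain.name", ...)`` returns the user whose fully qualified name is ``abc@domain.name``, even if the caller meant the principal.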

+ 

+ If the NSS responder decides that the given argument should be

+ considered a Kerberos principal and was not able to find a matching

+ principal in the cache, it can iterate over the configured backends

+ and send a GETACCOUNTINFO request for a user with a new filter type,

+ e.g. BE\_FILTER\_PRINCIPAL. An LDAP based ID backend which wants to

+ support this new filter type can process it like any other user

+ request but has to use an appropriate search filter.

+ 

+ The localauth plugin will be implemented according to `the Kerberos

+ documentation <http://k5wiki.kerberos.org/wiki/Projects/Plugin_support_improvements>`__.

+ The plugin can be enabled manually by the admin, but it's more

+ user-friendly to enable the plugin automatically. To this end, SSSD will

+ gain a new option, tentatively called ``krb5_localauth_snippet_path``.

+ When this option is enabled, a configuration snippet similar to the

+ following is generated into a ``/var/lib/sss/pubconf/krb5.include.d``

+ directory that is already sourced by krb5.conf: ::

+ 

+     [plugins]

+      localauth = {

+       module = sssd:/usr/lib/sssd/modules/sssd_krb5_localauth_plugin.so

+       enable_only = sssd

+      }

+ 

+ Additional notes

+ ----------------

+ 

+ The SSSD cache knows two attributes for principals, "userPrincipalName"

+ and "canonicalUserPrincipalName". The first is used to save the data

+ read from the LDAP server. The second is used if the TGT contains a

+ different principal than the one used to request the TGT, i.e. if the

+ original principal was an alias. While searching for principals in the

+ cache both should be respected. Currently "userPrincipalName" is

+ already declared CASE\_INSENSITIVE for searches in the cache.

+ "canonicalUserPrincipalName" should be declared the same way to make

+ searches consistent.

+ 

+ Configuration Changes

+ ---------------------

+ 

+ A new option, tentatively called ``krb5_localauth_snippet_path`` will be

+ added to sssd's Kerberos provider. When this option is set (mostly via

+ SSSD default values, not administrator's change), then SSSD will

+ generate a file that will be automatically included in krb5.conf and the

+ localauth plugin will be enabled.

+ 

+ How to test

+ -----------

+ 

+ To make sure that ``getent passwd user@domain.name`` searches for the

+ Kerberos principal ``user@domain.name`` and not for a fully qualified

+ name, the domain name in sssd.conf should differ from the realm name

+ in the principal. Probably the easiest configuration is to use the

+ ldap ID provider and make sure the targeted LDAP server has a Kerberos

+ principal attribute set for the users and the ldap\_user\_principal

+ option points to the corresponding attribute name. ::

+ 

+     ...

+     [domain/default]

+     id_provider = ldap

+     ldap_user_principal = krbPrincipalName

+     ...

+ 

+ Now the fully qualified names end with '@default' while the Kerberos

+ principals are defined by the LDAP entries. E.g. if there is a user

+ 'testabc' with the principal ``testabc@MY.REALM``, the command

+ ``getent passwd testabc@default`` will return the POSIX user entry

+ searched with the fully qualified name.

+ ``getent passwd testabc@MY.REALM`` will return the same entry, but now

+ searched with the Kerberos principal.

+ 

+ Additionally, logging in as a Windows user using GSSAPI should succeed

+ without requiring a password with a stock krb5.conf on an IPA client

+ when an IPA-AD trust is established, as the following sequence

+ illustrates: ::

+ 

+         kinit aduser@AD.DOMAIN.COM

+         ssh `hostname` -l aduser@AD.DOMAIN.COM

+ 

+ Previously, this required either a ``.k5login`` file or an elaborate

+ ``auth_to_local`` rule.

+ 

+ Author(s)

+ ~~~~~~~~~

+ 

+ Sumit Bose <`sbose@redhat.com <mailto:sbose@redhat.com>`__>

@@ -0,0 +1,401 @@ 

+ Code refactoring for the 1.15 release

+ =====================================

+ 

+ Related ticket(s):

+ 

+ -  please see inline

+ 

+ Problem statement

+ -----------------

+ 

+ SSSD is being very actively developed, adding several major features in

+ each release. We need to make sure the code stays maintainable and

+ adding new features in the upcoming release won't increase the cost of

+ maintaining SSSD long-term.

+ 

+ Since SSSD releases are primarily being driven by Fedora and RHEL

+ releases, the Red Hat employed developers have a fixed amount of time

+ for code refactoring. Of course, community members and developers are

+ free to submit their patches on their schedule -- although discussion on

+ the list would be needed prior to merging any refactoring to not disrupt

+ SSSD release quality for everyone.

+ 

+ Use cases

+ ---------

+ 

+ A typical use-case would be: "A feature X depends on a module Y that

+ is either missing some functionality or has outlived its initial

+ design. Changing module Y would allow us to implement X more easily or

+ with less maintenance effort in the future."

+ 

+ The goal is to prepare the code for upcoming features without

+ regressing, so testing after the refactoring is done is mandatory. We

+ should consider also doing an upstream (pre)release to make it easier to

+ test the changes.

+ 

+ Proposed items to be refactored

+ -------------------------------

+ 

+ This section lists the proposed tickets along with justifications, scope

+ and test impact.

+ 

+ Given the fixed amount of time, each refactoring has a scope, expressed

+ in just three high-level buckets - large (a couple of weeks, might take

+ most of the time of the refactoring "sprint"), medium (a week to two

+ weeks) or small (a couple of days, up to a week). Each item also lists

+ the affected modules or functionality, so that we know where we need to

+ improve tests.

+ 

+ Use the shared ``cache_req`` module for responder look-ups

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Currently each responder (except the InfoPipe responder and several

+ parts of the NSS responder) copies the logic that checks the cache and

+ contacts the Data Provider if needed. In 1.15, we should add the

+ missing functionality into the cache\_req module and convert the

+ existing responders (especially those that look up users and groups,

+ not necessarily other objects like autofs maps or hosts) to

+ cache\_req.

+ 

+ Benefit to SSSD

+ ^^^^^^^^^^^^^^^

+ 

+ In 1.15, we should look at allowing lookups from trusted domains with

+ a short name. But we need to take performance into account and avoid

+ cycling over all domains including their LDAP servers. Instead of

+ checking each domain's cache and then its server in turn, we could

+ switch to checking the caches of all domains first.

+ 

+ This goal is tracked by

+ `https://pagure.io/SSSD/sssd/issue/843 <https://pagure.io/SSSD/sssd/issue/843>`__

+ (Login time increases strongly if more than one domain is configured)

+ and ultimately by

+ `https://pagure.io/SSSD/sssd/issue/3001 <https://pagure.io/SSSD/sssd/issue/3001>`__

+ ([RFE] Short name input format with SSSD for users from all domains when

+ domain autodiscovery is used).

+ 

+ Tracking tickets

+ ^^^^^^^^^^^^^^^^

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/3151 <https://pagure.io/SSSD/sssd/issue/3151>`__

+    - cache\_req: complete the needs of NSS responders

+ -  `https://pagure.io/SSSD/sssd/issue/1126 <https://pagure.io/SSSD/sssd/issue/1126>`__

+    - Reuse cache\_req() in responder code

+ 

+ Testing

+ ^^^^^^^

+ 

+ We already have NSS and PAM responder tests. We need to extend them

+ further to make sure all codepaths we change are tested.

+ 

+ Scope

+ ^^^^^

+ 

+ Large

+ 

+ Refactor group lookups for better performance

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The ``sdap_async_groups.c`` module grew organically over time. At the

+ moment, the module is quite hard to read and repeats some potentially

+ expensive operations (like looping over all attributes or all members)

+ several times.

+ 

+ In order to improve performance, we should refactor this module and test

+ it extensively.

+ 

+ Benefit to SSSD

+ ^^^^^^^^^^^^^^^

+ 

+ The ``sdap_async_groups.c`` module would be better maintainable and we

+ would remove some performance bottlenecks from the code.

+ 

+ Tracking tickets

+ ^^^^^^^^^^^^^^^^

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/3211 <https://pagure.io/SSSD/sssd/issue/3211>`__

+    - Refactor the sdap\_async\_groups.c module

+ 

+ Testing

+ ^^^^^^^

+ 

+ LDAP group lookups can be tested using integration tests, "just" all

+ cases we change must have corresponding test cases.

+ 

+ Scope

+ ^^^^^

+ 

+ Medium

+ 

+ Refactor the sdap\_id\_ops.c module

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The ``sdap_id_ops.c`` module was written at a time when SSSD only

+ supported a single domain. One of the things that keep biting us is

+ that the module can set the fail over status of the whole domain to

+ offline. Moreover, the module has no tests and is not easy to read.

+ 

+ At this time, it's not clear whether the refactoring would just result

+ in documenting and testing the module, or whether it would be worth,

+ for example, making the module return error codes for connection

+ errors and letting the caller handle the errors. Alternatively, we

+ might decide to do even more work and let the fail over code work

+ per-domain, not per-backend, which probably wouldn't be doable in the

+ given scope. More research is needed.

+ 

+ Benefit to SSSD

+ ^^^^^^^^^^^^^^^

+ 

+ The module would be better maintainable (currently there are some

+ codepaths where we no longer even know why they were added), it would

+ have tests, and we would work towards removing issues with trusted

+ domains setting SSSD to the offline mode.

+ 

+ Tracking tickets

+ ^^^^^^^^^^^^^^^^

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/1507 <https://pagure.io/SSSD/sssd/issue/1507>`__

+    - Investigate terminating connections in sdap\_ops.c and comment the

+    code some more

+ 

+ Other related tickets include:

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2767 <https://pagure.io/SSSD/sssd/issue/2767>`__

+    - The sdap\_op code always ends request with EAGAIN

+ -  `https://pagure.io/SSSD/sssd/issue/2976 <https://pagure.io/SSSD/sssd/issue/2976>`__

+    - sdap code can mark the whole sssd\_be offline

+ 

+ Testing

+ ^^^^^^^

+ 

+ Currently upstream has only basic tests in the integration test suite.

+ Downstream has tests for fail over as well.

+ 

+ Scope

+ ^^^^^

+ 

+ Medium to large, depending on what changes we decide to do.

+ 

+ Provide a way to pass intermediate data between requests

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ As long as a request is confined within a single domain, we can pass

+ around ``sysdb_attrs`` or a similar data structure between different

+ requests and avoid costly cache writes. However, when a request must

+ include processing in two different domain types, for example an IPA

+ domain that includes overrides, the only way to pass intermediate data

+ is to open a sysdb transaction and save the data to the cache so that

+ another request can read it.

+ 

+ Benefit to SSSD

+ ^^^^^^^^^^^^^^^

+ 

+ Performance benefit in case SSSD must call identity lookup requests from

+ different domains.

+ 

+ Tracking tickets

+ ^^^^^^^^^^^^^^^^

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2943 <https://pagure.io/SSSD/sssd/issue/2943>`__

+    - Split out the requests for IPA users and groups that include

+    overrides into reusable requests

+ 

+ Testing

+ ^^^^^^^

+ 

+ Unfortunately, there are no upstream tests for requests that include

+ overrides. Testing would be provided by downstream tests.

+ 

+ Scope

+ ^^^^^

+ 

+ Medium to large

+ 

+ Upstream the PoC tests that utilize Samba AD for AD provider testing

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ At the moment, we don't have any upstream tests for the AD provider

+ and we rely completely on downstream and manual testing. Nikolai

+ Kondrashov wrote proof-of-concept patches that provision an AD DC

+ server provided by the Samba project using the cwrap wrapper

+ libraries. The scope of this effort would be to upstream this work.

+ 

+ Benefit to SSSD

+ ^^^^^^^^^^^^^^^

+ 

+ SSSD integration tests would allow us to write tests for the AD

+ provider.

+ 

+ Tracking tickets

+ ^^^^^^^^^^^^^^^^

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2818 <https://pagure.io/SSSD/sssd/issue/2818>`__

+    - Investigate if Samba4 in Fedora can be used for SSSD CI

+ 

+ Testing

+ ^^^^^^^

+ 

+ Some basic tests like looking up a user or a group would be part of this

+ effort.

+ 

+ Scope

+ ^^^^^

+ 

+ Medium

+ 

+ Decrease the functionality of the monitor process

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ SSSD is gradually moving to socket-activated services and in general

+ more self-contained services rather than implementing a service manager

+ in the monitor process. The scope of this work would be to further

+ decrease the dependence of services on the monitor process, such as

+ moving the register functionality to the services themselves.

+ Ultimately, the monitor process would perform one-time tasks such as

+ converting the configuration from INI to confdb and exit.

+ 

+ Other work might include a fallback configuration or starting the

+ services and domains even without having them explicitly enumerated in

+ the services section.

+ 

+ Benefit to SSSD

+ ^^^^^^^^^^^^^^^

+ 

+ Socket-activatable services would be better manageable by SSSD.

+ 

+ Tracking tickets

+ ^^^^^^^^^^^^^^^^

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2231 <https://pagure.io/SSSD/sssd/issue/2231>`__

+    - RFE: Drop the monitor process

+ 

+ Testing

+ ^^^^^^^

+ 

+ There are no upstream test for this functionality at the moment. Some

+ service restart tests exist in downstream, though.

+ 

+ Scope

+ ^^^^^

+ 

+ Medium to large, but hopefully this task could be split into several

+ smaller tasks.

+ 

+ Memory cache changes

+ ~~~~~~~~~~~~~~~~~~~~

+ 

+ There are several improvements to the memory cache that we have been

+ discussing lately, including a memory cache for by-SID lookups or having

+ the memory cache respect case insensitive domains. The goal of this task

+ would be to investigate what needs to be changed in the memory cache in

+ order to implement these improvements.

+ 

+ Benefit to SSSD

+ ^^^^^^^^^^^^^^^

+ 

+ Better performance through leveraging memory cache for SID lookups and

+ lookups in case insensitive domains.

+ 

+ Tracking tickets

+ ^^^^^^^^^^^^^^^^

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/3193 <https://pagure.io/SSSD/sssd/issue/3193>`__

+    - [RFE] Support aliases in the memory cache

+ -  `https://pagure.io/SSSD/sssd/issue/2727 <https://pagure.io/SSSD/sssd/issue/2727>`__

+    - Add a memcache for SID-by-id lookups

+ 

+ Testing

+ ^^^^^^^

+ 

+ We already have tests for memory cache which could be extended. Tests

+ for by-SID lookups would probably require us to add the Samba-based

+ tests first.

+ 

+ Scope

+ ^^^^^

+ 

+ Probably large, but more investigation is needed.

+ 

+ SBUS API Improvements

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ Our internal D-Bus interface got a lot of new functionality to

+ properly support D-Bus at the public level. The InfoPipe responder has

+ grown, and our internal communication between responders and providers

+ has also become more advanced.

+ 

+ The more we use it, the more it seems that the API that takes care of

+ managing/terminating sbus requests and handling their errors is not

+ optimal, since it requires a lot of glue code and often requires

+ several output places and return codes.

+ 

+ We should base sbus handlers on tevent to make sure there is only one

+ output place and return code (when tevent request finishes) and we

+ should also improve and simplify API that is used by handlers.

+ 

+ Benefit to SSSD

+ ^^^^^^^^^^^^^^^

+ 

+ SSSD depends on D-Bus (and thus on sbus) more and more and we will

+ keep adding new functionality. Reducing the amount of code that needs

+ to be added and simplifying the logic will help us to develop more

+ stable code more quickly.

+ 

+ Tracking tickets

+ ^^^^^^^^^^^^^^^^

+ 

+ -  none currently

+ 

+ Testing

+ ^^^^^^^

+ 

+ Sbus is currently heavily tested. We may want to add new tests for

+ new/changed API.

+ 

+ Scope

+ ^^^^^

+ 

+ Small.

+ 

+ Failover refactoring

+ ~~~~~~~~~~~~~~~~~~~~

+ 

+ The failover mechanism wasn't prepared for subdomains and we run into

+ trouble every now and then. We added several workarounds for cases

+ where the original code wasn't sufficient, but they only made the code

+ more convoluted. At this moment nobody understands it, but bugs keep

+ coming.

+ 

+ We should have a separate failover context for each domain, instead of

+ one per whole backend. It must be possible to set a different SRV

+ mechanism for each context. The DNS resolver and cache should be

+ shared between contexts.

+ 

+ Benefit to SSSD

+ ^^^^^^^^^^^^^^^

+ 

+ We remove old and problematic code that nobody understands. We can

+ improve site discovery for trusted domains and we can have better

+ control over subdomain server resolution.

+ 

+ Tracking tickets

+ ^^^^^^^^^^^^^^^^

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2393 <https://pagure.io/SSSD/sssd/issue/2393>`__

+ -  `https://pagure.io/SSSD/sssd/issue/2394 <https://pagure.io/SSSD/sssd/issue/2394>`__

+ 

+ Testing

+ ^^^^^^^

+ 

+ Downstream tests should remain functional, but upstream tests will

+ become invalid.

+ 

+ Scope

+ ^^^^^

+ 

+ Probably beyond a four-week scope.

+ 

+ How To Test

+ -----------

+ 

+ Run all available upstream and downstream tests, if possible, extend the

+ upstream tests.

@@ -0,0 +1,170 @@ 

+ SSSD Performance enhancements for the 1.14 release

+ ==================================================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2602 <https://pagure.io/SSSD/sssd/issue/2602>`__

+ -  `https://pagure.io/SSSD/sssd/issue/2062 <https://pagure.io/SSSD/sssd/issue/2062>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ At the moment SSSD doesn't perform well in large environments. Most of

+ the use-cases we've had reported revolved around logins of users who

+ are members of large groups or of a large number of groups. Another

+ reported use-case was the time it takes to resolve a large group.

+ 

+ While workarounds are available for some of the issues (such as using

+ ``ignore_group_members`` for resolution of large groups), our goal is to

+ be able to perform well without these workarounds.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ -  User who is a member of a large amount of AD groups logs in to a

+    Linux server that is a member of the AD domain.

+ -  User who is a member of a large amount of AD or IPA groups logs in to

+    a Linux server that is a member of an IPA domain with a trust

+    relationship to an AD domain

+ -  Administrator of a Linux server runs "ls -l" in a directory where

+    files are owned by a large group. An example would be group called

+    "students" in an university setup

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ During performance analysis with systemtap, we found out that the

+ biggest delay happens when SSSD writes an entry to the cache,

+ especially for large group entries. This is also confirmed by

+ empirical evidence from our users, where most deployments were OK with

+ SSSD performance once the cache was moved to tmpfs or even when the

+ ``ignore_group_members`` option was enabled.

+ 

+ We can't skip cache writes completely, even if no attributes changed,

+ because we also store the expiration timestamps in the cache. Also,

+ even if a single attribute (like a timestamp) changes, ldb would need

+ to unpack the whole entry, change the record, pack it back and then

+ write the whole blob.

+ 

+ In order to mitigate the costly cache writes, we should avoid writing

+ the whole cache entry on every cache update, but only write the entries

+ if something actually changed.

+ 

+ To achieve this, we will split the monolithic ldb file representing

+ the sysdb cache into two ldb files. One would contain the entry itself

+ and would be fully synchronous. The other (new one) would only contain

+ the timestamps and would be opened using the ``LDB_FLG_NOSYNC`` flag

+ to avoid synchronous cache writes.

+ 

+ This would have two advantages:

+ 

+ #. If we detect that the entry hasn't changed on the LDAP server at all,

+    we could avoid writing into the main ldb cache which would still be

+    costly. We would use the value of the ``modifyTimestamp`` attribute

+    of the LDAP entry to see if the entry had changed or not.

+ #. The writes to the new async ldb cache would be much faster, because

+    the entry is smaller and because the writes wouldn't call ``fsync()``

+    due to using the async flag, but rather rely on the underlying

+    filesystem to sync the data to the disk.

+ 

+ On SSSD shutdown, we would write a canary to both the timestamp cache

+ and the main sysdb cache, denoting graceful shutdown. On SSSD startup,

+ if the canary wasn't found or if the values differ, we would just ditch

+ the timestamp cache, which would result in refresh and write of the

+ entry on the next lookup.
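The canary check on startup could look like the following sketch. The names are assumptions for illustration; the design page does not specify the implementation:

```python
# Sketch of the graceful-shutdown canary check: if either canary is
# missing, or the two values differ, the shutdown was not graceful and
# the nosync timestamp cache cannot be trusted, so it should be ditched.
def timestamp_cache_valid(sysdb_canary, timestamp_canary):
    return (sysdb_canary is not None
            and timestamp_canary is not None
            and sysdb_canary == timestamp_canary)
```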

+ 

+ The basic idea is to use a combination of the operational

+ ``modifyTimestamp`` attribute and checking the entry itself to see if

+ the entry changed at all and if not, avoid writing to the cache.

+ 

+ Checking the value of ``modifyTimestamp`` would be enough for group

+ entries, which should be the first iteration of this work. For checking

+ if other entries (mostly users) have changed, we need to compare the

+ value of the attributes in the cache with what we are about to store in

+ the cache.

+ 

+ Therefore, these enhancements are proposed for the 1.14 version,

+ sorted by importance as observed with systemtap testing:

+ 

+ -  only write the cache entry if the ``modifyTimestamp`` of the original

+    entry had changed. If it hasn't changed, only the timestamps would be

+    written to the timestamp cache

+ -  if the ``modifyTimestamp`` had changed, compare the attributes of the

+    cache entry with the attributes we are about to write. If there are

+    no differences, only write to the timestamp cache

+ -  refactor the nested group processing to make sure expensive lookups

+    (such as lookups of all members of the group, there can potentially

+    be thousands of these) are only performed once and intermediate

+    results are stored in-memory.

+ -  attempt to shortcut parsing the attributes of the entry returned from

+    LDAP sooner. The idea behind this is that if the ``modifyTimestamp``

+    did not change, we can reuse the entry we already cached.
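The first two bullets above amount to a two-step comparison. A minimal sketch of that decision, with assumed names and plain dictionaries standing in for the cached and freshly fetched entries (this is not sysdb code):

```python
def plan_cache_write(cached, fresh):
    """Decide between a full sysdb write and a timestamps-only write."""
    if cached is None:
        return "full"  # entry not cached yet
    if fresh.get("modifyTimestamp") == cached.get("modifyTimestamp"):
        return "timestamps-only"  # entry unchanged on the server

    # modifyTimestamp changed: compare the remaining attributes
    def strip(entry):
        return {k: v for k, v in entry.items() if k != "modifyTimestamp"}

    if strip(fresh) == strip(cached):
        return "timestamps-only"  # same attributes, only the timestamp moved
    return "full"
```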

+ 

+ Minor enhancements in later versions might include:

+ 

+ -  using syncrepl in the server mode for HBAC rules and external groups

+    in refreshAndPersistMode. This would provide performance benefit for

+    legacy clients that rely on server's HBAC rules for access control.

+ -  using syncrepl in the server mode for external groups in

+    refreshAndPersistMode. This would mainly simplify the external groups

+    handling, rather than improve performance

+ -  A lot of time is spent looking up attributes in the ``sysdb_attrs``

+    array. This is something we might want to optimize after we're done

+    with the cache writes.

+ -  We might even consider offering syncrepl in refreshOnly mode as an

+    client-side option for enumeration. However, this would have to be an

+    opt-in because every refresh causes the server to walk the changelog

+    since the last refresh operation. Enabling this option on all clients

+    would trash the server performance.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The ``sysdb_ctx`` already contains a handle of the main sysdb cache. We

+ would add another ldb file that only contains the timestamp and the DN

+ of an entry. This ldb file would be opened in the nosync mode.

+ Attributes used for lookups, like ``dataExpireTimestamp`` must be

+ indexed in this database as well.

+ 

+ When storing a user or a group to sysdb using functions like

+ ``sysdb_store_user``, we first check the difference between

+ ``modifyTimestamp`` attributes. If there are no differences, only the

+ timestamp attributes, such as ``lastUpdate`` or ``dataExpireTimestamp``

+ would be updated in the timestamp cache. We need to do this check in the

+ lower-level sysdb calls to make sure this enhancement also works for

+ users and groups retrieved through the extop plugin.

+ 

+ If the value of ``modifyTimestamp`` differs, we proceed to checking the

+ diff between values in the cache and the values read from LDAP.

+ 

+ Details about shortcut of attribute parsing will be added to this design

+ page later.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ Currently no configuration changes are expected. We might add some if we

+ decide to implement on-demand syncrepl.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ If the entries on the server did not change (except timestamps), then

+ actions like user and group lookups and logins should be considerably

+ faster.

+ 

+ The SSSD should also correctly detect when the entries in fact did

+ change on the server. In this case, a full cache write will be

+ performed.

+ 

+ Authors

+ ~~~~~~~

+ 

+ -  Jakub Hrozek <`jhrozek@redhat.com <mailto:jhrozek@redhat.com>`__>

+    with the kind help of

+ -  Sumit Bose <`sbose@redhat.com <mailto:sbose@redhat.com>`__>

+ -  Ludwig Krispenz

+    <`lkrispen@redhat.com <mailto:lkrispen@redhat.com>`__>

+ -  Simo Sorce <`simo@redhat.com <mailto:simo@redhat.com>`__>

@@ -0,0 +1,241 @@ 

+ One-Way Trusts

+ ==============

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2579 <https://pagure.io/SSSD/sssd/issue/2579>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ The next IPA release will support one-way trusts. SSSD needs to add

+ support for this feature in its server mode.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ A one-way trust to Active Directory, where the FreeIPA realm trusts an

+ Active Directory forest using the cross-forest trust feature of AD, but

+ the AD forest does not trust the FreeIPA realm. Users from the AD forest

+ can access resources in the FreeIPA realm.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ At a high level, SSSD needs to examine the trust objects to determine

+ whether they are one-way or two-way trusts. For each one-way trust, SSSD

+ needs to fetch and store the keytab and use the keytab to secure the

+ connection. For two-way trusts, we can keep using the existing code that

+ reuses the IPA realm and the system keytab for both IPA and AD

+ connections. Care must be taken to remove keytabs of trusts that were

+ removed as well.

+ 

+ Fetching the keytab would be done by calling the ``ipa-getkeytab``

+ utility for every one-way trust. The keytab would only be (re)fetched if

+ it's missing or if attempting to use the keytab failed. On the IPA

+ server, we must make sure that the IPA server identity is allowed to

+ read the keytab.

+ 

+ Because handling multiple keytabs increases the risk of failing

+ connections in case the trust wasn't set up correctly, we need to modify

+ the failover code to not set the whole back end offline in case

+ connecting to a subdomain AD server fails. Instead, the subdomain will

+ be marked as inactive for a short period of time, during which it would

+ act as offline. The proper way of solving this problem would be to

+ rework the failover module so that it can act per domain, not only per

+ back end; however, that change is out of scope for this release.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ This section describes all the required changes in detail.

+ 

+ Reading the subdomains in the IPA subdomain handler

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The IPA subdomain handler will include the attribute

+ ``ipaNTTrustDirection`` when reading the trust objects. Currently this

+ attribute is not readable by the host principal, so the IPA ACIs must be

+ relaxed (ipa ticket?). If the trust direction is set to an OR of

+ ``lsa.LSA_TRUST_DIRECTION_OUTBOUND`` and

+ ``lsa.LSA_TRUST_DIRECTION_INBOUND``, then it's a two-way trust and we'll

+ just use the existing code that re-uses the IPA keytab for the AD

+ trusted domain as well. If the attribute is only

+ ``lsa.LSA_TRUST_DIRECTION_OUTBOUND``, we handle the trust as a one-way

+ trust. The trust type can be stored in ``ipa_ad_server_ctx``. If the

+ trust direction is set to ``lsa.LSA_TRUST_DIRECTION_INBOUND`` only, then

+ we would log this trust object as unsupported and continue.
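
The classification above can be sketched briefly. The bit values follow the standard LSA trust direction flags (``INBOUND = 0x1``, ``OUTBOUND = 0x2``); the function name is illustrative:

```python
# Sketch of classifying ipaNTTrustDirection as read from the trust object.

LSA_TRUST_DIRECTION_INBOUND = 0x00000001
LSA_TRUST_DIRECTION_OUTBOUND = 0x00000002

def classify_trust(direction):
    inbound = bool(direction & LSA_TRUST_DIRECTION_INBOUND)
    outbound = bool(direction & LSA_TRUST_DIRECTION_OUTBOUND)
    if inbound and outbound:
        return "two-way"      # reuse the IPA keytab, existing code path
    if outbound:
        return "one-way"      # fetch a dedicated keytab via ipa-getkeytab
    return "unsupported"      # inbound-only: log and continue
```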

+ 

+ Each ``sss_domain_info`` structure will be created as ``inactive`` in

+ the subdomain code. After enumerating the trusted domains, the subdomain

+ handler will check if a keytab already exists for every one-way trusted

+ domain. If yes, the domain is ready to use and can be enabled. If there

+ is no keytab, the subdomain handler will fork out a call to

+ ``ipa-getkeytab``, fetch the keytab and store it under

+ ``/var/lib/sss/keytabs``. The ``ipa-getkeytab`` call will be done using

+ the Kerberos credentials the host has. IPA ACIs must be modified

+ accordingly to allow the IPA server principals to fetch the trust

+ keytabs, but no one else. The SSSD invocation of ``ipa-getkeytab`` will

+ not limit the enctypes in any way; we just rely on IPA creating the

+ objects in LDAP in the correct manner.

+ 

+ The directory ``/var/lib/sss/keytabs`` must only be accessible to the

+ sssd user. As an additional security measure, the directory will also

+ receive a SELinux context stricter than the default ``sssd_var_lib_t``.

+ That way, processes that are able to access the sssd state directory,

+ which is public, will not be able to access the keytabs. If fetching the

+ keytab succeeds, the domain would be enabled. The SELinux policy must

+ also be adjusted to allow calling ``ipa-getkeytab`` by the ``sssd_be``

+ process.

+ 

+ If any trust relationships were removed, the corresponding keytabs must

+ be removed from the disk as well.

+ 

+ Changes to the AD id\_ctx instantiation

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ With two-way trust, we can keep using the default IPA principal and

+ keytab.

+ 

+ With one-way trust, the keytab retrieved from the IPA server must be

+ used. Also, the principal must be passed into the

+ ``ad_create_default_options`` function. The custom values must be set

+ before we proceed to instantiate LDAP provider options. The only AD

+ provider option we need to set is AD\_KRB5\_REALM.

+ 

+ In the LDAP provider, we must take care that the following sdap options

+ are set correctly:

+ 

+ -  SDAP\_SASL\_AUTHID - must be set to the NetBIOS name of the IPA

+    domain. (A domain ``TRUST.COM`` would set this value to ``TRUST$``.

+    We would use the ``IPA_FLATNAME`` attribute, not truncate the DNS

+    domain).

+ -  SDAP\_SASL\_REALM - must be set to the AD realm

+ -  SDAP\_KRB5\_KEYTAB - must be set to the per-domain keytab retrieved

+    from IPA
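
The derivation of these options can be sketched as follows. The function name and the per-realm keytab filename scheme are assumptions for illustration; only the ``IPA_FLATNAME + "$"`` rule and the ``/var/lib/sss/keytabs`` directory come from the text above:

```python
# Sketch of deriving the sdap SASL options for a one-way trusted domain.

def sdap_options_for_trust(ipa_flatname, ad_realm,
                           keytab_dir="/var/lib/sss/keytabs"):
    return {
        # NetBIOS name of the IPA domain from IPA_FLATNAME plus the
        # machine-account '$', e.g. flat name "TRUST" -> "TRUST$".
        # The DNS domain is never truncated to produce this value.
        "SDAP_SASL_AUTHID": ipa_flatname + "$",
        "SDAP_SASL_REALM": ad_realm,
        # Hypothetical per-domain keytab path under the protected dir.
        "SDAP_KRB5_KEYTAB": "%s/%s.keytab" % (keytab_dir, ad_realm),
    }
```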

+ 

+ The AD provider eventually calls ``sdap_set_sasl_options()`` from the

+ LDAP provider; we need to make sure this function receives the correct

+ values. During experimentation we were able to show that using multiple

+ different SASL users and different realms doesn't cause any problems in

+ the SASL or LDAP libraries.

+ 

+ The only place that will keep using the IPA realm is the failover

+ instantiation. We need to keep using this hack until the failover code

+ can work per domain.

+ 

+ Subdomain offline status changes

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ At the moment, the whole back end can be either online or offline and

+ the status applies to both the main domain and the subdomains alike. As

+ a result, a failure to connect to a subdomain server would also make

+ the main domain operate offline. In many subdomain setups, it's actually

+ more convenient not to, because the subdomain server might be on a

+ different network segment, behind a different firewall, etc. Instead,

+ the domain should only be made inactive.

+ 

+ The ``sss_domain_info`` structure would convert the existing

+ ``bool disabled`` field into an ``enum sss_domain_state``. The supported

+ values would be:

+ 

+ -  *disabled* - the domain should not be used by either responder or

+    provider. It was removed or disabled on the server.

+ -  *active* - the domain can be used by a responder and the data

+    provider would forward its request to the backend

+ -  *inactive* - the domain can be used by a responder, but the data

+    provider would just shortcut as if the domain was offline. For now,

+    this option will be used by subdomains only.
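
The three states can be summarized in a minimal sketch; this is a Python stand-in for the proposed C enum, with illustrative names:

```python
# Stand-in for the proposed enum sss_domain_state and the provider-side
# decision it drives.

from enum import Enum

class SssDomainState(Enum):
    DISABLED = 0   # removed or disabled on the server; unusable everywhere
    ACTIVE = 1     # responders may use it; provider forwards the request
    INACTIVE = 2   # responders may use it; provider shortcuts as offline

def provider_should_forward(state):
    """Only an active domain has its requests forwarded to the backend."""
    return state is SssDomainState.ACTIVE
```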

+ 

+ The implementation would include renaming the existing

+ ``be_mark_offline()`` function to ``be_mark_dom_offline()``

+ and modifying its behaviour. The existing code that sets the offline

+ status and runs the offline callbacks would be called for parent domains

+ only. For subdomains, we would mark the subdomain as inactive and

+ schedule a tevent request that would unconditionally reset the inactive

+ domain back to active. The request would be scheduled after

+ ``offline_timeout`` seconds to be consistent with main domains from the

+ user's perspective. Likewise, the ``be_reset_offline()`` function and

+ the SIGUSR1 and SIGUSR2 signal handlers will be extended to reset

+ inactive domains back to active. Finally, all calls to the

+ ``be_is_offline()`` function should be inspected and the invocations

+ that are per-domain should be converted to a new function,

+ ``be_dom_is_offline()``, that would check the offline status for parent

+ domains and the inactive state for subdomains. We should also make sure

+ the backend offline status structure is opaque, as currently its

+ internals are readable by external users as well. Making the offline

+ status opaque would make it safer to perform modifications to the

+ offline code.

+ 

+ In both offline and inactive cases, the ID handlers would reply with

+ ``DP_ERR_OFFLINE``. The crucial difference between offline and inactive

+ at this point would be that inactive domains are re-activated

+ unconditionally. When we modify the failover code to handle domains

+ separately, we'll be able to leverage per-domain online checks or

+ online/offline callbacks as well.
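
The inactive-subdomain behaviour above can be sketched with explicit timestamps standing in for the tevent timer; class and function names are illustrative:

```python
# Sketch: a failed connection marks only the subdomain inactive, and the
# domain flips back to active unconditionally after offline_timeout
# seconds. ID requests against an inactive domain get DP_ERR_OFFLINE.

class Subdomain:
    def __init__(self, name, offline_timeout=60):
        self.name = name
        self.offline_timeout = offline_timeout
        self.inactive_until = 0.0   # monotonic deadline; 0 means active

    def mark_inactive(self, now):
        # Real code would schedule a tevent request instead of a deadline.
        self.inactive_until = now + self.offline_timeout

    def is_inactive(self, now):
        return now < self.inactive_until

def handle_id_request(dom, now):
    # Inactive domains shortcut exactly like offline ones.
    return "DP_ERR_OFFLINE" if dom.is_inactive(now) else "DP_ERR_OK"
```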

+ 

+ Detecting re-established trusts and re-fetching the keytabs

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The trust keytabs would be fetched on each SSSD restart. This may seem

+ like a bit of churn, but retrieving the keytab should be relatively

+ cheap since the SSSD instance runs on the local server. The advantage of

+ retrieving the keytabs again is that a simple SSSD service restart

+ provides a way for the admin to start from a clean slate. Either way,

+ SSSD service restarts on the server should be quite rare.

+ 

+ In case the ``sdap_kinit_send()`` request fails, the sdap code would

+ return a special error code instead of blindly returning ``EIO`` as it

+ does at the moment. When the ``ipa_get_ad_acct`` request receives this

+ error code, it would re-run the subdomain request in order to check if

+ the trust relationship still exists and to re-fetch the keytab. In

+ order to be able to run the subdomain request separately from

+ the subdomain back end handler, the subdomain code must be wrapped into

+ a separate tevent request, as the code currently assumes it's being

+ called from the subdomain backend handler only.

+ 

+ After the keytabs are fetched again, we would attempt to detect whether

+ the trust has been re-established by comparing the keys in the keytab.

+ Using krb5 calls to read the keytab is fine in the back end code,

+ because the keytabs will be readable by the SSSD user and can be

+ accessed from the provider code without elevating privileges. We can't

+ rely on ``kvno`` here, because it is generally always 1. If the keys

+ differ, the trust was re-established. In that case, we would reset the

+ inactive domain status and re-run the account request. If the keys are

+ the same, we just leave the domain as inactive. The ``ipa_ad_trust_ctx``

+ structure for each trust would contain a flag tracking that we already

+ tried refreshing the keytab, so that we don't download the keytabs on

+ each failed attempt. This flag would be cleared by the online callbacks

+ (either periodic or triggered by SIGUSR2).
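
The key comparison can be sketched like this; keys are modeled as tuples for illustration, whereas the real code would read them with libkrb5 calls:

```python
# Sketch of the re-established-trust check: compare key material from
# the old and the newly fetched keytab instead of relying on kvno
# (which is generally always 1 here). Each key is modeled as a
# (principal, enctype, key_bytes) tuple.

def trust_reestablished(old_keys, new_keys):
    """True if the newly fetched keytab carries different key material."""
    return set(old_keys) != set(new_keys)
```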

+ 

+ In case the trust went away, the existing subdomain code should already

+ remove the trusted domain (however, this must be tested). In this case,

+ the keytab must be removed as well.

+ 

+ Future work

+ ~~~~~~~~~~~

+ 

+ -  Handling failover and offline status on a per-domain basis instead of

+    a per-backend basis should be done in the next release.

+ -  If we ever need to store the keytabs in the database instead of on

+    the filesystem, we might want to switch from calling ipa-getkeytab to

+    calling the LDAP extended operation ourselves. However, this is not

+    planned at the moment.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ None.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ Establish a one-way trust relationship with an AD domain. Make sure both

+ IPA and AD users are resolvable. It's prudent to test combinations of

+ one-way and two-way trusts with different forests. Make sure removing a

+ trust relationship removes the keytab from the filesystem. Make sure

+ that SSSD handles re-establishing a trust relationship.

+ 

+ Authors

+ ~~~~~~~

+ 

+ -  Jakub Hrozek <`jhrozek@redhat.com <mailto:jhrozek@redhat.com>`__>

@@ -0,0 +1,313 @@ 

+ OpenLMI provider design

+ =======================

+ 

+ Problem Statement

+ -----------------

+ 

+ The SSSD provider for OpenLMI will allow administrators to retrieve

+ information from SSSD and modify its configuration through OpenLMI

+ tools.

+ 

+ First iteration

+ ---------------

+ 

+ The first iteration will implement the following features:

+ 

+ -  provide basic information about domains

+ -  provide methods for changing the debug level

+ -  provide methods for enabling/disabling services

+ -  provide methods for enabling/disabling domains

+ 

+ Implementation design

+ =====================

+ 

+ Overview

+ --------

+ 

+ The OpenLMI provider will use the `D-Bus

+ responder <https://docs.pagure.org/SSSD.sssd/design_pages/dbus_responder.html>`__

+ for communication with SSSD. The provider will implement the SSSD CIM

+ schema, which describes the objects and their properties and methods.

+ The schema should define a low level model of the SSSD architecture. To

+ simplify the most common tasks, we will also implement several Python

+ scripts that will follow the OpenLMI scripts interface. That will allow

+ running these scripts as a single command from the command line via the

+ *lmi* tool.

+ 

+ LMI scripts design

+ ------------------

+ 

+ The LMI scripts are Python scripts that should provide a high level

+ approach to the most common tasks of the LMI provider. These scripts are

+ executed from the command line using the *lmi* tool with the *lmi

+ command subcommand --arg* syntax.

+ 

+ -  **lmi sssd** ::

+ 

+        # print SSSD service status using LMI_Service provider

+        lmi sssd status

+ 

+        # restart SSSD service using LMI_Service provider

+        lmi sssd restart

+ 

+        # change debug level for selected components (all by default), either permanently (default) or temporarily until SSSD is restarted

+        lmi sssd set-debug-level $level [--permanently|--until-restart] [--monitor] [--services=svc1,svc2,...] [--domains=dom1,dom2,...]

+ 

+ -  **lmi sssd-domain** command will provide information about SSSD

+    domains ::

+ 

+        # list SSSD domains with basic information (name, enabled/disabled, id provider), by default only top level domains are listed

+        lmi sssd-domain list [--enabled|--disabled] [--with-subdomains]

+        lmi sssd-domain enable $domain

+        lmi sssd-domain disable $domain

+ 

+        # prints all available information about the $domain

+        lmi sssd-domain info $domain

+ 

+ -  **lmi sssd-service** command will provide information about SSSD

+    services ::

+ 

+        # list SSSD services with basic information (name, enabled/disabled)

+        lmi sssd-service list [--enabled|--disabled]

+        lmi sssd-service enable $service

+        lmi sssd-service disable $service

+ 

+        # prints all available information about the $service

+        lmi sssd-service info $service

+ 

+ CIM schema

+ ----------

+ 

+ The CIM schema describes the interface of the OpenLMI SSSD provider. It

+ defines a low level conceptual model that follows the SSSD architecture.

+ 

+ .. FIXME: The image of the UML diagram is missing.

+ .. UML diagram

+ .. ~~~~~~~~~~~

+ 

+ MOF

+ ~~~

+ 

+ ::

+ 

+     [Version("0.1.0"), Provider("cmpi:cmpiLMI_SSSD"),

+      Description("System Security Services Daemon")]

+     class LMI_SSSDService : CIM_Service

+     {

+ 

+     };

+ 

+     [Version("0.1.0"), Provider("cmpi:cmpiLMI_SSSD"),

+      Abstract, Description("Base class for SSSD's components.")]

+     class LMI_SSSDComponent : CIM_ManagedElement

+     {

+         [Key, Description("Name of the SSSD component.")]

+         string Name;

+ 

+         [Description("Type of the SSSD component."),

+          ValueMap { "0", "1", "2" },

+          Values { "Monitor", "Responder", "Backend" }]

+         uint16 Type;

+ 

+         [BitValues{"Reserved",

+                    "Reserved",

+                    "Reserved",

+                    "Reserved",

+                    "Fatal failures",

+                    "Critical failures",

+                    "Operation failures",

+                    "Minor failures",

+                    "Configuration settings",

+                    "Function data",

+                    "Trace function",

+                    "Reserved",

+                    "Trace libraries",

+                    "Trace internal",

+                    "Trace all",

+                    "Reserved"},

+          Description("Debug level used within this component.")]

+         uint16 DebugLevel;

+ 

+         [Description("True if this process is enabled at SSSD startup and false "

+                      "otherwise.")]

+         boolean IsEnabled;

+ 

+         [Description("Permanently change debug level of this component."),

+          ValueMap { "0", "1", "2", "3" },

+          Values { "Success", "Failed", "Invalid arguments", "I/O error" }]

+         uint32 SetDebugLevelPermanently([In] uint16 debug_level);

+         

+         [Description("Change debug level of this component but switch it back "

+                      "to the original value when SSSD is restarted."),

+          ValueMap { "0", "1", "2", "3" },

+          Values { "Success", "Failed", "Invalid arguments", "I/O error" }]

+         uint32 SetDebugLevelTemporarily([In] uint16 debug_level);

+ 

+         [Description("Enable this component. SSSD has to be restarted in order "

+                      "this change to take any effect."),

+          ValueMap { "0", "1", "3" },

+          Values { "Success", "Failed", "I/O error" }]

+         uint32 Enable();

+         

+         [Description("Disable this component. SSSD has to be restarted in order "

+                      "this change to take any effect."),

+          ValueMap { "0", "1", "3" },

+          Values { "Success", "Failed", "I/O error" }]

+         uint32 Disable();

+     };

+ 

+     [Version("0.1.0"), Provider("cmpi:cmpiLMI_SSSD"),

+      Description("SSSD monitor. An SSSD component that executes the other "

+                  "components and makes sure they stay running. This component "

+                  "can not be disabled.")]

+     class LMI_SSSDMonitor : LMI_SSSDComponent

+     {

+ 

+     };

+ 

+     [Version("0.1.0"), Provider("cmpi:cmpiLMI_SSSD"),

+      Description("SSSD responder. An SSSD component that implements one of the "

+                  "supported services and provides data to clients.")]

+     class LMI_SSSDResponder : LMI_SSSDComponent

+     {

+ 

+     };

+ 

+     [Version("0.1.0"), Provider("cmpi:cmpiLMI_SSSD"),

+      Description("SSSD backend. An SSSD component that manages data from one "

+                  "domain and its subdomains.")]

+     class LMI_SSSDBackend : LMI_SSSDComponent

+     {

+ 

+     };

+ 

+     [Version("0.1.0"), Provider("cmpi:cmpiLMI_SSSD"),

+      Description("Data provider module information.")]

+     class LMI_SSSDProvider

+     {

+         [Key, Description("Name of data class handled by the provider.")]

+         string Type;

+ 

+         [Key, Description("Name of the module that provides the desired data.")]

+         string Module;

+     };

+ 

+     [Version("0.1.0"), Provider("cmpi:cmpiLMI_SSSD"),

+      Description("SSSD domain.")]

+     class LMI_SSSDDomain : CIM_ManagedElement

+     {

+         [Key, Description("Name of the domain.")]

+         string Name;

+ 

+         [Description("List of primary servers for this domain.")]

+         string PrimaryServers[];

+ 

+         [Description("List of backup servers for this domain.")]

+         string BackupServers[];

+ 

+         [Description("The Kerberos realm this domain is configured with.")]

+         string Realm;

+ 

+         [Description("The domain forest this domain belongs to.")]

+         string Forest;

+ 

+         [Description("Name of the parent domain. It is not set if this "

+                      "domain is on top of the domain hierarchy.")]

+         string ParentDomain;

+ 

+         [Description("True if this is an autodiscovered subdomain.")]

+         boolean IsSubdomain;

+ 

+         [Description("Minimum UID and GID value for this domain.")]

+         uint32 MinId;

+ 

+         [Description("Maximum UID and GID value for this domain.")]

+         uint32 MaxId;

+ 

+         [Description("True if this domain supports enumeration.")]

+         boolean Enumerate;

+ 

+         [Description("True if objects from this domain can be accessed only via "

+                      "fully qualified name.")]

+         boolean UseFullyQualifiedNames;

+ 

+         [Description("The login format this domain expects.")]

+         string LoginFormat;

+ 

+         [Description("Format of fully qualified name this domain uses.")]

+         string FullyQualifiedNameFormat;

+     };

+ 

+     [Version("0.1.0"), Provider("cmpi:cmpiLMI_SSSD"), Association,

+      Description("All available SSSD components.")]

+     class LMI_SSSDAvailableComponent

+     {

+         [Key, Min(1), Max(1)]

+         LMI_SSSDService REF SSSD;

+         

+         [Key]

+         LMI_SSSDComponent REF Component;

+     };

+ 

+     [Version("0.1.0"), Provider("cmpi:cmpiLMI_SSSD"), Association,

+      Description("Data provider modules configured for given backend.")]

+     class LMI_SSSDBackendProvider

+     {

+         [Key, Max(1)]

+         LMI_SSSDBackend REF Backend;

+         

+         [Key]

+         LMI_SSSDProvider REF Provider;

+     };

+ 

+     [Version("0.1.0"), Provider("cmpi:cmpiLMI_SSSD"), Association,

+      Description("All domains managed by SSSD.")]

+     class LMI_SSSDAvailableDomain

+     {

+         [Key, Min(1), Max(1)]

+         LMI_SSSDService REF SSSD;

+         

+         [Key]

+         LMI_SSSDDomain REF Domain;

+     };

+ 

+     [Version("0.1.0"), Provider("cmpi:cmpiLMI_SSSD"), Association,

+      Description("All top level domains associated with given backend.")]

+     class LMI_SSSDBackendDomain

+     {

+         [Key, Max(1)]

+         LMI_SSSDBackend REF Backend;

+         

+         [Key, Max(1)]

+         LMI_SSSDDomain REF Domain;

+     };

+ 

+     [Version("0.1.0"), Provider("cmpi:cmpiLMI_SSSD"), Association,

+      Description("All subdomains associated with given parent domain.")]

+     class LMI_SSSDDomainSubdomain

+     {

+         [Key, Max(1)]

+         LMI_SSSDDomain REF ParentDomain;

+         

+         [Key]

+         LMI_SSSDDomain REF Subdomain;

+     };

+ 

+     [Version("0.1.0"), Provider("cmpi:cmpiLMI_SSSD"), Association]

+     class LMI_HostedSSSDService: CIM_HostedService

+     {

+       [Override("Antecedent"),

+        Description("The hosting System.") ]

+       CIM_ComputerSystem REF Antecedent;

+ 

+       [Override("Dependent"),

+        Description("Instance of SSSD service.")]

+       LMI_SSSDService REF Dependent;

+     };

+ 

+ Authors

+ -------

+ 

+ -  Pavel Březina <`pbrezina@redhat.com <mailto:pbrezina@redhat.com>`__>

+ 

+ .. .. |image0| image:: https://fedorahosted.org/sssd/raw-attachment/wiki/DesignDocs/OpenLMIProvider/uml.png

+ ..    :target: https://fedorahosted.org/sssd/attachment/wiki/DesignDocs/OpenLMIProvider/uml.png

@@ -0,0 +1,223 @@ 

+ OTP Related Improvements

+ ========================

+ 

+ Related Ticket(s):

+ 

+ -  `Investigate using the krb5 responder for driving the PAM

+    conversation with OTPs <https://pagure.io/SSSD/sssd/issue/2335>`__

+ -  `Interaction with SSSD, GDM, OTP and GNOME

+    Keyring <https://pagure.io/SSSD/sssd/issue/2278>`__

+ 

+ Problem Statement

+ -----------------

+ 

+ One-Time-Passwords (OTPs) are typically used as one part of Two-Factor

+ Authentication (2FA). In most cases the second factor is a long term

+ password of the user. In general, the combined two factors are seen by

+ the client as an opaque blob which is sent together with the user name

+ to an authentication service which decides whether the authentication is

+ correct or not and returns the result to the client.

+ 

+ In modern environments there are a number of use cases where only the

+ long term password factor is needed:

+ 

+ -  offline authentication: the 2FA authentication service is not

+    available and the long term password should be compared with the

+    hashed copy

+ -  unlocking key-rings, encrypted devices: the long term password is

+    used to protect key-rings, files or file-systems; changes of the long

+    term password should change the encryption key for the other uses as

+    well

+ 

+ The most obvious way to get the long term password is to prompt the user

+ separately for the long term password and the OTP. But for historical

+ reasons most user interfaces and, more importantly, most network

+ protocols expect a single string as the password. While it would be

+ possible to modify the local user interfaces (graphical and command

+ line) to handle the two factors separately, it is next to impossible to

+ cover all network protocols. This means we always have to handle the

+ case where both factors are only available in a single string as a

+ fallback, and having both factors already split will just be a special

+ case.

+ 

+ It is common practice that when using 2FA with a long term password and

+ an OTP (mostly generated by a hardware token), the long term password

+ factor is entered first at the password prompt, followed by the OTP. In

+ enterprise environments typically one brand of hardware tokens is used,

+ which means that the OTP factor has a known number of characters. With

+ this kind of information the combined string can be split into the long

+ term and OTP factors heuristically. Additionally, if the combined string

+ was split successfully once, the size of the OTP factor can be stored in

+ the cache, because in general it will not change as long as the same

+ hardware token is used.

+ 

+ If splitting is not possible other consumers of the long term password

+ should be made aware that they have to request the password on their own

+ if needed.

+ 

+ Since OTPs can only be used once, SSSD must avoid using an OTP a second

+ time. This is currently an issue when changing the long term password

+ via Kerberos. After the password is changed successfully, SSSD tries to

+ get a fresh TGT with the new password. This should not happen when an

+ OTP is used; instead, the user should be asked to enter fresh

+ credentials (the new long term password and a valid OTP).

+ 

+ Overview of the Solution

+ ------------------------

+ 

+ Implementation details

+ ----------------------

+ 

+ Removing password with OTP factor from the PAM stack

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ If the combined password cannot be split into the long term and OTP

+ factors, a new PAM response type should be sent back to pam\_sss to

+ indicate that the combined password should be removed, so that other PAM

+ modules (pam-gnome-keyring, pam\_mount) cannot use it anymore and have

+ to request a password on their own. It might be a good idea to allow an

+ optional string in this new PAM response. If the password can be split,

+ the string can contain the long term password, which should replace the

+ combined password on the PAM stack. As an alternative, an unsigned

+ integer which indicates where the long term password ends can be used

+ instead. Then pam\_sss will shorten the combined password to the given

+ length.

+ 

+ In sssd-1.12, we will remove the password from the PAM stack when an OTP

+ is used, to make sure use-cases like *gnome-keyring* are not broken. We

+ would need more time for the implementation of the heuristics and proper

+ testing. Currently, the *krb5\_child* returns whether an OTP was used

+ during authentication (details in function

+ *parse\_krb5\_child\_response*). This OTP flag is used just in the

+ function *krb5\_auth\_done*. We will pass the OTP flag to the PAM

+ responder (*sssd\_pam*) and from the PAM responder to the PAM client

+ (*pam\_sss.so*). If the PAM client detects that an OTP was used, it will

+ remove the password from the auth\_token.

+ 

+ Do not request a new TGT after a successful password change

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ In the OTP case, asking for a new TGT can easily be skipped in

+ krb5\_child, but this will leave the user with an invalid TGT. A new PAM

+ response type should indicate that this is the case. It has to be

+ evaluated whether it is possible with PAM to get a fresh authentication

+ of the user, or whether only a message indicating that the TGT might be

+ invalid and should be refreshed manually can be sent to the user.

+ 

+ Heuristics

+ ----------

+ 

+ There are a number of heuristics that can be employed, depending on the

+ type of tokens used and whether the type is known or not.

+ 

+ Hints to split the combined password

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Fixed number of characters in the OTP

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ If the token type is known and has a fixed number of characters, then

+ the client can simply be configured with a hard number and the string

+ provided by the user split counting from the end. Knowing the minimum

+ password length for the actual user password also allows detecting

+ errors in entering the credentials (like forgetting to actually type the

+ OTP) so that a partial input can be discarded immediately.

+ 

+ For example, if we know the OTP is 6 chars and the password policy says

+ that a password must be at least 8 chars long, then an input of

+ "CoolPassword" would be immediately discarded as it is not at least 14

+ chars long (min 8 + 6 for the OTP), while "CoolPassword123456" would be

+ split into "CoolPassword" and "123456".

+ 

+ Fixed set of characters in the OTP

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ If it is known that the token's OTP is always only digits, then this

+ fact can be used to split the last part of the string when the exact

+ length is not known. This heuristic alone is not sufficient, as the user

+ password may contain trailing digits; however, it may be combined with

+ other heuristics to improve them.

+ 

+ If the length of the OTP is known or is within a small range (for

+ example, only 6 or 8 digit tokens are available), then strings like

+ "CoolPassword123456" or "CoolPassword1234567" are easy to split. The

+ first is "CoolPassword"+"123456", the second is

+ "CoolPassword1"+"234567". A string like "CoolPassword1234T56" would be

+ easy to discard as faulty, as there is a non-digit within the last 6

+ chars. However, "CoolPassword12345678" may be split both as

+ "CoolPassword12" "345678" or "CoolPassword" "12345678" and would need

+ additional heuristics.

+ 

+ Previous authentication memory

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ If the one shot heuristic fails, we can store hints that may allow us to

+ succeed in successive authentication attempts. If we do not know the

+ token type, length or constraints on the character types used, we can

+ perform a wild guess as the first authentication attempt by applying a

+ "most common" guess set and then store a number of hashes that will aid

+ us in a follow-up attempt.

+ 

+ For example, we have no knowledge of the token and the user enters

+ "CoolPassword12345678". We can assume a default heuristic of "6 digits

+ OTP" and this would split the string in "CoolPassword12" + "345678",

+ however if we got it wrong and the token was 8 digits long ("12345678")

+ then we would fail auth and be none the wiser.

+ 

+ Therefore, before sending out the authentication request we gather and

+ store heuristics of our own in the form of hashes. We will assume that

+ in a 2FA environment there exist reasonable minimum limits to both the

+ password and the OTP length; for example, we assume that passwords and

+ OTPs are each at least 6 chars long.

+ 

+ With this assumption we store a hints list of salted hashes of the

+ following strings: ::

+ 

+      "CoolPassword12"

+      "CoolPassword1"

+      "CoolPassword"

+      "CoolPasswor"

+      "CoolPasswo"

+      "CoolPassw"

+      "CoolPass"

+      "CoolPas"

+      "CoolPa"

+ 

+ The order in which the strings are stored on the system may be

+ intentionally scrambled to prevent faster offline attacks on the shorter

+ hash.

+ 

+ If auth succeeds we discard the hints and store only "CoolPassword12" as

+ an offline password hash. If auth fails we keep the hints for the next

+ try and just fail authentication (yes even if the Password+OTP was

+ right).

+ 

+ On the following authentication attempt we can use the hints to aid us

+ in properly splitting off the OTP. If the user provides us with

+ "CoolPassword19283745" we can try to match it against the hints,

+ splitting and hashing backwards from longest to shortest. We'll try

+ "CoolPassword19" and it will fail to match; then we'll try

+ "CoolPassword1" and it will match one of the hints, so we will assume

+ that is the password and take the remainder (9283745) as the OTP.

+ 
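The hint generation and matching described above can be sketched as follows. This is illustrative code, not SSSD's implementation: the function names are made up, and a trivial non-cryptographic hash stands in for the salted hashes the text calls for.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MIN_PW    6   /* assumed minimum password length */
#define MIN_OTP   6   /* assumed minimum OTP length */
#define MAX_HINTS 32

/* Stand-in for a salted cryptographic hash; a real implementation
 * would use something like sha512-crypt with a per-entry salt. */
static unsigned long toy_hash(const char *s, size_t len)
{
    unsigned long h = 5381;
    for (size_t i = 0; i < len; i++) {
        h = h * 33 + (unsigned char)s[i];
    }
    return h;
}

/* Store one hint hash per candidate password length, from MIN_PW up to
 * strlen(input) - MIN_OTP. Returns the number of hints stored. */
static size_t store_hints(const char *input, unsigned long *hints)
{
    size_t len = strlen(input);
    size_t n = 0;

    if (len < MIN_PW + MIN_OTP) return 0;
    for (size_t l = MIN_PW; l <= len - MIN_OTP && n < MAX_HINTS; l++) {
        hints[n++] = toy_hash(input, l);
    }
    return n;
}

/* On a later attempt, try candidate splits from the longest password
 * down to the shortest; return the matching password length, or 0 if
 * no hint matches. */
static size_t match_hints(const char *input, const unsigned long *hints,
                          size_t n)
{
    size_t len = strlen(input);

    if (len < MIN_PW + MIN_OTP) return 0;
    for (size_t l = len - MIN_OTP; l >= MIN_PW; l--) {
        unsigned long h = toy_hash(input, l);
        for (size_t i = 0; i < n; i++) {
            if (hints[i] == h) return l;  /* password is input[0..l) */
        }
    }
    return 0;
}
```

Storing hints for "CoolPassword12345678" and later matching "CoolPassword19283745" recovers the 13-char split "CoolPassword1", as in the example above.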

+ A user mistyping the password on the first attempt may end up causing a

+ mismatch in a later attempt; we can only clear the previous hints and

+ fail the auth until the user gets 2 consecutive attempts with different

+ OTPs right. Once one authentication attempt succeeds and we store the

+ offline password hash we'll have a stronger hint for the future, as

+ we'll have a known good hash. We can also save the OTP length as a hint

+ and check that it does not vary in following successful authentication

+ attempts; if it varies then we'll change the hint to explicitly list the

+ known good lengths used so far as future hints.

+ 

+ If the user changes their password on a different system or uses

+ multiple OTP tokens of varying type, the hints may not work well. So if

+ the offline password hash does not match what the user types we need to

+ start from scratch: try our best guess and save a new list of hints.

+ 

+ This process is not foolproof, but given enough hints (either

+ discovered or provided as known facts) we could have a system that works

+ reasonably well.

+ 

+ How to test

+ -----------

+ 

+ Author(s)

+ ~~~~~~~~~

+ 

+ Sumit Bose <`sbose@redhat.com <mailto:sbose@redhat.com>`__>

@@ -0,0 +1,346 @@ 

+ PAM Conversation for OTP/Two-Factor-Authentication

+ ==================================================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2335 <https://pagure.io/SSSD/sssd/issue/2335>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ Two-Factor-Authentication (2FA) typically uses a long-term password or

+ PIN and a One-Time-Password (OTP) which in general is generated by a

+ small device. To be backward compatible and to allow 2FA in existing

+ applications, both factors can be entered one after the other in a

+ single password prompt. This single string is then evaluated by the

+ authentication backend.

+ 

+ On desktop systems where a single password (One-Factor-Authentication -

+ 1FA) is used, this password is often reused for other purposes to

+ improve the user experience. For example, to

+ 

+ -  unlock a keyring

+ -  decrypt files or the complete home directory

+ -  allow authentication even if the authentication backend is not

+    reachable (offline authentication).

+ 

+ These convenience features are not available if 2FA is used with both

+ factors combined in a single string, because there is no general rule

+ that would allow splitting the long-term and the one-time part. Only the

+ authentication backend knows how to handle it. To make these features

+ possible again with 2FA, the user must enter the two factors

+ individually into two separate prompts. This design page will show how

+ this can be achieved with the help of PAM and SSSD.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ Unlock user's keyring

+ ^^^^^^^^^^^^^^^^^^^^^

+ 

+ Assume a user with 2FA enabled who logs into a desktop session. If

+ the two factors are entered into a single prompt, the resulting string

+ cannot be used as a password for the desktop keyring application because

+ it will change at every login. To avoid issues, SSSD will

+ automatically remove the password item from the PAM environment in this

+ case (see `#2287 <https://pagure.io/SSSD/sssd/issue/2287>`__ for

+ details).

+ 

+ If the two factors are entered separately, the first factor, the

+ long-term password, can be used as a password for the keyring

+ application because changes will be rare. In this case SSSD can put the

+ long-term password into the PAM environment so that the PAM modules of

+ the keyring application can pick it up at login time and there is no

+ need to unlock the keyring in a separate step.

+ 

+ Offline-authentication

+ ^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Assume a user with 2FA enabled who logs into a desktop session while

+ there is currently no connection to the central authentication

+ server. If the two factors are entered into a single prompt, the

+ resulting string can only be processed by the central authentication

+ server and a loss of connection will make authentication and login

+ impossible. Even the SSSD offline-authentication feature won't help,

+ because SSSD only stores a hash of the password used for the last

+ successful authentication and compares it with the hash of the current

+ password. Since the combined password changes at every login, the

+ current combined password cannot be validated against any previously

+ used password.

+ 

+ If the two factors are entered separately, SSSD can save a hash of the

+ first factor and can compare the hashes of the first factor when

+ offline to allow at least access to the local machine. It has to be

+ noted that even if krb5\_store\_password\_if\_offline is set to true,

+ SSSD will not try to get a TGT when going online again, because in the

+ general case the second factor (OTP) might already be invalid.

+ 

+ Both use-cases mentioned above may only work if the first factor

+ (long-term password) is sufficiently long. A 4-digit PIN used by some

+ OTP systems is not secure enough to be used as a password for a keyring

+ or to allow local access.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ One of the design principles of SSSD's PAM module pam\_sss is that it

+ should not make any decisions on its own but leave them to SSSD. As a

+ consequence, pam\_sss cannot decide which type of password prompt should

+ be shown to the user but must ask SSSD first. Currently the first

+ communication between pam\_sss and SSSD's PAM responder happens after

+ the user has entered the password. Hence a new request, a

+ pre-authentication request, to the PAM responder must be added before

+ the user is prompted for the password. The PAM responder can then relay

+ the request to a suitable backend, where it can be evaluated which type

+ of prompt should be shown to the user. The result can be sent back to

+ pam\_sss in a PAM response message, which is already used to return

+ other types of messages to pam\_sss. This message can be used to send

+ additional hints, e.g. the type or vendor of the expected OTP hardware

+ token, to the user.

+ 

+ Based on the response, pam\_sss will ask the user for a single password

+ or for the two factors in individual prompts. If only the first factor

+ is entered and the second is empty, the input will be treated as a

+ single password. This might happen if the user accidentally entered both

+ factors together or if applications or protocols (ssh, ftp) are

+ configured for, or can handle, only a single password prompt.

+ 

+ To keep the delays due to the new request to a minimum, pam\_sss should

+ only run it if the backend really supports it. Additionally, it should

+ be possible to disable the pre-authentication request completely with a

+ new option for pam\_sss.

+ 

+ In addition to the authentication dialog the password change dialog

+ should respect the splitting of the two factors as well.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Making sure that PAM calls SSSD to ask for credentials for SSSD users

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ PAM allows configuring multiple different authentication methods but

+ ideally asks the user only once for a password (or other credentials).

+ Typically the first configured authentication method will ask for a

+ password and, if the user is not known to that authentication method,

+ the password is passed on to the next authentication method. Obviously

+ this only works well with a single type of credentials.

+ 

+ In our case we want to prompt the user differently depending on whether

+ 1FA or 2FA is configured for the user. Typically pam\_unix is the first

+ authentication module to make sure the authentication of local users

+ (especially root) is not affected by other modules. But since pam\_unix

+ does not know anything about SSSD users or 2FA we have to make sure that

+ pam\_unix will not ask for a password for SSSD users. Instead of putting

+ pam\_sss in front of pam\_unix we would like to use pam\_localuser to

+ skip pam\_unix for non-local users. A PAM auth configuration might look

+ like this ::

+ 

+     auth        required               pam_env.so

+     auth        [default=1 success=ok] pam_localuser.so

+     auth        sufficient             pam_unix.so nullok try_first_pass

+     auth        requisite              pam_succeed_if.so uid >= 1000 quiet_success

+     auth        sufficient             pam_sss.so

+     auth        required               pam_deny.so

+ 

+ If the user is in /etc/passwd, pam\_localuser will return success and

+ pam\_unix will be called. Otherwise the next entry (default=1) will be

+ skipped, which is pam\_unix in this case. The next module for a user

+ who does not come from /etc/passwd is pam\_succeed\_if. I think it is

+ a good idea to keep pam\_succeed\_if to preserve the separation between

+ local users (uid < 1000) and remote users (uid >= 1000).

+ 

+ For the time being we keep the PAM password section (for password

+ changes) as is because it is already used to handle changing the

+ long-term password.

+ 

+ Handling the two authentication factors

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Both the wire protocol between pam\_sss and the PAM responder and,

+ internally in SSSD, the sss\_auth\_token struct handle the credentials

+ as a blob with a length and a type. Currently the blob contains either

+ the password or is NULL in case of no password (there is a special usage

+ where it contains a Kerberos credential cache identification).

+ 

+ Adding the two authentication factors to those structures can be

+ achieved without modifying them by using a new type for 2FA and creating

+ a blob which starts with two 32-bit unsigned integer values containing

+ the sizes of the first and second authentication factors respectively,

+ followed by the first factor and finally the second factor. ::

+ 

+     uint32_t | uint32_t | uint8_t[6] | uint8_t[6]

+     ---------|----------|------------|-----------

+     0x06     | 0x06     | abcdef     | 12345\0

+ 

+ As shown, the first and second factor may or may not include a trailing

+ \\0 in the blob. But calls which decompose the blob into its components

+ must assure that there is a trailing \\0 if strings are expected.

+ 
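A minimal sketch of such packaging and un-packaging follows. The helper names are hypothetical, not the actual SSSD calls; host byte order is assumed for simplicity, and both lengths here include the terminating \0, which is one of the allowed variants described above.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Package the two factors into one blob:
 * [len1 (uint32)][len2 (uint32)][factor1 bytes][factor2 bytes].
 * Both lengths include the terminating \0 so the consumer can treat
 * the factors as strings directly. */
static uint8_t *pack_2fa_blob(const char *f1, const char *f2,
                              size_t *blob_len)
{
    uint32_t l1 = (uint32_t)strlen(f1) + 1;
    uint32_t l2 = (uint32_t)strlen(f2) + 1;
    uint8_t *blob = malloc(2 * sizeof(uint32_t) + l1 + l2);

    if (blob == NULL) return NULL;
    memcpy(blob, &l1, sizeof(uint32_t));
    memcpy(blob + sizeof(uint32_t), &l2, sizeof(uint32_t));
    memcpy(blob + 2 * sizeof(uint32_t), f1, l1);
    memcpy(blob + 2 * sizeof(uint32_t) + l1, f2, l2);
    *blob_len = 2 * sizeof(uint32_t) + l1 + l2;
    return blob;
}

/* Decompose the blob again; the returned pointers point into the blob. */
static void unpack_2fa_blob(const uint8_t *blob,
                            const char **f1, const char **f2)
{
    uint32_t l1;

    memcpy(&l1, blob, sizeof(uint32_t));
    *f1 = (const char *)(blob + 2 * sizeof(uint32_t));
    *f2 = *f1 + l1;
}
```

Only this packing and un-packing step is new; everything in between can keep treating the blob as an opaque length-plus-type buffer, as the text notes.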

+ With this scheme only packaging and un-packaging the two factors has to

+ be added to existing or new calls but all other internal handling like

+ sending the data from the responder to the backends can be left

+ unchanged.

+ 

+ Backends which should handle 2FA must be made aware of the new

+ authentication token type.

+ 

+ The pre-authentication request

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The pre-authentication request will follow the same path as the

+ authentication request, with an empty password and with type

+ SSS\_PAM\_PREAUTH instead of SSS\_PAM\_AUTHENTICATE. It is up to the

+ backend if and how this request will be handled. Currently only the IPA

+ auth provider will support the pre-auth request, in the sense that it

+ can send different results based on the expected authentication type

+ (1FA, 2FA) back to the client. Since the IPA provider basically uses the

+ generic krb5 auth provider, the krb5 auth provider will support the

+ pre-auth request as well.

+ 

+ In case of 2FA the IPA provider will send back a PAM response of the

+ type SSS\_PAM\_OTP\_INFO with optional token\_id, vendor name and

+ challenge, so that pam\_sss can give additional hints to the user, and

+ an unsigned 32-bit integer value indicating the type of the optional

+ data. This indicator will make it easier to add more data in the future

+ or to just indicate that the user uses 2FA but the backend is offline.

+ 

+ If 2FA is not enabled for the user or errors occur, just PAM\_SUCCESS

+ will be returned. In this case pam\_sss will just ask for a single

+ password.

+ 

+ If the backend is offline, the PAM responder will tell the client that

+ only the first factor is needed for local authentication, with the help

+ of a special SSS\_PAM\_OTP\_INFO message. To achieve this, the type of

+ the hashed authentication token in the cache must be saved.

+ Additionally, the length of the second factor should be saved in the

+ cache to allow splitting a combined password which might be entered by

+ the user accidentally or via services where special prompting might not

+ be available, e.g. ssh. If the second factor varies in size this scheme

+ will fail, but saving the length of the first factor instead would make

+ an offline attack against the hashed password much easier.

+ 

+ Since the pre-auth request is an additional round-trip from pam\_sss to

+ the KDC and back, it might delay the logon process a bit. To avoid this

+ in environments where only 1FA is used, a pam\_sss option,

+ *disable\_preauth*, can disable the pre-auth request completely.

+ Additionally I would suggest a more dynamic solution where the pre-auth

+ request is only sent if a special file, e.g.

+ /var/lib/sssd/pubconf/do\_pam\_preauth, exists. The IPA provider can

+ create this file at startup if 2FA is supported.

+ 

+ Special use of the first factor (long term password)

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Cached password hash for offline-authentication

+ '''''''''''''''''''''''''''''''''''''''''''''''

+ 

+ If authentication was successful, the *cache\_credentials* option is set

+ to *true* and the first factor is at least

+ *`minimal\_password\_length <https://fedorahosted.org/sssd#minimal_password_length>`__*

+ characters long, SSSD will save a hashed version of the first factor to

+ the user's cache entry, as is done for the 1FA password.

+ 

+ PAM

+ '''

+ 

+ If authentication is successful and the *forward\_pass* option is given

+ for pam\_sss, the first factor will be saved in the PAM\_AUTHTOK item so

+ that other modules in the PAM stack can use it. **QUESTION: shall

+ we (authconfig) add forward\_pass by default? Currently it is not.**

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ pam\_sss

+ ^^^^^^^^

+ 

+ New options:

+ 

+ -  *disable\_preauth* will unconditionally disable the

+    pre-authentication request

+ -  *use\_2fa* will always ask for two authentication factors; this might

+    only be useful for testing

+ 

+ sssd.conf

+ ^^^^^^^^^

+ 

+ New option:

+ 

+ -  *minimal\_password\_length* will let the PAM responder save a hash of

+    the password only if it has a minimal length. Additionally it might

+    indicate to pam\_sss to remove passwords from the PAM environment

+    which are shorter. **Question: We can limit this option to the first

+    factor of a 2FA authentication. Although it might be useful for 1FA

+    passwords as well, it might introduce a regression in existing

+    installations.**

+ 

+ 

+ Changing the first factor

+ ^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ It is already possible to change the long-term password (first factor),

+ the current scheme will not be changed here.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ Prerequisites

+ ^^^^^^^^^^^^^

+ 

+ Create a user with 2FA/OTP authentication as described e.g. in

+ `http://www.freeipa.org/page/V4/OTP#Configuration <http://www.freeipa.org/page/V4/OTP#Configuration>`__

+ 

+ Login prompt

+ ^^^^^^^^^^^^

+ 

+ If 2FA is enabled for a user there should be two separate prompts for

+ the two authentication factors for services which support special

+ prompting. This includes e.g. gdm and su. ssh can only support this if

+ ChallengeResponseAuthentication is enabled on the server side.

+ Nevertheless, even if ChallengeResponseAuthentication is not enabled,

+ ssh should allow login if both factors are given at the password prompt

+ in a single string.

+ 

+ For users without 2FA the single password prompt should be seen.

+ 

+ ::

+ 

+     # su - otpuser

+     First factor:

+     Second factor:

+     sh$

+ 

+ ::

+ 

+     # su - user

+     Password:

+     sh$

+ 

+ If both factors are entered at the *First factor* prompt and the second

+ factor prompt is empty, authentication should be successful but it

+ cannot be expected that the user's keyring is unlocked or that

+ offline-authentication will be available.

+ 

+ Unlocking user's keyring

+ ^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ If an otpuser logs in with gdm and enters the two authentication factors

+ separately in the expected prompts the keyring of the user should be

+ unlocked automatically and no additional password prompt should be seen

+ after logging in.

+ 

+ Offline authentication

+ ^^^^^^^^^^^^^^^^^^^^^^

+ 

+ If an otpuser logs in with an application which supports special

+ prompting, e.g. gdm or su, and the SSSD configuration option

+ *cache\_credentials* is set to *True*, SSSD will save a hash of the

+ first factor in the cache to allow offline authentication. If the

+ system later goes offline, authentication should still be possible with

+ the first authentication factor. Only a prompt for the first factor

+ should be shown by applications which support special prompting.

+ 

+ Authors

+ ~~~~~~~

+ 

+ Sumit Bose <`sbose@redhat.com <mailto:sbose@redhat.com>`__>

@@ -0,0 +1,126 @@ 

+ Periodic task API

+ =================

+ 

+ Related ticket(s):

+ 

+ -  `unite periodic refresh

+    API <https://pagure.io/SSSD/sssd/issue/1891>`__

+ 

+ Problem Statement

+ -----------------

+ 

+ SSSD contains several periodic tasks, each implementing its own custom

+ periodic API. These APIs are more or less sophisticated, but they all do

+ the same thing.

+ 

+ Current periodic tasks are:

+ 

+ -  Enumeration

+ -  Dynamic DNS updates

+ -  SUDO - full and smart refresh

+ -  Refresh of expired NSS entries

+ 

+ We want to replace these individual implementations with one

+ back-end-wide API.

+ 

+ Implementation details

+ ----------------------

+ 

+ ::

+ 

+     New error code:

+     - ERR_STOP_PERIODIC_TASK

+ 

+     struct be_ptask;

+ 

+     typedef struct tevent_req *

+     (*be_ptask_send_t)(TALLOC_CTX *mem_ctx,

+                        struct be_ctx *be_ctx,

+                        struct be_ptask *be_ptask,

+                        void *pvt);

+ 

+     typedef errno_t

+     (*be_ptask_recv_t)(struct tevent_req *req);

+ 

+     enum be_ptask_offline {

+         BE_PTASK_OFFLINE_SKIP,

+         BE_PTASK_OFFLINE_DISABLE,

+         BE_PTASK_OFFLINE_EXECUTE

+     };

+ 

+     errno_t be_ptask_create(TALLOC_CTX *mem_ctx,

+                             struct be_ctx *be_ctx,

+                             time_t period,

+                             time_t first_delay,

+                             time_t enabled_delay,

+                             time_t timeout,

+                             enum be_ptask_offline offline,

+                             be_ptask_send_t send,

+                             be_ptask_recv_t recv,

+                             void *pvt,

+                             const char *name,

+                             struct be_ptask **_task);

+ 

+     void be_ptask_enable(struct be_ptask *task);

+     void be_ptask_disable(struct be_ptask *task);

+     void be_ptask_destroy(struct be_ptask **task);

+ 

+ Terminology

+ ~~~~~~~~~~~

+ 

+ -  task: object of type be\_ptask

+ -  request: tevent request that is fired periodically and is managed by

+    task

+ 

+ API

+ ~~~

+ 

+ -  *struct be\_ptask\_task* is encapsulated.

+ -  *be\_ptask\_create()* creates and starts a new periodic task

+ -  *be\_ptask\_enable(task)* enables *task* and schedules the next

+    execution *enabled\_delay* seconds from now

+ -  *be\_ptask\_disable(task)* disables *task*, cancels the current timer

+    and waits until it is enabled again

+ -  *be\_ptask\_destroy(task)* destroys *task* and sets it to *NULL*

+ 

+ Schedule rules

+ ~~~~~~~~~~~~~~

+ 

+ -  the first execution is scheduled *first\_delay* seconds after the

+    task is created

+ -  if the request returns EOK, it will be scheduled again at

+    'last\_execution\_time + period'

+ -  if the request returns ERR\_STOP\_PERIODIC\_TASK, the task will be

+    terminated

+ -  if the request returns another error code (i.e. a non-fatal failure),

+    it will be rescheduled to 'now + period'

+ -  if the request does not complete in *timeout* seconds, it will be

+    cancelled and rescheduled to 'now + period'

+ -  if the task is re-enabled, it will be scheduled again at 'now +

+    enabled\_delay'

+ 
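The schedule rules above boil down to a small next-run computation, sketched here as a self-contained helper. The function name and the stand-in values for EOK and ERR_STOP_PERIODIC_TASK are illustrative; the real logic lives inside be_ptask's timer handling.

```c
#include <assert.h>
#include <time.h>

#define EOK 0
#define ERR_STOP_PERIODIC_TASK 1001  /* stand-in value for the new code */

/* Compute the next scheduled execution according to the rules above.
 * Returns (time_t)-1 when the task must be terminated. The timeout
 * case is handled like a non-fatal failure: 'now + period'. */
static time_t ptask_next_run(int ret, time_t last_execution, time_t now,
                             time_t period)
{
    if (ret == EOK) {
        return last_execution + period;  /* success: relative to start */
    }
    if (ret == ERR_STOP_PERIODIC_TASK) {
        return (time_t)-1;               /* fatal: terminate the task */
    }
    return now + period;                 /* non-fatal failure / timeout */
}
```

For a task started at t=100 with a 60-second period that finishes at t=130, success schedules the next run at t=160, while a non-fatal error pushes it to t=190.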

+ When offline

+ ~~~~~~~~~~~~

+ 

+ Offline behaviour is controlled by *offline* parameter.

+ 

+ -  If *offline* is *BE\_PTASK\_OFFLINE\_EXECUTE* and back end is

+    offline, current request will be executed as planned.

+ -  If *offline* is *BE\_PTASK\_OFFLINE\_SKIP* and back end is offline,

+    current request will be skipped and rescheduled to 'now + period'.

+ -  If *offline* is *BE\_PTASK\_OFFLINE\_DISABLE*, an offline and online

+    callback is registered. The task is disabled immediately when back

+    end goes offline and then enabled again when back end goes back

+    online.

+ 

+ Debugging

+ ~~~~~~~~~

+ 

+ The task will provide enough debugging information so we can know

+ exactly when a task is created and destroyed, when it is executed and

+ finished, and when it will be executed next.

+ 

+ Author(s)

+ ---------

+ 

+ Pavel Březina <`pbrezina@redhat.com <mailto:pbrezina@redhat.com>`__>

@@ -0,0 +1,91 @@ 

+ Periodical refresh of expired entries

+ =====================================

+ 

+ Related ticket(s):

+ 

+ -  `Add a task to the SSSD to periodically refresh cached

+    entries <https://pagure.io/SSSD/sssd/issue/1713>`__

+ 

+ Problem Statement

+ -----------------

+ 

+ Large deployments may suffer from latency when refreshing a big number

+ of expired entries, for instance during logins that involve refreshing

+ netgroups.

+ 

+ Overview of the solution

+ ------------------------

+ 

+ We will create a back end task that will periodically search for and

+ update all expired NSS entries. The periodic task itself is provider

+ independent and leverages the new `periodic tasks

+ API <https://docs.pagure.org/SSSD.sssd/design_pages/periodic_tasks.html>`__.

+ The task will fetch all expired entries and invoke a provider-specific

+ callback to update those entries.

+ 

+ Implementation details

+ ----------------------

+ 

+ ::

+ 

+     typedef struct tevent_req *

+     (*nss_refresh_records_send_t)(TALLOC_CTX *mem_ctx,

+                                   struct be_ctx *be_ctx,

+                                   const char **dn,

+                                   void *pvt);

+ 

+     typedef errno_t

+     (*nss_refresh_records_recv_t)(struct tevent_req *req);

+ 

+     struct nss_refresh_records_cb {

+         bool enabled;

+         nss_refresh_records_send_t send;

+         nss_refresh_records_recv_t recv;

+         void *pvt;

+     };

+ 

+     enum nss_refresh_type {

+         NSS_REFRESH_TYPE_USERS,

+         NSS_REFRESH_TYPE_GROUPS,

+         ... for all NSS objects

+ 

+         NSS_REFRESH_TYPE_SENTINEL

+     };

+ 

+     struct nss_refresh_records_ctx {

+         struct nss_refresh_records_cb callbacks[NSS_REFRESH_TYPE_SENTINEL];

+     };

+ 

+     struct nss_refresh_records_init();

+ 

+     errno_t

+     nss_refresh_records_add_cb(struct nss_refresh_records_ctx *ctx,

+                                enum nss_refresh_type type,

+                                nss_refresh_records_send_t send,

+                                nss_refresh_records_recv_t recv,

+                                void *pvt);

+ 

+     struct tevent_req *

+     nss_refresh_records_send(TALLOC_CTX *mem_ctx,

+                              struct be_ctx *be_ctx,

+                              void *pvt /* struct nss_refresh_records_ctx */);

+ 

+     errno_t

+     nss_refresh_records_recv(struct tevent_req *req);

+ 

+ A new nss\_refresh\_records\_ctx is created during back end startup and

+ made a member of be\_ctx. Every ID provider can install an update

+ function during its initialization via

+ *nss\_refresh\_records\_add\_cb()*. Every callback can be installed only

+ once. After all providers are initialized, the back end creates a new

+ periodic task for refreshing expired NSS entries.

+ 

+ *nss\_refresh\_records\_send()* will go through the callback list. When

+ a callback is enabled it will acquire a list of the distinguished names

+ of all expired entries and call the provider-specific request to refresh

+ them.

+ 
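The "installed only once" rule for the callback table could look roughly like this. The types are simplified stand-ins (the real callbacks carry tevent send/recv function pointers rather than just a private pointer), and the function name is abbreviated for the sketch.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <string.h>

/* Simplified sketch of the per-type callback table. */
enum nss_refresh_type {
    NSS_REFRESH_TYPE_USERS,
    NSS_REFRESH_TYPE_GROUPS,
    NSS_REFRESH_TYPE_SENTINEL
};

struct refresh_cb {
    bool enabled;
    void *pvt;   /* provider-specific private data */
};

struct refresh_ctx {
    struct refresh_cb callbacks[NSS_REFRESH_TYPE_SENTINEL];
};

/* Install a callback; every callback may be installed only once,
 * so a second registration for the same type is rejected. */
static int refresh_add_cb(struct refresh_ctx *ctx,
                          enum nss_refresh_type type, void *pvt)
{
    if (type >= NSS_REFRESH_TYPE_SENTINEL) return EINVAL;
    if (ctx->callbacks[type].enabled) return EEXIST;
    ctx->callbacks[type].enabled = true;
    ctx->callbacks[type].pvt = pvt;
    return 0;
}
```

The send request then simply iterates over `callbacks` and skips entries whose `enabled` flag is false, matching the behaviour described above.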

+ Author(s)

+ ---------

+ 

+ Pavel Březina <`pbrezina@redhat.com <mailto:pbrezina@redhat.com>`__>

@@ -0,0 +1,294 @@ 

+ Prompting For Multiple Authentication Types

+ ===========================================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2988 <https://pagure.io/SSSD/sssd/issue/2988>`__

+ 

+ Problem statement

+ -----------------

+ 

+ Currently FreeIPA only allows one authentication type at a given time

+ for a user. Even if both authentication types 'password' and 'otp' were

+ configured for the user, only 'otp' was allowed in that case. Because of

+ this, SSSD only had to prompt for either 'Password' or 'First factor'

+ and 'Second factor'.

+ 

+ A new version of FreeIPA will allow the user to authenticate with

+ different authentication types at the same time

+ (`https://pagure.io/freeipa/issue/433 <https://pagure.io/freeipa/issue/433>`__).

+ SSSD now must prompt the user differently to make clear which

+ authentication types are available for the user, ideally without making

+ the login process more complicated.

+ 

+ Use cases

+ ---------

+ 

+ (taken from

+ `http://www.freeipa.org/page/V4/Authentication\_Indicators <http://www.freeipa.org/page/V4/Authentication_Indicators>`__)

+ 

+ Strong Authentication on Selected System

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ User story

+ ''''''''''

+ 

+ As an Administrator, I want to set up authentication to a critical

+ system in my infrastructure (gateway VPN, accounting system) to only

+ allow IdM users authenticated via strong authentication methods (2FA).

+ I do not want to require strong authentication on other systems.

+ 

+ Description

+ '''''''''''

+ 

+ A realm has two servers configured for ssh which use the following

+ principals configured with authentication indicators: ::

+ 

+         host/lowsecurity.example.com []

+         host/highsecurity.example.com [otp radius]

+ 

+ When the administrator logs in using both his password and an OTP token,

+ he can access both systems via ssh. However, when the administrator logs

+ in using just a password, he can only access lowsecurity.example.com.

+ 

+ Overview of the solution

+ ------------------------

+ 

+ Depending on the available authentication types SSSD will show different

+ login prompts:

+ 

+ +------------------------+------------------------------------------------------------+

+ | Authentication types   | Login Prompt                                               |

+ +========================+============================================================+

+ | password               | Password:                                                  |

+ +------------------------+------------------------------------------------------------+

+ | otp                    | First factor:                                              |

+ |                        | Second factor:                                             |

+ +------------------------+------------------------------------------------------------+

+ | password + otp         | First factor or password:                                  |

+ |                        | Second factor, press return for Password authentication:   |

+ +------------------------+------------------------------------------------------------+

+ 
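The first-prompt selection in the table can be sketched as a simple mapping from the available authentication types. The constants and function are hypothetical names for this document, not the pam_sss implementation.

```c
#include <assert.h>
#include <string.h>

#define AUTH_TYPE_PASSWORD 0x01
#define AUTH_TYPE_OTP      0x02

/* Pick the first login prompt according to the table above.
 * Returns NULL for an unknown combination. */
static const char *first_prompt(unsigned available)
{
    switch (available) {
    case AUTH_TYPE_PASSWORD:
        return "Password: ";
    case AUTH_TYPE_OTP:
        return "First factor: ";
    case AUTH_TYPE_PASSWORD | AUTH_TYPE_OTP:
        return "First factor or password: ";
    default:
        return NULL;
    }
}
```

A second, analogous mapping would choose the second-factor prompt, with the combined case telling the user that pressing return falls back to password authentication.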

+ If Smartcard authentication is enabled (*pam\_cert\_auth = True*) and a

+ Smartcard with a certificate matching the user who wants to log in is

+ inserted in a reader, the login prompt will ask for the Smartcard PIN

+ for local authentication

+ (`https://docs.pagure.org/SSSD.sssd/design_pages/smartcard_authentication_step1 <https://docs.pagure.org/SSSD.sssd/design_pages/smartcard_authentication_step1.html>`__).

+ Upcoming support for pkinit might lead to another extension of the

+ prompting scheme.

+ 

+ Implementation details

+ ----------------------

+ 

+ First it has to be noted that this feature is related to Kerberos

+ authentication, because here the available authentication types are

+ indicated by the server during the authentication request. Other

+ authentication schemes, e.g. LDAP bind based authentication, do

+ support multiple different methods as well; e.g. you can bind to the

+ FreeIPA LDAP server with password and 2FA authentication if the user is

+ configured accordingly. But here the client either has to try which

+ authentication might work or figure out possible authentication methods

+ by other means, which might be unreliable.

+ 

+ Kerberos indicates the available authentication methods via the

+ available pre-authentication methods listed in the 'Additional

+ pre-authentication required' response. The different pre-authentication

+ methods are implemented as plugins, and there are two ways for the

+ plugins to interact with a user. The first one is a prompter callback

+ which can be given as an argument to e.g.

+ krb5\_get\_init\_creds\_password(). The second, newer and more advanced

+ method is responders, which can be set with

+ krb5\_get\_init\_creds\_opt\_set\_responder(), available since

+ version 1.11 of MIT Kerberos.

+ 

+ For older versions of Kerberos where

+ krb5\_get\_init\_creds\_opt\_set\_responder() is not available, nothing

+ changes, because those versions do not support OTP either, so only

+ password authentication will be available there.

+ 

+ For builds with a newer version of MIT Kerberos all authentication

+ decisions will be moved to the responder. This means that calls like

+ krb5\_get\_init\_creds\_password() will no longer get the password as an

+ argument; instead the password is set inside of the responder if

+ password based authentication is chosen.

+ 

+ The responder will check which authentication types are available by

+ calling krb5\_responder\_list\_questions(). If password authentication

+ is available, indicated by the presence of

+ KRB5\_RESPONDER\_QUESTION\_PASSWORD, and the provided authentication

+ token is of type password as well, the password is set as the answer to

+ the password question and no further methods are considered.

+ 

+ If password authentication is not available or the provided

+ authentication token is of type SSS\_AUTHTOK\_TYPE\_2FA the existing OTP

+ responder component is called.
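+ The selection logic can be sketched as follows (a Python illustration
+ with hypothetical names; SSSD itself implements this in C inside the
+ krb5 child process):

```python
# Illustrative stand-in for the libkrb5 question name.
PASSWORD_QUESTION = "password"   # represents KRB5_RESPONDER_QUESTION_PASSWORD

def choose_method(questions, authtok_type):
    """Pick the authentication method inside the responder.

    questions    -- question names returned by krb5_responder_list_questions()
    authtok_type -- type of the authentication token provided by the user
    """
    if PASSWORD_QUESTION in questions and authtok_type == "password":
        # Answer the password question; no further methods are considered.
        return "password"
    # Password authentication is unavailable or a 2FA token was given:
    # hand over to the existing OTP responder component.
    return "otp"
```

+ This mirrors the two paragraphs above; the strings stand in for the C
+ constants and SSS\_AUTHTOK\_TYPE\_\* values.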

+ 

+ During the SSSD pre-authentication request a new PAM response is added

+ which indicates that password authentication is available. Based on this

+ and the OTP related response the pam\_sss PAM module can choose the

+ right set of prompts.

+ 

+ Configuration changes

+ ---------------------

+ 

+ No configuration changes are needed on the client; the available

+ authentication types are determined based on the responses of the

+ server.

+ 

+ How To Test

+ -----------

+ 

+ First create an IPA user and assign an OTP token to the user, see

+ `http://www.freeipa.org/page/V4/OTP#How\_to\_Test <http://www.freeipa.org/page/V4/OTP#How_to_Test>`__

+ for details. Additionally, two services with different authentication

+ indicator requirements are useful to test the returned credentials, but

+ they are not necessary to test the prompting. ::

+ 

+     $ ipa service-add ANY/ipa-client.example.com

+     $ ipa-getkeytab -p ANY/ipa-client.example.com -k /tmp/any.keytab

+ 

+     $ ipa service-add OTP/ipa-client.example.com --auth-ind=otp

+     $ ipa-getkeytab -p OTP/ipa-client.example.com -k /tmp/otp.keytab

+ 

+ (the keytab files are not needed for further testing).

+ 

+ Password only

+ ^^^^^^^^^^^^^

+ 

+ To test plain password authentication call ::

+ 

+     $ ipa user-mod test_user --user-auth-type=password

+ 

+ and then as an un-privileged user ::

+ 

+     $ su - test_user

+     Password:

+ 

+ after login you can verify by calling ::

+ 

+     $ kvno ANY/ipa-client.example.com@EXAMPLE.COM

+     ANY/ipa-client.example.com@EXAMPLE.COM: kvno = 1

+     $ kvno OTP/ipa-client.example.com@EXAMPLE.COM

+     kvno: KDC policy rejects request while getting credentials for OTP/ipa-client.example.com@EXAMPLE.COM

+ 

+ that only a ticket for the ANY service can be requested, but not for the

+ OTP service, because only the password was used for authentication.

+ Entering Password+TokenValue in a single string at the *Password:* prompt

+ will cause an authentication failure.

+ 

+ OTP only

+ ^^^^^^^^

+ 

+ The second test is for OTP only authentication ::

+ 

+     $ ipa user-mod test_user --user-auth-type=otp

+ 

+ and then as an un-privileged user call ::

+ 

+     $ su - test_user

+     First Factor: 

+     Second Factor:

+ 

+ after login you can verify by calling ::

+ 

+     $ kvno ANY/ipa-client.example.com@EXAMPLE.COM

+     ANY/ipa-client.example.com@EXAMPLE.COM: kvno = 1

+     $ kvno OTP/ipa-client.example.com@EXAMPLE.COM

+     OTP/ipa-client.example.com@EXAMPLE.COM: kvno = 1

+ 

+ that tickets for both services can be requested successfully because now

+ 2-factor authentication was used to log in.

+ Entering Password+TokenValue in a single string at the *First Factor:*

+ prompt will authenticate the user successfully as well, but features

+ like off-line authentication or unlocking of the user's keyring might

+ not be available.

+ 

+ Password and OTP

+ ^^^^^^^^^^^^^^^^

+ 

+ Finally both authentication methods are enabled on the server: ::

+ 

+     $ ipa user-mod test_user --user-auth-type=otp --user-auth-type=password

+ 

+ If you now call *su* as an un-privileged user ::

+ 

+     $ su - test_user

+     First Factor or Password: 

+     Second Factor, press return for Password authentication:

+ 

+ you can either just enter the password and press enter at the second

+ prompt or enter the password and the OTP token value at the respective

+ prompt. In the first case only a ticket for the ANY service can be

+ requested: ::

+ 

+     $ kvno ANY/ipa-client.example.com@EXAMPLE.COM

+     ANY/ipa-client.example.com@EXAMPLE.COM: kvno = 1

+     $ kvno OTP/ipa-client.example.com@EXAMPLE.COM

+     kvno: KDC policy rejects request while getting credentials for OTP/ipa-client.example.com@EXAMPLE.COM

+ 

+ If both factors are given, tickets for both services can be requested

+ successfully: ::

+ 

+     $ kvno ANY/ipa-client.example.com@EXAMPLE.COM

+     ANY/ipa-client.example.com@EXAMPLE.COM: kvno = 1

+     $ kvno OTP/ipa-client.example.com@EXAMPLE.COM

+     OTP/ipa-client.example.com@EXAMPLE.COM: kvno = 1

+ 

+ Entering Password+TokenValue in a single string at the

+ *First Factor or Password:* prompt will cause an authentication

+ failure.

+ 

+ How To Debug

+ ------------

+ 

+ If password authentication is not working when both password and OTP

+ authentication are enabled you might hit

+ `https://bugzilla.redhat.com//show\_bug.cgi?id=1340304 <https://bugzilla.redhat.com//show_bug.cgi?id=1340304>`__

+ and should update the Kerberos packages.

+ 

+ Inspecting log files

+ ^^^^^^^^^^^^^^^^^^^^

+ 

+ Setting *debug\_level = 9* in the *[domain/...]* section of *sssd.conf*

+ will add libkrb5 trace messages to the krb5\_child.log file which e.g.

+ will show the pre-authentication methods offered by the KDC. Based on

+ this SSSD will determine which authentication methods are available. In

+ the *Processing preauth types:* line of the trace output *141*

+ represents the OTP authentication while *2* (without FAST) or *138*

+ (with FAST) stand for password authentication.

+ 

+ Manual testing with kinit

+ ^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ If only password authentication or password and OTP authentication are

+ configured for a user kinit should ask for the password: ::

+ 

+     $ kinit test_user

+     Password for test_user@EXAMPLE.COM:

+ 

+ OTP authentication is only available if FAST is enabled. The needed

+ armor credential cache must be requested with kinit as well: ::

+ 

+     $ kinit -c ./armor.ccache -k

+ 

+ which will use the default keytab (/etc/krb5.keytab), which is accessible

+ only by root, to get a TGT. For easier testing you can create a special

+ service and give suitable permissions to the service keytab. To use it

+ with kinit, use the -t option ::

+ 

+     $ kinit -c ./armor.ccache -k -t ./service.keytab

+ 

+ Now you can call ::

+ 

+     $ kinit -T ./armor.ccache test_user

+     Enter OTP Token Value:

+ 

+ If OTP is not enabled for the user you should see the password prompt.

+ 

+ As usual, setting *KRB5\_TRACE=/dev/stdout* before calling *kinit* or

+ *kvno* will produce some extra output which might be useful.

+ 

+ Authors

+ -------

+ 

+ -  Sumit Bose <`sbose@redhat.com <mailto:sbose@redhat.com>`__>

@@ -0,0 +1,37 @@ 

+ Recognize trusted domains in AD provider

+ ----------------------------------------

+ 

+ Related tickets:

+ 

+ -  `RFE Recognize trusted domains in AD

+    provider <https://pagure.io/SSSD/sssd/issue/364>`__

+ 

+ Problem Statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ With the current LDAP lookups the SSSD AD provider can only find users

+ and groups in the local domain. With Global Catalog lookups (`Design

+ page <https://docs.pagure.org/SSSD.sssd/design_pages/global_catalog_lookups.html>`__)

+ this will be extended to all users and groups of the local forest. Using

+ the PAC helps to avoid group membership lookups (`RFE Use MS-PAC to

+ retrieve user's group

+ list <https://pagure.io/SSSD/sssd/issue/1558>`__).

+ 

+ What is missing are lookups of users and groups in trusted forests and

+ password based authentication of users from trusted forests. For this

+ the names of the trusted forests and the additional suffixes managed by

+ each forest are needed.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ How to test

+ ~~~~~~~~~~~

+ 

+ Author(s)

+ ~~~~~~~~~

+ 

+ Sumit Bose <`sbose@redhat.com <mailto:sbose@redhat.com>`__>

@@ -0,0 +1,161 @@ 

+ Restricting the domains a PAM service can auth against

+ ======================================================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/1021 <https://pagure.io/SSSD/sssd/issue/1021>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ Some environments require that different PAM applications can use a

+ different set of SSSD domains. The legacy PAM modules, such as

+ ``pam_ldap`` were able to use a different configuration file altogether

+ as a parameter for the PAM module. This wiki page describes a similar

+ feature for the SSSD.

+ 

+ Use case

+ ~~~~~~~~

+ 

+ An example use-case is an environment that allows external users to

+ authenticate to an FTP server. This server is running as a separate

+ non-privileged user and should only be able to authenticate to a

+ selected SSSD domain, separate from the internal company accounts. The

+ administrator is able to leverage this new feature to allow the FTP

+ user to only authenticate against one of the domains in the FTP PAM

+ config file.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ On the PAM client side, the PAM module should receive a new option that

+ specifies the SSSD domains to authenticate against. However, the SSSD

+ daemon can't fully trust all PAM services. We can't rely on the PAM

+ service fields either, as the data the PAM client sends to the PAM

+ application can be faked by the client, especially by users who possess

+ shell access or can start custom applications. Instead, there needs to

+ be a list of users whom we trust. Typically, this would be a list of

+ users who run the PAM aware applications we wish to restrict (such as

+ ``vsftpd`` or ``openvpn``). This list would default to ``root`` only.

+ 

+ These trusted users would be allowed to authenticate against any domain

+ and would also be able to restrict the domains further using a new

+ pam\_sss option. For the untrusted users, we need to keep a list of

+ domains allowed to authenticate against, too. Since by default there are

+ no restrictions on the allowed domains, this list would default to "all

+ domains are allowed".

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ This section breaks down the Overview of the solution into consumable

+ pieces.

+ 

+ Add a new option ``pam_trusted_users``

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ A new option must be added to the PAM responder. This option will be a

+ list of numerical UIDs or group names that are trusted or a special

+ keyword "ALL". This list will be parsed during PAM responder

+ initialization (``pam_process_init`` call) using the

+ ``csv_string_to_uid_array`` function and stored in the PAM responder

+ context (``struct pam_ctx``). The PAC responder does pretty much the

+ same in the ``pac_process_init`` function.

+ 

+ In the responder, we already have the credentials of the client stored

+ in the ``cli_ctx`` structure. When a new request comes into the

+ ``pam_forwarder`` function, we will match the client UID against the

+ list of trusted IDs and determine whether the client is trusted or not.

+ 

+ The default will be the special keyword ALL, meaning all users are

+ trusted. This is in line with the current behaviour where any user can

+ access any domain.
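+ A sketch of the intended trust check (a Python illustration with
+ hypothetical names; the real code lives in the C PAM responder, and
+ resolving group names to IDs is omitted here):

```python
def parse_trusted_users(raw):
    """Parse the pam_trusted_users value.

    Returns None for the special keyword ALL (everyone is trusted),
    otherwise the set of trusted numerical UIDs.
    """
    if raw.strip() == "ALL":
        return None
    return {int(tok) for tok in raw.split(",") if tok.strip()}

def client_is_trusted(trusted_uids, client_uid):
    # client_uid comes from the client's credentials (getpeercred).
    return trusted_uids is None or client_uid in trusted_uids
```

+ The parsed set corresponds to what ``csv_string_to_uid_array`` would
+ produce and store in ``struct pam_ctx``.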

+ 

+ Add an option to limit the domains for untrusted users

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Another option, called ``pam_allowed_auth_domains``, shall be added to

+ the PAM responder. This option will list the SSSD domains an untrusted

+ client can authenticate against. The option will accept either a

+ comma-separated list of SSSD domains or one of two special values,

+ ``all`` and ``none``. The default value will be ``none`` to make sure

+ the administrator is required to spell out the domains that can be

+ contacted by an untrusted client when they start differentiating

+ trusted and untrusted clients.

+ 

+ The option will be parsed during ``pam_process_init`` and stored in the

+ ``pam_ctx`` structure. An untrusted client will only be allowed to send

+ a request to a domain that matches the list of allowed domains.

+ 

+ In order to keep the implementation simple, the ``all`` keyword would

+ copy all domain names into ``pam_ctx`` and the ``none`` keyword would

+ set the variable holding the names to NULL. Then the check would be a

+ simple loop for all cases.
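+ The keyword expansion and the resulting check can be sketched like this
+ (a Python illustration with hypothetical names; the real code is C
+ inside the PAM responder):

```python
def expand_allowed_domains(value, all_domains):
    """Expand pam_allowed_auth_domains into a plain list (or None)."""
    if value == "all":
        return list(all_domains)   # copy all configured domain names
    if value == "none":
        return None                # nothing reachable for untrusted clients
    return [d.strip() for d in value.split(",")]

def domain_allowed(allowed_domains, domain):
    # With the expansion above, the check is a simple membership loop
    # in every case.
    if allowed_domains is None:
        return False
    return domain in allowed_domains
```
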

+ 

+ Care must be taken to ensure that a sensible PAM error code is returned

+ in cases where the domain doesn't match.

+ 

+ Add a new pam module option to limit the domains

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The PAM module will gain a new option, called ``domains`` that will

+ allow the administrator to use a list of domains to authenticate this

+ PAM service against. In the PAM responder, this option will only be in

+ effect for trusted clients. If the client is trusted, only domains

+ listed in this PAM option will be considered for authentication.

+ 

+ Please note that a patch implementing most of the functionality of this

+ PAM module option was already contributed to the sssd-devel mailing

+ list by Daniel Gollub.

+ 

+ Password Changes

+ ^^^^^^^^^^^^^^^^

+ 

+ Password changes should be allowed against all domains, meaning that a

+ user A (recognized via getpeercred) will be allowed to perform a password

+ change, i.e. implicitly allowed to access their own domain even if it is

+ untrusted. Arbitrary password changes for other users should not be

+ allowed.

+ 

+ Configuration Changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ Several new options, described in detail in the previous section, will

+ be introduced. No existing options will change defaults or gain new

+ option values.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ #. Prepare an SSSD installation with at least two domains A and B.

+ #. Pick a PAM service that is run by a trusted user. One example

+    might be a VPN service run by the openvpn user or similar. Add this

+    user as a value of the ``pam_trusted_users`` option in the ``[pam]``

+    section.

+ #. Add one of the domains (domain A) as a ``domain=`` parameter into the

+    ``auth`` section of your service's PAM config file

+ #. Authenticate using the selected PAM service as a user from domain A.

+    The authentication should succeed.

+ #. Authenticate using the same service as a user from domain B. The

+    authentication should fail and there should be a reasonable (i.e. not

+    System Error) return code returned to the application.

+ #. Authenticate using a different PAM service. Make sure this service is

+    run by an untrusted user (not root!). Logins against both A and B

+    should fail.

+ #. Set the value of ``pam_allowed_auth_domains`` to A. Login against A

+    should succeed from a service running as an untrusted user.

+ #. Change the value of ``pam_allowed_auth_domains`` to all. Login

+    against both domains should succeed from a service running as an

+    untrusted user.

+ #. Remove the ``domains=`` option from the PAM config file. The trusted

+    service should now be able to log in against both SSSD domains.

+ #. Perform a password change as an untrusted user against a domain that

+    they would not normally be allowed to use. The password change must

+    succeed.

+ 

+ Authors

+ ~~~~~~~

+ 

+ -  Daniel Gollub <`dgollub@brocade.com <mailto:dgollub@brocade.com>`__>

+ -  Jakub Hrozek <`jhrozek@redhat.com <mailto:jhrozek@redhat.com>`__>

+ -  Simo Sorce <`simo@redhat.com <mailto:simo@redhat.com>`__>

@@ -0,0 +1,73 @@ 

+ SSS NFS Client (rpc.idmapd plugin)

+ ==================================

+ 

+ The client is named "**sss\_nfs**" (although "sss\_idmap" or "idmap"

+ might have been better names, the term "idmap" is already occupied in

+ the SSSD world).

+ 

+ rpc.idmapd - background

+ ~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ rpc.idmapd runs on NFSv4 servers as a userspace daemon (part of

+ nfs-utils). Its role is to assist knfsd by providing the following 6

+ mapping functions:

+ 

+ #. (user) name to uid

+ #. (group) name to gid

+ #. uid to (user) name

+ #. gid to (group) name

+ #. principal (user) name to ids (uid + gid)

+ #. principal (user) name to grouplist (groups which user are member of)

+ 

+ .. FIXME: The last two items had the following note below them

+ .. :sup:`(`(1) <https://fedorahosted.org/sssd#krbnote>`__)`

+ .. What's this about?

+ 

+ rpc.idmapd provides an API for developing plugins (loaded via

+ ``dlopen(3)``) which implement the actual mapping process.

+ 

+ On the kernel level, there's a caching mechanism for the responses from

+ the userspace daemon.

+ 

+ \ :sup:`(1)` Items 5 + 6 are only relevant for kerberised NFSv4 servers.

+ At the first stage there won't be Kerberos support.

+ 

+ SSSD - Responder

+ ~~~~~~~~~~~~~~~~

+ 

+ The functionality required from the Responder side is a subset of the

+ functionality provided by existing NSS Responder's commands.

+ 

+ As you can see below (on the client part of the design) - no changes are

+ needed in the NSS Responder.

+ 

+ SSSD - NFS Client

+ ~~~~~~~~~~~~~~~~~

+ 

+ Responder-Facing Interactions (existing NSS Responder commands)

+ 

+ -  ``SSS_NSS_GETPWNAM`` - map (user) name to uid requests

+ -  ``SSS_NSS_GETGRNAM`` - map (group) name to gid requests

+ -  ``SSS_NSS_GETPWUID`` - map uid to (user) name requests

+ -  ``SSS_NSS_GETGRGID`` - map gid to (group) name requests

+ 

+ The request & reply sent to & from the responder is "standard" in terms

+ of the NSS Responder.

+ 

+ The client only needs a portion of the reply. Only this portion will be

+ extracted from the packet (i.e. uid/gid/user name/group name).

+ 

+ Optimisation Techniques

+ ~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The optimisation techniques used for the NSS client will be used here as

+ well, i.e. the fast cache (memcache) and the negative cache.

+ 

+ It will be possible for the user to disable the fast cache from the

+ configuration file (see below).

+ 

+ Configuration File

+ ~~~~~~~~~~~~~~~~~~

+ 

+ The configuration of the client will be part of rpc.idmap config file

+ (``/etc/idmapd.conf``).
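+ A hypothetical snippet, assuming the plugin would be selected via the

+ ``Method`` setting of ``idmapd.conf`` (the option values shown are

+ illustrative, not final): ::

+ 

+     [General]

+     Domain = example.com

+ 

+     [Translation]

+     Method = sss_nfs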

@@ -0,0 +1,167 @@ 

+ Secrets Service

+ ===============

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2913 <https://pagure.io/SSSD/sssd/issue/2913>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ Many system and user applications need to store secrets such as

+ passwords or service keys and have no good way to properly deal with

+ them. The simple approach is to embed these secrets into configuration

+ files potentially ending up exposing sensitive key material to backups,

+ config management system and in general making it harder to secure data.

+ 

+ The `custodia <https://github.com/simo5/custodia>`__ project was born

+ to deal with this problem in cloud like environments, but we found the

+ idea compelling even at a single system level. As a security service

+ sssd is ideal to host this capability while offering the same

+ `API <https://github.com/simo5/custodia/blob/master/API.md>`__ via a

+ Unix Socket. This will make it possible to use local calls and have them

+ transparently routed to a local or a remote key management store like

+ `IPA Vault <http://www.freeipa.org/page/V4/Password_Vault_1.0>`__ or

+ `HashiCorp's Vault <https://www.vaultproject.io>`__ for storage, escrow

+ and recovery.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ This feature can be used to keep secrets safe in an encrypted database

+ and yet make it easy for application to have access to the clear text

+ form, at the same time protecting access to the secrets by using

+ targeted system policies. Also when remote providers are implemented it

+ will become possible to synchronize application secrets across multiple

+ machines either for system applications like clusters or for user's

+ passwords by providing a simple network keyring that can be shared by

+ multiple clients.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ This feature will be implemented by creating a new responder process

+ that handles the REST API over a Unix Socket, and will route requests

+ either to a local database separate from the generic ldb caches or to a

+ provider that can implement remote backends like IPA Vault to store some

+ or all the secrets of a user or a system application.

+ 

+ The new responder daemon will be called sssd-secrets and will be socket

+ activated in the default configuration on systemd based environments.

+ 

+ Additionally a client library will be provided with a very simple basic

+ API for simple application needs. The full Custodia API will be provided

+ over the socket and will be accessible via curl or a similar tool.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ TBD

+ 

+ Request flow: application -> libsss-secrets.so ---unix socket--->

+ sssd-secrets -> local store

+ 

+ Or alternatively, for an application that can speak REST itself:

+ application ---unix socket---> sssd-secrets -> local store

+ 

+ The latter would be probably used by applications written in higher

+ level languages such as Java or Python, the former would be better

+ suited for C/C++ applications without requiring additional dependencies.

+ 

+ -  Unix socket: /var/run/secrets.socket

+ -  Local store: /var/lib/sss/secrets/secrets.ldb, encrypted using a

+    master secret (potentially uses TPM where available?)

+ 

+ Helper libraries

+ ^^^^^^^^^^^^^^^^

+ 

+ The Custodia REST API uses JSON to encode requests and replies;

+ {provisionally} the `Jansson <http://www.digip.org/jansson/>`__ library

+ will be used behind a talloc based wrapper, insulated to allow easy

+ replacement, and encoding/decoding into specific API objects.

+ 

+ The REST API uses HTTP 1.1 as transport, so we'll need to parse HTTP

+ requests in the server; {provisionally} the

+ `http-parser <https://github.com/nodejs/http-parser>`__ library will be

+ used in a tevent wrapper to handle these requests. The library seems to

+ be particularly suitable for use in callback based systems like tevent,

+ and does not handle memory on its own, allowing us to use fully talloc

+ backed objects natively.

+ 

+ Client Library

+ ^^^^^^^^^^^^^^

+ 

+ A simple client library will be built to provide easy access to secrets

+ from C applications (or other languages via bindings) by concealing all

+ the communication behind a simple API.

+ 

+ The API should be as follow: ::

+ 

+         struct secrets_context;

+ 

+         struct secrets_data {

+             uint8_t *data;

+             size_t *length;

+         };

+ 

+         struct secrets_list {

+             struct secret_data *elements;

+             int count;

+         };

+ 

+         int secrets_init(const char *appname,

+                          struct secrets_context **ctx);

+         int secrets_get(struct secrets_context *ctx, const char *name,

+                         struct secrets_data *data);

+         int secrets_put(struct secrets_context *ctx, const char *name,

+                         struct secrets_data *data);

+         int secrets_list(struct secrets_context *ctx, const char *path,

+                          struct secrets_list *list);

+ 

+         void secrets_context_free(struct secrets_context **ctx);

+         void secrets_list_contents_free(struct secrets_list *list);

+         void secrets_data_contents_free(struct secrets_data *data);

+ 

+ The API uses exclusively the "simple" secret type.
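+ To make the intended semantics concrete, here is a minimal in-memory
+ model of the proposed calls (a Python sketch with hypothetical
+ behaviour; the real library is C and talks REST over the Unix socket):

```python
class SecretsContext:
    """Toy stand-in for secrets_init()/secrets_context_free()."""

    def __init__(self, appname):
        self.appname = appname
        self._store = {}

    def put(self, name, data):
        # secrets_put(): store raw bytes under a path-like name.
        self._store[name] = bytes(data)

    def get(self, name):
        # secrets_get(): retrieve the raw bytes.
        return self._store[name]

    def list(self, path):
        # secrets_list(): enumerate secret names under a path prefix.
        prefix = path.rstrip("/") + "/"
        return sorted(n for n in self._store if n.startswith(prefix))
```
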

+ 

+ Resource Considerations

+ ^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ TBD user quotas

+ 

+ Security Considerations

+ ^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Access Control SO\_PEERCRED and SELinux.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ A new type of configuration section called "secrets" will be introduced.

+ Like the "domain" sections, secrets section names include a secret path

+ in the section name.

+ 

+ A typical section name to override where an application like the Apache

+ web server will have its secrets stored looks like this: ::

+ 

+      [secrets/system/httpd]

+      provider = xyz

+ 

+ The global secrets configuration will be held in the ``[secrets]`` (no

+ path components) section. Providers may deliver overrides in

+ configuration snippets, use of additional, dynamic configuration

+ snippets will be the primary method to configure overrides and remote

+ backends.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ A test/example binary that implements the functions of the client

+ library will be provided. Additionally, the curl binary should be used

+ to test the wider API, especially once we have a proxy backend to talk

+ to a real custodia server on the network.

+ 

+ Authors

+ ~~~~~~~

+ 

+ Simo Sorce <`simo@redhat.com <mailto:simo@redhat.com>`__>

@@ -0,0 +1,155 @@ 

+ Common SIGCHLD handler

+ ======================

+ 

+ Related ticket(s):

+ -   `https://pagure.io/SSSD/sssd/issue/1004 <https://pagure.io/SSSD/sssd/issue/1004>`__

+ 

+ I took some inspiration from the SIGUSR1 signal handling in

+ data\_provider\_be.c. The SIGUSR1 signal is apparently used to force

+ offline behavior on providers.

+ 

+ The DP backend enables providers to register callbacks for

+ online/offline events. I thought it would be a good idea to make SIGCHLD

+ handling consistent with what is already in place.

+ 

+ For online/offline event, these functions are defined:

+ 

+ -  be\_add\_online\_cb

+ -  be\_run\_online\_cb

+ -  be\_add\_offline\_cb

+ -  be\_run\_offline\_cb

+ 

+ They give providers the option to register additional callbacks to

+ handle these events in their own way. The list of callbacks is stored on

+ the backend context (struct be\_ctx).

+ 

+ However there is one difference between the SIGCHLD and SIGUSR1

+ scenarios: online/offline callbacks are called serially - always all of

+ them. While the SIGCHLD handler has to invoke callbacks for the

+ appropriate PIDs only. This means we can't use the underlying callbacks

+ handling functions already in place (be\_run\_cb and be\_run\_cb\_step).

+ 

+ I propose creating new similar functions (be\_run\_sigchld\_cb and

+ be\_run\_sigchld\_cb\_step). They would work in a similar manner to the

+ previously mentioned (be\_run\_cb and be\_run\_cb\_step respectively)

+ with the difference that:

+ 

+ #. each step would check with waitpid first and invoke the callback only

+    if the child has exited

+ 

+ #. we would use tevent\_immediate events instead of timers (as discussed

+    on IRC with Stephen)

+ 

+ Advantages of this approach:

+ 

+ #. consistent with online/offline callbacks for providers

+ 

+ #. relatively easy to implement

+ 

+ Alternate Proposal

+ ------------------

+ 

+ struct sss\_child\_ctx \*child\_ctx

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ members

+ ^^^^^^^

+ 

+ -  ``pid_t pid``

+ -  ``sss_child_cb_fn cb``

+ -  ``void *pvt``

+ -  ``struct sss_sigchild_ctx *sigchld_ctx``

+ 

+ struct sss\_sigchild\_ctx \*sigchld\_ctx

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ members

+ ^^^^^^^

+ 

+ -  ``struct tevent_context *ev``

+ -  ``hash_table_t *children``

+ -  ``int options``

+ 

+ Function

+ ^^^^^^^^

+ 

+ This object should be initialized at process startup time. The

+ hash\_table should be initialized with ``sss_hash_create()`` to maintain

+ talloc compatibility. This hash should be keyed by integer (the PID) and

+ should contain ``struct sss_child_ctx *`` objects as its values. The

+ ``options`` member should be a bitmask allowing WUNTRACED and/or

+ WCONTINUED. The handler will ALWAYS add WNOHANG.

+ 

+ sss\_child\_register

+ ~~~~~~~~~~~~~~~~~~~~

+ 

+ Prototype

+ ^^^^^^^^^

+ 

+ ::

+ 

+     errno_t sss_child_register(TALLOC_CTX *memctx,

+                                struct sss_sigchild_ctx *sigchld_ctx,

+                                pid_t pid,

+                                sss_child_fn_t cb,

+                                void *pvt,

+                                struct sss_child_ctx **child_ctx);

+ 

+ Function

+ ^^^^^^^^

+ 

+ This function registers a callback with private data in a hash table

+ contained within sigchld\_ctx. It constructs a

+ ``struct sss_child_ctx *`` consisting of the pid, cb and pvt. It will

+ also create a destructor for this object which will remove the entry

+ from the hash. This is so that the consumer can choose when to stop

+ monitoring the child (such as if the ``waitpid()`` call returned

+ SIGSTOP/SIGCONT or other non-terminating results). It can also be used

+ to programmatically change the callback when needed.
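+ The registration and dispatch logic can be sketched as follows (a
+ Python illustration with hypothetical names; the real implementation is
+ C with a hash table keyed by PID):

```python
class SigchldContext:
    """Toy model of struct sss_sigchild_ctx plus sss_child_register()."""

    def __init__(self):
        self.children = {}                 # pid -> (callback, private data)

    def register(self, pid, cb, pvt):
        self.children[pid] = (cb, pvt)

    def unregister(self, pid):
        # Mirrors the talloc destructor removing the entry from the hash.
        self.children.pop(pid, None)

    def dispatch(self, pid, wait_status):
        # Called by the SIGCHLD handler for every reaped pid.
        entry = self.children.get(pid)
        if entry is not None:
            cb, pvt = entry
            cb(pid, wait_status, pvt)
```
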

+ 

+ sss\_child\_handler

+ ~~~~~~~~~~~~~~~~~~~

+ 

+ Prototype

+ ^^^^^^^^^

+ 

+ ::

+ 

+     void

+     sss_child_handler(struct tevent_context *ev,

+                       struct tevent_signal *se,

+                       int signum,

+                       int count,

+                       void *siginfo,

+                       void *private_data);

+ 

+ Function

+ ^^^^^^^^

+ 

+ This is the master SIGCHLD handler. It would be invoked any time that

+ the process receives a SIGCHLD signal.

+ 

+ When the signal is received, it should call

+ ``waitpid(-1, &status, WNOHANG | sigchld_ctx->options);`` repeatedly

+ until ``waitpid()`` returns 0. For each child reaped, the pid should

+ be looked up in the hash table and the matching callback should be

+ invoked.
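+ The reaping loop itself can be demonstrated with a small runnable
+ sketch (in Python rather than the eventual C, with hypothetical names):

```python
import os

def reap_children(dispatch, options=0):
    """Call dispatch(pid, wait_status) for every exited child, as the
    SIGCHLD handler would, until waitpid() reports nothing further."""
    while True:
        try:
            pid, status = os.waitpid(-1, os.WNOHANG | options)
        except ChildProcessError:
            return                  # no children left at all
        if pid == 0:
            return                  # children exist, but none has exited yet
        dispatch(pid, status)
```

+ In the real handler, ``dispatch`` would be the hash-table lookup
+ described above.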

+ 

+ sss\_child\_fn\_t

+ ~~~~~~~~~~~~~~~~~

+ 

+ Prototype

+ ^^^^^^^^^

+ 

+ ::

+ 

+     typedef void (*sss_child_fn_t)(int pid, int wait_status, void *pvt);

+ 

+ sss\_child\_destructor

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Talloc\_destructor to remove a ``struct sss_child_ctx *`` from the hash

+ table of the ``struct sss_sigchild_ctx *`` that contains it.

@@ -0,0 +1,180 @@ 

+ Smartcard Authentication - PKINIT

+ =================================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/3270 <https://pagure.io/SSSD/sssd/issue/3270>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ Currently Smartcard Authentication is only used to authenticate against

+ the local system. PKINIT would provide a method to use Kerberos for

+ authentication and get a Kerberos Ticket Granting Ticket (TGT) during

+ the authentication so that network resources can be accessed with

+ Kerberos/GSSAPI.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ Client systems which are joined to Kerberos based domains like Active

+ Directory (AD) or FreeIPA can use Smartcard authentication to replace

+ password based authentication and still get full single-sign-on (SSO)

+ access to the resources of the domain.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ SSSD's KRB5 provider will detect the presence of the PKINIT

+ pre-authentication method using the responder interface of recent MIT

+ Kerberos versions. This is similar to the current detection of password

+ authentication (single-factor authentication, 1FA) and two-factor

+ authentication (2FA). Based on the available pre-authentication methods

+ and on whether a Smartcard with a suitable certificate is currently

+ accessible to SSSD, the user will be prompted differently about which

+ credentials should be entered for authentication.

+ 

+ +--------------------------------------+------------------------------+-----------------------------------------------------------------------------------------------------------+

+ | Available pre-authentication types   | suitable Smartcard present   | User prompting                                                                                            |

+ +--------------------------------------+------------------------------+-----------------------------------------------------------------------------------------------------------+

+ | pkinit                               | no                           | Ask to insert Smartcard and enter PIN                                                                     |

+ +--------------------------------------+------------------------------+-----------------------------------------------------------------------------------------------------------+

+ | pkinit                               | yes                          | Ask for PIN                                                                                               |

+ +--------------------------------------+------------------------------+-----------------------------------------------------------------------------------------------------------+

+ | 1FA, pkinit                          | no                           | Ask for password                                                                                          |

+ +--------------------------------------+------------------------------+-----------------------------------------------------------------------------------------------------------+

+ | 1FA, pkinit                          | yes                          | Ask for PIN, fallback to password if no PIN is given                                                      |

+ +--------------------------------------+------------------------------+-----------------------------------------------------------------------------------------------------------+

+ | 2FA, pkinit                          | no                           | Ask for first and second factor                                                                           |

+ +--------------------------------------+------------------------------+-----------------------------------------------------------------------------------------------------------+

+ | 2FA, pkinit                          | yes                          | Ask for PIN, fallback to first and second factor if no PIN is given                                       |

+ +--------------------------------------+------------------------------+-----------------------------------------------------------------------------------------------------------+

+ | 1FA, 2FA, pkinit                     | no                           | Ask for first and optional second factor                                                                  |

+ +--------------------------------------+------------------------------+-----------------------------------------------------------------------------------------------------------+

+ | 1FA, 2FA, pkinit                     | yes                          | Ask for PIN, fallback to first and optional second factor if no PIN is given                              |

+ +--------------------------------------+------------------------------+-----------------------------------------------------------------------------------------------------------+

+ | 1FA                                  | no                           | Ask for password                                                                                          |

+ +--------------------------------------+------------------------------+-----------------------------------------------------------------------------------------------------------+

+ | 1FA                                  | yes                          | Ask for PIN (for local authentication), fallback to password if no PIN is given                           |

+ +--------------------------------------+------------------------------+-----------------------------------------------------------------------------------------------------------+

+ | 2FA                                  | no                           | Ask for first and second factor                                                                           |

+ +--------------------------------------+------------------------------+-----------------------------------------------------------------------------------------------------------+

+ | 2FA                                  | yes                          | Ask for PIN (for local authentication), fallback to first and second factor if no PIN is given            |

+ +--------------------------------------+------------------------------+-----------------------------------------------------------------------------------------------------------+

+ | 1FA, 2FA                             | no                           | Ask for first and optional second factor                                                                  |

+ +--------------------------------------+------------------------------+-----------------------------------------------------------------------------------------------------------+

+ | 1FA, 2FA                             | yes                          | Ask for PIN (for local authentication), fallback to first and optional second factor if no PIN is given   |

+ +--------------------------------------+------------------------------+-----------------------------------------------------------------------------------------------------------+

+ 

+ Ideally the prompting will be configurable so that it can be adapted

+ to other non-IPA/non-AD use cases, but the primary goal is to have

+ sensible defaults which work well for IPA and AD.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The responder interface indicates the availability of PKINIT if

+ KRB5\_RESPONDER\_QUESTION\_PKINIT is present in the

+ krb5\_responder\_question\_list(). During SSSD's pre-auth run this can

+ be used to signal the availability to the client. During authentication

+ it should be used to set the answer if the Smartcard authentication

+ credentials, including the PIN and other details, are available.

+ 

+ Since it is possible that there are multiple certificates on the

+ Smartcard and even that multiple Smartcards are accessible at the same

+ time, the MIT Kerberos PKINIT plugin must be called in a way to make sure

+ that only the right certificate is used. The right certificate here is

+ the one that was previously selected either by SSSD's PAM responder or

+ the user.

+ 

+ To select a certificate MIT Kerberos provides the "X509\_user\_identity"

+ option which can be set with krb5\_get\_init\_creds\_opt\_set\_pa().

+ This is the same option which can be set with the -X option of the kinit

+ command. For PKCS#11 the

+ syntax of the identity string is ::

+ 

+     PKCS11:[module_name=]modname[:slotid=slot-id][:token=token-label][:certid=cert-id][:certlabel=cert-label]

+ 

+ From the krb5.conf man page: "All keyword/values are optional. modname

+ specifies the location of a library implementing

+ PKCS#11. If a value is

+ encountered with no keyword, it is assumed to be the modname. If no

+ module-name is specified, the default is opensc-pkcs11.so. slotid=

+ and/or token= may be specified to force the use of a particular smart

+ card reader or token if there is more than one available. certid= and/or

+ certlabel= may be specified to force the selection of a particular

+ certificate on the device. See the pkinit\_cert\_match configuration

+ option for more ways to select a particular certificate to use for

+ PKINIT."

+ 
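
+ For illustration only (the module path, token label and certid below

+ are made-up placeholders), such an identity string could be passed to

+ *kinit* with the -X option ::

+ 

+     kinit -X X509_user_identity=PKCS11:module_name=/usr/lib64/opensc-pkcs11.so:token=MyToken:certid=0123456789abcdef user@EXAMPLE.COM

+ 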

+ Sending the 'modname', 'token-label' and 'certid' would be sufficient

+ to select the certificate on the PKCS#11 level. Unfortunately, this

+ does not contain any detail of the certificate itself. It is

+ recommended that 'certid', which maps to the CKA\_ID PKCS#11 attribute,

+ is the SHA1 value of the modulus of the RSA key, but this is not

+ enforced anywhere. To make sure that the PKINIT plugin really uses the

+ certificate we expect, pkinit\_cert\_match must be used. Unfortunately

+ there is no direct library call to set it; the plugin reads it directly

+ from the profile of the krb5\_context. This means that SSSD must modify

+ the profile and create a new krb5\_context with

+ krb5\_init\_context\_profile(). While looking for a value for

+ pkinit\_cert\_match, the PKINIT plugin first checks if the option can be

+ found in a realm sub-section of the [libdefaults] section, where the

+ realm must match the realm of the client principal, i.e. the principal

+ which tries to authenticate.

+ 

+ As matching string,

+ "<ISSUER>certificateIssuer<SUBJECT>certificateSubject" can be used.

+ Although there is an NSS implementation of the PKINIT plugin available

+ in the MIT Kerberos source code, it is not used in recent Fedora or

+ RHEL versions; the OpenSSL implementation is used instead. To avoid

+ mismatches when translating the ASN.1 representation of the issuer and

+ subject from the certificate to a DN string, krb5\_child should use

+ OpenSSL unconditionally for this translation, independent of the

+ setting of the '--with-crypto' configure option.

+ 
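
+ A sketch of how such a configuration fragment could look (the realm

+ name, issuer and subject below are made-up placeholders) ::

+ 

+     [libdefaults]

+         EXAMPLE.COM = {

+             pkinit_cert_match = <ISSUER>O=EXAMPLE,CN=Example CA<SUBJECT>O=EXAMPLE,CN=user

+         }

+ 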

+ With "X509\_user\_identity" and "pkinit\_cert\_match" set, the

+ available choices for the PKINIT plugin should be sufficiently

+ restricted so that a wrong certificate is not accidentally selected.

+ It should even prevent the scenario where an attacker replaces the

+ Smartcard between the mapping of the Smartcard to a system user and the

+ PKINIT based authentication.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ /etc/krb5.conf

+ ^^^^^^^^^^^^^^

+ 

+ Besides 'pkinit\_anchors' there are two krb5.conf options which might

+ need to be set on the client to make PKINIT work,

+ 'pkinit\_eku\_checking' and 'pkinit\_kdc\_hostname'.

+ 

+ By default the PKINIT plugin of MIT Kerberos expects that the KDC

+ certificate contains the id-pkinit-KPKdc EKU as defined in RFC 4556 and

+ has the KDC's hostname in the id-pkinit-san as defined in RFC 4556 as well.

+ 

+ If the id-pkinit-san is missing, 'pkinit\_kdc\_hostname' can be set to

+ the hostname of the KDC as stored in the dNSName in the SAN of the

+ certificate. If the dNSName SAN is missing as well, PKINIT won't work.

+ 

+ If the id-pkinit-KPKdc EKU is not set, 'pkinit\_eku\_checking' can be

+ set to 'kpServerAuth' if the certificate of the KDC at least contains

+ the id-kp-serverAuth EKU. If this is missing as well,

+ 'pkinit\_eku\_checking' can be set to 'none', but this is not recommended.

+ 

+ See the krb5.conf man page for details about the options.

+ 
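
+ For illustration, a client-side configuration could combine these

+ options as follows (the hostname and anchor path are placeholders) ::

+ 

+     [libdefaults]

+         pkinit_anchors = FILE:/etc/pki/kdc-ca.pem

+         pkinit_kdc_hostname = kdc.example.com

+         pkinit_eku_checking = kpServerAuth

+ 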

+ In theory it would be possible for SSSD to set these options

+ automatically to make PKINIT work without adding options to krb5.conf

+ manually. One way would be to inspect the certificate presented by the

+ KDC and set the options according to the certificate content. But since

+ SSSD does not have any knowledge about what content would be expected,

+ it might unknowingly lower the security and accept a spoofed ticket. It

+ would be possible to add new options to SSSD for this, but then it

+ would be easier to add the options directly to /etc/krb5.conf. With the

+ recently introduced /etc/krb5.conf.d/ drop-in directory for config

+ snippets, a suitable snippet only needs to be created once and added to

+ /etc/krb5.conf.d/ on the clients.
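
+ For example, a snippet setting 'pkinit\_eku\_checking' could be created

+ once with (the file name is arbitrary) ::

+ 

+     cat > /etc/krb5.conf.d/pkinit_client << EOF_EOF

+     [libdefaults]

+     pkinit_eku_checking = kpServerAuth

+     EOF_EOF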

@@ -0,0 +1,212 @@ 

+ Smartcard authentication - Step 1 (local authentication)

+ ========================================================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/546 <https://pagure.io/SSSD/sssd/issue/546>`__

+ -  `https://pagure.io/SSSD/sssd/issue/2711 <https://pagure.io/SSSD/sssd/issue/2711>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ Smartcard based authentication is another alternative to password based

+ authentication. Unlike OTP tokens, where all authentication data can

+ be entered at a password prompt, Smartcards require special hardware

+ and software to access the credentials stored on the card.

+ 

+ Currently solutions are based on the pam\_pkcs11 module which e.g.

+ requires special configuration to map the certificate stored on a

+ Smartcard to a user. Since SSSD can already map certificates to users

+ (see e.g. `LookupUsersByCertificate

+ <https://docs.pagure.org/SSSD.sssd/design_pages/lookup_users_by_certificate.html>`__)

+ integration would be much easier. Additionally features like different

+ authentication types per user or per service would only be possible with

+ SSSD.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ Local authentication

+ ^^^^^^^^^^^^^^^^^^^^

+ 

+ If there is a Smartcard reader connected to a system, the user can

+ authenticate to the system by placing the Smartcard into the reader and

+ entering the login name (which might not be needed in some cases) and

+ the Smartcard PIN at the login prompt. Authentication succeeds if the

+ certificate on the Smartcard is valid and satisfies other, configurable

+ criteria. This includes authentication at a text or graphical console,

+ but also local services like *su* and *sudo*.

+ 

+ Remote authentication with ssh

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ To avoid password authentication, ssh has supported public-private key

+ based authentication from the beginning. Since the certificates on the

+ Smartcard are stored together with the PIN protected private key, this

+ key material can be used for ssh authentication as well. On the client

+ side an ssh client program is needed which is able to access the

+ Smartcard. On the server side only the public key from the certificate

+ is needed, in a format suitable for ssh. With the help of the

+ *sss\_ssh\_authorized\_keys* utility SSSD can make this information

+ available to the sshd running on the server if the certificate is stored

+ together with the other user data in a central storage, e.g. LDAP.

+ 
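
+ On the server side this can be sketched in sshd\_config as follows

+ (the exact name and path of the utility may differ per platform) ::

+ 

+     AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys

+     AuthorizedKeysCommandUser nobody

+ 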

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ To enable certificate based authentication in SSSD *pam\_cert\_auth*

+ must be set to *True* in the *[pam]* section of *sssd.conf*.

+ 
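
+ The resulting sssd.conf fragment is ::

+ 

+     [pam]

+     pam_cert_auth = True

+ 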

+ Additional options to tune e.g. the certificate validation will be added

+ later.

+ 

+ How to test

+ ~~~~~~~~~~~

+ 

+ In the following it is assumed that SSSD is running on an IPA client.

+ 

+ Hardware reader and card

+ ^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Configuring IPA client for local authentication with a Smartcard

+ ''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''

+ 

+ The easiest way to test is with a Smartcard reader and a Smartcard. If

+ the Smartcard reader is supported by the *coolkey* package the needed

+ PKCS#11 module is already added to the central NSS database at

+ /etc/pki/nssdb during the installation of the package. In case a different

+ PKCS#11 module is needed it can be added with modutil ::

+ 

+     modutil -dbdir /etc/pki/nssdb -add "My PKCS#11 module" -libfile libmypkcs11.so

+ 

+ (use the full path to the library if the PKCS#11 module is not in the

+ default library search path).

+ 

+ Now *certutil* should ask for a PIN and show your certificate, if the

+ reader is connected and the card is inserted ::

+ 

+     certutil -L -d /etc/pki/nssdb -h all

+ 

+ Most probably the certificate on the card is currently not assigned to

+ an IPA user. To assign it, the certificate can be extracted with ::

+ 

+     certutil -L -d /etc/pki/nssdb -n 'Certificate Nick-Name' -a

+ 

+ which will dump the PEM encoded certificate. Since the *ipa* utility

+ expects the base64 string from the PEM encoding in a single line ::

+ 

+     certutil -L -d /etc/pki/nssdb -n 'Certificate Nick-Name' -a | grep -v -- '----' |tr -d '[\n\r]'

+ 

+ will dump it in a single line. Now *ipa user-mod username

+ --certificate=MIIE......* can be used to load the certificate into the

+ user entry. Please note that the --certificate option is only available

+ with FreeIPA 4.2 or later.

+ 

+ If *pam\_cert\_auth = True* is set in the *[pam]* section of

+ *sssd.conf*, the card is inserted in the reader and the certificate is

+ loaded into the user entry, e.g. the console login prompt should now

+ ask for a PIN instead of a password, and if the correct PIN is entered

+ the user should be successfully authenticated and logged in.

+ 

+ Running an ssh client with Smartcard support

+ ''''''''''''''''''''''''''''''''''''''''''''

+ 

+ The *ssh* client program distributed with Fedora or RHEL contains

+ patches which add Smartcard support to the utility. To activate it the

+ needed PKCS#11 module to talk to the Smartcard reader has to be made

+ available with the *-I* option ::

+ 

+     ssh -I /usr/lib/libmypkcs11.so -l ipauser ipahost.ipa.domain

+ 

+ where the certificate has to be added to the IPA user entry as described

+ above.

+ 

+ Software certificates with libsoftokn3.so

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ First a certificate together with the private key is needed.

+ Instructions how to create certificates with FreeIPA can e.g. be found

+ at

+ `http://www.freeipa.org/page/PKI <http://www.freeipa.org/page/PKI>`__.

+ Please store the certificate in an NSS database. Since in this first

+ step the user is looked up with the help of the full certificate, any

+ certificate valid for client authentication can be used. This means

+ that instead of creating a new one, an existing certificate can be used.

+ **Please do this only in a test environment which will be discarded

+ afterwards. Copying certificates from a production environment is a

+ security breach.**

+ 

+ Configuring IPA client for local authentication with a Smartcard

+ ''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''

+ 

+ As in the case with a hardware reader, a PKCS#11 module to access

+ the certificate in the NSS DB must be added to the system's NSS DB in

+ /etc/pki/nssdb. The PKCS#11 module for accessing certificates and

+ private keys in an NSS database is *libsoftokn3.so*. Unfortunately,

+ this module needs some configuration options when loaded. Although I

+ guess *modutil* should work as well, I was only able to add the needed

+ parameters with *pk11install* from the coolkey package. In the

+ following we assume that the certificate and the private key are stored

+ in an NSS DB called *my\_cert* in the home directory of the user. ::

+ 

+     pk11install -i -v -p /etc/pki/nssdb 'name=soft parameters="configdir=sql:/home/use/my_cert dbSlotDescription=\"My Slot\" dbTokenDescription=\"My Token\"" library=/usr/lib/libsoftokn3.so'

+ 

+ If *pam\_cert\_auth = True* is set in the *[pam]* section of

+ *sssd.conf* and the certificate is loaded into the user entry, e.g. the

+ console login prompt should now ask for a PIN instead of a password,

+ and if the correct PIN is entered the user should be successfully

+ authenticated and logged in.

+ 

+ Running ssh client with Smartcard support

+ '''''''''''''''''''''''''''''''''''''''''

+ 

+ The PKCS#11 module for accessing certificates and private keys in an

+ NSS database is *libsoftokn3.so*. Unfortunately, this module needs some

+ configuration options when loaded and there is (afaik) currently no way

+ to pass them on the *ssh* command line. Luckily there is p11-kit, which

+ can be used to load *libsoftokn3.so* with options. In the following we

+ assume that the certificate and the private key are stored in an NSS DB

+ called *my\_cert* in the home directory of the user.

+ 

+ To configure p11-kit make sure *~/.config/pkcs11* and

+ *~/.config/pkcs11/modules* exist and create the following two files: ::

+ 

+     cat > ~/.config/pkcs11/pkcs11.conf << EOF_EOF

+     user-config: only

+     EOF_EOF

+ 

+ ::

+ 

+     cat > ~/.config/pkcs11/modules/my_cert.module << EOF_EOF

+     module: /usr/lib/libsoftokn3.so

+     x-init-reserved: configdir='sql:/home/user/my_cert'

+     critical: yes

+     EOF_EOF

+ 

+ On 64bit systems you have to use */usr/lib64/libsoftokn3.so*.

+ 

+ Now *ssh* can be called with */usr/lib/p11-kit-proxy.so* (or the 64bit

+ version) ::

+ 

+     ssh -I /usr/lib/p11-kit-proxy.so -l ipauser ipahost.ipa.domain

+ 

+ Software certificates with libsofthsm2.so

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Since the *libsoftokn3.so* PKCS#11 module requires additional

+ configuration which most consumers, like the *ssh* client (see above)

+ or *kinit*, do not support, and since the workaround with

+ *p11-kit-proxy.so* might not always be possible, the following section

+ shows how the *libsofthsm2.so* PKCS#11 module from the

+ `OpenDNSSEC <http://www.opendnssec.org/>`__ project can be used. As

+ above we assume that the certificate and the corresponding private key

+ are available.

+ 
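
+ A minimal sketch (the module path, token label, PINs, id and file names

+ below are placeholders): first initialize a token with *softhsm2-util*,

+ then write the certificate and keys with *pkcs11-tool* ::

+ 

+     softhsm2-util --init-token --slot 0 --label 'My Token' --so-pin 123456 --pin 1234

+     pkcs11-tool --module /usr/lib64/pkcs11/libsofthsm2.so -l -w ./cert.der -y cert -a 'My Label' --id 01234567

+     pkcs11-tool --module /usr/lib64/pkcs11/libsofthsm2.so -l -w ./pubkey.der -y pubkey -a 'My Label' --id 01234567

+     pkcs11-tool --module /usr/lib64/pkcs11/libsofthsm2.so -l -w ./priv.der -y privkey -a 'My Label' --id 01234567

+ 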

+ Authors

+ ~~~~~~~

+ 

+ -  Sumit Bose <`sbose@redhat.com <mailto:sbose@redhat.com>`__>

@@ -0,0 +1,250 @@ 

+ Smartcard authentication - Testing with AD

+ ==========================================

+ 

+ As mentioned on `SmartcardAuthenticationStep1

+ <https://docs.pagure.org/SSSD.sssd/design_pages/smartcard_authentication_step1.html>`__

+ the primary focus of the development was the authentication to an IPA

+ client. Nevertheless, the general authentication code path is the same

+ and when the needed requirements are met it can be used to authenticate

+ on an AD domain client as well. But please note that, as with an IPA

+ client this will only be a local authentication, so far no Kerberos

+ tickets will be available after authentication. pkinit will be added in

+ one of the next steps.

+ 

+ As with IPA the current requirement is that the full certificate is

+ stored in the user's LDAP entry in AD. Since the AD CA uses the

+ userCertificate attribute for this as well we will further assume that

+ this attribute is used to store the certificate.

+ 

+ By default the SSSD AD provider does not read certificates, so this must

+ be set in sssd.conf with the option ::

+ 

+     ldap_user_certificate = userCertificate;binary

+ 

+ *(I guess it would make sense to set this by default)*

+ 

+ Additionally, the AD provider will not create the indication file for

+ the pam\_sss client that pre-authentication is available and it has to

+ be created manually ::

+ 

+     touch /var/lib/sss/pubconf/pam_preauth_available

+ 

+ *(I guess it would make sense that the PAM responder creates the file if

+ certificate authentication is enabled.)*

+ 

+ Next, certificate authentication must be enabled in the pam section of

+ sssd.conf by setting ::

+ 

+     pam_cert_auth = True

+ 

+ Finally, CA certificates should be imported into the system's NSS database

+ to be able to verify the certificate. ::

+ 

+     certutil -d /etc/pki/nssdb -A -n 'My Issuer' -t CT,CT,CT -a -i /path/to/cert/in/PEM/format

+ 

+ These steps are needed on the client. Now we will discuss how a

+ certificate can be added to the AD user entry and, together with the

+ keys, to a Smartcard.

+ 

+ Certificates from AD CA

+ -----------------------

+ 

+ If you do not have a Certificate Server in your AD domain you have to

+ install one by enabling the 'Active Directory Certificate Service' on

+ one of the servers in the domain.

+ 

+ To allow users to request certificates follow the steps in

+ `https://msdn.microsoft.com/en-us/library/cc770857.aspx <https://msdn.microsoft.com/en-us/library/cc770857.aspx>`__

+ .

+ 

+ Now an AD user should be able to request a user certificate from the AD CA.

+ For this the user should open the Management Console, e.g. via

+ Start->Run->\ *mmc*. In the Management Console the Certificates snap-in

+ can be activated via File->Add/Remove-Snap-ins.

+ In the Certificates snap-in the 'All Tasks' context menu should offer

+ 'Automatically Enroll and Retrieve Certificates' where you can choose

+ the new user certificate template which was created in the instructions from

+ MSDN. If no templates are available you should check the steps from the

+ MSDN instructions again or check if there is already a certificate

+ generated for the user by looking at the 'Personal' folder of the

+ Certificates snap-in. Here you will find the freshly created certificate

+ as well.

+ 

+ Now you have to write the certificate and the keys to a Smartcard. You

+ can use a suitable Windows tool for this. Or you can export the data and

+ write it to a Smartcard from a Linux client which will be explained in

+ the following.

+ 

+ To export the certificate select it in the Certificates Snap-in and call

+ 'Export' from the 'All Tasks' context menu. In the export wizard the

+ private key must be exported as well. The generated file can now be

+ copied to a Linux host.

+ 

+ The file created on the AD side is PKCS#12 formatted and can be inspected

+ on the Linux side with the *openssl pkcs12* utility. NSS, which is

+ currently used by SSSD to access the Smartcard, expects that the

+ Smartcard will contain the certificate together with the public and

+ private key in separate objects, connected by the same label and id.

+ We will use pkcs11-tool from the opensc package to write the data to

+ the card. In general p11tool from the gnutls project can be used as

+ well, but support for writing public keys was added quite recently

+ (gnutls-3.4.6), so it might not be available on your platform. There

+ might be an issue with pkcs11-tool as well: if, after writing to the

+ card, the certificate and the public key are only visible after you

+ have logged into the card, i.e. entered the PIN, you need a newer

+ version of pkcs11-tool.

+ 

+ Extracting keys and certificate from PKCS#12 file

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Extracting the certificate and storing it in DER encoding ::

+ 

+     openssl pkcs12 -in ./ad_user.pfx -nokeys -out ./cert.pem

+     openssl x509 -in ./cert.pem -outform der -out ./cert.der

+ 

+ Extracting the private key and storing it in DER format. Please note

+ that the private key in priv.pem and priv.der is not encrypted, please

+ remove the files as soon as possible ::

+ 

+     openssl pkcs12 -in ./ad_user.pfx -nocerts -nodes -out ./priv.pem

+     openssl rsa -in ./priv.pem -outform der -out ./priv.der

+ 

+ Extracting the public key from the certificate and storing it in DER

+ encoding ::

+ 

+     openssl x509 -in ./cert.pem -pubkey -noout | openssl rsa -pubin -outform der -out ./pubkey.der

+ 

+ Writing certificate and keys to a Smartcard

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ First write the certificate data to the Smartcard by calling ::

+ 

+     pkcs11-tool --module my_pkcs11_module.so --slot 0 -w ./cert.der -y cert -a 'My Label' --id 0123456789abcdef0123456789abcdef01234567

+ 

+ where *my\_pkcs11\_module.so* and *My Label* should be replaced by

+ suitable values. The id value is typically the Subject Key Identifier,

+ which in turn is typically the SHA1 hash of the public key bit string

+ from the certificate. The value can either be obtained from the output of ::

+ 

+     openssl x509 -in ./cert.pem -text | grep -A 1 'Subject Key Identifier:'

+ 

+ or by inspecting the public key with ::

+ 

+     openssl asn1parse -inform der -in ./pubkey.der

+         0:d=0  hl=4 l= 290 cons: SEQUENCE          

+         4:d=1  hl=2 l=  13 cons: SEQUENCE          

+         6:d=2  hl=2 l=   9 prim: OBJECT            :rsaEncryption

+        17:d=2  hl=2 l=   0 prim: NULL              

+        19:d=1  hl=4 l= 271 prim: BIT STRING

+     openssl asn1parse -inform der -in ./pubkey.der -strparse 19 -noout -out /dev/stdout |sha1sum

+ 

+ where the *19* in the second call has to match the offset value shown

+ for the *BIT STRING* component in the output of the first call.

+ 

+ The label and the id should be the same when writing the public and the

+ private key objects to indicate to applications that the three objects

+ belong to each other.

+ 

+ As a second step the public key is written to the Smartcard by calling ::

+ 

+     pkcs11-tool --module my_pkcs11_module.so --slot 0 -w ./pubkey.der -y pubkey -a 'My Label' --id 0123456789abcdef0123456789abcdef01234567

+ 

+ And finally the private key can be written by calling ::

+ 

+     pkcs11-tool --module my_pkcs11_module.so --slot 0 -w ./priv.der -y privkey -a 'My Label' --id 0123456789abcdef0123456789abcdef01234567 -l

+ 

+ Since the private key must be protected by the PIN, you have to log in

+ to the Smartcard first. This is done with the help of the *-l* option,

+ which instructs *pkcs11-tool* to ask for the PIN and log in before

+ writing the key.

+ 

+ Now the Smartcard content should look like ::

+ 

+     pkcs11-tool --module my_pkcs11_module.so --slot 0 --list-objects -l

+     Logging in to "My Token".

+     Please enter User PIN:

+     Private Key Object; RSA 

+       label:      My Label

+       ID:         0123456789abcdef0123456789abcdef01234567

+       Usage:      decrypt, sign, unwrap

+     Public Key Object; RSA 2048 bits

+       label:      My Label

+       ID:         0123456789abcdef0123456789abcdef01234567

+       Usage:      encrypt, verify, wrap

+     Certificate Object, type = X.509 cert

+       label:      My Label

+       ID:         0123456789abcdef0123456789abcdef01234567

+ 

+ If the PKCS#11 module is properly added to the system's NSS database (see

+ `https://docs.pagure.org/SSSD.sssd/design_pages/smartcard_authentication_step1#configuring-ipa-client-for-local-authentication-with-a-smartcard <https://docs.pagure.org/SSSD.sssd/design_pages/smartcard_authentication_step1.html#configuring-ipa-client-for-local-authentication-with-a-smartcard>`__

+ for details) p11\_child should be able to return the certificate ::

+ 

+     /usr/libexec/sssd/p11_child --pre --nssdb=/etc/pki/nssdb

+ 

+ If this works well SSSD should now be able to authenticate the AD user

+ with the help of the Smartcard.

+ 

+ Certificate from an external CA

+ -------------------------------

+ 

+ There are various ways how to get a certificate from an external CA,

+ see e.g.

+ `https://blog-nkinder.rhcloud.com/?p=179 <https://blog-nkinder.rhcloud.com/?p=179>`__

+ for how to generate the keys on a Smartcard, request a certificate from a CA

+ and store it on the Smartcard. As a result the certificate and all the

+ needed keys are already on the Smartcard. In the following we will

+ explain how to make AD aware of it and enable local Smartcard login for

+ an AD user.

+ 

+ In other situations the certificate and the keys might be available as

+ files. The previous section should help to convert the file content into

+ DER encoded objects and write them to a Smartcard.

+ 

+ Reading the certificate from the Smartcard

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The certificate can be read with various tools like *certutil*,

+ *pkcs11-tool* or *p11tool*. But using SSSD's *p11\_child* has the

+ advantage that it verifies that SSSD can access the certificate as

+ well. ::

+ 

+     /usr/libexec/sssd/p11_child --pre --nssdb=/etc/pki/nssdb | tail -1 | base64 -d > ./cert.der

+ 

+ should write the DER encoded certificate data into the file *cert.der*.

+ If there are any issues you can call ::

+ 

+     /usr/libexec/sssd/p11_child --pre -d 10 --debug-fd=2 --nssdb=/etc/pki/nssdb

+ 

+ to see the full debug output which might help to identify what is going

+ wrong.

+ 

+ Writing the certificate to AD

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ For the following operations the permissions of the AD user which should

+ get the certificate are sufficient. So either login as the user or call

+ *kinit aduser@AD.DOMAIN*.

+ 

+ First the distinguished name (DN) of the user object in AD has to be

+ identified with ::

+ 

+     ldapsearch -Y GSSAPI -H ldap://ad-dc.ad.domain -b 'dc=ad,dc=domain' samAccountName=aduser dn

+ 

+ In the simplest case the DN will look like

+ *CN=aduser,CN=Users,DC=ad,DC=domain*.

+ 

+ With this DN a simple LDIF file can be created ::

+ 

+     dn: CN=aduser,CN=Users,DC=ad,DC=domain

+     changetype: modify

+     add: userCertificate

+     userCertificate:< file:cert.der

+ 

+ With this LDIF file the certificate can be loaded into the aduser entry ::

+ 

+     ldapmodify -Y GSSAPI -H ldap://ad-dc.ad.domain -f file.ldif

+ 

+ Now SSSD can check whether the certificate belongs to the aduser and

+ can authenticate the aduser locally with the Smartcard. Please note

+ that SSSD might have a valid user entry in the cache and will not read

+ the freshly added certificate immediately. To force a refresh, call

+ *sss\_cache -u aduser@ad.domain*.
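
The LDIF step above can also be generated programmatically. This sketch builds the same modification with the certificate value inlined as base64 (the `::` LDIF form), which is equivalent to the `userCertificate:< file:cert.der` URL form; the DN and the dummy DER bytes are placeholders:

```python
import base64

def cert_to_ldif(dn: str, der_bytes: bytes) -> str:
    """Build the LDIF modify entry that adds a userCertificate value.

    The '::' syntax carries the base64-encoded DER blob inline; it is
    equivalent to the 'userCertificate:< file:cert.der' URL form used
    with ldapmodify above.
    """
    return "\n".join([
        "dn: " + dn,
        "changetype: modify",
        "add: userCertificate",
        "userCertificate:: " + base64.b64encode(der_bytes).decode("ascii"),
        "",
    ])

# Dummy DER bytes stand in for the real certificate read from the card.
print(cert_to_ldif("CN=aduser,CN=Users,DC=ad,DC=domain", b"\x30\x82"))
```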

@@ -0,0 +1,423 @@ 

+ Smart Cards

+ ===========

+ 

+ For Fedora 20 (ended up in 21), we proposed adding `support for smart

+ cards <https://fedoraproject.org/wiki/Changes/SSSD_Smart_Card_Support>`__

+ to SSSD. This is where we work out how to do it, or try to, anyway.

+ 

+ Multi-step Authentication

+ -------------------------

+ 

+ Considerations

+ ~~~~~~~~~~~~~~

+ 

+ -  Current sequence of events when a client authenticates:

+ 

+    -  pam\_sss sends a request to the PAM responder, containing

+       parameters:

+ 

+       -  PAM\_USER (the login name)

+       -  PAM\_SERVICE

+       -  PAM\_TTY

+       -  PAM\_RUSER

+       -  PAM\_RHOST

+       -  client PID

+       -  PAM\_AUTHTOK (supplied password)

+       -  PAM\_NEWAUTHTOK (new password, if changing)

+ 

+    -  PAM responder sends a D-Bus method call over a private bus to the

+       domain's provider, containing:

+ 

+       -  user

+       -  domain

+       -  service

+       -  tty

+       -  ruser

+       -  rhost

+       -  authtok\_type (enumerated, one of {password, ccfilename,

+          empty})

+       -  authtok\_data

+       -  newauthtok\_type (enumerated, one of {password, ccfilename,

+          empty})

+       -  newauthtok\_data

+       -  1 if client is on privileged pipe, else 0

+       -  client PID

+ 

+    -  domain provider sends a method reply to the PAM responder

+ 

+       -  PAM status code

+       -  empty array of response messages

+ 

+    -  PAM responder replies to pam\_sss

+ 

+       -  PAM status code

+ 

+ -  PAM modules can prompt for an arbitrary number of answers at once.

+ 

+    -  Sometimes this is a password.

+    -  Sometimes this is a non-password one-off secret.

+    -  Both tend to be stored as the PAM\_AUTHTOK item, without

+       indication of what's been stored there.

+ 

+ -  An LDAP simple bind transmits the user's DN and current password to

+    server; nothing else is required.

+ 

+    -  Conclusion: multi-step needs to be a superset of single-step.

+ 

+ -  An LDAP OTP bind requires a fresh OTP value.

+ 

+    -  Conclusion: we need to be able to distinguish between a cacheable

+       password and a not-cacheable not-a-password.

+ 

+ -  Kerberos can prompt for current password, and/or one of several OTP

+    values, and/or one of several smart card PINs.

+ 

+    -  Conclusion: multi-step is sometimes going to mean having multiple

+       questions at any given step, only some of which will need to be

+       answered before proceeding.

+ 

+ -  The set of things the user can provide may change during an

+    authentication attempt, for example if the user inserts a smart card

+    or swipes a finger over a scanner.

+ -  The model for the dialog between components should cover these cases.

+ 

+ Proposal

+ ~~~~~~~~

+ 

+ -  Modify request/reply messages from pam\_sss to responder to backends.

+ -  Request and reply messages passed between pam\_sss and responder need

+    to add an identifier to distinguish one ongoing authentication

+    attempt from another.

+ 

+    -  This value is under control of a possibly-not-trustworthy

+       pam\_sss, so if it isn't the value that we designate for

+       requesting a new attempt (either 0 or -1), it should be checked

+       for correspondence with an ongoing authentication attempt before

+       we do anything else for it.

+ 

+ -  Request and reply messages passed between responder and data provider

+    need to add an identifier to distinguish one ongoing authentication

+    attempt from another.

+ 

+    -  Because the responder is multiplexing requests from multiple

+       pam\_sss instances, this identifier should not be the same as

+       the one received from pam\_sss in a request message.

+ 

+ -  Reply messages over both connections (pam\_sss-to-responder, and

+    responder-to-provider) need to be able to carry multiple questions

+    back to a requester. A request message needs to be able to supply

+    answers for any subset of those questions.

+ -  The initial request also needs to start carrying some flags. If set,

+    the client (which, because PAM conversation callbacks are blocking,

+    probably won't be pam\_sss) is indicating that it can handle updates

+    to the list of questions for a given authentication attempt.

+ 

+    -  The responder can indicate that a smart card was inserted by

+       adding a request for the card's PIN to the list of questions and

+       sending the new list of questions to the responder's client.

+    -  The responder can similarly indicate that input is no longer

+       required if authentication happened out-of-band or has timed out.

+    -  The provider will likely emit signals or issue method calls to the

+       responder to get the responder to forward updated information to

+       the responder's client if that information has been asked for.

+    -  If we get the provider and responder talking at each other in an

+       unsynchronized manner like this, request identifiers are going to

+       have to stay unique over a sufficiently-large period of time that

+       simply discarding previously-queued-but-unprocessed updates will

+       keep clients of the responder from getting confused by a backlog

+       of updates which pertain to a previous authentication attempt.

+    -  **TODO** run this by the GDM folks, to make sure it's the sort of

+       info they're going to be able to use.

+ 

+ -  Reply messages will need to convey status that isn't just a PAM

+    result code: { no-such-auth-attempt-id, got-questions-for-you,

+    auth-failed, auth-succeeded }.

+ -  Suggested prompt layout for reply messages: an array of tuples.

+    [(group,questionid,type,details),...]

+ 

+    -  group: an integer identifier for a group of related questions

+    -  questionid: integer identifying a specific question in a group

+    -  type: enumeration indicating "kind" of information being requested

+ 

+       -  password

+       -  secret (might not be a password, caching not recommended)

+       -  OTP value

+       -  insert smart card (unsynchronized only)

+       -  scan proximity device (unsynchronized only)

+       -  swipe finger (unsynchronized only)

+       -  provide smart card PIN out-of-band (if the token has a

+          protected authentication path)

+       -  smart card PIN (if the token does not have a protected

+          authentication path)

+ 

+    -  detail: structured per-type data:

+ 

+       -  password → empty

+       -  secret → prompt text

+       -  OTP value → modeled after

+          `​krb5\_responder\_otp\_tokeninfo <https://github.com/krb5/krb5/blob/master/src/include/krb5/krb5.hin#L6626>`__

+ 

+          -  service name

+          -  token index (corresponds to "ti" in the krb5 API)

+          -  flags

+          -  format

+          -  length

+          -  vendor

+          -  challenge

+          -  token ID

+          -  algorithm ID

+ 

+       -  insert smart card -> empty

+       -  scan proximity device -> empty

+       -  swipe finger -> empty

+       -  provide smart card PIN out-of-band or smart card PIN

+ 

+          -  either broken-out

+             `​pkinit\_identities <http://web.mit.edu/Kerberos/krb5-1.9/krb5-1.9.1/doc/krb5-admin.html#pkinit%20identity%20syntax>`__

+ 

+             -  module name (shared library file name)

+             -  slot ID (hex string representing byte array)

+             -  token label (string)

+             -  certificate ID (hex string representing byte array)

+             -  certificate label (string)

+ 

+          -  or `​p11-kit

+             URI <https://datatracker.ietf.org/doc/draft-pechanec-pkcs11uri/>`__

+             fields, some of which are:

+ 

+             -  token label

+             -  token manufacturer

+             -  token model

+             -  token serial number

+             -  certificate label

+ 

+          -  We'll have to go with a common subset of the two to mask the

+             differences between what we get when we're doing PKINIT and

+             what we have available when we're calling p11-kit/PKCS#11

+             directly. Medium-to-longer term, we may need to add to what

+             PKINIT provides here.

+ 

+    -  The responder needs to be able to filter the list of questions it

+       gets from the provider to remove questions for kinds of data which

+       the administrator does not want to allow to be used for

+       authenticating users, passing back to the client only a subset of

+       the questions that the provider requested be asked. The list of

+       questions could even be pared down to nothing.

+ 

+ -  Suggested answer form for request messages: another tuple array.

+    [(group,questionid,answer)...]

+ 

+    -  For any group, all questionids are expected to have answers.

+    -  Answer formats:

+ 

+       -  password → text string

+       -  secret → text string

+       -  OTP value → (token index, otp, pin) (or is text string enough?)

+       -  insert smart card (never sent by client - out-of-band action)

+       -  scan proximity device (never sent by client - out-of-band

+          action)

+       -  swipe finger (never sent by client - out-of-band action)

+       -  provide smart card PIN out-of-band (never sent by client -

+          out-of-band action)

+       -  smart card PIN → (token label, text string)

+ 

+ Smart Card support, part 1: load the drivers, find the right reader (if we can).

+ --------------------------------------------------------------------------------

+ 

+ Considerations

+ ~~~~~~~~~~~~~~

+ 

+ -  Readers (*slots* in PKCS#11 jargon), and access to them, haven't

+    historically been tied to a console or seat.

+ -  Cards (*tokens* in PKCS#11 jargon) have therefore been accessible to

+    any user on the system for the purposes of logging in to a card and

+    using it, or attempting to log in to a card.

+ -  Some cards self-destruct after a maximum number of failed login

+    attempts. Some of these cards can expose a flag in their

+    CK\_TOKEN\_INFO to warn that this is about to become a problem.

+    Experimentally, some cards which lock the user out after some number

+    of failed login attempts, however, don't expose this flag.

+ -  Multi-user systems, or software which isn't careful about what it

+    uses to try to log into a card, can make it very easy for one user to

+    destroy someone else's card.

+ 

+ Proposal

+ ~~~~~~~~

+ 

+ -  Use p11-kit to avoid having to tell SSSD specifically about which

+    module or modules to use, and to allow us to share the hardware

+    configuration which will be used by the user during their login

+    session. For PKINIT, this means we'll probably end up using

+    p11-kit-proxy.so by default, as it expects the name of a module to

+    load when using PKCS#11.

+ -  Loading the p11-kit-proxy.so module using NSS's APIs gives it access

+    to the same set of modules that p11-kit's native API provides, and

+    also adds reference counting for module initializations, which should

+    avoid at least one known error case that we've seen with the

+    soft-pkcs11.so module (wherein calling its initialization function a

+    second time nukes any still-being used state).

+ -  **TODO** We can map from a responder client PID to a unit which might

+    have a TTY (before login) or a session with a seat (after login), but

+    can we map from a SLOT\_INFO to anything that lets us avoid using a

+    slot if it doesn't belong to the seat on which a particular client

+    sits? (This bit has been bugging the author for a while now.)

+ 

+ Smart Card support, part 2: verify the card's contents.

+ -------------------------------------------------------

+ 

+ Considerations

+ ~~~~~~~~~~~~~~

+ 

+ -  We need to log in to the card as a user.

+ -  We need to find a certificate on the card for which the card also

+    holds the corresponding private key.

+ 

+    -  Conceptually, the simplest thing is to just sign some data with

+       the private key, and verify it using the public key in the

+       certificate.

+    -  We *could* alternately read the specific public fields from the

+       private key, pull the public key out of the certificate, decode

+       that public key (in a manner specific to the type of key), and

+       compare the two, but we still wouldn't know for sure that the

+       private parts of the key were correct. And we'd have to extend this

+       code for every new type of key pair we wanted to support into the

+       future. So we'll just do the sign/verify, like pam\_pkcs11 does.

+ 

+ -  We need to verify that that certificate is issued by an issuer who is

+    trusted to issue certificates for login. The set of CAs which we

+    trust to do that is almost certainly going to be a much smaller set

+    than the full set of commercial CAs that we trust for issuing SSL

+    server certificates.

+ -  We need to verify that that certificate is suitable for login, i.e.,

+    not just for signing email and/or visiting web sites. On Windows

+    KDCs, and on MIT KDCs by default, this is indicated with a particular

+    value in the extendedKeyUsage extension. Large client rollouts may

+    have deployed cards with certificates containing one or the other or

+    neither, so this needs to be a requirement that we can relax through

+    configuration.

+ 

+ Proposal

+ ~~~~~~~~

+ 

+ -  If configured, if a card is not present, wait for card insertion

+    into a suitable slot. Note that the slot and token may show up at

+    the same time.

+ -  Verify that user has access to token in slot.

+ 

+    -  PIN is accepted for login to card as CKU\_USER.

+    -  Find certificate and private key where the pair is marked as

+       suitable for signing data.

+    -  Generate random to-be-signed data.

+    -  Sign generated data with private key.

+    -  Verify signature over generated data using public key contained in

+       certificate.

+ 

+ -  Verify that the just-used certificate on the token chains up to an

+    issuer who is trusted to issue certificates which can be used for

+    logging in.

+ 

+    -  Note that this trusted issuer set is tightly controlled beyond the

+       normal set of CAs who are trusted to issue certificates for other

+       servers.

+    -  **TODO** seek guidance and assistance from the p11-kit

+       implementors, who are working on standardizing the storage and

+       expression of trust anchor information. PKINIT expects that we

+       pass in the names of files containing trusted anchors and known

+       intermediates, while p11-kit/PKCS#11 expose the information as

+       certificate and CKO\_NSS\_TRUST/??? objects on a token. We're

+       going to want to use p11-kit as much as possible to cut down on the

+       number of places this has to be configured on a given system.

+ 

+ -  Check for either the Windows Smart Card Logon extendedKeyUsage value,

+    or the RFC4556 Kerberos Client value, but keep what exactly we look

+    for configurable for the sake of existing deployments.

+ 

+ Smart Card support, part 3: check that the card matches the account.

+ --------------------------------------------------------------------

+ 

+ Considerations

+ ~~~~~~~~~~~~~~

+ 

+ -  A certificate contains a subject field which identifies the owner of

+    the public key in the certificate. This is a distinguished name,

+    similar in many ways to an LDAP DN, mainly differing in that it's

+    encoded as a DER blob, with attribute types represented by OIDs

+    rather than by name, and with values encoded as DER strings of one of

+    several types.

+ -  A certificate can also contain one or more subjectAlternativeName

+    extension values. If it contains any of these, the subject field

+    becomes optional. Each subjectAlternativeName (SAN) value is

+    considered equally canonical. SAN values can take multiple forms,

+    including but not limited to distinguished names, DNS names, IP

+    addresses, email addresses, and Kerberos principal names.

+ -  Somehow, we have to use this naming information to figure out if the

+    certificate belongs to the account.

+ -  For tech demo situations, we'd like to be able to use this

+    information to discover the user account name without requiring that

+    it be specified by an end-user. It's a secondary concern, however,

+    and some customers legitimately want to be able to turn such a

+    feature off.

+ 

+ Proposal

+ ~~~~~~~~

+ 

+ Per-provider check for binding between certificate and account.

+ 

+ -  Kerberos (likely AD and IPA as well): let the KDC decide - if we get

+    creds for the account's principal, we're done.

+ 

+    -  This *should* handle matching using *altSecurityIdentity*

+       information for us if we're talking to an AD server.

+    -  **TODO** MIT krb5 KDC PKINIT logic currently only accepts a

+       client's certificate if it contains the principal name as a SAN

+       value. We'll want to be able to extend that.

+    -  Don't forget that we need to verify the KDC's certificate, and its

+       suitability to be issuing Kerberos tickets for the realm of the

+       account.

+ 

+ -  LDAP (possibly AD and IPA as well):

+ 

+    -  Let the server decide — connect to server using TLS with client

+       auth and SASL/EXTERNAL, use whoami EXOP to read our entry DN,

+       compare to the account's DN.

+ 

+       -  With NSS, this *should* only require setting

+          LDAP\_OPT\_X\_TLS\_CERTFILE to "tokenname:certnickname",

+          preferably after logging in to token.

+       -  This *should* get the server to handle *altSecurityIdentity*

+          matching for us if we're talking to an AD server.

+       -  When we start issuing client certificates in FreeIPA, we'll

+          need to make sure that FreeIPA's configuring dirsrv's

+          certmap.conf to pull the right attribute from the subject DN

+          and construct a search that will match the user's entry.

+       -  **TODO** figure out if there's a way to take the

+          CK\_FUNCTION\_LIST which p11-kit/PKCS#11 gives us and hand it

+          to the copy of NSS that libldap is using under the covers.

+ 

+    -  Let the server merely hold the info — search for an entry that

+       contains a copy of the certificate that's on the token as a

+       userCertificate value, check results.

+    -  Implement something like certmap.conf logic at the client.

+    -  Compare the subject/SAN DN to the user's entry's DN.

+ 

+       -  This is trickier than it sounds: converting from the binary

+          encoding used in the certificate's fields to the text format

+          used by LDAP requires that we map the OIDs of attributes to the

+          name of the naming attribute used in the LDAP DN. Converting

+          the LDAP DN to binary form may work better, but that's just a

+          guess.

+ 

+    -  Don't forget that we need to verify the server's certificate.

+ 

+ -  Local files:

+ 

+    -  Read certificate Kerberos principal name SAN and perform libkrb5

+       *auth\_to\_local* mapping.

+    -  Check for the certificate in the user's entry in a local data

+       store.

+    -  Check for the public key in user's SSH authorized key list.

+    -  Check for certificate subject/SAN CN matching user GECOS.

+    -  Check for certificate subject/SAN CN matching user name.

+    -  Check for certificate subject/SAN UID matching user name.

+ -  Check for certificate subject/SAN emailAddress matching user name

+    within a configured email domain.
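
The local-files checks above amount to a fallback chain that stops at the first rule linking the certificate to the account. A sketch, with field names that are assumptions rather than SSSD's actual data model:

```python
def match_local_user(cert, user):
    """cert and user are dicts of extracted fields; return the first
    rule that links the certificate to the local account, or None."""
    principal = cert.get("krb_principal_san", "")
    checks = [
        # auth_to_local mapping is reduced to stripping the realm here
        ("principal SAN", principal.split("@")[0] == user["name"]),
        ("CN matches GECOS",
         cert.get("cn") is not None and cert.get("cn") == user.get("gecos")),
        ("CN matches name", cert.get("cn") == user["name"]),
        ("UID matches name", cert.get("uid") == user["name"]),
    ]
    for rule, matched in checks:
        if matched:
            return rule
    return None

print(match_local_user({"cn": "alice"}, {"name": "alice", "gecos": "Alice A."}))
```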

@@ -0,0 +1,189 @@ 

+ Smartcards and Multiple Identities

+ ==================================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/3050 <https://pagure.io/SSSD/sssd/issue/3050>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ Although there are other means, like e.g. sudo or policy-kit, it is

+ still common practice to assign multiple accounts with different

+ privileges to a single person. The typical example is a system

+ administrator who has an ordinary user account for the daily office

+ work and a privileged account for the admin duties. Another example

+ is functional accounts, like e.g. a dedicated database administrator

+ account which is used by more than one person.

+ 

+ In the context of Smartcard authentication there are two cases to

+ consider:

+ 

+ -  multiple certificates valid for authentication on a single Smartcard

+ -  a single certificate is mapped to multiple accounts

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ Multiple certificates on a single Smartcard

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ To allow logging in to different accounts, a user has multiple

+ different certificates which all match the criteria for authentication

+ on a single Smartcard. The user must be able to log in to each account

+ by giving a user name, selecting a certificate, or both, depending on

+ the login method.

+ 

+ Single certificate for multiple accounts

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ To allow logging in to different accounts, the certificate on the

+ user's Smartcard is mapped to multiple accounts. The user must be

+ able to log in to each account by giving a user name for the specific

+ account.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ SSSD will read the certificates from the Smartcard which match the

+ criteria for authentication and will use optional additional

+ information, like e.g. the user name, to determine a unique certificate

+ and a unique user name which can be used for authentication. If there

+ is not sufficient information, SSSD might ask the user to select a

+ certificate or provide a user name to proceed with authentication.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ In general, applications which use PAM for authentication will provide

+ a user name when calling pam\_start(). In this case SSSD has both the

+ certificate and a user name, which should be sufficient in most cases

+ to determine whether the user can authenticate with the certificate.

+ There are the following cases:

+ 

+ -  if multiple users are found with the same name, this is an error even

+    without Smartcard authentication

+ -  if the certificate and the user do not map, SSSD will prompt for a

+    password

+ -  if the certificate and the user map, SSSD will prompt for a PIN

+ -  if there are multiple certificates suitable for authentication on

+    the Smartcard and more than one maps to the user, SSSD will prompt to

+    select a certificate before asking for a PIN. **QUESTION: would it be

+    an information leak to only show the certificates which relate to the

+    user, or do we have to display all certificates from the card?**

+ 

+ The first and second cases already work. For the third case, the check

+ that a certificate maps only to a single user must be dropped to

+ support the use case where one certificate is used to log in to

+ different accounts (which of course are identified by different user

+ names). For the fourth case a new PAM dialog/conversation is needed.

+ 

+ To my knowledge there are currently two cases where a user name is not

+ available in the first place: InfoPipe lookups by certificate, used e.g.

+ by `mod\_lookup\_identity <https://www.adelton.com/apache/mod_lookup_identity/>`__,

+ and the GDM Smartcard module, which calls pam\_start() with an empty

+ user name if a Smartcard was inserted while the GDM login screen is

+ running. Here the following cases are possible:

+ 

+ -  the certificate(s) cannot be mapped to any user; SSSD will just

+    return a suitable error code

+ -  there are one or more certificates suitable for authentication on

+    the card but only one maps to a user; SSSD will prompt for a PIN

+ -  there are one or more certificates suitable for authentication on

+    the card but only one maps to multiple users; SSSD will prompt for a

+    user name before asking for a PIN

+ -  there are multiple certificates suitable for authentication on the

+    card and more than one maps to users; SSSD will prompt to select a

+    certificate, and if the selected certificate still maps to multiple

+    users SSSD will fall back to case three and ask for a user name

+    before asking for a PIN

+ 
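
The case analysis above, for both the name-present and name-absent flows, can be sketched as a small decision function. This is illustrative logic, not actual SSSD code:

```python
def sc_prompt(cert_user_map):
    """cert_user_map maps each certificate that matches the
    authentication criteria to the list of users it maps to;
    return which prompt SSSD would issue next."""
    # Drop certificates that map to no user at all.
    mapped = {c: users for c, users in cert_user_map.items() if users}
    if not mapped:
        return "error: no matching user"
    if len(mapped) > 1:
        return "select certificate"  # may still need a user name afterwards
    (users,) = mapped.values()
    if len(users) > 1:
        return "ask user name, then PIN"
    return "ask PIN"

print(sc_prompt({"certA": ["alice"]}))           # prints ask PIN
print(sc_prompt({"certA": ["alice", "admin"]}))  # prints ask user name, then PIN
```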

+ Since in the InfoPipe case only one certificate is sent to SSSD, only

+ the first three cases apply here and SSSD can e.g. indicate with an

+ error code that either none or multiple users match the certificate.

+ In the latter case the application can ask for a user name.

+ 

+ For the gdm case it might be useful to see how Smartcard authentication

+ is handled on Windows. To illustrate this I prepared two short

+ screencasts (sorry for the raw state; if time permits I will improve them).

+ 

+ `The

+ first <https://sbose.fedorapeople.org/sc/AD_SC_auth_2certs.webm>`__

+ shows the case where there are two different certificates valid for

+ authentication on the Smartcard. The Windows utility *certutil* can be

+ used with the *-SCInfo* option to check the certificates on the card and

+ whether private keys are available as well. When switching to the logon

+ screen, Windows shows icons for each certificate together with some data

+ from the certificate which should help to identify it. In

+ this example the certificates were generated by the AD CS of the domain

+ and hence the displayed data matches the AD user name. In general this

+ does not have to be the case, e.g. if certificates are issued by 3rd

+ party CAs. By selecting a specific certificate and entering the PIN the

+ mapped user is logged in.

+ 

+ `The

+ second <https://sbose.fedorapeople.org/sc/AD_SC_auth_2users.webm>`__

+ shows the case where there is only one certificate on the card but

+ mapped to two different users. The mapping can be done in AD's 'Users

+ and Computers' utility after enabling the 'Advanced Features' in the

+ 'View' menu. Now with a right-click 'Name Mappings' can be selected from

+ the context menu. After switching to the logon screen Smartcard

+ authentication will fail because Windows does not know which user should

+ be used for login. To solve this the 'Allow user name hint' policy

+ setting must be enabled in 'Computer Configuration\\Administrative

+ Templates\\Windows Components\\Smart Card' (see `'Smart Card Group

+ Policy and Registry Settings' on

+ Technet <https://technet.microsoft.com/en-us/library/ff404287%28v=ws.10%29.aspx>`__

+ for details). Now the logon screen displays a 'Username hint' prompt in

+ addition to the PIN prompt. If the certificate on the Smartcard is

+ mapped to multiple users the additional username hint makes

+ authentication possible. Please note that the 'Username hint' is needed

+ as well if the user is in a trusted domain.

+ 

+ Coming back to gdm: as long as only a 'Username hint' is needed,

+ pam\_sss can send two messages in the PAM conversation, one for the

+ user name and the second for the PIN. It would be nice if gdm could

+ display both messages and prompts at the same time, as Windows does, to

+ make it clearer to the user what input is expected and why.

+ 

+ If it is needed to select a certificate, the typical PAM service will

+ display specific data from the certificates, e.g. the value of the most

+ specific RDN of the subject and the full DN of the issuer, in a numbered

+ list, asking the user to enter the number of the certificate which

+ should be used for login. It would be nice if gdm could display this

+ list in a more graphical way and make it possible to select the

+ certificate with a mouse-click. Since SSSD knows that it is called by

+ gdm because of the PAM service name, e.g. gdm-smartcard, it would be

+ possible to send the certificate data with a new message style, e.g.

+ PAM\_SELECTION\_LIST\_ITEM, like the Linux specific PAM\_RADIO\_TYPE.

+ Then gdm would be able to display the certificate selection in a more

+ suitable way, e.g. similar to the selection of user names.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ Does your feature involve changes to configuration, like new options or

+ options changing values? Summarize them here. There's no need to go into

+ too many details, that's what man pages are for.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ This section should explain to a person with admin-level of SSSD

+ understanding how this change affects run time behaviour of SSSD and how

+ can an SSSD user test this change. If the feature is internal-only,

+ please list what areas of SSSD are affected so that testers know where

+ to focus.

+ 

+ How To Debug

+ ~~~~~~~~~~~~

+ 

+ Explain how to debug this feature if something goes wrong. This section

+ might include examples of additional commands the user might run (such

+ as keytab or certificate sanity checks) or explain what message to look

+ for.

+ 

+ Authors

+ ~~~~~~~

+ 

+ Give credit to authors of the design in this section.

@@ -0,0 +1,206 @@ 

+ Socket Activatable Responders

+ =============================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2243 <https://pagure.io/SSSD/sssd/issue/2243>`__

+ -  `https://pagure.io/SSSD/sssd/issue/3129 <https://pagure.io/SSSD/sssd/issue/3129>`__

+ -  `https://pagure.io/SSSD/sssd/issue/3245 <https://pagure.io/SSSD/sssd/issue/3245>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ SSSD has some responders which don't have to be running all the time,

+ but could be socket-activated instead on platforms where that is

+ supported. That's the case, for instance, for the IFP, ssh and sudo

+ responders. Making these responders socket-activated would provide a

+ better user experience, as these services could be started on demand

+ when a client needs them and exit after a period of inactivity.

+ Currently the admin has to explicitly list all the services that might

+ potentially be needed in the ``services`` section, and the processes

+ have to be running all the time.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ sssctl

+ ^^^^^^

+ 

+ As more and more features have been added that depend on the IFP

+ responder, we should make sure that the responder is activated on

+ demand so that admins don't have to enable it manually.

+ 

+ KCM

+ ^^^

+ 

+ The KCM responder is only seldom needed, when libkrb5 needs to access

+ the credentials store. At the same time, the KCM responder must be

+ running if the Kerberos credentials cache defaults to ``KCM``.

+ Socket-activating the responder would solve both of these cases.

+ 

+ autofs

+ ^^^^^^

+ 

+ The autofs responder is typically only needed when a share is about to

+ be mounted.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The solution agreed on the mailing list is to add a new unit for each

+ of the responders. Once a responder is started, it will communicate

+ with the monitor to let the monitor know that it's up, and the monitor

+ will register the responder, which basically consists of marking the

+ service as started, increasing the services' counter, getting the

+ responder's configuration and adding the responder to the service's

+ list. A configurable idle timeout will be implemented in each

+ responder, as part of this task, in order to shut the responder down in

+ case it's not used for a few minutes.

+ 
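+ The idle-timeout behaviour described above can be sketched roughly as
+ follows. This is a minimal illustration with hypothetical names and
+ values, not SSSD's actual (C) code:

```python
import time

class Responder:
    """Tracks the last client request and decides when to exit."""

    def __init__(self, idle_timeout=300):
        # 300s is a placeholder; the real timeout would be configurable.
        self.idle_timeout = idle_timeout
        self.last_request = time.monotonic()

    def handle_request(self):
        # Every served request resets the inactivity clock.
        self.last_request = time.monotonic()

    def should_shut_down(self, now=None):
        # Called from a periodic timer; True once the responder has
        # been idle longer than the configured timeout.
        now = time.monotonic() if now is None else now
        return (now - self.last_request) > self.idle_timeout
```

+ A periodic timer in each responder would call something like
+ ``should_shut_down()`` and terminate the process once it returns true,
+ letting the socket unit re-activate the responder on the next request.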

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ In order to achieve our goal we will need a small modification in

+ responders' common code in order to make it ready for socket-activation,

+ add some systemd units for each of the responders and finally small

+ changes in the monitor code in order to manage the new activated

+ service.

+ 

+ The change in the responders' common code is quite trivial: just change

+ the sss\_process\_init code to call activate\_unix\_sockets() instead of

+ set\_unix\_socket(). Something like: ::

+ 

+     -    ret = set_unix_socket(rctx, conn_setup);

+     +    ret = activate_unix_sockets(rctx, conn_setup);

+ 

+ The units that have to be added for each responder must look like:

+ 

+ sssd-@responder@.service.in (for services which can be run as

+ unprivileged user): ::

+ 

+     [Unit]

+     Description=SSSD SSH Service responder

+     Documentation=man:sssd.conf(5)

+     After=sssd.service

+     BindsTo=sssd.service

+ 

+     [Install]

+     Also=sssd-ssh.socket

+ 

+     [Service]

+     ExecStartPre=-/bin/chown @SSSD_USER@:@SSSD_USER@ @logpath@/sssd_ssh.log

+     ExecStart=@libexecdir@/sssd/sssd_@responder@ --debug-to-files --socket-activated

+     Restart=on-failure

+     User=@SSSD_USER@

+     Group=@SSSD_USER@

+     PermissionsStartOnly=true

+ 

+ sssd-@responder@.service.in (for services which cannot be run as

+ unprivileged user): ::

+ 

+     [Unit]

+     Description=SSSD NSS Service responder

+     Documentation=man:sssd.conf(5)

+     After=sssd.service

+     BindsTo=sssd.service

+ 

+     [Install]

+     Also=sssd-nss.socket

+ 

+     [Service]

+     ExecStartPre=-/bin/chown root:root @logpath@/sssd_nss.log

+     ExecStart=@libexecdir@/sssd/sssd_nss --debug-to-files --socket-activated

+     Restart=on-failure

+ 

+ sssd-@responder@.socket.in: ::

+ 

+     [Unit]

+     Description=SSSD NSS Service responder socket

+     Documentation=man:sssd.conf(5)

+     BindsTo=sssd.service

+ 

+     [Socket]

+     ListenStream=@pipepath@/@responder@

+     SocketUser=@SSSD_USER@

+     SocketGroup=@SSSD_USER@

+ 

+     [Install]

+     WantedBy=sssd.service

+ 
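+ For reference, a responder started this way receives its listening
+ socket from systemd instead of creating it, following the
+ sd_listen_fds(3) protocol: systemd sets LISTEN_PID and LISTEN_FDS in
+ the environment, and the passed descriptors start at fd 3. A rough
+ Python illustration of that protocol (SSSD does the equivalent in C):

```python
import os

SD_LISTEN_FDS_START = 3  # first file descriptor passed by systemd

def inherited_socket_fds(environ=None, pid=None):
    """Return the list of socket fds handed over by systemd.

    LISTEN_PID must match our own pid, and LISTEN_FDS holds the number
    of descriptors, numbered consecutively from fd 3.
    """
    environ = os.environ if environ is None else environ
    pid = os.getpid() if pid is None else pid
    if environ.get("LISTEN_PID") != str(pid):
        return []  # not socket-activated, or fds meant for another process
    try:
        n = int(environ.get("LISTEN_FDS", "0"))
    except ValueError:
        return []
    return list(range(SD_LISTEN_FDS_START, SD_LISTEN_FDS_START + n))
```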

+ Some responders may have more than one socket, which is the case of PAM,

+ so another unit will be needed.

+ 

+ sssd-@responder@-priv.socket.in: ::

+ 

+     [Unit]

+     Description=SSSD PAM Service responder private socket

+     Documentation=man:sssd.conf(5)

+     BindsTo=sssd.service

+     BindsTo=sssd-@responder@.socket

+ 

+     [Socket]

+     Service=sssd-@responder@.service

+     ListenStream=@pipepath@/private/@responder@

+     SocketUser=root

+     SocketGroup=root

+     SocketMode=0600

+ 

+     [Install]

+     WantedBy=sssd.service

+ 

+ Last but not least, the IFP responder doesn't have a socket. It's going

+ to be D-Bus activated and some small changes will be required on its

+ D-Bus service unit (for platforms where systemd is supported). ::

+ 

+     -Exec=@libexecdir@/sssd/sss_signal

+     +ExecStart=@libexecdir@/sssd/sssd_@responder@ --uid 0 --gid 0 --debug-to-files --dbus-activated

+     +SystemdService=sssd-ifp.service

+     +Restart=on-failure

+ 

+ And, finally, the code on the monitor side will need some adjustments

+ in order to properly deal with an empty list of services and, also, to

+ register a service when it's started.

+ 

+ As only the responders will be socket-activated for now, the service

+ type will have to be exposed and passed through sbus when calling the

+ RegistrationService method, and the monitor will have to properly

+ register the service when the RegistrationService callback is

+ triggered. As mentioned before, the "registration" that has to be done

+ on the monitor's side is:

+ 

+ -  Mark the service as started;

+ -  Increase the services' counter;

+ -  Get the responders' configuration;

+ -  Set the service's restart number;

+ -  Add the service to the services' list.

+ 
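+ In pseudo-Python, the monitor-side registration steps listed above
+ amount to bookkeeping like this (an illustrative sketch with
+ hypothetical names, not the actual monitor code):

```python
class Monitor:
    def __init__(self, config):
        self.config = config      # parsed sssd.conf, keyed by service name
        self.services = {}        # service name -> service record
        self.services_started = 0

    def register_service(self, name):
        """Register a socket-activated responder when it announces itself."""
        svc = {
            "name": name,
            "started": True,                       # mark the service as started
            "config": self.config.get(name, {}),   # get the responder's config
            "restarts": 0,                         # restart counter for the service
        }
        self.services_started += 1                 # increase the services' counter
        self.services[name] = svc                  # add it to the services' list
        return svc

    def unregister_service(self, name):
        """Called when the sbus connection to the service is closed."""
        if self.services.pop(name, None) is not None:
            self.services_started -= 1
```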

+ "Unregistering" a socket-activated service will be done when the

+ connection between the service and the monitor is closed.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ After this design is implemented, the "services" line in sssd.conf will

+ become optional for platforms where systemd is present. Note that in

+ order to keep backward compatibility, if the "services" line is present,

+ the services will behave exactly as they did before these changes.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ The easiest way to test is removing the "services" line from sssd.conf

+ and try to use SSSD normally. Using sssctl tool without having the ifp

+ responder set in the "services" line is another way to test.

+ 

+ How To Debug

+ ~~~~~~~~~~~~

+ 

+ The easiest way to debug this new feature is to take a look at the

+ responders' common initialization code and at the monitor's client

+ registration code. It is worth mentioning that disabling the systemd

+ services/sockets will prevent the responders' services from being

+ started.

+ 

+ Authors

+ ~~~~~~~

+ 

+ Fabiano Fidêncio <`fidencio@redhat.com <mailto:fidencio@redhat.com>`__>

@@ -0,0 +1,80 @@ 

+ Sockets for domains

+ ===================

+ 

+ Problem statement

+ -----------------

+ 

+ Currently, sssd offers the following types of sockets, one per

+ responder:

+ 

+ -  /var/lib/sss/pipes/nss

+ -  /var/lib/sss/pipes/pac

+ -  /var/lib/sss/pipes/pam

+ -  /var/lib/sss/pipes/ssh

+ 

+ That is good for typical OS-level operation where sssd offers services

+ it has set up (in /etc/sssd/sssd.conf) and offers services about all

+ domains it is IPA-enrolled to or otherwise configured.

+ 

+ However, if sssd is to be used as an identity and authentication

+ service for containerized applications, each container might be run for

+ just one domain or a subset of the domains configured for the host's sssd. For

+ example, sssd might be configured for domains prod.example.com,

+ dev.example.com, cust1.example.com, cust2.example.com. The host might

+ run four containers, and each container should have access to just one

+ of these domains. Plus a fifth container (perhaps some monitoring

+ application) should have access to prod.example.com, cust1.example.com,

+ and cust2.example.com.

+ 

+ If we want to use the sssd running on the host and take advantage of

+ caching and common configuration, access to sssd's services would be

+ done by mounting the sockets (or directory with the sockets) to the

+ container. However, the current set of sockets gives access to all

+ domains, and sssd does not have any way to determine the identity

+ of the peer requesting the service. An attempt was made to add a kernel

+ call which would allow determining the cgroups of the peer in a non-racy

+ way, but it does not look like the call will be added:

+ `https://bugzilla.redhat.com/show\_bug.cgi?id=1063939 <https://bugzilla.redhat.com/show_bug.cgi?id=1063939>`__.

+ 

+ Goals

+ -----

+ 

+ Make it possible for containers to consume services for only a subset of

+ domains.

+ 

+ Make it possible for sssd to provide services only for a subset of

+ domains, based on some criteria.

+ 

+ Proposal

+ --------

+ 

+ Idea

+ ~~~~

+ 

+ Since it is not possible to determine the identity of the peer (which

+ sssd could then use to map to the list of domains it should serve),

+ let's make it possible to create additional sockets on the fly which

+ could then be passed to the container and which would be pre-configured

+ to serve only a certain set of domains. By reading from a given socket,

+ sssd would then know that the peer should only be handled in the context

+ of one domain or a set of domains.

+ 

+ The sockets need to be created without sssd restarting -- it needs to be

+ an online operation.

+ 

+ Interface

+ ~~~~~~~~~

+ 

+ Add a D-Bus call which would take a list of domains and would return the

+ path of a directory containing the sockets that can be used when only a

+ given set of domains should be addressable.

+ 

+ Sssd is welcome to reuse the same directory when the same set of domains

+ is requested with the next call. So there won't be another set of

+ sockets created for each new container -- the subsequent calls will just

+ use the already created ones. Sssd just needs to make sure the list of

+ domains matches.

+ 
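+ The reuse rule could be implemented by keying the created directories on
+ the (order-insensitive) set of requested domains. An illustrative
+ sketch; the real implementation would create the sockets themselves,
+ and the base path here is only an assumption:

```python
class SocketDirRegistry:
    """Hands out one socket directory per distinct set of domains,
    reusing it when the same set is requested again."""

    def __init__(self, base="/var/lib/sss/pipes/private"):
        self.base = base
        self._dirs = {}    # frozenset(domains) -> directory path
        self._counter = 0

    def dir_for_domains(self, domains):
        # frozenset ignores ordering and duplicates, so the same set of
        # domains always maps to the same directory.
        key = frozenset(domains)
        if key not in self._dirs:
            self._counter += 1
            self._dirs[key] = "%s/domains-%d" % (self.base, self._counter)
        return self._dirs[key]
```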

+ Q: should there also be a list of responders that should be supported?

+ Would that be useful for some use cases, possibly making for more secure

+ setup?

@@ -0,0 +1,229 @@ 

+ SSSCTL

+ ======

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/385 <https://pagure.io/SSSD/sssd/issue/385>`__

+ -  `https://pagure.io/SSSD/sssd/issue/1788 <https://pagure.io/SSSD/sssd/issue/1788>`__

+ -  `https://pagure.io/SSSD/sssd/issue/1828 <https://pagure.io/SSSD/sssd/issue/1828>`__

+ -  `https://pagure.io/SSSD/sssd/issue/2166 <https://pagure.io/SSSD/sssd/issue/2166>`__

+ -  `https://pagure.io/SSSD/sssd/issue/2954 <https://pagure.io/SSSD/sssd/issue/2954>`__

+ -  `https://pagure.io/SSSD/sssd/issue/2957 <https://pagure.io/SSSD/sssd/issue/2957>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ The main purpose of this task is to make administration and debugging

+ tasks more user friendly and thus hopefully save time for users,

+ support, and developers.

+ 

+ SSSCTL will be a CLI client using the SSSD InfoPipe as a server; the

+ InfoPipe will provide the necessary data and will perform or delegate

+ commands to the SSSD providers and responders.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ -  online/offline state

+    (`https://pagure.io/SSSD/sssd/issue/385 <https://pagure.io/SSSD/sssd/issue/385>`__).

+    Users have repeatedly asked for a simple means of checking whether

+    the data provider is online or offline without having to check the

+    logs (if logging is enabled at all).

+ -  Report whether the entry is present in SSSD cache

+    (`https://pagure.io/SSSD/sssd/issue/2166 <https://pagure.io/SSSD/sssd/issue/2166>`__)

+ -  Check if the cached entry is valid and refresh if appropriate

+    (`https://pagure.io/SSSD/sssd/issue/2166 <https://pagure.io/SSSD/sssd/issue/2166>`__)

+ -  Measure the time an operation took (useful in performance tuning,

+    `https://pagure.io/SSSD/sssd/issue/385 <https://pagure.io/SSSD/sssd/issue/385>`__)

+ -  Failover status - Current state of failover process

+ -  Display server to which provider is connected to

+    (`https://pagure.io/SSSD/sssd/issue/385 <https://pagure.io/SSSD/sssd/issue/385>`__)

+ -  Display current debug level of a component

+ -  Generate memory report - Usually, when a user observes a memory

+    leak, we provide them with a special build that generates a talloc

+    report which we can then analyze. Using this tool, the customer would

+    simply select the SSSD component that is suspected to leak memory and

+    generate the talloc report immediately.

+ -  Removing cache - Removing the SSSD cache is an act often misused by

+    administrators, as there are few real needs for it. Nevertheless, if

+    an administrator decides to remove the cache, it would be better to

+    do so using the tool instead of crudely removing directories that

+    might contain other useful data, which could lead to serious

+    problems. Q: Is this what was requested as 'force reload'? Q: Should

+    this rather be part of the sss\_cache tool?

+ 

+ Tool interface

+ ~~~~~~~~~~~~~~

+ 

+ The up-to-date list of all available commands can be printed with ::

+ 

+     # sssctl --help

+ 

+ Domain information

+ ^^^^^^^^^^^^^^^^^^

+ 

+ A list of all domains available within SSSD can be obtained with the

+ following command. ::

+ 

+     # sssctl domain-list

+ 

+ We can also get more information about a specific domain. ::

+ 

+     # sssctl domain-status $domain [--online, --last-request, --active-server, --servers]

+     Online status: Online/Offline

+     Active server: $currently-connected-server (server status, port status, resolver status)

+ 

+     Primary servers:

+         first.example.com (server status, port status, resolver status)

+         second.example.com (server status, port status, resolver status)

+ 

+     Backup servers:

+         first.example.com (server status, port status, resolver status)

+         second.example.com (server status, port status, resolver status)

+ 

+     10 most recent requests:

+     $req-name: started at XXX, finished at XXX, total duration XXX, success/error/...

+     ...

+ 

+ If a parameter is present, only the selected part will be printed.

+ 

+ Information about cached entry

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Unless we want to use a DN, we need to specify whether we want to work

+ with a user/group/sudo rule/...; the interface for users follows, and

+ other objects may be handled similarly. ::

+ 

+     # sssctl user-show $objname

+     User $objname is not present in cache.

+ 

+     # sssctl user-show $objname

+     Cache entry creation date: XXX

+     Cache entry last update date: XXX

+     Cache entry expiration date: XXX

+ 

+ To force an update of the cached entry: ::

+ 

+     # sssctl user-show $objname --update

+ 

+ Debug level

+ ^^^^^^^^^^^

+ 

+ ::

+ 

+     # sssctl logs-level [--monitor --nss --pam ... --domain $domain]

+     Monitor: 0x00f0

+     NSS responder: 0x3ff0

+     ...

+     Domain $domain: 0xfff0

+ 

+ Optionally we can provide a debug level to set (taking over

+ sss\_debuglevel tool, since we already have D-Bus functionality in

+ place). ::

+ 

+     # sssctl logs-level [--monitor --nss --pam ... --domain $domain] $new-level

+     Monitor: 0x00f0

+     NSS responder: 0x3ff0

+     ...

+     Domain $domain: 0xfff0

+ 

+ If a parameter is present, only the selected components are shown/changed.

+ 

+ Talloc report

+ ^^^^^^^^^^^^^

+ 

+ ::

+ 

+     # sssctl memory-report [--monitor --nss --pam ... --domain $domain] $file

+ 

+ If a parameter is present, the memory report is generated only for the

+ selected components.

+ 

+ Cache operations

+ ^^^^^^^^^^^^^^^^

+ 

+ ::

+ 

+     # sssctl client-data-backup [--dir=$outputdir] [--force]

+ 

+ This command will back up all local data that are not present on the

+ server, such as the local view and local users. If an *$outputdir* is

+ specified, the data will be stored there; otherwise */var/lib/sss/backup*

+ will be used. If the *--force* option is specified, the existing backup

+ is replaced with the new one.

+ 

+ ::

+ 

+     # sssctl client-data-restore [--dir=$outputdir]

+ 

+ Restores local data from the content stored in *$outputdir*.

+ 

+ ::

+ 

+     # sssctl cache-remove

+ 

+ Removes the SSSD cache database files, but in a manner that backs up

+ all local data so it can be restored later. The user is notified that

+ removing the cache will destroy all cached data and that it is therefore

+ not recommended to do it in offline mode.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ We will create a new administrator tool called **sssctl**. This tool

+ will communicate with **InfoPipe** responder through its **public D-Bus

+ interface**. The InfoPipe will then internally forward the messages to

+ other SSSD components as necessary.

+ 

+ In cases where **sssctl** can be replaced with a combination of existing

+ SSSD or system tools, we will just call those tools directly through a

+ system call, or through their API if one exists. For example, the remove

+ cache operation is the sequence *sss\_override user-export

+ $dir/view\_users.bak && sss\_override group-export $dir/view\_groups.bak

+ && rm -f /var/lib/sss/db/\** so we can just call those programs.

+ 
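+ The delegation described above can be sketched as building and then
+ running that command sequence. This is illustrative only; the real tool
+ would add error handling, and the paths follow the example in the text:

```python
def cache_remove_commands(backup_dir):
    """Return the command sequence sssctl would delegate for cache
    removal: export local overrides first, then remove the cache DBs."""
    return [
        ["sss_override", "user-export", "%s/view_users.bak" % backup_dir],
        ["sss_override", "group-export", "%s/view_groups.bak" % backup_dir],
        # In real code the glob would be expanded (e.g. via glob + os.unlink)
        # rather than passed literally to rm.
        ["rm", "-f", "/var/lib/sss/db/*"],
    ]

# Executing it would then be a loop of subprocess.check_call(cmd)
# over the returned list.
```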

+ Questions

+ ~~~~~~~~~

+ 

+ -  **Q1**: [Domain; Last request] What would be the preferred number of

+    requests to be printed? Do we want a parameter for this in sssd.conf,

+    or should we even make it possible to change this value dynamically?

+ 

+    -  Start with a fixed number of 10 requests and keep it unless a

+       bigger requirement comes up.

+ 

+ -  **Q2**: [Domain; Server list] Is it enough to print only the active

+    server, or do we want the full list of primary and backup servers as well?

+ 

+    -  Print the full list, also containing discovered servers.

+ 

+ -  **Q3**: [Domain; Server list] Do we want to also print IP addresses?

+ 

+    -  Not needed.

+ 

+ -  **Q4**: [Talloc report] Should we provide a $file parameter, or

+    should we hardcode the path to the SSSD logs directory, generating

+    the name from the component and time?

+ 

+    -  Provide a file parameter but default to log directory.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ No configuration changes.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ Not at the moment.

+ 

+ How To Debug

+ ~~~~~~~~~~~~

+ 

+ Not at the moment.

+ 

+ Authors

+ ~~~~~~~

+ 

+ Pavel Reichl <`preichl@redhat.com <mailto:preichl@redhat.com>`__>

+ Pavel Březina <`pbrezina@redhat.com <mailto:pbrezina@redhat.com>`__>

@@ -0,0 +1,101 @@ 

+ **Nothing on this page truly exists. It contains only ramblings on what

+ SSSD might become in the future.** These are mostly unordered notes.

+ 

+ SSSD 2.0

+ --------

+ 

+ Major themes

+ 

+ -  Powered by systemd wherever possible

+ -  Eliminate the monitor process

+ -  Support socket activation and idle termination

+ -  Simplify configuration

+ -  Fast support for local users

+ 

+ systemd

+ ~~~~~~~

+ 

+ This init system supports many powerful features that could make large

+ parts of SSSD *infrastructure* irrelevant.

+ 

+ -  Supports process monitoring and automatic restart

+ -  Can support chaining multiple child processes together

+ -  Manages socket-activation for registered processes

+ -  Supports kdbus for secure and fast D-BUS communication between

+    processes

+ 

+ Process start-up

+ ~~~~~~~~~~~~~~~~

+ 

+ -  Use the ``sssd`` process (formerly the monitor) solely to parse

+    sssd.conf and convert it to the config LDB, which the other processes

+    will be able to read at startup.

+ -  Eliminate the services= line. All supported responders should simply

+    be invoked (socket-activated) when their client asks for them and

+    then proceed according to the config LDB (which may just tell it to

+    terminate again with an appropriate error reply)

+ -  We may want to load different providers as separate processes to make

+    it easier to decide when to terminate them. It would be better to

+    rethink whether it makes more sense to have one process per domain as

+    opposed to one process per configured provider back-end.

+ -  systemd now has a D-BUS method for setting up persistent or

+    non-persistent units (such as service units), so we can take

+    advantage of this to start up only the processes we really need.

+ 

+ idle termination

+ ~~~~~~~~~~~~~~~~

+ 

+ Shutting down any SSSD process that is not actively doing work would be

+ advantageous for several reasons (most notably memory and CPU resource

+ reduction during idle periods). Designing for this would also force us

+ to optimize for making SSSD operations stateless (or at least tracking

+ in-progress operations more carefully).

+ 

+ Some random thoughts on provider implementations:

+ 

+ -  LDAP provider (and derivatives) should auto-terminate once the

+    ldap\_connection\_expire\_timeout is reached, since we can then

+    assume that nothing has been happening on the connection for quite

+    some time (15 minutes by default).

+ -  Other providers should terminate after a similar reasonable amount of

+    time.

+ -  Rarely-run periodic tasks (such as cache cleanup or Kerberos ticket

+    renewal) should have their state stored in a persistent manner

+    between process startups and should automatically be invoked if

+    their period has passed. We can take advantage of

+    systemd timer units to have the process periodically woken up to

+    process these events, rather than simply holding the process alive

+    and idle for long periods.

+ 
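+ The persisted periodic-task idea can be illustrated as follows: the
+ next-run time survives restarts, and on startup any overdue task runs
+ immediately. A sketch under assumed names (a plain dict stands in for
+ the persistent store), not real SSSD code:

```python
class PeriodicTask:
    """A task whose schedule survives process restarts: the next-run
    timestamp is kept in a persistent store."""

    def __init__(self, name, period, store, task_fn):
        self.name = name
        self.period = period
        self.store = store      # persistent mapping: task name -> next run time
        self.task_fn = task_fn

    def on_startup(self, now):
        # If the period elapsed while the process was down, run right away
        # and persist the new deadline; otherwise keep the stored one.
        next_run = self.store.get(self.name, 0)
        if now >= next_run:
            self.task_fn()
            self.store[self.name] = now + self.period
        return self.store[self.name]
```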

+ Local users

+ ~~~~~~~~~~~

+ 

+ -  We need to rework the local provider to behave more similarly to the

+    other providers (even if this just means a provider backend that

+    always returns "offline"). This way, the local id\_provider can be

+    configured with other providers like kerberos.

+ -  We need to optimize the local user behavior such that it is

+    acceptable to have ``sss`` first in the nsswitch.conf lines

+    everywhere. This will solve the performance problem caused by

+    disabling nscd for local users (which results in disk reads for all

+    lookups prior to trying sssd)

+ -  We need to work on

+    `​https://sourceware.org/glibc/wiki/Proposals/GroupMerging <https://sourceware.org/glibc/wiki/Proposals/GroupMerging>`__

+    and finish it.

+ 

+ Enumeration

+ ~~~~~~~~~~~

+ 

+ Enumeration mode should be deprecated in SSSD 2.0 with a strong

+ recommendation being placed on a D-BUS API for doing enumeration that

+ supports paging and filtering. Most uses of enumeration are for

+ presenting a user/group list in a UI, so this will be a better fit for

+ that anyway.

+ 

+ SBUS

+ ~~~~

+ 

+ The original point-to-point SBUS should be retired. Now that SSSD

+ supports running as non-root, we should instead set up a session bus and

+ use that (which will also be advantageous as we get into the

+ kdbus-enabled world).

@@ -0,0 +1,78 @@ 

+ Sub-Domains in SSSD

+ -------------------

+ 

+ Currently SSSD assumes that each domain configured in sssd.conf

+ represents exactly one domain in the backend. For example if only the

+ domains DOM1 and DOM2 are configured in sssd.conf a request for a user

+ for DOMX will return an error message and no backend is queried.

+ 

+ In an environment where different domains can trust each other and SSSD

+ shall handle users from those trusting domains, every single domain

+ must be configured in sssd.conf. Besides being cumbersome, there

+ is an additional issue with respect to group memberships. SSSD by design

+ does not support group memberships between different configured domains,

+ e.g. a user A from domain DOM1 cannot be a member of group G from domain

+ DOM2.

+ 

+ It would be nice if SSSD could support trusted domains in the sense that

+ 

+ -  only one domain has to be configured in sssd.conf and all trusted

+    domains are available through SSSD

+ -  a user can be a member of a group in a different trusted domain

+ 

+ To achieve this, SSSD must support the concept of domains inside of a

+ configured domain, which we will call sub-domains in the following.

+ Instead of creating a list of known domains from the data in sssd.conf,

+ the PAM and NSS responders must query each backend for the names of the

+ domains the backend can handle. If the backend does not support the new

+ request, the domain name from sssd.conf must be used as a fallback.

+ 

+ If a request for a simple user name (without @domain\_name, i.e. no

+ domain name is known) is received, the first configured domain in

+ sssd.conf and all its sub-domains are queried first before moving to the

+ next configured domain and its sub-domains.

+ 

+ If a request with a fully qualified user name is received, the backend

+ handling this (sub-)domain is queried directly. If the requested domain

+ is not known, the configured domains are asked again for a list of

+ supported domains with a

+ 

+ -  force flag to indicate that the backend should try to update the

+    list of trusted domains unconditionally

+ -  the name of the unknown domain which can be used as a hint in the

+    backend to find the specific domain and see if it is a trusted domain

+    (the backend may pass this hint on to a configured server and let the

+    server do the work)

+ 

+ This process might take some time, but since it will only happen once

+ for each unknown domain, and since there may be environments where it is

+ only possible to find a trusted domain with the help of the domain name,

+ this is acceptable. Nevertheless, since a search for an unknown domain

+ will lead to some amount of network activity and system load, there

+ should be some precaution implemented to avoid attacks based on random

+ domain names (maybe blacklists and timeouts).

+ 
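+ The suggested precaution (blacklists/timeouts) is essentially a
+ negative cache for failed domain lookups. A sketch with a hypothetical
+ timeout value:

```python
class UnknownDomainCache:
    """Remembers domain names that could not be resolved to a trusted
    domain, so repeated random names don't each trigger a network search."""

    def __init__(self, timeout=600):
        # Re-check a failed name only after 10 minutes (placeholder value).
        self.timeout = timeout
        self._failed = {}   # domain name -> time of the failed lookup

    def should_search(self, name, now):
        failed_at = self._failed.get(name)
        return failed_at is None or (now - failed_at) > self.timeout

    def record_failure(self, name, now):
        self._failed[name] = now
```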

+ With these considerations three development tasks can be identified to

+ add sub-domain support to SSSD

+ 

+ -  new get\_domains method: a new method to get the list of supported

+    domains from the backend must be defined so that the responders and

+    providers can use them

+ -  add get\_domains to providers: providers which can handle trusted

+    domains, currently IPA and winbind, must implement the new method

+ -  add get\_domains to the responders: the responders must call

+    get\_domains to get a list of supported domains and use the

+    configured domain name as a fallback (this might be split into two

+    tasks: first, call get\_domains once at startup without the force

+    flag and the name of the searched domain; second, call get\_domains

+    with the force flag and the name of the searched domain if the

+    domain cannot be found)

+ 

+ The first task must be solved first but is only a minor effort. The

+ other two must wait on the first and also require some more work.

+ 

+ For the first implementation it is sufficient that sub-domains work only

+ if the user name is fully qualified, that the domain name has to be

+ given in full, and that short domain names are not supported. But it

+ should be kept in mind that user names in general are not fully

+ qualified and that there are trust environments where short names are

+ available to save some typing for the users.

@@ -0,0 +1,163 @@ 

+ Important sudo attributes

+ =========================

+ 

+ -  **sudoHost** - to what host does the rule apply

+ 

+    -  *ALL* - all hostnames

+    -  *hostname*

+    -  *IP address*

+    -  *+netgroup*

+    -  *regular expression* - contains one of "\\?\*[]"

+ 

+ -  **sudoUser** - to what user does the rule apply

+ 

+    -  *username*

+    -  *#uid*

+    -  *%group*

+    -  *+netgroup*

+ 

+ -  **sudoOrder** - rules ordering

+ -  **sudoNotBefore** and **sudoNotAfter** - time constraints

+ 

+ The complete LDAP schema can be found

+ `here <http://www.gratisoft.us/sudo/man/1.8.4/sudoers.ldap.man.html>`__.

+ 

+ Common

+ ======

+ 

+ Per host update

+ ---------------

+ 

+ Per host update returns all rules where:

+ 

+ -  sudoHost equals ALL

+ -  sudoHost directly matches the host (by hostname or address)

+ -  the rule contains a regular expression (will be filtered by sudo)

+ -  the rule contains a netgroup (will be filtered by sudo)

+ 

+ Hostname match is performed in the sudo source in

+ *plugin/sudoers/ldap.c/sudo\_ldap\_check\_host()*.

+ 

+ Per user update

+ ---------------

+ 

+ Per user update returns all rules where:

+ 

+ -  sudoUser equals ALL

+ -  sudoUser directly matches the username, #uid or %group names

+ -  the rule contains a +netgroup (will be filtered by sudo)

+ 

+ Username match is performed via an LDAP filter in the sudo source in

+ *plugin/sudoers/ldap.c/sudo\_ldap\_result\_get()*.

+ 

+ Smart refresh

+ -------------

+ 

+ Download only rules that were modified or newly created since the last

+ refresh.

+ 

+ Implementation

+ ==============

+ 

+ We will be looking for modified and newly created rules in short

+ intervals. Expiration of the rules is handled per user during the

+ execution time of *sudo*. We will also do a periodic full refresh to

+ ensure consistency even if the *sudo* command is not used.

+ 

+ SysDB attributes

+ ----------------

+ 

+ | **sudoLastSmartRefreshTime** on *ou=SUDOers* - when the last smart

+   refresh was performed

+ | **sudoLastFullRefreshTime** on *ou=SUDOers* - when the last full

+   refresh was performed

+ | **sudoNextFullRefreshTime** on *ou=SUDOers* - when the next full is

+   scheduled

+ | **dataExpireTimestamp** on each rule - when the rule will be

+   considered as expired

+ 

+ Data provider

+ -------------

+ 

+ The data provider will perform the following actions:

+ 

+ A. Periodical download of changed or newly created rules (per host smart refresh)

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ | Interval is configurable via

+   **ldap\_sudo\_changed\_refresh\_interval** (default: 15 minutes)

+ | Enable *modifyTimestamp* with

+   **ldap\_sudo\_modify\_timestamp\_enabled** (default: false)

+ 

+ #. **if** server has changed **then** do **C**

+ #. **else if** *entryUSN* is available **then**

+ 

+    #. refresh rules per host, where entryUSN > currentHighestUSN

+    #. **goto** 3.2.

+ 

+ #. **else if** *modifyTimestamp* is enabled **then**

+ 

+    #. refresh rules per host, where *modifyTimestamp* >= *sudoLastSmartRefreshTime*

+    #. *sudoLastSmartRefreshTime* := current time

+    #. nextrefresh := (current time +

+       *ldap\_sudo\_changed\_refresh\_interval*)

+    #. **if** nextrefresh >= *sudoNextFullRefreshTime* AND nextrefresh <

+       (*sudoNextFullRefreshTime* +

+       *ldap\_sudo\_changed\_refresh\_interval*) **then**

+ 

+       #. nextrefresh := (*sudoNextFullRefreshTime* +

+          *ldap\_sudo\_changed\_refresh\_interval*)

+ 

+    #. schedule next smart refresh

+ 

+ #. **else** do nothing

+ 
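+ The collision avoidance in steps 3.3-3.4 above (don't schedule a smart
+ refresh in the shadow of the upcoming full refresh) can be expressed
+ compactly. An illustrative sketch of the rule using abstract
+ timestamps:

```python
def next_smart_refresh(now, smart_interval, next_full_refresh):
    """Compute when the next smart refresh should run.

    If the candidate time falls within one smart interval after the
    scheduled full refresh, push it past the full refresh so the two
    don't fire back to back.
    """
    nextrefresh = now + smart_interval
    if next_full_refresh <= nextrefresh < next_full_refresh + smart_interval:
        nextrefresh = next_full_refresh + smart_interval
    return nextrefresh
```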

+ B. Periodical full refresh of all rules

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Configurable via **ldap\_sudo\_full\_refresh\_interval** (default: 360

+ minutes)

+ 

+ #. do **C**

+ #. *sudoLastFullRefreshTime* := current time

+ #. *sudoNextFullRefreshTime* := (current time +

+    *ldap\_sudo\_full\_refresh\_interval*)

+ #. schedule next full refresh

+ 

+ C. On demand full refresh of all rules

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ #. Download all rules per host

+ #. Delete all rules from the sysdb

+ #. Store the downloaded rules in the sysdb

+ 

+ D. On demand refresh of specific rules

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ #. Download the rules

+ #. Delete them from the sysdb

+ #. Store the downloaded rules in the sysdb

+ 

+ Responder

+ ---------

+ 

+ **sudo\_timed** (default: false) - filter rules by time constraints?

+ 

+ #. search sysdb per user

+ #. refresh all expired rules

+ #. **if** any rule was deleted **then**

+ 

+    #. schedule **C** (out of band)

+    #. search sysdb per user

+ 

+ #. **if** *sudo\_timed* = false **then** filter rules by time

+    constraints

+ #. sort rules

+ #. return rules to sudo

+ 
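+ The time-constraint filtering step can be sketched like this. This is
+ illustrative only; real rules carry sudoNotBefore/sudoNotAfter as LDAP
+ generalized-time strings, simplified here to plain numbers:

```python
def filter_rules_by_time(rules, now):
    """Keep only the rules valid at `now`, honoring the optional
    sudoNotBefore/sudoNotAfter constraints on each rule."""
    valid = []
    for rule in rules:
        not_before = rule.get("sudoNotBefore")
        not_after = rule.get("sudoNotAfter")
        if not_before is not None and now < not_before:
            continue  # rule is not valid yet
        if not_after is not None and now > not_after:
            continue  # rule has expired
        valid.append(rule)
    return valid
```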

+ Questions

+ =========

+ 

+ #. Should we also do per user smart updates when the user runs *sudo*?

+ #. Should we create a tool to force full refresh of the rules

+    immediately?

@@ -0,0 +1,122 @@ 

+ Invalidate Cached SUDO Rules

+ ============================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2081 <https://pagure.io/SSSD/sssd/issue/2081>`__

+ -  `https://pagure.io/SSSD/sssd/issue/2884 <https://pagure.io/SSSD/sssd/issue/2884>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ Currently sss\_cache can't be used to reliably invalidate sudo rules.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ Usually, if an admin changes sudo rules, they would like to see the

+ effect immediately.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Sudo rules are stored in the SSSD cache. From time to time a *smart* or

+ *full* refresh of sudo rules is done, but there is no effective way to

+ invalidate them

+ (see

+ `https://docs.pagure.org/SSSD.sssd/design_pages/sudo_caching_rules <https://docs.pagure.org/SSSD.sssd/design_pages/sudo_caching_rules.html>`__).

+ 

+ The solution consists of two steps:

+ 

+ #. Invalidate sudo rules by setting their expiration time to 0, which

+    prevents stale rules from being used.

+ #. Trigger full refresh (and maybe even smart refresh) on demand.
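+ Step 1 can be illustrated with a minimal sketch. The struct is an

+ illustrative stand-in for the real sysdb entry, not its actual

+ representation:

```c
#include <stdbool.h>
#include <time.h>

/* Illustrative cache entry -- not the actual sysdb layout. */
struct cached_sudo_rule {
    const char *name;
    time_t expire_time; /* 0 == invalidated */
};

/* Step 1: force the rule to look expired so the next lookup refreshes it. */
void invalidate_rule(struct cached_sudo_rule *rule)
{
    rule->expire_time = 0;
}

bool rule_is_expired(const struct cached_sudo_rule *rule, time_t now)
{
    return rule->expire_time == 0 || rule->expire_time < now;
}
```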

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Invalidating sudo rules

+ ^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ SSSD provides the sss\_cache tool for invalidating cached items. ::

+ 

+     $ sss_cache --help

+     Usage: sss_cache [OPTION...]

+       -E, --everything            Invalidate all cached entries except for sudo rules

+       -u, --user=STRING           Invalidate particular user

+       -U, --users                 Invalidate all users

+       -g, --group=STRING          Invalidate particular group

+       -G, --groups                Invalidate all groups

+       -n, --netgroup=STRING       Invalidate particular netgroup

+       -N, --netgroups             Invalidate all netgroups

+       -s, --service=STRING        Invalidate particular service

+       -S, --services              Invalidate all services

+       -a, --autofs-map=STRING     Invalidate particular autofs map

+       -A, --autofs-maps           Invalidate all autofs maps

+       -h, --ssh-host=STRING       Invalidate particular SSH host

+       -H, --ssh-hosts             Invalidate all SSH hosts

+       -d, --domain=STRING         Only invalidate entries from a particular domain

+ 

+     Help options:

+       -?, --help                  Show this help message

+           --usage                 Display brief usage message

+ 

+ We need to:

+ -  add an option ``--sudo-rule=STRING`` to invalidate only the sudo

+    rule named STRING,

+ -  add an option ``--sudo-rules`` to invalidate all sudo rules,

+ -  change the option ``--everything`` to invalidate sudo rules too.

+ 

+ For these changes we will provide a new function

+ ``sysdb_search_sudo_rules()`` in ``db/sysdb_sudo.{hc}``. ::

+ 

+     errno_t

+     sysdb_search_sudo_rules(TALLOC_CTX *mem_ctx,

+                             struct sss_domain_info *domain,

+                             const char *filter,

+                             const char **attrs,

+                             size_t *num_rules,

+                             struct ldb_message ***rules)

+     /* Synopsis is inspired by other `sysdb_search_*()` functions. */

+ 

+ This new function will be able to find a sudo rule by a given name

+ (via the filter).

+ 

+ On the other hand, there is the function

+ ``sudosrv_get_sudorules_query_cache()`` in

+ ``responder/sudo/sudosrv_get_sudorules.c``, which has very similar

+ behavior. It may be a candidate for refactoring and moving to

+ ``db/sysdb_sudo.{hc}``.
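+ As an illustration of how a ``--sudo-rule=STRING`` option could

+ translate into the filter passed to ``sysdb_search_sudo_rules()``, here

+ is a hedged sketch; the ``sudoRule`` object class and ``cn`` attribute

+ are assumptions based on the standard sudo schema, not confirmed sysdb

+ internals:

```c
#include <stdio.h>
#include <string.h>

/* Build an LDAP-style filter matching one sudo rule by name.
 * Returns 0 on success, -1 if the buffer is too small. */
int build_sudo_rule_filter(char *buf, size_t buflen, const char *rule_name)
{
    int n = snprintf(buf, buflen, "(&(objectClass=sudoRule)(cn=%s))",
                     rule_name);
    return (n < 0 || (size_t)n >= buflen) ? -1 : 0;
}
```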

+ 

+ --------------

+ 

+ To Be Done

+ ----------

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ A more technical extension of the previous section. It might include

+ low-level details, such as C structures, function synopses, etc. In case

+ of very trivial features (e.g. a new option), this section can be merged

+ with the previous one.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ Does your feature involve changes to configuration, like new options or

+ options changing values? Summarize them here. There's no need to go into

+ too many details, that's what man pages are for.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ This section should explain to a person with an admin-level

+ understanding of SSSD how this change affects the runtime behaviour of

+ SSSD and how an SSSD user can test this change. If the feature is internal-only,

+ please list what areas of SSSD are affected so that testers know where

+ to focus.

+ 

+ Authors

+ ~~~~~~~

+ 

+ Give credit to authors of the design in this section.

@@ -0,0 +1,122 @@ 

+ SUDO plugin API

+ ---------------

+ 

+ Since version 1.8, SUDO supports replacing the standard policy behaviour

+ using plugins.

+ 

+ The plugin API documentation can be found here:

+ `http://www.gratisoft.us/sudo/man/1.8.2/sudo\_plugin.man.html <http://www.gratisoft.us/sudo/man/1.8.2/sudo_plugin.man.html>`__

+ 

+ Basically to create a policy plugin, one must define a policy\_plugin

+ structure: ::

+ 

+     struct policy_plugin {

+      #define SUDO_POLICY_PLUGIN    1

+          unsigned int type; /* always SUDO_POLICY_PLUGIN */

+          unsigned int version; /* always SUDO_API_VERSION */

+          int (*open)(unsigned int version, sudo_conv_t conversation,

+                      sudo_printf_t plugin_printf, char * const settings[],

+                      char * const user_info[], char * const user_env[]);

+          void (*close)(int exit_status, int error);

+          int (*show_version)(int verbose);

+          int (*check_policy)(int argc, char * const argv[],

+                              char *env_add[], char **command_info[],

+                              char **argv_out[], char **user_env_out[]);

+          int (*list)(int argc, char * const argv[], int verbose,

+                      const char *list_user);

+          int (*validate)(void);

+          void (*invalidate)(int remove);

+          int (*init_session)(struct passwd *pwd);

+      };

+ 

+ To use the plugin, just edit /etc/sudo.conf: ::

+ 

+     Plugin policy_struct_name plugin.so

+ 

+ Only one policy plugin may be configured.
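+ A minimal, self-contained sketch of such a plugin definition follows.

+ Real plugins include ``sudo_plugin.h``; the typedefs, the version

+ macro, and the reduced struct here are simplified stand-ins so the

+ sketch compiles on its own:

```c
#include <stddef.h>

/* Simplified stand-ins for the types normally found in sudo_plugin.h. */
typedef int (*sudo_conv_t)(int, const void *, void *);
typedef int (*sudo_printf_t)(int, const char *, ...);
#define SUDO_POLICY_PLUGIN 1
#define SUDO_API_VERSION   0x0102 /* placeholder value */

static int demo_open(unsigned int version, sudo_conv_t conversation,
                     sudo_printf_t plugin_printf, char * const settings[],
                     char * const user_info[], char * const user_env[])
{
    (void)version; (void)conversation; (void)plugin_printf;
    (void)settings; (void)user_info; (void)user_env;
    return 1; /* success */
}

static int demo_check_policy(int argc, char * const argv[], char *env_add[],
                             char **command_info[], char **argv_out[],
                             char **user_env_out[])
{
    (void)env_add; (void)command_info; (void)argv_out; (void)user_env_out;
    /* Demo only: deny empty command lines, allow everything else. */
    return (argc > 0 && argv != NULL) ? 1 : 0;
}

/* Reduced version of struct policy_plugin, for illustration. */
struct policy_plugin_demo {
    unsigned int type;
    unsigned int version;
    int (*open)(unsigned int, sudo_conv_t, sudo_printf_t,
                char * const [], char * const [], char * const []);
    int (*check_policy)(int, char * const [], char *[], char **[],
                        char **[], char **[]);
};

struct policy_plugin_demo demo_policy = {
    .type = SUDO_POLICY_PLUGIN,
    .version = SUDO_API_VERSION,
    .open = demo_open,
    .check_policy = demo_check_policy,
};
```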

+ 

+ The most important functions are open(), close() and check\_policy().

+ 

+ open()

+ ~~~~~~

+ 

+ Initializes the plugin with data passed by SUDO as arguments of this

+ function.

+ 

+ close()

+ ~~~~~~~

+ 

+ Cleans up data and checks the return code of the command.

+ 

+ check\_policy()

+ ~~~~~~~~~~~~~~~

+ 

+ Determines whether the user can run the command or not.

+ 

+ Integration in SSSD

+ -------------------

+ 

+ .. FIXME:  Missing "high level view of integration" image

+ 

+ SSSD SUDO plugin

+ ----------------

+ 

+ All decision logic is done by the responder and therefore this plugin

+ should be as lightweight as possible.

+ 

+ Communication with the responder is done via the SSS CLI socket interface.

+ 

+ .. FIXME: Missing "SSSD Sudo plugin" image

+ 

+ SSSD SUDO responder

+ -------------------

+ 

+ Plugin <=> responder protocol

+ -----------------------------

+ 

+ Query

+ ~~~~~

+ 

+ Byte array with format: ::

+ 

+     qualified_command_path\0argv[0]\0argv[i]\0\0env_add\0\0user_env\0\0settings\0\0user_info\0\0

+ 

+ where env\_add, user\_env, settings and user\_info are in the form of

+ NAME=VALUE pairs.

+ 

+ All fields are interpreted as char\*.

+ 

+ **qualified\_command\_path** is the full name of the executed command

+ (/bin/ls, ./my-program)

+ 

+ **argv[]** arguments passed to the executed program

+ 

+ **env\_add** environment variables that user wants to add

+ 

+ **user\_env** current environment variables (provided in open() function

+ by SUDO)

+ 

+ **settings** provided in open() function by SUDO (see plugin API open())

+ 

+ **user\_info** provided in open() function by SUDO (see plugin API

+ open())
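+ A hedged sketch of producing this byte array: every string is written

+ with its terminating ``\0``, and every vector is closed by one extra

+ ``\0``. The helper names are illustrative, not the plugin's actual code:

```c
#include <stddef.h>
#include <string.h>

/* Append one NUL-terminated string; returns the new offset, 0 on overflow. */
static size_t put_str(char *buf, size_t len, size_t off, const char *s)
{
    size_t n = strlen(s) + 1; /* include the '\0' */
    if (off + n > len) return 0;
    memcpy(buf + off, s, n);
    return off + n;
}

/* Append a NULL-terminated vector and close it with an extra '\0'. */
static size_t put_vector(char *buf, size_t len, size_t off, char * const v[])
{
    for (size_t i = 0; v != NULL && v[i] != NULL; i++) {
        off = put_str(buf, len, off, v[i]);
        if (off == 0) return 0;
    }
    if (off + 1 > len) return 0;
    buf[off++] = '\0'; /* the extra '\0' closing this vector */
    return off;
}

/* Serialize the query in the field order given in the text.
 * Returns the total number of bytes, or 0 on overflow. */
size_t serialize_query(char *buf, size_t len, const char *command,
                       char * const argv[], char * const env_add[],
                       char * const user_env[], char * const settings[],
                       char * const user_info[])
{
    size_t off = put_str(buf, len, 0, command);
    if (off == 0) return 0;
    off = put_vector(buf, len, off, argv);
    if (off == 0) return 0;
    off = put_vector(buf, len, off, env_add);
    if (off == 0) return 0;
    off = put_vector(buf, len, off, user_env);
    if (off == 0) return 0;
    off = put_vector(buf, len, off, settings);
    if (off == 0) return 0;
    return put_vector(buf, len, off, user_info);
}
```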

+ 

+ Response

+ ~~~~~~~

+ 

+ Byte array with format: ::

+ 

+     (result)argv\0\0command_info\0\0user_env\0\0

+ 

+ where command\_info and user\_env are in the form of NAME=VALUE pairs.

+ 

+ All fields except result are interpreted as char\*.

+ 

+ **result** interpreted as an integer value

+ 

+ **argv[]** arguments passed to the executed program

+ 

+ **command\_info** information about the command (see plugin API

+ check\_policy())

+ 

+ **user\_env** environment variables that should be kept / added.

@@ -0,0 +1,42 @@ 

+ .. FIXME: Missing "sudo highlevel v2" image.

+ 

+ Cache format of SUDO rules

+ ==========================

+ 

+ We have decided to use the current schema used by SUDO. The schema is

+ described

+ `here <http://www.gratisoft.us/sudo/man/1.8.2/sudoers.ldap.man.html>`__.

+ 

+ The reason is that Sudo can only understand the native schema anyway. We

+ will have to do a conversion when we implement support for the IPA sudo

+ schema down the road, but it's simply not needed now.

+ 

+ All rules are stored under **cn=sudorules,cn=custom,cn=$domain,cn=sysdb**

+ subtree.

+ 

+ Communication protocols

+ =======================

+ 

+ SUDO -> Responder

+ -----------------

+ 

+ SUDO calls **SSS\_SUDO\_GET\_SUDORULES** command, providing a user name

+ of the requesting user. ::

+ 

+     <username(char*)>

+ 

+ Responder -> SUDO

+ -----------------

+ 

+ Sends all sudo rule entries that contain the keyword ALL or match the

+ requested user name, their groups, or netgroups. ::

+ 

+     <error_code(uint32_t)><num_rules(uint32_t)><rule1><rule2>...

+     <ruleN> = <num_attrs(uint32_t)><attr1><attr2>...

+     <attrN> = <name(char*)><num_values(uint32_t)><value1(char*)><value2(char*)>...

+ 

+ All strings are terminated with zero character.

+ 

+ If <error\_code> signals an error (i.e. it does not equal

+ *SSS\_SUDO\_ERROR\_OK*), the remaining fields are omitted.
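+ A hedged sketch of encoding one ``<attrN>`` element of this reply

+ follows. Strings carry their terminating ``\0``; the uint32 is copied

+ in host byte order, which is an assumption -- the real protocol may fix

+ the endianness:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Encode <name(char*)><num_values(uint32_t)><value1(char*)>... into buf.
 * Returns the number of bytes written, or 0 on overflow. */
size_t encode_attr(uint8_t *buf, size_t len, const char *name,
                   uint32_t num_values, const char * const values[])
{
    size_t off = 0, n = strlen(name) + 1; /* include the '\0' */
    if (off + n > len) return 0;
    memcpy(buf + off, name, n); off += n;
    if (off + sizeof(uint32_t) > len) return 0;
    memcpy(buf + off, &num_values, sizeof(uint32_t)); off += sizeof(uint32_t);
    for (uint32_t i = 0; i < num_values; i++) {
        n = strlen(values[i]) + 1;
        if (off + n > len) return 0;
        memcpy(buf + off, values[i], n); off += n;
    }
    return off;
}
```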

+ 

@@ -0,0 +1,156 @@ 

+ IPA sudo schema support

+ =======================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/1108 <https://pagure.io/SSSD/sssd/issue/1108>`__

+ 

+ Related design document(s)

+ 

+ -  `https://docs.pagure.org/SSSD.sssd/design_pages/sudo_caching_rules <https://docs.pagure.org/SSSD.sssd/design_pages/sudo_caching_rules.html>`__

+ 

+ Problem statement

+ -----------------

+ 

+ SSSD supports only the standard sudo LDAP schema at the moment. This has

+ the drawback of needing to run the compat plugin that converts the IPA

+ sudo schema into the standard one. Once SSSD has support for the IPA

+ schema, administrators can disable the sudo compat tree, which will

+ result in a performance improvement on the server side.

+ 

+ Use cases

+ ---------

+ 

+ -  compat plugin may be disabled when using IPA sudo provider

+ 

+ IPA sudo schema

+ ---------------

+ 

+ The IPA sudo schema is rather different from the standard one. This

+ section contains a description of the schema together with the LDAP

+ containers where sudo rules are stored. The relevant standard attribute

+ is noted when possible. **The RDN is marked in bold**. Attributes that

+ hold a dn are marked in italics.

+ 

+ cn=sudocmds,cn=sudo,$dc

+ ^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ This container contains the definitions of single commands that may be

+ present in sudo rules.

+ 

+ -  objectClass = ipasudocmd

+ -  **ipaUniqueID**

+ -  sudoCmd ~ sudoCommand

+ -  *memberOf* (dn of sudo command group)

+ 

+ cn=sudocmdgroups,cn=sudo,$dc

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ This container contains the definitions of command groups that may be

+ present in sudo rules.

+ 

+ -  objectClass = ipasudocmdgroup

+ -  ipaUniqueID

+ -  **cn**

+ -  *member* (dn of sudo command)

+ 

+ cn=sudorules,cn=sudo,$dc

+ ^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ This container contains the definitions of sudo rules.

+ 

+ -  objectClass = ipasudorule

+ -  **ipaUniqueID**

+ -  cn

+ -  ipaEnabledFlag

+ 

+ -  ipaSudoOpt ~ sudoOption

+ -  *ipaSudoRunAs* ~ sudoRunAsUser (dn of user or group of users)

+ -  *ipaSudoRunAsGroup* ~ sudoRunAsGroup (dn of group)

+ -  *memberAllowCmd* (dn of sudo command or command group)

+ -  *memberDenyCmd* (dn of sudo command or command group)

+ -  *memberHost* ~ sudoHost (dn of ipa enrolled machine or hostgroup)

+ -  *memberUser* ~ sudoUser (dn of user or group of users)

+ -  hostMask (ip/mask)

+ -  *sudoNotAfter* ~ sudoNotAfter

+ -  *sudoNotBefore* ~ sudoNotBefore

+ -  *sudoOption* ~ sudoOption

+ 

+ The following attributes have a special meaning and can only contain the

+ value "all". For example, if cmdCategory is present, it is equivalent to

+ sudoCommand=ALL.

+ 

+ -  cmdCategory ~ sudoCommand

+ -  hostCategory ~ sudoHost

+ -  ipaSudoRunAsGroupCategory ~ sudoRunAsGroup

+ -  ipaSudoRunAsUserCategory ~ sudoRunAsUser

+ -  userCategory ~ sudoUser
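+ The conversion rule above can be sketched as a small lookup table; the

+ table simply mirrors the list in the text and is not SSSD's actual

+ implementation:

```c
#include <stddef.h>
#include <string.h>

/* Map an IPA *Category attribute (value "all") to the standard sudo
 * attribute that should receive the value ALL; NULL if unknown. */
const char *ipa_category_to_sudo_attr(const char *ipa_attr)
{
    static const struct { const char *ipa; const char *sudo; } map[] = {
        { "cmdCategory",               "sudoCommand"    },
        { "hostCategory",              "sudoHost"       },
        { "ipaSudoRunAsGroupCategory", "sudoRunAsGroup" },
        { "ipaSudoRunAsUserCategory",  "sudoRunAsUser"  },
        { "userCategory",              "sudoUser"       },
    };
    for (size_t i = 0; i < sizeof(map) / sizeof(map[0]); i++) {
        if (strcmp(map[i].ipa, ipa_attr) == 0) {
            return map[i].sudo;
        }
    }
    return NULL;
}
```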

+ 

+ The following attributes are used to contain external objects known

+ neither to IPA nor to SSSD. Since SSSD by design provides rules only to

+ users and groups known to it, we can safely ignore those attributes.

+ 

+ -  externalHost

+ -  externalUser

+ -  ipaSudoRunAsExtGroup

+ -  ipaSudoRunAsExtUser

+ 

+ Overview of the solution

+ ------------------------

+ 

+ We will again use rules, smart, and full refreshes similar to what we do

+ in the LDAP provider. Since we are working with three containers, it is

+ not very simple to translate everything at once into the current

+ standard sudo schema that we use inside SSSD, because it would make

+ changes in commands and command groups hard to propagate. Instead, we

+ will keep commands and command groups stored separately and translate

+ them into sudoCommand in the responder on the fly.

+ 

+ We will take advantage of using an IPA server and translate DNs into

+ names by parsing them when possible.

+ 

+ Implementation details

+ ----------------------

+ 

+ Full refresh

+ ^^^^^^^^^^^^

+ 

+ -  download everything under cn=sudo,$dc that applies to this host

+ -  store only commands and command groups that are present in at least

+    one rule

+ -  convert what possible to sudo schema but leave references to commands

+    and command groups for further processing in responder

+ 

+ Smart refresh

+ ^^^^^^^^^^^^^

+ 

+ -  download everything under cn=sudo,$dc that applies to this host newer

+    than last usn value

+ -  if new command or command group is downloaded store it only if it is

+    present in changed rule

+ -  if a rule contains command or command group that is not yet present

+    in sysdb, fetch it with dereference or single lookup

+ 

+ Rules refresh

+ ^^^^^^^^^^^^^

+ 

+ -  refresh expired rules and commands and command groups that are

+    present in those rules

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ No new options. But we have to provide a way to distinguish between

+ usage of the IPA and LDAP schemas. By default we will use the IPA

+ schema, and if ldap\_sudo\_search\_base is set to anything other than

+ cn=sudo,$dc we will use the standard sudo LDAP schema.
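+ The selection rule above can be sketched as follows. This is a hedged

+ illustration: the function name is hypothetical, and the

+ case-insensitive string comparison is a simplification of real DN

+ matching:

```c
#include <stdbool.h>
#include <stdio.h>
#include <strings.h>

/* Use the IPA schema when ldap_sudo_search_base is unset or equals
 * "cn=sudo,$dc"; otherwise fall back to the standard sudo LDAP schema. */
bool use_ipa_sudo_schema(const char *search_base, const char *base_dn)
{
    char def[256];

    if (search_base == NULL) {
        return true; /* default: IPA schema */
    }
    snprintf(def, sizeof(def), "cn=sudo,%s", base_dn);
    return strcasecmp(search_base, def) == 0;
}
```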

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ -  existing tests can be used, only switching ldap server for IPA

+ 

+ Authors

+ ~~~~~~~

+ 

+ -  Pavel Březina <`pbrezina@redhat.com <mailto:pbrezina@redhat.com>`__>

@@ -0,0 +1,160 @@ 

+ SUDO Responder Cache Behaviour

+ ==============================

+ 

+ Before we go into the caching, it's better to know some useful

+ information about the sudo schema.

+ 

+ 1. As the document at

+    `http://www.gratisoft.us/sudo/man/1.8.1/sudoers.ldap.man.html <http://www.gratisoft.us/sudo/man/1.8.1/sudoers.ldap.man.html>`__

+    indicates, the sudo rules are contained in an LDAP server inside

+    the SUDOers container.

+ 

+ 2. The SUDOers container contains the sudoRole objects,

+    where each object indicates a sudo rule.

+ 

+ 3. Each sudoRole object supports the following attributes:

+ 

+     | **sudoUser** - A user name, uid (prefixed with '#'), Unix group

+       (prefixed with a '%') or user netgroup (prefixed with a '+').

+ 

+     | **sudoHost** - A host name, IP address, IP network, or host

+       netgroup (prefixed with a '+'). The special value ALL will match

+       any host.

+ 

+     | **sudoCommand** - A Unix command with optional command line

+       arguments and wild chars.

+ 

+     | **sudoOption** – Specifies options to be enabled or disabled as in

+       the sudoers file.

+ 

+     | **sudoRunAsUser** - A user name or uid (prefixed with '#') that

+       commands may be run as or a Unix group (prefixed with a '%') or

+       user netgroup (prefixed with a '+') that contains a list of users

+       that commands may be run as. The special value ALL will match any

+       user.

+ 

+     | **sudoRunAsGroup** - A Unix group or gid (prefixed with '#') that

+       commands may be run as. The special value ALL will match any

+       group.

+ 

+     | **sudoNotBefore** - A time-stamp in the form yyyymmddHHMMZ that

+       can be used to provide a start date/time for when the sudoRole

+       will be valid. If multiple sudoNotBefore entries are present, the

+       earliest is used. Note that timestamps must be in Coordinated

+       Universal Time (UTC), not the local timezone.

+ 

+     | **sudoNotAfter** - A time stamp in the form yyyymmddHHMMZ that

+       indicates an expiration date/time, after which the sudoRole will no

+       longer be valid. If multiple sudoNotAfter entries are present, the

+       last one is used. Note that time-stamps must be in Coordinated

+       Universal Time (UTC), not the local timezone.

+ 

+     **To use sudoNotBefore and sudoNotAfter, the user should enable the

+     SUDOERS\_TIMED option in the config file.**

+ 

+     |   **sudoOrder** - The sudoRole entries retrieved from the LDAP

+         directory have no inherent order. The sudoOrder attribute is an

+         integer (or floating point value for LDAP servers that support

+         it) that is used to sort the matching entries. This allows

+         LDAP-based sudoers entries to more closely mimic the behaviour

+         of the sudoers file, where the order of the entries influences the

+         result. If multiple entries match, the entry with the highest

+         sudoOrder attribute is chosen.

+ 

+     **A sudoRole must contain at least one sudoUser, sudoHost, and

+     sudoCommand.**

+ 

+ 4. The sudoRole objects that have ``cn=Defaults`` will be applied

+    (if specified) over all the rules before applying any other

+    rules. This mimics the Defaults statements in the sudoers file.

+ 

+ Now we have the necessary information to move on.

+ 

+ Q1) **What should we cache for offline sudo authentication?**

+ 

+     The anatomy of sudo is as follows:

+ 

+     When you type 'sudo cmd', sudo is going to look up in LDAP to try

+     and find the user's POSIX groups and the user/host NIS netgroups

+     where the user is a member. Then it is going to do an LDAP search in

+     the ou=SUDOers container looking for any rule that matches that user

+     or his user groups. When it matches some rules, it goes down that

+     list to see if the hostname OR a netgroup that the host is a member

+     of is in that same rule, and then finally it determines if the

+     command 'cmd' is allowed.

+ 

+     This is how it works with a plain LDAP client. But in order to

+     incorporate the netgroups we have to adjust the order of

+     these steps.

+ 

+     For a successful validation we need to know the NIS netgroups and

+     POSIX groups that a user/host is a member of. So we need to

+     cache the host/user -> NIS netgroup and user -> POSIX group mappings

+     along with all sudoRole objects inside the SUDOers container that

+     reference the specified command.

+ 

+     The simple solution for this problem is to enumerate all group and

+     netgroup information and check it against the sudo rules. But this

+     approach is less efficient and costly. Instead of enumerating all

+     the rules, the procedure goes like this:

+ 

+     - First we need to find the groups and the user/host netgroups of

+       which the user is a member. That is the first step. We can do the

+       search in DN: cn=accounts,dc=example,dc=com. Here we can use the

+       memberof plugin to resolve the user groups and the host groups. This

+       step is already implemented inside SSSD. In order to include

+       support for NIS netgroups we can add one more filter to the query

+       that searches for the user groups. The query is ::

+ 

+         (|

+            (nisNetgroupTriple=\28*,username,*\29)

+            (nisNetgroupTriple=\28hostname,*,*\29)

+         )

+ 

+       This will give you the netgroup that the user/host is a member of.
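+     Building that filter can be sketched as follows; the literal

+     parentheses inside a nisNetgroupTriple value must be escaped as

+     ``\28`` and ``\29`` per LDAP filter rules. The helper name is

+     hypothetical:

```c
#include <stdio.h>
#include <string.h>

/* Build the netgroup membership filter shown above for one user and
 * one host. Returns 0 on success, -1 if the buffer is too small. */
int build_netgroup_filter(char *buf, size_t len,
                          const char *username, const char *hostname)
{
    int n = snprintf(buf, len,
                     "(|(nisNetgroupTriple=\\28*,%s,*\\29)"
                     "(nisNetgroupTriple=\\28%s,*,*\\29))",
                     username, hostname);
    return (n < 0 || (size_t)n >= len) ? -1 : 0;
}
```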

+ 

+     - In the second phase we apply the search to filter the rules

+       that apply to the user/host POSIX groups and netgroups found

+       in step 1. The search returns ::

+ 

+         (\|(sudoBaseCommand=cmd)(sudoCommand=ALL)) where the

+         sudoBaseCommand is JUST the command (not including args).

+ 

+       The skeleton of the filter will be: ::

+ 

+         (&

+             (objectClass=sudoRole)

+             (|

+                 (sudoUser=username)

+                 (sudoUser=#uid)

+                 (sudoUser=%usergroup1)

+                 (sudoUser=%usergroupN)

+                 (sudoUser=+userNetgroup1)

+                 (sudoUser=+userNetgroupN)

+                 (sudoUser=ALL)

+             )

+             (|

+                 (sudoHost=ipa.example.com)

+                 (sudoHost=+sample_host_group)

+                 (sudoHost=ALL)

+             )

+             (|

+                 (sudoBaseCommand=!cmd*)

+                 (sudoCommand=ALL)

+             )

+          )

+ 

+     - From these rules the evaluation is done.

+ 

+     **Performance Considerations**

+ 

+     1. Within a sudoRole, the sudoCommand attribute with a command

+        negation is evaluated first, then the sudoCommand with the exact

+        command, and last the sudoCommands with ALL are evaluated.

+     2. To incorporate the sudoOrder attribute we can do the sorting

+        AFTER our search filter. So we'll limit the number of rules to

+        sort first.
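+     The sorting step can be sketched as follows: matched rules are

+     ordered by sudoOrder, highest first, so the winning entry comes out

+     on top. The struct is illustrative; real entries carry more than

+     the order value:

```c
#include <stdlib.h>
#include <string.h>

/* Minimal rule entry for sorting purposes. */
struct sudo_rule_entry {
    const char *cn;
    double sudo_order; /* sudoOrder may be a floating point value */
};

/* Descending comparison: the highest sudoOrder sorts first. */
static int cmp_sudo_order_desc(const void *a, const void *b)
{
    const struct sudo_rule_entry *ra = a, *rb = b;
    if (ra->sudo_order < rb->sudo_order) return 1;
    if (ra->sudo_order > rb->sudo_order) return -1;
    return 0;
}

void sort_rules_by_order(struct sudo_rule_entry *rules, size_t count)
{
    qsort(rules, count, sizeof(rules[0]), cmp_sudo_order_desc);
}
```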

+ 

+ Q2) **How to store cached data?**

+     The cached data is in the LDAP format,

+     so the simplest option available is to store it in an ldb file.

@@ -0,0 +1,149 @@ 

+ SUDO Support to SSSD

+ ====================

+ 

+ This design document talks about the integration of SUDO support into

+ SSSD through the plugin system. SUDO can be used to check whether a

+ user has the right to execute instructions as another user. Using

+ this plugin, SUDO can be configured to use custom rules and policies,

+ so we can alter authorization rules to provide cached

+ access to LDAP. This makes it easier to maintain centralized sudo

+ rules. SSSD caches the SUDO policies, so that the SUDO support can

+ work better with an LDAP server.

+ 

+ To integrate the sudo feature into the System Security Services

+ Daemon (SSSD) we must have some components to handle sudo policies

+ and authentication at both the client side and the server side. At the

+ LDAP server side, the schema for storing the sudo policies is already

+ available (or already implemented), so the SUDO rules can be loaded

+ into an LDAP server and managed centrally using this schema. The LDAP

+ servers follow the SUDO schema specified in the sudoers LDAP

+ manual

+ (`http://www.gratisoft.us/sudo/man/1.8.0/sudoers.ldap.man.html <http://www.gratisoft.us/sudo/man/1.8.0/sudoers.ldap.man.html>`__).

+ Any directory server can load the described schema and become a

+ provider of the centrally managed SUDO rules for the SUDO clients.

+ 

+ Design:

+ --------

+ 

+ At the highest level the SUDO feature is implemented as the client

+ side SUDO utility and server side SUDO schema as shown below.

+ 

+ .. FIXME: Missing "High level design of SUDO system" image

+ 

+ In this level the LDAP server contains the centrally managed SUDO schema

+ specifying the SUDO rules. The sudo utility at the client side

+ queries the LDAP server to authenticate the request. The server

+ inspects the query against the SUDO schema to determine whether the

+ requesting user has the right to execute this command, then returns

+ the result back to the client.

+ 

+ When the plugin comes into action, SSSD works with this plugin so that

+ the sudo queries are no longer sent directly to the LDAP server. On

+ receiving a sudo command at runtime, the SUDO utility gives the plugin

+ integrated into it at the client side the chance to process the

+ request. This is done based on the Plugin-API provided by the

+ sudo utility. The manual for the Plugin-API can be found here

+ (`http://www.gratisoft.us/sudo/man/1.8.1/sudo\_plugin.man.html <http://www.gratisoft.us/sudo/man/1.8.1/sudo_plugin.man.html>`__).

+ SSSD provides cached support to SUDO using this plugin.

+ 

+ .. FIXME: Missing "SUDO plugin in action" image

+ 

+ The plugin is designed as four components. They are:

+ 

+ - Lightweight plugin for SUDO.

+ - SUDO responder daemon.

+ - SUDO provider type.

+ - LDAP implementation of the SUDO provider type

+ 

+ 

+ SUDO Plugin

+ -----------

+ 

+ The lightweight plugin just forwards the sudo command request from

+ the SUDO utility to SSSD. The plugin is made using the

+ plugin-API and connected to the sudo utility at the client side.

+ When the user types in a command with sudo (e.g. sudo ls), the sudo

+ utility gives the chance for execution to the sudo plugin. The

+ plugin calls pam\_authenticate() from the PAM library, with the PAM

+ service defined for sudo, to verify the user-provided information.

+ If the PAM module returns PAM\_SUCCESS, the plugin continues with

+ SSSD; otherwise it dies with an error message. If PAM authentication is

+ successful, the plugin just forwards the request obtained from

+ the sudo utility to the SSSD daemon. The format of this forwarded

+ request is available at `https://docs.pagure.org/SSSD.sssd/design_pages/sudo_support_plugin_wire_protocol#the-plugin-wire-protocol <https://docs.pagure.org/SSSD.sssd/design_pages/sudo_support_plugin_wire_protocol.html#the-plugin-wire-protocol>`__

+ 

+ .. FIXME: Missing "SUDO Plugin" image

+ 

+ 

+ SUDO Responder

+ --------------

+ 

+ The request forwarded by the plugin reaches SSSD, which gives the request to

+ the SUDO responder daemon. First of all, the responder performs a

+ look-up against the SSSD's cache or sysdb. The result of this lookup can

+ be either a cache-hit or a cache-miss or an expired entry. If the entry

+ exists in the cache and is not expired, we will immediately reply to the

+ plugin module with the answer that shows whether the user can be

+ authenticated to run the specified command as SUDO user. There the sudo

+ request ends. In the case of a cache-miss, the SUDO responder needs to

+ get the latest SUDO schema information from the LDAP server, so the

+ request for contacting the LDAP server is given to the LDAP provider.

+ When the entry in the cache is found to be expired, the same steps as

+ in the cache-miss case are followed.

+ 

+ .. FIXME: Missing "SUDO responder checks the cache" image

+ 

+ SUDO Provider Interface

+ -----------------------

+ 

+ The sudo provider provides an interface to write/update the schema

+ obtained from the LDAP server through the LDAP provider. In the case of

+ a cache miss or an expired cache entry, the sudo responder contacts the

+ LDAP provider to get the schema from the server, and the returned schema

+ is used to serve the SUDO request from the sudo utility. This result is

+ passed to the SUDO provider, which writes the data to the offline cache

+ in SSSD, so that the next request before the expiration time can be

+ served without any LDAP searches. The overall efficiency of the SUDO

+ operation thus increases by the use of SSSD and its offline cache.

+ 

+ The provider defines the communication protocol and functional interface

+ for actual back end and the responder.

+ 

+ LDAP Provider for SUDO

+ ----------------------

+ 

+ In case of a cache miss or an expired entry in the cache, SUDO needs to

+ connect to the LDAP server to update the offline schema in the cache. The

+ LDAP provider deals with all the issues related to connecting to the

+ LDAP server, sending LDAP queries, receiving the schema from the LDAP

+ server, etc. The sudo provider gets this request from the responder. The

+ result obtained (the schema from the LDAP server) is returned to the sudo

+ responder.

+ 

+ The LDAP Provider has existing interfaces for:

+ 

+ - Identity,

+ - Authentication,

+ - Access-control

+ - Password-change.

+ 

+ The design adds one more component to it:

+ 

+ - SUDO policy.

+ 

+ This component deals with the back end and provider issues.

+ 

+ .. FIXME: Missing "SUDO provider connects to LDAP server" image

+ 

+ Advantages of this design

+ -------------------------

+ 

+ This design allows us to load any SUDO implementation into the back-end

+ daemon and have it:

+ 

+ - working properly;

+ - being completely self-contained.

+ 

+ The SUDO responder doesn't need to know which implementation is

+ running, i.e. without any additional burden on the SUDO responder we can

+ connect it to any server system with a valid SUDO implementation.

@@ -0,0 +1,309 @@ 

+ Sudo Plugin Wire Protocol

+ =========================

+ 

+ Sudo v1.8 supports a plugin API that can be used to extend the features

+ of SUDO. These pluggable modules can be of two types:

+ 

+ #. Policy Plugin

+ #. I/O log Plugin

+ 

+ A policy plugin can determine whether the user is allowed to run the

+ specified command as the specified user. Only one policy plugin may be

+ loaded at a time, whereas the I/O log plugin logs the session to a local

+ file, including the tty input/output, stdin, stdout, stderr, etc.

+ Through the policy plugin interface the user can plug different

+ security policies into action. In the forwarder plugin we are not using

+ the I/O log plugin to log data.

+ 

+ 

+ Policy Plugin

+ -------------

+ 

+ open()

+ ~~~~~~

+ 

+ ::

+ 

+     int (*open)(unsigned int version, sudo_conv_t conversation,

+                      sudo_printf_t plugin_printf, char * const settings[],

+                      char * const user_info[], char * const user_env[]);

+ 

+ This function opens the connection between the plugin and SUDO.

+ 

+ **Input**

+ 

+ @param[in] version - The major and minor version number of the plugin

+ API

+ 

+ @param[in] conversation - A pointer to the conversation function that

+ can be used by the plugin to interact with the user (see below); it

+ returns 0 on success and -1 on failure.

+ 

+ @param[in] plugin\_printf - A pointer to a printf-style function that

+ may be used to display informational or error messages

+ 

+ @param[in] settings - A vector of user-supplied sudo settings in the

+ form of "name=value" strings. The vector is terminated by a NULL

+ pointer.

+ 

+ @param[in] user\_info - A vector of information about the user running

+ the command in the form of "name=value" strings. The vector is

+ terminated by a NULL pointer.

+ 

+ @param[in] user\_env - The user's environment in the form of

+ "name=value" strings. The vector is terminated by a NULL pointer.

+ 

+ **Output**

+ 

+ @return 1 success

+ 

+ @return 0 failure

+ 

+ @return -1 general error

+ 

+ @return -2 usage error

+ 

+ If an error occurs, the plugin may optionally call the conversation or

+ plugin\_printf function with SUDO\_CONV\_ERROR\_MSG to present

+ additional error information to the user.

+ 

+ close()

+ ~~~~~~~

+ 

+ ::

+ 

+     void (*close)(int exit_status, int error);

+ 

+ The close function is called when the command being run by sudo

+ finishes.

+ 

+ **Input**

+ 

+ @param[in] exit\_status - The command's exit status, as returned by the

+ wait system call. The value of exit\_status is undefined if error is

+ non-zero.

+ 

+ @param[in] error - If the command could not be executed, this is set to

+ the value of errno set by the execve system call. If the command was

+ successfully executed, the value of error is 0.

+ 

+ check\_policy()

+ ~~~~~~~~~~~~~~~

+ 

+ ::

+ 

+     int (*check_policy)(int argc, char * const argv[],

+                          char *env_add[], char **command_info[],

+                          char **argv_out[], char **user_env_out[]);

+ 

+ The check\_policy function is called by sudo to determine whether the

+ user is allowed to run the specified command.

+ 

+ **Input**

+ 

+ @param[in] argc - The number of elements in argv, not counting the final

+ NULL pointer.

+ 

+ @param[in] argv - The argument vector describing the command the user

+ wishes to run, in the same form as what would be passed to the execve()

+ system call. The vector is terminated by a NULL pointer.

+ 

+ @param[in] env\_add - Additional environment variables specified by the

+ user on the command line in the form of a NULL-terminated vector of

+ "name=value" strings.

+ 

+ @param[in] command\_info - Information about the command being run in

+ the form of "name=value" strings.

+ 

+ @param[out] argv\_out - The NULL-terminated argument vector to pass to

+ the execve() system call when executing the command.

+ 

+ @param[out] user\_env\_out - The NULL-terminated environment vector to

+ use when executing the command.

+ 

+ **Output**

+ 

+ @return 1 - Command is allowed

+ 

+ @return 0 - Command is not allowed

+ 

+ 

+ @return -1 - general error

+ 

+ @return -2 - usage error

+ 

+ If an error occurs, the plugin may optionally call the conversation or

+ plugin\_printf function with SUDO\_CONV\_ERROR\_MSG to present

+ additional error information to the user.

+ 
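+ As an illustration, a minimal ``check_policy`` could look like the

+ sketch below. The ``policy_lookup()`` helper is hypothetical, and a

+ real plugin would also populate ``argv_out``, ``user_env_out`` and

+ ``command_info`` before returning success:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for a real policy engine lookup. */
static int policy_lookup(const char *command)
{
    /* Allow only /bin/ls in this sketch. */
    return strcmp(command, "/bin/ls") == 0;
}

/* Sketch of a check_policy callback: allow or deny based on argv[0].
 * A real plugin would also fill argv_out/user_env_out/command_info. */
int example_check_policy(int argc, char *const argv[],
                         char *env_add[], char **command_info[],
                         char **argv_out[], char **user_env_out[])
{
    (void)env_add; (void)command_info;
    (void)argv_out; (void)user_env_out;

    if (argc < 1 || argv == NULL || argv[0] == NULL)
        return -2;                           /* usage error */

    return policy_lookup(argv[0]) ? 1 : 0;   /* 1 allowed, 0 denied */
}
```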

+ validate()

+ ~~~~~~~~~~

+ 

+ ::

+ 

+     int (*validate)(void);

+ 

+ The validate function is called when sudo is run with the -v flag. For

+ policy plugins such as sudoers that cache authentication credentials,

+ this function validates and caches the credentials: sudo updates the

+ user's cached credentials, authenticating the user's password if

+ necessary. The default sudoers plugin caches credentials for a timeout

+ of 5 minutes; running 'sudo -v' extends this timeout, prompting for

+ authentication again if necessary.

+ 

+ No Input

+ 

+ **Output**

+ 

+ @return 1 - success

+ 

+ @return 0 - failure

+ 

+ @return -1 - error

+ 

+ On error, the plugin may optionally call the conversation or

+ plugin\_printf function with SUDO\_CONV\_ERROR\_MSG to present

+ additional error information to the user.

+ 

+ invalidate()

+ ~~~~~~~~~~~~

+ 

+ ::

+ 

+     void (*invalidate)(int remove);

+ 

+ The invalidate function is called when sudo is run with the -k or -K

+ flag. This function invalidates the cached credentials: they are marked

+ as invalid so that on the next invocation of sudo the user is forced to

+ authenticate again. The invalidate function should be NULL if the

+ plugin does not support credential caching.

+ 

+ **Input**

+ 

+ @param[in] remove - If the remove flag is set, the plugin may remove the

+ credentials instead of simply invalidating them.

+ 

+ Conversation API & printf-style functions

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ ::

+ 

+      typedef int (*sudo_conv_t)(int num_msgs,

+                   const struct sudo_conv_message msgs[],

+                   struct sudo_conv_reply replies[]);

+ 

+      typedef int (*sudo_printf_t)(int msg_type, const char *fmt, ...);

+ 

+ If the plugin needs to interact with the user or display informational

+ or error messages, it may do so via the conversation function. The

+ caller must include a trailing newline in msg if one is to be printed.

+ The messages are passed in the msgs[] array of sudo\_conv\_message

+ structures and the replies are received in the array of

+ sudo\_conv\_reply structures.

+ 

+ The formats of sudo\_conv\_message and sudo\_conv\_reply are

+ 

+ ::

+ 

+      struct sudo_conv_message {

+          int msg_type;

+          int timeout;

+          const char *msg;

+      };

+ 

+      struct sudo_conv_reply {

+          char *reply;

+      };

+ 

+ A printf-style function is also available that can be used to

+ display informational or error messages to the user, which is

+ usually more convenient for simple messages where no user input is

+ required.

+ 

+ The msg\_type can be any one of the following

+ 

+ ::

+ 

+      SUDO_CONV_PROMPT_ECHO_OFF    /* do not echo user input */

+      SUDO_CONV_PROMPT_ECHO_ON     /* echo user input */

+      SUDO_CONV_ERROR_MSG          /* error message */

+      SUDO_CONV_INFO_MSG           /* informational message */

+      SUDO_CONV_PROMPT_MASK        /* mask user input */

+      SUDO_CONV_PROMPT_ECHO_OK     /* flag: allow echo if no tty */

+ 

+ The formatted string given in the printf-style function is printed to

+ the screen.

+ 
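+ The following self-contained sketch shows how a plugin might drive the

+ conversation function to prompt for a password. The flag values below

+ are illustrative only (the authoritative definitions live in sudo's

+ plugin header), and ``stub_conv`` merely stands in for the conversation

+ pointer that sudo hands to the plugin:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative flag values; see sudo's plugin header for the real ones. */
#define SUDO_CONV_PROMPT_ECHO_OFF 0x0001

struct sudo_conv_message {
    int msg_type;
    int timeout;
    const char *msg;
};

struct sudo_conv_reply {
    char *reply;
};

typedef int (*sudo_conv_t)(int num_msgs,
                           const struct sudo_conv_message msgs[],
                           struct sudo_conv_reply replies[]);

/* Prompt for a password via the conversation function.
 * Returns 0 on success and leaves the answer in reply->reply. */
int ask_password(sudo_conv_t conv, struct sudo_conv_reply *reply)
{
    struct sudo_conv_message msg = {
        .msg_type = SUDO_CONV_PROMPT_ECHO_OFF,
        .timeout  = 0,
        .msg      = "Password: ",  /* no newline: cursor stays on the line */
    };

    if (conv(1, &msg, reply) != 0 || reply->reply == NULL)
        return -1;
    return 0;
}

/* Test stub standing in for sudo's real conversation function. */
static int stub_conv(int num_msgs, const struct sudo_conv_message msgs[],
                     struct sudo_conv_reply replies[])
{
    (void)msgs;
    for (int i = 0; i < num_msgs; i++)
        replies[i].reply = "hunter2";   /* a real conv would allocate */
    return 0;
}
```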

+ THE PLUGIN WIRE PROTOCOL

+ ------------------------

+ 

+ This is the structure of the message packet that is sent from the

+ plugin to the SSSD responder to get the authentication result.

+ 

+ The structure is as shown below.

+ 

+ Each string message is grouped into a container of format: ::

+ 

+     message_type +(uint32_t) message_size + message_string

+ 

+ and each integer message is grouped into a container as: ::

+ 

+     message_type+ sizeof( uint32_t ) + (uint32_t)integer_value

+ 

+ A string message therefore occupies { 2\*(sizeof uint32\_t) + sizeof

+ string } bytes and an integer message takes { 3\*(sizeof uint32\_t) }

+ bytes.

+ 
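+ For illustration, packing one string container could look like the

+ sketch below. Host byte order, a NUL-inclusive size and the function

+ name are all assumptions of this sketch; the authoritative layout is

+ whatever the responder code implements:

```c
#include <stdint.h>
#include <string.h>

/* Pack one string item as: (uint32) message_type + (uint32) size + bytes.
 * Host byte order and NUL-inclusive size are assumptions of this sketch.
 * Returns bytes written, or 0 if the buffer is too small. */
size_t pack_string_item(uint8_t *buf, size_t buflen,
                        uint32_t type, const char *value)
{
    uint32_t size = (uint32_t)strlen(value) + 1;   /* include the NUL */
    size_t needed = 2 * sizeof(uint32_t) + size;

    if (buflen < needed)
        return 0;

    memcpy(buf, &type, sizeof(uint32_t));
    memcpy(buf + sizeof(uint32_t), &size, sizeof(uint32_t));
    memcpy(buf + 2 * sizeof(uint32_t), value, size);
    return needed;
}
```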

+ message\_type is defined in "sss\_sudo\_cli.h" as **enum

+ sudo\_item\_type**.

+ 

+ The message format will be: ::

+ 

+     start_header + message_container1 + message_container2 + ........ + message_containerN + end_header

+ 

+ where: ::

+ 

+     start_header : SSS_START_OF_SUDO_REQUEST

+     end_header   : SSS_END_OF_SUDO_REQUEST

+ 

+ The messages are: ::

+ 

+     MESSAGE                            MESSAGE TYPE                     DESCRIPTION

+ 

+ 

+     uid                                SSS_SUDO_ITEM_UID                UID of the user

+ 

+     Current directory                  SSS_SUDO_ITEM_CWD                Current working directory of the user

+ 

+     tty                                SSS_SUDO_ITEM_TTY                tty used by the user

+ 

+     Run as user                        SSS_SUDO_ITEM_RUSER              User name to run the command as

+ 

+     run as group                       SSS_SUDO_ITEM_RGROUP             group name to run the command as

+ 

+     prompt to be used                  SSS_SUDO_ITEM_PROMPT             Prompt to be used when credentials are requested

+ 

+     network address                    SSS_SUDO_ITEM_NETADDR            Network address of user

+ 

+     Use sudo edit                      SSS_SUDO_ITEM_USE_SUDOEDIT       Use sudo edit instead of sudo

+ 

+     set HOME to target user's home     SSS_SUDO_ITEM_USE_SETHOME        set HOME env variable to target user's home

+ 

+     preserve environment               SSS_SUDO_ITEM_USE_PRESERV_ENV    Preserve the environment to be used

+ 

+     implied shell support              SSS_SUDO_ITEM_USE_IMPLIED_SHELL  use sudo without any command

+ 

+     Use login shell                    SSS_SUDO_ITEM_USE_LOGIN_SHELL    indicates that user want to run a login shell

+ 

+     Run a shell                        SSS_SUDO_ITEM_USE_RUN_SHELL      Want to run a shell instead of command

+ 

+     preserve groups                    SSS_SUDO_ITEM_USE_PRE_GROUPS     Preserve group information

+ 

+     ignore cached results              SSS_SUDO_ITEM_USE_IGNORE_TICKET  Ignore the cached credentials

+ 

+     be noninteractive                  SSS_SUDO_ITEM_USE_NON_INTERACTIVE fail when user input is needed

+ 

+     debug level                        SSS_SUDO_ITEM_DEBUG_LEVEL        debug level

+ 

+     command                            SSS_SUDO_ITEM_COMMAND            command with its arguments to be executed

+ 

+     user's environment variables       SSS_SUDO_ITEM_USER_ENV           NULL-terminated list of environment variables

+ 

+     client pid                         SSS_SUDO_ITEM_CLI_PID            client's pid

+ 

+ 

@@ -0,0 +1,6 @@ 

+ Sample Sudo rules.ldif

+ ----------------------

+ 

+     This ldif file contains the data that is to be used for testing

+     against the filters in order to prove the proposed filter techniques

+     can validate the sudo rules.

@@ -0,0 +1,136 @@ 

+ Change format of SYSDB\_NAME attribute for users and groups

+ ===========================================================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2011 <https://pagure.io/SSSD/sssd/issue/2011>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ Currently the "name" (SYSDB\_NAME) attribute for users and groups can be

+ stored in different formats depending on domain configuration, in

+ particular the ``full_name_format`` option. If the domain does not

+ require fully-qualified domain names (FQDN), the name in SYSDB\_NAME is

+ stored without the domain portion (for example ``joe``). If FQDN is

+ required in the domain, then the domain portion is stored in the

+ SYSDB\_NAME attribute (for example ``joe@example.com``). The format in

+ which the FQDN is stored is also configurable in sssd.conf.

+ 

+ There are two major problems with this approach:

+ 

+ -  For admins - The format of data in sysdb is dependent on SSSD

+    configuration. Changes in sssd.conf may render the cached data

+    invalid, so admins have to remove the cache. In general, allowing an

+    option that should purely control the output format to also control

+    the database layout is a very bad idea.

+ 

+ -  For code maintainers - The code that deals with SYSDB\_NAME attribute

+    often contains conditions and multiple branches to treat the

+    FQDN/non-FQDN names differently. This makes the code less readable

+    and more fragile.

+ 

+ In addition, some features such as using only the name part for

+ subdomain users are very hard to implement with the current code.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ As an Administrator, I would like an option to only output the short

+ names of trusted AD users without the domain component.

+ 

+ As an Administrator, I would like to change the output name format

+ without having to flush the whole database.

+ 

+ As a code maintainer, I need a predictable way to store user and group

+ entries without special casing the name formats.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Always store SYSDB\_NAME attribute for users and groups in special

+ internal FQDN format that is not configuration dependent. The options

+ ``use_fully_qualified_names`` and ``full_name_format`` should only be

+ relevant for code that prepares data for user output. Internally, only

+ the internal FQDN should be used.

+ 

+ Using a fully qualified name (as opposed to a non-qualified name) for

+ all users is better to make it possible to use the ``memberUid`` and

+ ``ghost`` attributes in our ldb cache for cases where a group stores

+ members from multiple domains.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The new internal FQDN will have the following format: ``name@domain``.

+ The name portion will retain the original case, while the domain portion

+ will be normalized as lower-case. The SYSDB\_ALIAS attribute will have

+ the same format, but lowercased. The database will not store the

+ shortname for users and groups at all, but the code would parse the

+ shortname if needed. This is acceptable because the shortname would only

+ be needed during interaction with outside of SSSD, such as creating

+ filters or during output.

+ 

+ The name that SSSD receives from the client libraries would be converted

+ to the internal format when a responder loops over a domain, much like

+ we normalize the case at the moment. The back end would receive the

+ qualified name as part of the ``be_req`` structure already and

+ internally would work with the qualified name only except places where

+ we need to use the name portion only (such as when constructing an LDAP

+ filter).

+ 

+ All functions that work with user and/or group names should be modified

+ to accept this format.

+ 

+ When working on the conversion, care must be taken to not tie the code

+ to any particular format, but always use functions to create or parse

+ the internal name. This could be tested by changing the functions to

+ create and parse the format to create the FQDN in a different format and

+ making sure all tests still pass.

+ 
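+ The create/parse pair could be sketched as follows (the function names

+ are illustrative, not SSSD's actual API):

```c
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Build the internal FQDN "name@domain": the name part keeps its
 * original case, the domain part is normalized to lower case.
 * Caller frees the result. */
char *create_internal_fqname(const char *name, const char *domain)
{
    size_t nlen = strlen(name);
    size_t dlen = strlen(domain);
    char *out = malloc(nlen + 1 + dlen + 1);

    if (out == NULL)
        return NULL;

    memcpy(out, name, nlen);
    out[nlen] = '@';
    for (size_t i = 0; i <= dlen; i++)   /* <= copies the NUL too */
        out[nlen + 1 + i] = (char)tolower((unsigned char)domain[i]);
    return out;
}

/* Extract the short name from an internal FQDN, or NULL if the input
 * is not in name@domain form. Caller frees the result. */
char *parse_internal_fqname_name(const char *fqname)
{
    const char *at = strrchr(fqname, '@');
    char *out;

    if (at == NULL || at == fqname)
        return NULL;

    out = malloc((size_t)(at - fqname) + 1);
    if (out == NULL)
        return NULL;
    memcpy(out, fqname, (size_t)(at - fqname));
    out[at - fqname] = '\0';
    return out;
}
```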

+ A sysdb version upgrade will be necessary. The changes in sysdb will be

+ following:

+ 

+ -  Change the SYSDB\_NAME attribute for users and groups to use the new

+    internal format.

+ -  Use the new internal format for SYSDB\_GHOST and SYSDB\_MEMBERUID

+    attributes.

+ -  The member and memberof attributes will have to be changed to use the

+    new internal format as well.

+ 

+ sysdb upgrade

+ ^^^^^^^^^^^^^

+ 

+ The sysdb upgrade is tricky for two reasons:

+ 

+ #. the amount of data we'll have to change and write can potentially be

+    huge if the database contains many users and groups. To mitigate the

+    performance impact, we will open the database in a nosync mode,

+    perform all the writes at once and flush when we are done.

+ #. the memberof plugin normally prevents ldb users from writing the

+    SYSDB\_MEMBERUID, SYSDB\_GHOST and SYSDB\_MEMBEROF attributes

+    directly. Because there is no way to selectively disable one module

+    when connecting to ldb, we will have to add a way to the memberof

+    plugin to allow the user to bypass the module (maybe when an

+    environment variable is set)

+ 

+ Additionally, because this update is risky, we should perform the update

+ on a copy of the database and only rename the copy when the upgrade

+ finishes successfully. This would allow the admin to downgrade sssd back

+ and still use the original database in the previous format.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ No configuration changes are required; this is an internal change only.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ All available tests should still pass. The tests should also pass if the

+ format of the database was changed.

+ 

+ Authors

+ ~~~~~~~

+ 

+ -  Jakub Hrozek <`jhrozek@redhat.com <mailto:jhrozek@redhat.com>`__>

+ -  Michal Židek <`mzidek@redhat.com <mailto:mzidek@redhat.com>`__>

@@ -0,0 +1,73 @@ 

+ SSSD Test Suite Coverage

+ ------------------------

+ 

+ This document describes the plan on improving the test coverage of the

+ SSSD using the cmocka unit test library. The plan might be implemented

+ through Fedora's participation in the GNOME Outreach Program for Women.

+ 

+ High Level Strategy

+ ~~~~~~~~~~~~~~~~~~~

+ 

+ The contributor should, on a high level, follow these steps:

+ 

+ #. familiarize herself with SSSD from user point of view

+ 

+    -  what is it good for?

+    -  how do I install it?

+    -  how do I configure it?

+    -  Useful links:

+ 

+       -  `lwn.net article on SSSD <http://lwn.net/Articles/457415/>`__

+       -  `RHEL documentation on

+          SSSD <https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/SSSD-Introduction.html>`__

+ 

+       .. FIXME: A link for team presentations is missing here!

+       .. -  `docs on this wiki, includes

+       ..    presentations <https://docs.pagure.org/sssd-test2/Documentation.html>`__

+ 

+ #. investigate how the SSSD can be built from source

+ 

+    -  clone the git repository

+    -  configure and make the sources

+    -  make check

+ 

+ #. learn about the current unit tests being used in the SSSD project

+ 

+    -  currently used tests are stored in ``src/tests`` subdirectory and

+       mostly use the check unit test suite

+    -  newer tests have started to use the cmocka library to leverage the

+       mocking framework

+ 

+ #. get familiar with the cmocka unit test library

+ 

+    -  `http://cmocka.cryptomilk.org/ <http://cmocka.cryptomilk.org/>`__

+ 

+ #. implement a new unit test for the SSSD as a proof-of-concept

+ 

+    -  this first unit test would make the contributor familiar with how

+       a unit test can be written

+    -  does not have to cover any important piece of the SSSD, but rather

+       one that is self-contained

+ 

+ #. gradually start learning about the architecture of the SSSD

+ 

+    -  this step requires coordination with the SSSD developers

+    -  work with the SSSD developers on understanding the architecture

+    -  blog posts on the SSSD are mostly welcome!

+ 

+ #. propose a couple of unit tests to the SSSD upstream

+ 

+    -  the most important tier would contain tests such as "does identity

+       work" or "does authentication work"

+ 

+ #. cover the most important parts of the SSSD with unit tests

+ 

+    -  as some of the unit tests might share the same boilerplate code,

+       the contributor might also need to implement convenient functions

+       reusable by different tests

+ 

+ Schedule

+ ~~~~~~~~

+ 

+ The complete OPFW schedule is

+ `here <https://wiki.gnome.org/OutreachProgramForWomen/2013/DecemberMarch>`__.

@@ -0,0 +1,80 @@ 

+ Do not always override home directory with subdomain\_homedir value in server mode

+ ==================================================================================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2583 <https://pagure.io/SSSD/sssd/issue/2583>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ Prior to sssd 1.12, we didn't have the ability to read home directory

+ values from AD in AD-IPA trust setups at all. Instead, we always used

+ the ``subdomain_homedir`` value. We can read custom LDAP values now, but

+ in order to stay backwards-compatible, we kept using the

+ ``subdomain_homedir`` value.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ Users from AD with POSIX attributes want to use individually set value

+ for home directory.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ ``subdomain_homedir`` for SSSD in server mode should support '%o'

+ template expansion (The original home directory retrieved from the

+ identity provider). In case when ``subdomain_homedir`` would be expanded

+ to an empty string ('subdomain\_homedir=%o' and AD user without POSIX

+ attributes) SSSD should not error out but ``fallback_homedir`` should be

+ utilized instead.

+ 
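+ In sssd.conf this could look like the following fragment (the domain

+ name is illustrative):

```
[domain/ipa.example.com]
subdomain_homedir = %o
fallback_homedir = /home/%u
```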

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ -  Extend set of attributes returned by ``get_object_from_cache()`` by

+    SYSDB\_HOMEDIR attribute.

+ -  Update ``apply_subdomain_homedir()``

+ 

+    -  to parse the AD home directory from the ldb\_msg

+    -  to not call ``store_homedir_of_user()`` if the value of the

+       expanded home directory is an empty string.

+ 

+ -  Extend interface of ``get_subdomain_homedir_of_user()`` to accept AD

+    home directory as parameter.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ No configuration changes are proposed.

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ #. On SSSD in server mode

+ 

+    -  For an AD user with POSIX attributes, set the home directory attribute

+ 

+       -  in sssd.conf set ``subdomain_homedir`` option to '%o'

+       -  invalidate cache (sss\_cache) and restart SSSD

+       -  call ``getent passwd user`` and check that home directory

+          reflects value from AD

+ 

+    -  For AD user ``without`` posix attributes

+ 

+       -  in sssd.conf set ``subdomain_homedir`` option to %o and

+          ``fallback_homedir`` to /home/%u

+       -  invalidate cache (sss\_cache) and restart SSSD

+       -  call ``getent passwd user`` and check that home directory

+          reflects ``fallback_homedir``

+ 

+ #. On SSSD acting as IPA client

+ 

+    -  Check that results are the same as on SSSD in server mode and that

+       local ``fallback_homedir`` is ignored

+ 

+ Authors

+ ~~~~~~~

+ 

+ `preichl@redhat.com <mailto:preichl@redhat.com>`__

@@ -0,0 +1,111 @@ 

+ User Account Management Consolidation

+ -------------------------------------

+ 

+ Related ticket(s):

+ 

+ -  N/A

+ 

+ The following proposal is the result of the understanding reached at the

+ February 22nd, 2013 meeting held at the Red Hat offices in Brno.

+ 

+ Problem Statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ User management is currently fragmented throughout our system. The only

+ unifying interface is nsswitch, provided by glibc. However, this

+ interface is minimal, provides only POSIX information and is a querying

+ interface only.

+ 

+ An interface used for limited editing of account data is provided

+ through libuser. This library can be used to modify data in local files

+ or LDAP servers. However the libuser interface is not generic and does

+ not allow to dynamically select the target database nor add additional

+ user data.

+ 

+ Desktop tools augment user information by storing additional data in a

+ separate database.

+ 

+ Legacy aspects of user management

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Local files

+ ^^^^^^^^^^^

+ 

+ Even today, many tools may still have direct dependencies on the files

+ even though the

+ `nsswitch <https://www.gnu.org/software/libc/manual/html_node/Name-Service-Switch.html>`__

+ interface has been around for a long time. Also, some administrators

+ are used to editing the password files with vipw, or use scripts that

+ directly manipulate them. For these reasons it is not advisable to stop

+ using the traditional files for local accounts completely.

+ 

+ The only option to augment the files with non-POSIX information is to

+ access them through a common interface and store additional information

+ in a separate database. Legacy files would still remain authoritative.

+ 

+ Managing remote accounts

+ ^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ For accessing remote information, nsswitch became the de facto standard.

+ Red Hat is standardizing on the SSSD daemon for accessing remote user

+ information and performing authentication for remote users.

+ 

+ Remote directories often provide more flexibility, so additional data

+ will be pushed there when possible. However, in some cases additional

+ information may need to be stored locally if the remote server can't

+ hold it. The directory remains authoritative.

+ 

+ Unified interface through SSSD

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The proposal is to leverage SSSD to unify account management. The pros

+ and cons of this approach are listed below:

+ 

+ -  Pros:

+ 

+    -  Provides all the infrastructure needed to cache remote data and to

+       store additional information locally.

+    -  SSSD's database is easily extensible (LDAP-like)

+    -  Already provides PAM and nsswitch interfaces

+ 

+ -  Cons:

+ 

+    -  Lacks an interface to directly manage users; however, the

+       infrastructure to build this interface easily is already in place.

+ 

+ Changing authentication and user lookup

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The system will continue to use the classic PAM and nsswitch interfaces

+ for authentication and account lookups.

+ 

+ However, we will probably change the PAM stack to try pam\_sss before

+ pam\_unix so that sssd is consulted first and pam\_unix is only used as

+ a fallback to directly access files.

+ 

+ Similarly, for the nsswitch interface we will probably switch the

+ passwd and group (and potentially other) databases to use the sss

+ target first and only fall back to the files target afterwards.

+ 
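+ The intended ordering could be expressed with fragments like these

+ (illustrative only; exact module options and file names vary by

+ distribution):

```
# /etc/nsswitch.conf -- consult sssd first, fall back to the files
passwd: sss files
group:  sss files

# /etc/pam.d/system-auth (auth section) -- pam_sss before pam_unix
auth  sufficient  pam_sss.so
auth  sufficient  pam_unix.so try_first_pass
```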

+ Action Items

+ ~~~~~~~~~~~~

+ 

+ -  Develop dbus interface specification to satisfy desktop requirements

+    (design doc for SSSD)

+ -  Open tickets in SSSD to:

+ 

+    -  Build Files provider in SSSD

+    -  Build Rich API/dbus responder

+    -  Ensure additional information pins cache contents

+ 

+ -  Modify libuser to become a compatibility layer on top of the Rich

+    API/dbus responder

+ -  Test and implement root-only access to files, and channel all access

+    through sssd

+ 

+    -  Needed for openshift and similar containerized envs.

+ 

+ Authors

+ ~~~~~~~

+ 

+ -  Simo Sorce <`ssorce@redhat.com <mailto:ssorce@redhat.com>`__>

@@ -0,0 +1,278 @@ 

+ Wildcard lookup requests in the Data Provider

+ =============================================

+ 

+ Related ticket(s):

+ 

+ -  `https://pagure.io/SSSD/sssd/issue/2553 <https://pagure.io/SSSD/sssd/issue/2553>`__

+ 

+ Problem statement

+ ~~~~~~~~~~~~~~~~~

+ 

+ The InfoPipe responder adds a listing capability to the frontend code,

+ allowing the user to list users matching a very simple filter. To

+ implement the back end part of this feature properly, we need to add the

+ possibility to retrieve multiple, but not all entries with a single DP

+ request.

+ 

+ For details of the InfoPipe API, please see the `DBus responder design

+ page <https://docs.pagure.org/SSSD.sssd/design_pages/dbus_users_and_groups.html>`__.

+ 

+ Use cases

+ ~~~~~~~~~

+ 

+ A web application using the InfoPipe interface requests all users

+ starting with the letter 'a' so the users can be displayed in the

+ application UI on a single page. The SSSD must fetch and return all

+ matching user entries, but without requiring enumeration, which would

+ pull down too many users.

+ 

+ Overview of the solution

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ 

+ Currently, the input that the Data Provider receives can only be a

+ single user or group name. Wildcards are not supported at all; the back

+ end actively sanitizes the input to escape any characters that have a

+ special meaning in LDAP. Therefore, we need to add functionality to the

+ Data Provider to mark the request as a wildcard.

+ 

+ Only requests by name will support wildcards, not e.g. requests by SID,

+ mostly because there would be no consumer of this functionality.

+ Technically we could allow wildcard searches on any attribute with the

+ same code, though. Also, only requests for users and groups will support

+ wildcards.

+ 

+ When the wildcard request is received by the back end, sanitization will

+ be done, but modified in order to avoid escaping the wildcard. After the

+ request finishes, a search-and-delete operation must be run in order to

+ remove entries that matched the wildcard search previously but were

+ removed from the server.

+ 

+ Implementation details

+ ~~~~~~~~~~~~~~~~~~~~~~

+ 

+ The wildcard request will only be used by the InfoPipe responder, but

+ will be implemented in the common responder code, in particular the new

+ ``cache_req`` request.

+ 

+ The following sub-sections document the changes explained earlier in

+ more detail.

+ 

+ Responder lookup changes

+ ^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The responder code changes will be done only in the new cache lookup

+ code (``src/responder/common/responder_cache_req.c``). Since the NSS

+ responder wouldn't initially expose the functionality of wildcard

+ lookups, we don't need to update the lookup code currently in use by the

+ NSS responder.

+ 

+ The ``cache_req_input_create()`` function should be extended to denote

+ that the ``name`` input contains a wildcard to make sure the caller

+ really intends to leave the asterisk unsanitized. Internally, the

+ ``cache_req_type`` would add a new value as well.

+ 

+ We might add a new user function and a group function that would grab

+ all entries by sysdb filter, which can be more or less a wrapper around

+ ``sysdb_search_entry``, just setting the right search bases and default

+ attributes. This new function must be able to handle views.

+ 

+ These responder changes should be developed as a first phase of the work

+ as they can be initially tested with enumeration enabled on the back end

+ side.

+ 

+ Responder <-> Data Provider communication

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ The request between the responders and the Data Provider is driven by a

+ string filter, formatted as follows: ::

+ 

+         type:value:extra

+ 

+ Where ``type`` can be one of ``name``, ``idnumber`` or ``secid``. The

+ ``value`` field is the username, ID number or SID value and extra

+ currently denotes either lookup with views or lookup by UPN instead of

+ name.

+ 
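+ For illustration, formatting such a filter string might look like the

+ sketch below (the helper name is hypothetical; the format follows the

+ description above):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Format a Data Provider filter string "type:value[:extra]".
 * The extra field and its separator are omitted when empty. */
char *dp_format_filter(const char *type, const char *value,
                       const char *extra)
{
    int has_extra = extra != NULL && *extra != '\0';
    size_t len = strlen(type) + 1 + strlen(value) + 1
               + (has_extra ? strlen(extra) + 1 : 0);
    char *out = malloc(len);

    if (out == NULL)
        return NULL;

    if (has_extra)
        snprintf(out, len, "%s:%s:%s", type, value, extra);
    else
        snprintf(out, len, "%s:%s", type, value);
    return out;
}
```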

+ To support the wildcard lookups, we have two options here - add a new

+ ``type`` option (perhaps ``wildcard_name``) or add another

+ ``extra_value``.

+ 

+ Adding a new ``type`` would be easier since it's just addition of new

+ code, not changing existing code. On the backend side, the ``type``

+ would be typically handled together with ``name`` lookups, just sanitize

+ the input differently. The downside is that if we wanted to ever allow

+ wildcard lookups for anything else, we'd have to add yet another type.

+ Code-wise, adding a new type would translate to adding new values for

+ the ``sss_dp_acct_type`` enum which would then print the new type value

+ when formatting the sbus message.

+ 

+ The other option would be to allow multivalued ``extra`` field: ::

+ 

+         type:value:extra1:extra2:...:extraN

+ 

+ However, that would involve changing how we currently handle the

+ ``extra`` field, which is higher risk of regressions. Also, the back

+ ends can technically be developed by a third party, so we should be

+ extremely careful about changing the protocol between DP and providers.

+ Since we don't expect to allow any other wildcard requests than by name

+ yet, I'm proposing to go with the first option and add a comment to the

+ code to change to using the extra field if we need wildcard lookups by

+ another attribute.

+ 

+ Relax the ``sss_filter_sanitize`` function

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ When a wildcard request is received, we still need to sanitize the input

+ and escape special LDAP characters, but we must not escape the asterisk

+ (``*``).

+ 

+ As a part of the patchset we need to add a parameter that will denote

+ characters that should be skipped during sanitization.

+ 
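+ A sketch of the relaxed sanitizer follows. The escape set and \XX hex

+ escaping follow common LDAP filter conventions; the function and

+ parameter names are assumptions, not SSSD's actual API:

```c
#include <stdlib.h>
#include <string.h>

/* Escape LDAP-special characters as \XX hex sequences, except those
 * listed in 'skip'. Passing "*" in skip keeps wildcards intact. */
char *filter_sanitize_ex(const char *input, const char *skip)
{
    static const char hex[] = "0123456789abcdef";
    char *out = malloc(strlen(input) * 3 + 1);  /* worst case: all escaped */
    size_t o = 0;

    if (out == NULL)
        return NULL;

    for (const char *p = input; *p != '\0'; p++) {
        int is_special = strchr("*()\\", *p) != NULL;
        int is_skipped = skip != NULL && strchr(skip, *p) != NULL;

        if (is_special && !is_skipped) {
            out[o++] = '\\';
            out[o++] = hex[((unsigned char)*p) >> 4];
            out[o++] = hex[((unsigned char)*p) & 0x0f];
        } else {
            out[o++] = *p;
        }
    }
    out[o] = '\0';
    return out;
}
```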

+ Delete cached entries removed from the server

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ After a request finishes, the back end needs to remove entries that are

+ cached from a previous lookup using the same filter, but no longer

+ present on the server.

+ 

+ Because wildcard requests can match multiple entries, we need to save

+ the time of the backend request start and delete all entries that match

+ a sysdb filter analogous to the LDAP filter, but were last updated prior

+ to the start of the request.

+ 
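+ The cleanup search could use a sysdb filter along these lines (the

+ attribute names are illustrative, not sysdb's actual schema constants):

```c
#include <stdio.h>
#include <string.h>

/* Build a sysdb filter matching entries that satisfied the wildcard
 * search but were last updated before the request started. */
int build_cleanup_filter(char *buf, size_t buflen,
                         const char *name_filter, long req_start)
{
    return snprintf(buf, buflen,
                    "(&(name=%s)(!(lastUpdate>=%ld)))",
                    name_filter, req_start);
}
```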

+ Care must be taken about case sensitivity. Since the LDAP servers are

+ typically case-insensitive, but sysdb (and POSIX systems) are

+ case-sensitive, we will default to matching only case-sensitive ``name``

+ attribute by default as well. With case-insensitive back ends, the

+ search function must match also the ``nameAlias`` attribute.

+ 

+ LDAP provider changes

+ ^^^^^^^^^^^^^^^^^^^^^

+ 

+ The LDAP provider is the lowest common denominator of other providers

+ and hence it would contain the low-level changes related to this

+ feature.

+ 

+ In the LDAP provider, we need to use the relaxed version of the input

+ sanitizing and the wildcard method to delete matched entries. These

+ changes will be contained to the ``users_get_send()`` and

+ ``groups_get_send()`` requests.

+ 

+ The requests that fetch and store the users or groups from LDAP

+ currently have a parameter called ``enumerate`` that is used to check

+ whether it's OK to receive multiple results or not. We should rename the

+ parameter or even invert it along with renaming (i.e. change the name to

+ ``direct_lookup`` or similar).

+ 

+ We also need to limit the number of entries returned from the server,

+ otherwise the wildcard request might easily turn into a full

+ enumeration. To this end, we will add a new configuration option

+ ``wildcard_search_limit``. Internally, we would change the boolean

+ parameter of ``sdap_get_users_send`` to a tri-state that would control

+ whether we expect only a single entry (i.e. don't use the paging control),

+ multiple entries with a search limit (wildcard request) or multiple

+ entries with no limit (enumeration). We need to make sure during

+ implementation that it is discoverable via DEBUG messages that the upper

+ limit was reached.

+ 
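+ The proposed tri-state might look like this sketch (the enum and the

+ limit mapping are assumptions, not existing SSSD code):

```c
/* Proposed tri-state replacing the boolean passed to the user/group
 * fetch requests; names here are illustrative. */
enum sdap_entry_lookup_type {
    SDAP_LOOKUP_SINGLE,     /* single entry, no paging control */
    SDAP_LOOKUP_WILDCARD,   /* multiple entries, wildcard_search_limit cap */
    SDAP_LOOKUP_ENUMERATE   /* multiple entries, no limit */
};

/* Map the lookup type to the LDAP search size limit; 0 means unlimited. */
unsigned int lookup_size_limit(enum sdap_entry_lookup_type type,
                               unsigned int wildcard_search_limit)
{
    switch (type) {
    case SDAP_LOOKUP_SINGLE:
        return 1;
    case SDAP_LOOKUP_WILDCARD:
        return wildcard_search_limit;
    default:
        return 0;   /* enumeration */
    }
}
```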

+ IPA provider changes

+ ^^^^^^^^^^^^^^^^^^^^

+ 

+ The tricky part about the IPA provider is the views. Lookups with views

+ have two branches - either an override object matches the input and then

+ we look up the corresponding original object or the other way around.

+ The code must be changed to support multiple matches for both overrides

+ and original objects in the first pass. We might end up fetching more

+ entries than needed because the resulting object wouldn't match in the

+ responder after applying the override, but the merging on the responder

+ side will only filter out the appropriate entries.

+ 

+ Currently, the request handles all account lookups in a single tevent

+ request, with branches for special cases, such as initgroup lookups or

+ resolving ghost members during group lookups. We might need to refactor

+ the single request a bit into per-object tevent lookups to keep the code

+ readable.

+ 

+ Please keep in mind that each tevent request has a bit of performance

+ overhead, so adding a new request is always a trade-off. Care must be

+ taken not to regress the performance of the default case unless

+ necessary.

+ 

+ If the first override lookup matches, then we must loop over all

+ returned overrides and find matching originals. The current code re-uses

+ the ``state->ar`` structure, which is single-valued; we need to add another

+ multi-valued structure instead (``state->override_ar``) and perhaps even

+ split the lookup of original objects into a separate request, depending

+ on the complexity.

+ 

+ Conversely, when the original objects match first, we need to loop over

+ the original matches and fetch overrides for each of the objects found.

+ Here, the ``get_object_from_cache()`` function needs to be able to

+ return multiple results and the following code must be turned into a

+ loop.

+ 

+ When looking up the overrides, the ``be_acct_req_to_override_filter()``

+ function must be enhanced to construct a wildcard filter. The

+ ``ipa_get_ad_override_done`` must also return all matched objects if

+ needed, not just the first array entry. The rest of the

+ ``ipa_get_ad_override_send()`` request is generic enough already.

+ 

+ IPA subdomain lookups via the extdom plugin

+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+ 

+ Currently the extdom plugin only supports direct entry lookups, even on

+ the server side. We could add a new request that accepts a filter with

+ an asterisk and returns a list of matching DNs or names, but because of the

+ complexity of the changes, this part of the implementation should be

+ deferred until requested specifically.

+ 

+ If the IPA subdomain received a wildcard request, it would reply

+ with an error code making it clear that the request is not

+ supported.

+ 

+ Making sure the IPA provider in server mode is capable of returning

+ wildcard entries and adding a wildcard-enabled function for the

+ ``libnss_sss_idmap`` library would be a prerequisite so that the extop

+ plugin can request multiple entries from an SSSD running in server

+ mode.

+ 

+ AD provider changes

+ ^^^^^^^^^^^^^^^^^^^

+ 

+ No changes seem to be required for the AD provider, since the AD

+ provider mostly just passes around the original ``ar`` request to a

+ Global Catalog lookup or an LDAP lookup. However, testing must be

+ performed in an environment where some users have POSIX attributes but

+ those attributes are not replicated to the Global Catalog to make sure

+ we handle the fallback between connections well.

+ 

+ Other providers

+ ^^^^^^^^^^^^^^^

+ 

+ Proxy provider support is not realistic, since the proxy provider only

+ uses the NSS functions of the wrapped module which means it would rely

+ on enumeration anyway. With enumeration enabled, the responders would be

+ able to return the required matching entries already. The local provider

+ is not a real back end, so it should get the wildcard support for free,

+ just with the changes to the responder.

+ 

+ Configuration changes

+ ~~~~~~~~~~~~~~~~~~~~~

+ 

+ A new option ``wildcard_search_limit`` will be added. The default value

+ would be 1000, which is also typically the size of one page.
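
A minimal sketch of how the option would look in ``sssd.conf`` (the domain name is just an example):

```ini
[domain/example.com]
id_provider = ldap
# Cap the number of entries a single wildcard lookup may return;
# 1000 is the proposed default, roughly the size of one LDAP page.
wildcard_search_limit = 1000
```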

+ 

+ How To Test

+ ~~~~~~~~~~~

+ 

+ When the InfoPipe API is ready, testing will be done using

+ methods such as ListByName. Until then, the feature is not exposed or

+ used anyway, so developers can test using a special command-line tool

+ that would send the DP request directly. This tool wouldn't be committed

+ to the git tree.

+ 

+ Authors

+ ~~~~~~~

+ 

+ -  Jakub Hrozek <`jhrozek@redhat.com <mailto:jhrozek@redhat.com>`__>

This work has been done based on a test repo created by @lslebodn.

The changes done to the documents were really minimal (just enough to ensure the links were working and so on). In other words, no review has been done on any of the design pages!

A few "FIXME" notes have been added to the documents to ensure that tickets will be created later on and those issues addressed. Most of the issues are related either to missing images or to pages that were not migrated yet.

I did not port the ding-libs design pages; although there's just one page, I do believe it would fit better in ding-libs's pagure repo.

Pull-Request has been closed by jhrozek, 6 years ago.
Changes Summary (83 files):

- design_pages/accounts_service.rst (added, +901)
- design_pages/active_directory_access_control.rst (added, +285)
- design_pages/active_directory_dns_sites.rst (added, +209)
- design_pages/active_directory_dns_updates.rst (added, +320)
- design_pages/active_directory_fixed_dns_site.rst (added, +106)
- design_pages/active_directory_gpo_integration.rst (added, +483)
- design_pages/async_ldap_connections.rst (added, +115)
- design_pages/async_winbind.rst (added, +81)
- design_pages/autofs_integration.rst (added, +249)
- design_pages/backend_dns_helpers.rst (added, +91)
- design_pages/cached_authentication.rst (added, +172)
- design_pages/config_check_tool.rst (added, +105)
- design_pages/config_enhancements.rst (added, +95)
- design_pages/cwrap_ldap.rst (added, +252)
- design_pages/data_provider.rst (added, +411)
- design_pages/dbus_cached_objects.rst (added, +99)
- design_pages/dbus_domains.rst (added, +137)
- design_pages/dbus_multiplier_interfaces.rst (added, +73)
- design_pages/dbus_responder.rst (added, +354)
- design_pages/dbus_signal_property_changed.rst (added, +117)
- design_pages/dbus_simple_api.rst (added, +636)
- design_pages/dbus_users_and_groups.rst (added, +245)
- design_pages/ddns_messages_update.rst (added, +99)
- design_pages/fast_nss_cache.rst (added, +97)
- design_pages/files_provider.rst (added, +260)
- design_pages/global_catalog_lookups.rst (added, +135)
- design_pages/idmap_auto_assign_new_slices.rst (added, +127)
- design_pages/index.rst (changed, +85 -3)
- design_pages/integrate_sssd_with_cifs_client.rst (added, +285)
- design_pages/ipa_server_mode.rst (added, +278)
- design_pages/ipc.rst (added, +253)
- design_pages/kerberos_locator.rst (added, +146)
- design_pages/kerberos_principal_mapping_to_proxy_users.rst (added, +83)
- design_pages/ldap_referrals.rst (added, +129)
- design_pages/libini_config_file_checks.rst (added, +182)
- design_pages/local_group_members_for_rfc2307.rst (added, +62)
- design_pages/lookup_users_by_certificate.rst (added, +209)
- design_pages/lookup_users_by_certificate_part2.rst (added, +138)
- design_pages/member_of_v1.rst (added, +115)
- design_pages/member_of_v2.rst (added, +81)
- design_pages/multiple_search_bases.rst (added, +63)
- design_pages/netgroups.rst (added, +177)
- design_pages/not_root_sssd.rst (added, +547)
- design_pages/nss_responder_id_mapping_calls.rst (added, +232)
- design_pages/nss_with_kerberos_principal.rst (added, +188)
- design_pages/one_fifteen_code_refactoring.rst (added, +401)
- design_pages/one_fourteen_performance_improvements.rst (added, +170)
- design_pages/one_way_trusts.rst (added, +241)
- design_pages/open_lmi_provider.rst (added, +313)
- design_pages/otp_related_improvements.rst (added, +223)
- design_pages/pam_conversation_for_otp.rst (added, +346)
- design_pages/periodic_tasks.rst (added, +126)
- design_pages/periodical_refresh_of_expired_entries.rst (added, +91)
- design_pages/prompting_for_multiple_authentication_types.rst (added, +294)
- design_pages/recognize_trusted_domains_in_ad_provider.rst (added, +37)
- design_pages/restrict_domains_in_pam.rst (added, +161)
- design_pages/rpc_idmapd_plugin.rst (added, +73)
- design_pages/secrets_service.rst (added, +167)
- design_pages/sigchld.rst (added, +155)
- design_pages/smartcard_authentication_pkinit.rst (added, +180)
- design_pages/smartcard_authentication_step1.rst (added, +212)
- design_pages/smartcard_authentication_testing_with_ad.rst (added, +250)
- design_pages/smartcards.rst (added, +423)
- design_pages/smartcards_and_multiple_identities.rst (added, +189)
- design_pages/socket_activatable_responders.rst (added, +206)
- design_pages/sockets_for_domains.rst (added, +80)
- design_pages/sssctl.rst (added, +229)
- design_pages/sssd_two_point_oh.rst (added, +101)
- design_pages/subdomains.rst (added, +78)
- design_pages/sudo_caching_rules.rst (added, +163)
- design_pages/sudo_caching_rules_invalidate.rst (added, +122)
- design_pages/sudo_integration.rst (added, +122)
- design_pages/sudo_integration_new_approach.rst (added, +42)
- design_pages/sudo_ipa_schema.rst (added, +156)
- design_pages/sudo_responder_cache_behaviour.rst (added, +160)
- design_pages/sudo_support.rst (added, +149)
- design_pages/sudo_support_plugin_wire_protocol.rst (added, +309)
- design_pages/sudo_support_sample_sudo_rules_ldif.rst (added, +6)
- design_pages/sysdb_fully_qualified_names.rst (added, +136)
- design_pages/test_coverage.rst (added, +73)
- design_pages/use_ad_homedir.rst (added, +80)
- design_pages/usr_account_mgmt_consolidation.rst (added, +111)
- design_pages/wildcard_refresh.rst (added, +278)