#2775 sssd-proxy with vas4 library groups issue
Closed: Invalid. Opened 8 years ago by mogthesprog.

Hi All,

I've scoured the internet and the man pages and can't seem to find a solution to my problem. It may be me missing something, but it might also be that the functionality isn't there or isn't covered in the documentation. Either way, I figured this is the place to come...

So I have a container running a web app that needs PAM authentication, so I've set up an sssd-proxy on the host with the following settings.

<begin>
[sssd]
services = nss, pam
config_file_version = 2
domains = proxy
debug_level = 9

[nss]

[pam]
pam_verbosity = 3

[domain/proxy]
debug_level = 9

id_provider = proxy
auth_provider = proxy

# The proxy provider will look into /etc/passwd for user info
proxy_lib_name = vas4

# The proxy provider will authenticate against /etc/pam.d/sss_proxy
proxy_pam_target = sss_proxy
<end>

Now on the container I add sss to my nsswitch.conf file, and I can su to VAS users and even authenticate against the library...

[root@9f293d1dc79a jupyterhub]# groups morganj
morganj : Unix_Users
[root@9f293d1dc79a jupyterhub]# su morganj
bash-4.2$ groups
Unix_Users
bash-4.2$ newgrp UNIX_AS_AN
bash-4.2$ groups
UNIX_AS_AN Unix_Users

So this illustrates problem number one: the sss service within the container is aware of my groups, but it's not attaching them all to my new shell instance, and I can't figure out why. I can jump into the groups with the newgrp command, though. I don't think this is an issue with containers, but I am fairly new to them (only a few weeks).

To clarify my other settings, here is my proxy target file, /etc/pam.d/sss_proxy:

<sss_proxy>
auth required pam_env.so
auth sufficient pam_vas3.so create_homedir get_nonvas_pass
auth requisite pam_vas3.so echo_return
auth sufficient pam_unix.so nullok try_first_pass use_first_pass
auth requisite pam_succeed_if.so uid >= 1000 quiet_success
auth required pam_deny.so

account sufficient pam_vas3.so
account requisite pam_vas3.so echo_return
account required pam_unix.so
account sufficient pam_localuser.so
account sufficient pam_succeed_if.so uid < 1000 quiet
account required pam_permit.so

password requisite pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=
password sufficient pam_vas3.so
password requisite pam_vas3.so echo_return
password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok
password required pam_deny.so

session optional pam_keyinit.so revoke
session required pam_limits.so
-session optional pam_systemd.so
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_vas3.so create_homedir
session requisite pam_vas3.so echo_return
session required pam_unix.so
</sss_proxy>

The content of my nsswitch.conf on the container:

<nsswitch.conf>
bash-4.2$ cat /etc/nsswitch.conf | grep sss
passwd: files sss
shadow: files sss
group: files sss
services: files sss
netgroup: files sss
</nsswitch.conf>

And to allow the sssd-client service to communicate with the host, I have bind mounted the /var/lib/sss/pipes directory into the container. I realise that this is an unorthodox setup, but I'm hoping someone here can help me.

Any help is appreciated. Cheers.


Your config files are a bit hard to read; do you have a chance to reformat them or just attach them to the ticket as plain files?

About the issue, can you check if you have an initgroups line in nsswitch.conf? If yes, please remove it.
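For anyone hitting the same thing, here is a quick sketch of checking for and stripping that line, done against a scratch copy rather than the real /etc/nsswitch.conf (the file content and paths below are illustrative):

```shell
# Work on a scratch copy of nsswitch.conf (illustrative content).
printf 'passwd: files sss\ninitgroups: files\ngroup: files sss\n' > /tmp/nsswitch.test

# Show whether an initgroups line is present.
grep -n '^initgroups:' /tmp/nsswitch.test

# Drop it; without an explicit initgroups line, initgroups lookups
# fall back to the passwd sources, which include sss.
sed -i '/^initgroups:/d' /tmp/nsswitch.test
```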

Fields changed

cc: => sbose

Hi, thanks for the quick response. I fixed that part yesterday, so the groups work now. I'm sorry I hadn't worked that out already; this is all quite new to me.

However, I still have the problem that authentication and "groups" queries take forever, sometimes nearly a minute. The sssd-proxy log prints this line

(Thu Sep 3 12:01:53 2015) [sssd[be[proxy]]] [handle_getgr_result] (0x0080): Buffer too small

Whenever a "groups", "id" or "su" command is issued. Any idea why this may be?

Thanks for your help.

Cheers,

Morgan

Further info: if I issue a single "id user" command, I get the following log...

>(Thu Sep  3 13:01:17 2015) [sssd[be[proxy]]] [get_initgr_groups_process] (0x0100): User [morganj] appears to be member of 18groups
>(Thu Sep  3 13:01:17 2015) [sssd[be[proxy]]] [handle_getgr_result] (0x0080): Buffer too small
>(Thu Sep  3 13:01:17 2015) [sssd[be[proxy]]] [handle_getgr_result] (0x0080): Buffer too small
>(Thu Sep  3 13:01:22 2015) [sssd[be[proxy]]] [handle_getgr_result] (0x0080): Buffer too small
>(Thu Sep  3 13:01:29 2015) [sssd[be[proxy]]] [acctinfo_callback] (0x0100): Request processed. Returned 0,0,Success
>(Thu Sep  3 13:01:29 2015) [sssd[be[proxy]]] [get_initgr_groups_process] (0x0100): User [morganj] appears to be member of 18groups
>(Thu Sep  3 13:01:29 2015) [sssd[be[proxy]]] [handle_getgr_result] (0x0080): Buffer too small
>(Thu Sep  3 13:01:29 2015) [sssd[be[proxy]]] [handle_getgr_result] (0x0080): Buffer too small
>(Thu Sep  3 13:01:34 2015) [sssd[be[proxy]]] [handle_getgr_result] (0x0080): Buffer too small
>(Thu Sep  3 13:01:42 2015) [sssd[be[proxy]]] [acctinfo_callback] (0x0100): Request processed. Returned 0,0,Success

It's clearly making the query twice somewhere. I'm pretty sure this is an error in my config; I'll look into it. It doesn't change the fact that a single query is still taking a long time, ~20 seconds or so. I'm guessing this shouldn't be the case. I'll attach my config files in a second.


PAM proxy service in /etc/pam.d/sss_proxy on the host
sss_proxy.host

the system-auth from the container, password-auth is also identical
system-auth.container

Are you sure that you see the duplicated request during the 'id' command and not during log-in, e.g. su? If it is during log-in, setting pam_id_timeout to e.g. 20 might help.
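For reference, a minimal sketch of what that suggestion would look like in sssd.conf; the value 20 is just the example from the comment above, and the option belongs in the [pam] section:

```ini
[pam]
pam_verbosity = 3
# Cache identity results used during PAM requests for 20 seconds,
# so a single log-in does not trigger repeated backend lookups.
pam_id_timeout = 20
```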

According to the logs the user is a member of 18 groups and some of them are quite big, i.e. have many members. The 'Buffer too small' message is generated when the default buffer of 4k is too small and has to be increased. So both SSSD and VAS have to process a long member list which might explain the delay you see.

But when you call 'id' or 'groups' a second time it should be faster, because SSSD should answer them from its own cache and not talk to VAS again. For log-in this is different, because SSSD will always check group memberships to have up-to-date data at log-in time.

Please allow me a general question: why are you using this mixed setup with SSSD and VAS? It might be a lot easier (and maybe faster) to just use the SSSD AD provider.

Yeah of course.

So I agree it would probably be much faster, but I don't work in the Linux team of our company, and when I suggested the use of SSSD they said I should use VAS instead, as it's already there. They seemed reluctant to change that too.

I tried to set up SSSD in parallel, but I ran into trouble when it came to joining AD with Kerberos. I think it's because VAS has already joined the machine to AD, but I don't know enough about it to be sure.

The reason I'm using this mixed setup is that I have a container which I need to authenticate against LDAP, and rather than run two services within the container, which isn't ideal, I decided to authenticate with the host's AD provider, VAS. This has the advantage that multiple containers can authenticate against the host. I think ultimately I'll need to rethink this approach.

A bit of background: I work in the R&D part of our company, so I'm just experimenting with containers at the moment. But it seems there is no built-in mechanism for just using the host's authentication; I've been googling it for over a week now. If you know any more, I'd gladly take your advice. The 'id' and 'groups' commands are much quicker the second time round, but the login is quite slow. I take it from your answer that this is something I'm going to have to live with? (It's fine, just so long as I haven't done anything to cause this behaviour.)

Appreciate all your help on this.

cheers

You can use SSSD in a host from a container if you:
a) Install SSSD client rpm in to the container
b) Mount sssd sockets into the container

Then you would be able to use SSSD from within the container for authentication and identity. The access control is tricky: there would be no way to differentiate the container from the host. If this is OK, then fine. If not, there is a solution to put an instance of SSSD in a container and then mount its sockets into another container instead of using the host's instance of SSSD.
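Those two steps might look roughly like this with Docker; the package name, image name, and flags are illustrative (on RHEL/CentOS the client bits are in sssd-client, on Debian-based images in libnss-sss/libpam-sss), and the `...` stands for whatever other options your containers need:

```shell
# a) In the container image build: install the SSSD client libraries.
yum install -y sssd-client

# b) At run time: bind-mount the host's SSSD sockets into the container
#    so nss_sss/pam_sss inside the container talk to the host's sssd.
docker run -v /var/lib/sss/pipes:/var/lib/sss/pipes ... myimage
```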

It seems you have resolved some issues since filing this ticket while new ones appeared, and I'm a bit lost figuring out what the current hurdle is. Could you describe the minimal use-case that is currently failing, and also whether that setup fails outside of containers?

Replying to [comment:6 mogthesprog]:

I tried to setup SSSD in parallel but i ran into trouble when it came to joining AD with kerberos. I think it's because VAS has already joined the machine with AD. but i don't know enough about it to be sure.

Yes, a single client can only join once. But the most important part here is the host keys in the keytab, which VAS might store in /etc/opt/quest/vas/host.keytab. You can use the option krb5_keytab in sssd.conf to tell SSSD not to look for the keys in /etc/krb5.keytab but in a different one.
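A minimal sketch of that option in a domain section; the domain name is a placeholder, and the keytab path is the VAS default mentioned above, so verify it on your host:

```ini
[domain/example.co.uk]
id_provider = ad
# Reuse the host keys that VAS created instead of /etc/krb5.keytab.
krb5_keytab = /etc/opt/quest/vas/host.keytab
```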

Hi sbose,

Thanks for the help, really appreciate it. I'm going to have a stab at that this afternoon and see how it goes. So am I right in thinking that, since I've pointed SSSD at the keys, the example sssd config at this link is sufficient?

https://fedorahosted.org/sssd/wiki/Configuring_sssd_with_ad_server

That is, I don't need to configure Kerberos or Samba; I can just point sssd at the host.keytab file and it will work? For instance, do I need to provide it with information like...

ldap_schema = rfc2307bis
ldap_search_base = dc=example,dc=co,dc=uk
ldap_user_name = sAMAccountName

thank you again for all of your help, I appreciate that this is not a forum but a bug report service, so thanks for your time.

No, please use the first one with 'id_provider = ad'; this will do all the ldap_* settings with AD-specific values for you.
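In other words, a config along these lines should be enough; the domain name below is a placeholder for your actual AD domain:

```ini
[sssd]
services = nss, pam
config_file_version = 2
domains = example.co.uk

[domain/example.co.uk]
id_provider = ad
# No ldap_schema / ldap_search_base / ldap_user_name needed:
# the ad provider fills in AD-appropriate defaults for the ldap_* options.
```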

Hi sbose,

The sssd.conf file from the first link isn't working. The second one is sort of working, but not returning groups. I don't want to bother you guys much more with this; I'm sure you've got real tickets to deal with. I really appreciate all your help, and I'm slowly getting the hang of sssd.

Again, thanks for your help, you can close this ticket if you like :)

Cheers,

Morgan

Just to update you, I have a working configuration now. I'm guessing that our AD is of the Windows Server 2008 vintage.

Thanks again sbose, really appreciate your help. I'll probably make a blog post about what I've learned, just in case someone new runs into these issues again.

Cheers,

Morgan

Glad to hear that it is working for you now. Nevertheless, I'm a bit surprised that the id_provider=ad version didn't work for you; it is the recommended provider for AD.

If you have further questions, you might want to subscribe to the sssd-users mailing list at https://lists.fedorahosted.org/mailman/listinfo/sssd-users .

Fields changed

resolution: => invalid
status: new => closed

Metadata Update from @mogthesprog:
- Issue set to the milestone: NEEDS_TRIAGE

7 years ago

SSSD is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in SSSD's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/SSSD/sssd/issues/3816

If you want to receive further updates on the issue, please navigate to the github issue
and click on subscribe button.

Thank you for understanding. We apologize for any inconvenience.
