#1197 Raise limits for max num of files sssd_nss/sssd_pam can use
Closed: Fixed. Opened 7 years ago by simo.

We currently keep the default max files limit, which is set at around 1000 files for normal processes.
Given that we let clients keep connections open for a long time, this limit is too low on very large servers, where the number of connected client processes can easily exceed 1000.

We should use setrlimit and raise this limit to 8k files for now.
We can tune it back down once we have the shared memory cache.
At that point keeping the socket open should no longer be needed, and we should change the clients (or even just the server) to close the socket once a request is done.

Because we do not control clients, we should also keep track of file descriptors and periodically prune inactive ones when we are close to the max files limit, in order to avoid starving the system (once we run out of FDs we are incapable of serving new processes).

This needs to be fixed in 1.8. Very busy systems can get into a lot of trouble here.

For 1.8 we will hard-code a request for a maximum of 8k file descriptors (which may be clamped down to the hard limit specified in limits.conf). On Fedora, with an unchanged default limits.conf, this results in 4k available file descriptors.

For master/1.9, we will implement a new config file option to set the file descriptor limit to any value. For this we will request the CAP_SYS_RESOURCE Linux capability, to be able to raise the limit above the hard limit.
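Assuming the option ends up as a per-responder setting named something like `fd_limit` (the name and values here are illustrative; consult the sssd.conf(5) man page for the release you actually run), the configuration might look like:

```ini
# sssd.conf fragment -- option name and values are illustrative
[nss]
fd_limit = 16384

[pam]
fd_limit = 16384
```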

milestone: NEEDS_TRIAGE => SSSD 1.8.0 (LTM)
owner: somebody => sgallagh
priority: major => critical
status: new => assigned

Created #1198 and #1199 to track additional requirements as separate tickets.

Fixed by:
- master (configurable):
  - 1a63155
  - 457927f
  - 237eb8b
- sssd-1-8 (hard-coded):
  - fa3f237

patch: 0 => 1
resolution: => fixed
status: assigned => closed

Metadata Update from @simo:
- Issue assigned to sgallagh
- Issue set to the milestone: SSSD 1.8.0 (LTM)
