#47373 Make TCP accept() process multi-threaded
Closed: wontfix 5 years ago Opened 9 years ago by nkinder.

Currently there is a single thread in 389 DS that 'plucks' waiting TCP
connections out of the receive queue, calls accept() to create each new connection, and passes it off to the various worker threads.

With the massive network pipes and resources available to servers today, this
should be a multi-threaded process, and the size of the receive queue should be
a tunable parameter as opposed to the current hard-coded value of 128. (The latter was fixed in https://fedorahosted.org/389/ticket/47377.)


Related to "48121 - Accept many new connection requests"

So we are obviously planning to change to nunc-stans.

Now in nunc-stans we have a job type of "thread", which seems like the solution here. The issue is that it is processed as:

{{{
libevent triggers that we have a new connection
Because the job is threaded, we push the event now to the work_q
A thread picks up from the work_q
the connection is accepted.
}}}

So there are some issues here. We are doing a lot of work we don't need to: in the time it takes to enqueue an event to the work_q, we could already have completed the accept().

So by having a smaller, tighter accept loop, we can have the following process.

{{{
libevent triggers that we have a new connection
the connection is accepted.
}}}

So we cut out two steps.

Next, because of the way this works, once the connection is accepted we only put it onto the work_q when that connection actually has an event to process. The connection read, processing, etc. take a long time, so it is worth the cost of the work_q there. When the connection goes to rearm, it comes back around through the event_q.
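A minimal sketch of that flow using raw libevent (not the actual 389 DS or nunc-stans code; conn_new() and worker_queue_push() are hypothetical helpers standing in for the real connection object and work queue):

{{{
/* Sketch only: accept stays short and tight inside the event loop,
 * and the expensive read/processing is deferred to a worker queue. */
#include <event2/event.h>
#include <sys/socket.h>

extern void *conn_new(int fd);              /* hypothetical connection object */
extern void worker_queue_push(void *conn);  /* hypothetical hand-off to a worker */

static void
read_ready_cb(evutil_socket_t fd, short what, void *arg)
{
    /* The read event is one-shot, so it is now disarmed. The worker does the
     * slow read/decode/process, then rearms by re-adding the event. */
    worker_queue_push(arg);
}

static void
accept_cb(evutil_socket_t listenfd, short what, void *arg)
{
    struct event_base *base = arg;

    /* Keep this path minimal: accept, register a read event, done. */
    int fd = accept(listenfd, NULL, NULL);
    if (fd < 0) {
        return;  /* EAGAIN etc.: the persistent listener event fires again later */
    }
    evutil_make_socket_nonblocking(fd);

    void *conn = conn_new(fd);
    /* In real code the event would be stored on the connection so the
     * worker can rearm it after processing. */
    struct event *ev = event_new(base, fd, EV_READ, read_ready_cb, conn);
    event_add(ev, NULL);
}
}}}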

I think a bigger bottleneck, even if you were to multi-thread this, is the connection table. Because of the current design and how it's locked, we just can't escape that it will cause serialisation of connection acceptance.

Let's focus on removing the connection table first, then we'll come back to multi-threading accept (I don't think it will be needed).

Another option, if we run the accept() in a separate thread, is "turbo mode" accept(): accept as many connections as you can, in a loop, with some thread yields thrown in to avoid starvation, until you get EAGAIN. This is similar to the way connection_threadmain works now with "read" turbo mode - read and process as many operations as you can, in a single thread, without putting the connection back in the poll array; instead, do the poll() inside the thread. The downside is that there is some complex logic to avoid hysteresis and having too many turbo threads, but it is great for things like replication and bulk updates where a single thread can read and process a firehose of operations. I could imagine something similar for accept - accept as many connections as you can, and just queue them up for processing.
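A rough sketch of what such a turbo accept loop could look like (assumptions: the listening socket is non-blocking, enqueue_new_conn() is a hypothetical hand-off to the connection-processing machinery, and the yield interval of 64 is arbitrary):

{{{
/* Sketch only: drain the listen backlog in one thread until EAGAIN,
 * yielding occasionally so a connection storm cannot starve other threads. */
#include <sys/socket.h>
#include <sched.h>
#include <errno.h>

extern void enqueue_new_conn(int fd);  /* hypothetical hand-off for processing */

static void
turbo_accept(int listenfd)
{
    int accepted = 0;

    for (;;) {
        int fd = accept(listenfd, NULL, NULL);
        if (fd < 0) {
            if (errno == EINTR) {
                continue;
            }
            /* EAGAIN/EWOULDBLOCK: backlog drained, return to the poll/event loop. */
            break;
        }
        enqueue_new_conn(fd);

        /* Arbitrary yield interval to avoid hogging a CPU during a storm. */
        if (++accepted % 64 == 0) {
            sched_yield();
        }
    }
}
}}}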

But I agree, in this case, the connection table is a huge bottleneck.

Metadata Update from @nhosoi:
- Issue assigned to rmeggins
- Issue set to the milestone: 1.3.6 backlog

5 years ago

Metadata Update from @firstyear:
- Issue assigned to firstyear (was: rmeggins)

5 years ago

I don't think this ticket is valid anymore, because of the changes in nunc-stans and because there are other, easier targets like the connection table. I think we can resolve this and find other areas to improve.

Metadata Update from @firstyear:
- Custom field reviewstatus adjusted to review (was: Needs Review)
- Custom field version adjusted to 1.3.6
- Issue close_status updated to: None

5 years ago

Metadata Update from @firstyear:
- Issue close_status updated to: invalid
- Issue status updated to: Closed (was: Open)

5 years ago

So how would you implement this in 389 with nunc-stans?
What I mean is - you still need some sort of design to handle connection storm cases, no matter what the event framework implementation is.

Right now we don't have multithreaded accept. That's reality. Today when we turn on nunc-stans, we get a huge performance boost, but we still don't have multithreaded accept. Even if we did enable multithreaded accept, we are going to be bottlenecked on the connection table and so many other parts that the work will show no benefit. It's a great idea, but there are so many other lower barriers that we should address first. If we do this, it's years away, because we have years' worth of cleanup to do first before we can architecturally be at a point where multithreaded accept might help us.

So I want to close this until then, because I don't want us to look at something like this and say "it's the silver bullet" when it's not. What we need is to roll up our sleeves, clean out some broken old crap, and get it out. Nunc-stans is mind-blowingly fast already, which means any limitation when we drop it into DS is on the DS side.

In the future, when we go to do this properly, it means multiple threads of event workers, because otherwise we will have bottlenecks in nunc-stans. It's not as simple as marking the job PERSIST|THREADED.

I don't know what about this issue makes you think it is some sort of "silver bullet". It is just like any other issue in the issue tracker. It is here so that we can be reminded that real customers have run into the connection storm problem, and that we should eventually solve this problem, in the order and with the priority that makes the most sense. Are you saying that we should delete this ticket and forget about this issue for now, and remember to add it back at some point in the future? I don't really see the logic in that.

So then we need an issue that says "dealing with connection storms", rather than "here is a highly specific way we might deal with that issue". Then we can link other issues to it, like multi-threaded accept, conn table rework, nunc-stans, etc.

This issue is not about connection storms; it says in the title, make TCP accept() multi-threaded. So looking at this issue directly as stated, I can see it was opened 3 years ago, we have since replaced connection management with nunc-stans, and that brings its own methods of operation separate from this.

I think that having a ticket like this does distract us from the real issues, because we may just finger-point and say "hey, why aren't we doing that, it seems awesome and cool and fast and complex, and complex is good, right?".

Well, that's the thing. The way that nunc-stans and libevent work actually negates the need for this ticket. Libevent can only maintain a single thread of events and event management (unless you are on Windows with IOCP). As a result, every single IO event that occurs is going to go through that one loop.

So then, if we want to make accept multithreaded, the flow becomes: an event occurs, we wake the event thread, we queue the event to a work queue, a worker dequeues it and does the accept, and then it rearms ready for the next accept event.

Contrast that with how nunc-stans works now, where an event occurs, the event thread wakes, we accept, and we rearm ready for the next accept.

The multithreaded model has a few issues. The overall time to accept on a socket may be higher due to the queue and dequeue through the work queue. There is no guarantee that a worker thread is ready (maybe we are getting smashed by searches), but there is a guarantee the event loop thread is ready (it woke and queued us). We also add latency to accept relative to the event occurring. Plus, once we do the accept, we have to take the new fd and give it BACK to the event thread so it can be armed. So we have ANOTHER queue and dequeue event.
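To make those extra hops concrete, here is a rough sketch of that path (hypothetical helpers only, not the nunc-stans API; work_queue_push() and event_queue_arm_read() stand in for the real queueing and rearm machinery):

{{{
/* Sketch only: the multithreaded-accept path and where the extra hops land. */
#include <stddef.h>
#include <sys/socket.h>

extern void work_queue_push(void (*fn)(void *), void *arg);  /* hypothetical */
extern void event_queue_arm_read(int fd);  /* hypothetical: rearm from the event thread */

static int listen_fd;  /* assumed non-blocking listening socket */

static void
accept_in_worker(void *arg)
{
    /* Hop 2: a worker finally runs, some time after the event fired. */
    int fd = accept(listen_fd, NULL, NULL);
    if (fd >= 0) {
        /* Hop 3: the new fd has to go BACK to the event thread to be armed. */
        event_queue_arm_read(fd);
    }
}

static void
listener_ready(void)
{
    /* Hop 1: the event thread wakes but only queues the job;
     * no worker may be free if we are busy with searches. */
    work_queue_push(accept_in_worker, NULL);
}
}}}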

So a better idea is to make accept as short and tight as possible inside of the event loop, so that even while being blitzed with searches and other events, we are still able to accept and queue new connections, and we can even continue to queue new IO events from other threads to workers.

This idea does not work with the design of nunc-stans, and if it's to be made to work with nunc-stans, that means we need to re-write and rethink nunc-stans completely.

So there are two viewpoints. Either the issue is valid and we rewrite nunc-stans to work with this, or it's not valid because of what I have said here.

So then we need an issue that says "dealing with connection storms", rather than "here is a highly specific way we might deal with that issue".

https://pagure.io/389-ds-base/issue/49128 - which also includes a suggestion for a highly specific way to deal with the issue.

389-ds-base is moving from Pagure to GitHub. This means that new issues and pull requests
will be accepted only in 389-ds-base's GitHub repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/710

If you want to receive further updates on the issue, please navigate to the github issue
and click on subscribe button.

Thank you for understanding. We apologize for any inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: invalid)

2 years ago
