Driving a thread pool from libev

Marc Lehmann schmorp at schmorp.de
Tue Feb 17 13:59:09 CET 2015


On Sun, Feb 15, 2015 at 03:55:14PM +0100, Rick van Rein <rick at openfortress.nl> wrote:
> I overlooked that pthread_yield() is a technique from single-core CPU days.

What's worse, it's only really defined for realtime threads.

> I fear I cannot continue to use one thread per TLS session; this works fine for
> the short-lived connections of HTTP and SMTP, but the thread stack requirements
> are likely to become a bottleneck for IMAP, XMPP or other long-lived, low-traffic
> connections.

Are you sure the thread stack requirements (probably 8k when your kernel
dynamically allocates it) are so big compared to the whole TLS state you
have to keep?

Of course, one (kernel) thread per connection is indeed the wrong
approach for almost anything, except when you optimise for simplicity or
quick coding, for example (valid goals :).

Two common models I mentioned in passing before are one loop per thread
and leader/follower.

For the former, you'd have one thread that simply calls accept, and then
queues the new connection to some thread from your thread pool, likely
waking it up with an ev_async watcher. You could have 1-2 threads per CPU
core (more than one per core because you might not distribute the load
evenly). If you have many connections and you distribute them evenly to
your workers, it is unlikely that you will get large imbalances, so this
is likely workable.
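
Here's an untested sketch of that model - the worker struct, the ring
buffer and the helper names are made up for illustration; only the libev
calls (ev_loop_new, ev_async_init/start/send, ev_run) are real API, and
the queue does no overflow checking for brevity:

   #include <ev.h>
   #include <pthread.h>

   #define QSIZE 64

   struct worker {
       struct ev_loop *loop;
       ev_async wakeup;
       pthread_mutex_t lock;
       int fds[QSIZE];            /* ring buffer of pending client fds */
       unsigned head, tail;
   };

   /* runs in the worker thread whenever the accept thread signals it */
   static void wakeup_cb (EV_P_ ev_async *w, int revents)
   {
       struct worker *wrk = w->data;

       pthread_mutex_lock (&wrk->lock);
       while (wrk->head != wrk->tail) {
           int fd = wrk->fds[wrk->head++ % QSIZE];
           /* set up TLS state and start an ev_io watcher for fd here */
           (void)fd;
       }
       pthread_mutex_unlock (&wrk->lock);
   }

   static void worker_init (struct worker *wrk)
   {
       wrk->loop = ev_loop_new (0);
       pthread_mutex_init (&wrk->lock, 0);
       wrk->head = wrk->tail = 0;
       ev_async_init (&wrk->wakeup, wakeup_cb);
       wrk->wakeup.data = wrk;
       ev_async_start (wrk->loop, &wrk->wakeup);
   }

   /* each pool thread runs its own loop */
   static void *worker_thread (void *arg)
   {
       struct worker *wrk = arg;

       ev_run (wrk->loop, 0);
       return 0;
   }

   /* called from the accept thread for each new connection */
   static void hand_off (struct worker *wrk, int fd)
   {
       pthread_mutex_lock (&wrk->lock);
       wrk->fds[wrk->tail++ % QSIZE] = fd;
       pthread_mutex_unlock (&wrk->lock);

       ev_async_send (wrk->loop, &wrk->wakeup);   /* thread-safe wakeup */
   }

The only thing that may be called on a loop from another thread is
ev_async_send, which is why the queue itself needs its own mutex.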

The alternative is harder to get right. Basically, when you start doing
processing in a watcher, you pass the loop to another thread. You can
either do this explicitly yourself, or you can use a mutex to lock the
loop, unlocking it in the watcher and reacquiring it before you leave it.

Then you'd lock your mutex and call ev_run in all your threads - when a
watcher is running, other threads will take over the loop.
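
A very rough, untested sketch of the shape of that (everything here is
illustrative; only the libev calls are real, and the watcher stops its
own fd while processing, which is what the next paragraph is about):

   #include <ev.h>
   #include <pthread.h>

   static pthread_mutex_t loop_lock = PTHREAD_MUTEX_INITIALIZER;

   static void io_cb (EV_P_ ev_io *w, int revents)
   {
       ev_io_stop (EV_A_ w);              /* keep the next leader off this fd */

       pthread_mutex_unlock (&loop_lock); /* hand the loop to another thread */
       /* ... read and process the data, do the TLS work, etc. ... */
       pthread_mutex_lock (&loop_lock);   /* reacquire before leaving */

       ev_io_start (EV_A_ w);
   }

   /* every thread in the pool runs this on the *same* loop */
   static void *loop_thread (void *arg)
   {
       struct ev_loop *loop = arg;

       pthread_mutex_lock (&loop_lock);
       ev_run (loop, 0);                  /* only the lock holder touches the loop */
       pthread_mutex_unlock (&loop_lock);

       return 0;
   }

Note that this sketch holds the lock while the loop blocks in the kernel;
libev provides ev_set_loop_release_cb for dropping and reacquiring the
lock around the blocking poll, which is omitted here.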

This is obviously trickier, as you want the threads that take over the
loop not to handle your I/O again. You could either just stop the I/O
watcher while you process the data, or you could try to optimise by
setting a flag and only stopping the watcher when you get more I/O
notifications.
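
The flag variant could look roughly like this (the struct conn wrapper
and the busy flag are made up for illustration, and all access is assumed
to happen under the loop lock from the sketch above):

   struct conn {
       ev_io io;       /* must be first so the cast below is valid */
       int busy;       /* set while the data is being processed elsewhere */
   };

   static void conn_cb (EV_P_ ev_io *w, int revents)
   {
       struct conn *c = (struct conn *)w;

       if (c->busy) {
           /* more data arrived while processing is still in flight:
            * stop the watcher now; whoever finishes processing checks
            * ev_is_active and restarts it */
           ev_io_stop (EV_A_ w);
           return;
       }

       c->busy = 1;
       /* read everything available, then hand off for processing; the
        * worker clears busy (under the loop lock) when it is done */
   }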

While this will incur overhead, it might not be that high (libev will only
call into the kernel when it receives further I/O notifications, so when
you read all the data before passing the loop and no more data is received,
as is common, there will be little or no syscall overhead). Similarly, a
thread switch is hardly cheap either (especially if it's not on the
same CPU, as threads were designed for single-core usage).

I'd start with the main-thread-accepts-and-distributes approach, because
it will likely work well and is much easier to get right (but almost
nothing is easy when threads are involved).

-- 
                The choice of a       Deliantra, the free code+content MORPG
      -----==-     _GNU_              http://www.deliantra.net
      ----==-- _       generation
      ---==---(_)__  __ ____  __      Marc Lehmann
      --==---/ / _ \/ // /\ \/ /      schmorp at schmorp.de
      -=====/_/_//_/\_,_/ /_/\_\


