multi-threading

Marc Lehmann schmorp at schmorp.de
Mon Jan 28 11:51:08 CET 2008


On Sun, Jan 27, 2008 at 09:25:14PM -0800, Eric Brown <yogieric.list at gmail.com> wrote:
> The docs suggest one loop per thread. That makes sense, but there are a few
> options for how to distribute load between N loops/threads. I'm not sure
> which is best.

It's hard to say, as this differs depending on what exactly you do, how
long it takes, the architecture/OS you are running on and which backend
is in use.

There are no easy obvious answers :)

> 1. When I get a new incoming connection, I could keep track of which loop
> has the fewest connections and assign the connection appropriately. I think
> I'd have to use a mutex to lock all access to a loop's watchers.

Or, if you have relatively long-lived connections, you might use a pipe per
thread and wake that other thread up that way.

Locking the loop itself against other threads is pretty futile; you may
try, but it will likely result in a mess.

In fact, I could envisage that this functionality (signaling a loop
asynchronously) could become part of libev proper if there is demand, as
libev already has to handle asynchronous signals in at least one loop
and therefore needs the mechanism anyway (and that way it could e.g. be
optimised by using an eventfd or other OS-specific means).
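
For illustration, a minimal sketch of that pipe-per-thread wakeup,
assuming one loop per worker thread (the worker_t type and the helper
names are made up for this example, they are not part of libev):

   #include <ev.h>
   #include <unistd.h>

   typedef struct {
     struct ev_loop *loop;
     ev_io wakeup_watcher;
     int wakeup_fd[2];               /* [0] read end, [1] write end */
   } worker_t;

   /* runs in the worker thread whenever another thread wrote to the pipe */
   static void
   wakeup_cb (struct ev_loop *loop, ev_io *w, int revents)
   {
     char buf[64];
     read (w->fd, buf, sizeof buf);  /* just drain the dummy bytes */
     /* ... pick up whatever work the other thread left for us ... */
   }

   static void
   worker_init (worker_t *self)
   {
     pipe (self->wakeup_fd);         /* error checking omitted */
     self->loop = ev_loop_new (EVFLAG_AUTO);
     ev_io_init (&self->wakeup_watcher, wakeup_cb, self->wakeup_fd[0], EV_READ);
     ev_io_start (self->loop, &self->wakeup_watcher);
     /* the worker thread itself then runs ev_loop (self->loop, 0) */
   }

   /* called from any other thread to wake this worker up */
   static void
   worker_wake (worker_t *w)
   {
     write (w->wakeup_fd[1], "", 1);
   }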

> 2. I could use SO_REUSEPORT to have my accept logic operate on every loop.

Not sure what you mean by that, but I am sure SO_REUSEPORT will be of no
help whatsoever there.

Do you mean waiting for the same fd in multiple threads and accepting
locally? That's possible to do, but might result in a thundering-herd
wake-up problem, depending on the backend.
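
For what it's worth, a sketch of that variant: every worker loop watches
the same (non-blocking) listening fd and accepts locally.
handle_new_connection is a placeholder for whatever per-connection setup
you do, it is not a libev function:

   #include <ev.h>
   #include <sys/socket.h>

   void handle_new_connection (struct ev_loop *loop, int fd);  /* placeholder */

   /* runs in whichever worker thread(s) the kernel decides to wake up */
   static void
   accept_cb (struct ev_loop *loop, ev_io *w, int revents)
   {
     for (;;)
       {
         int fd = accept (w->fd, 0, 0);

         if (fd < 0)
           break;  /* usually EAGAIN: another thread accepted it first */

         /* the new connection stays on *this* thread's loop */
         handle_new_connection (loop, fd);
       }
   }

   /* each worker thread, on its own loop, with the shared listen_fd: */
   /*   ev_io_init (&accept_watcher, accept_cb, listen_fd, EV_READ);  */
   /*   ev_io_start (loop, &accept_watcher);                          */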

> There'd be no mutexes, but I'd have little control over how connections are
> balanced between loops/threads.

You could pass around a token so only one thread accepts connections,
preferably the one with the least work, with some infrequent election.
Not easy to do, but doable.
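
Very roughly, and leaving the election policy out entirely (the
acceptor_t type and everything else here is made up): each worker has an
accept watcher on the shared listening socket, but only the current
token holder keeps it started; handing the token over means stopping
your own watcher and waking the next worker (e.g. with the wakeup pipe
above), which then starts its watcher from its wakeup callback:

   #include <ev.h>

   typedef struct {
     struct ev_loop *loop;
     ev_io accept_watcher;  /* ev_io_init'ed on the shared listening fd */
     int has_token;         /* protect with a mutex or atomics in real code */
   } acceptor_t;

   /* called by the current token holder, from its own thread */
   static void
   hand_over_token (acceptor_t *self, acceptor_t *next)
   {
     ev_io_stop (self->loop, &self->accept_watcher);
     self->has_token = 0;

     next->has_token = 1;   /* then wake next up; its wakeup callback sees  */
                            /* has_token set and ev_io_start()s its watcher */
   }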

> 3. I could establish a pipe (or socket) between the server loop and all
> other loops. When I get an incoming connection, I send a message (say the
> peer IP address & socket number) over the pipe to my other loop/thread.
> libev will awake and process the incoming connection as if it came directly
> to it.

It's usually better to use the pipe only as a wakeup mechanism: pass
just a dummy byte and use regular locking for the actual data. One might
even be able to not pass anything at all, depending on how the other
side(s) work.

> 4. Simulate a backlog for each loop - just a list of new connections to
> process. On accept, choose a loop, lock a mutex and add a value to the list
> of connections to process. On a loop's callback, lock same mutex and check
> for new connections to process. Could do this in a timer too instead of at
> every callback.

Yes, one could.
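
For concreteness, a sketch of such a backlog, combined with the
dummy-byte wakeup above: the accepting thread locks, appends the new fd
and writes one byte; the worker drains both in its wakeup (or timer)
callback. The backlog_t type and handle_new_connection are placeholders
of mine, not libev API:

   #include <ev.h>
   #include <pthread.h>
   #include <string.h>
   #include <unistd.h>

   #define BACKLOG_MAX 256

   typedef struct {
     pthread_mutex_t lock;
     int fds[BACKLOG_MAX];  /* freshly accepted connection fds */
     int len;
     int wakeup_write_fd;   /* write end of this worker's wakeup pipe */
   } backlog_t;

   void handle_new_connection (struct ev_loop *loop, int fd);  /* placeholder */

   /* accepting thread: hand a new connection to the chosen worker */
   static void
   backlog_push (backlog_t *b, int fd)
   {
     pthread_mutex_lock (&b->lock);
     if (b->len < BACKLOG_MAX)
       b->fds[b->len++] = fd;            /* a real server needs an overflow path */
     pthread_mutex_unlock (&b->lock);

     write (b->wakeup_write_fd, "", 1);  /* dummy byte, only to wake the loop */
   }

   /* worker thread: called from its wakeup (or timer) callback */
   static void
   backlog_drain (struct ev_loop *loop, backlog_t *b)
   {
     int fds[BACKLOG_MAX], n, i;

     pthread_mutex_lock (&b->lock);
     n = b->len;
     memcpy (fds, b->fds, n * sizeof (int));
     b->len = 0;
     pthread_mutex_unlock (&b->lock);

     for (i = 0; i < n; i++)
       handle_new_connection (loop, fds[i]);
   }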

> Any suggestions or recommendations?

It depends very much on your workload - if you have real CPU to burn
on requests and not just ultra-quick transaction processing, you might
also go for a single loop for the I/O and a thread pool for the
processing. This makes it easy to make intelligent decisions about
workload (in fact, you usually don't have to at all).
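
For example (job_t and process_request are placeholders of mine): the
I/O thread runs the libev loop and queues parsed requests, a handful of
pool threads block on a condition variable and do the heavy lifting, and
finished results would go back to the loop via a wakeup pipe as above:

   #include <pthread.h>

   typedef struct job {
     struct job *next;
     /* parsed request data, client fd, ... */
   } job_t;

   static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
   static pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;
   static job_t *q_head;

   void process_request (job_t *job);  /* placeholder for the CPU-heavy part */

   /* called from the I/O loop's read callbacks */
   static void
   queue_job (job_t *job)
   {
     pthread_mutex_lock (&q_lock);
     job->next = q_head;               /* LIFO for brevity, use a FIFO in practice */
     q_head = job;
     pthread_cond_signal (&q_cond);
     pthread_mutex_unlock (&q_lock);
   }

   /* body of each pool thread */
   static void *
   pool_thread (void *arg)
   {
     for (;;)
       {
         job_t *job;

         pthread_mutex_lock (&q_lock);
         while (!q_head)
           pthread_cond_wait (&q_cond, &q_lock);
         job = q_head;
         q_head = job->next;
         pthread_mutex_unlock (&q_lock);

         process_request (job);
       }

     return 0;
   }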

Again, there is no best way, and in the real world, you usually get to
choose between one thread doing I/O (or distributing I/O), all threads
doing it, a central dispatcher, or some leader-follower pattern.

It is hard to be efficient while staying portable, too.

-- 
                The choice of a       Deliantra, the free code+content MORPG
      -----==-     _GNU_              http://www.deliantra.net
      ----==-- _       generation
      ---==---(_)__  __ ____  __      Marc Lehmann
      --==---/ / _ \/ // /\ \/ /      pcg at goof.com
      -=====/_/_//_/\_,_/ /_/\_\


