Eric Brown yogieric.list at
Mon Jan 28 21:07:34 CET 2008

Hi Marc,

On Jan 28, 2008 2:51 AM, Marc Lehmann <schmorp at> wrote:

> On Sun, Jan 27, 2008 at 09:25:14PM -0800, Eric Brown <
> yogieric.list at> wrote:

> > The docs suggest one loop per thread. That makes sense, but there are a
> > few options for how to distribute load between N loops/threads. I'm not
> > sure which is best.
>
> In fact, I could envisage that this functionality (signaling a loop
> asynchronously) could become part of libev proper if there is demand, as
> libev already has to handle asynchronous signals at least in one loop
> and therefore needs the mechanism already (and that way it could e.g. be
> optimised by using an eventfd or other os-specific means).

That occurred to me as well.

> > 2. I could use SO_REUSEPORT to have my accept logic operate on every loop.
>
> Not sure what you mean by that, but I am sure SO_REUSEPORT will be of no
> help whatsoever there.
>
> Do you mean waiting for the same fd in multiple threads and accepting
> locally? That's possible to do, but might result in a wake-up herding
> problem, depending on the backend.
>
> > There'd be no mutexes, but I'd have little control over how connections
> > are balanced between loops/threads.
>
> You could pass around a token so only one thread accepts connections,
> preferably the one with least work, with some infrequent election. Not
> easy to do, but doable.

I don't like the idea of using SO_REUSEPORT as it just doesn't seem that
elegant. But it also seems like it might be relatively simple. And the
tokens are a good idea too. I don't mind if it isn't perfectly balanced, but
I'd like the different loops to have equal load within a few hundred
connections each.

> > 3. I could establish a pipe (or socket) between the server loop and all
> > other loops. When I get an incoming connection, I send a message (say
> > the peer IP address & socket number) over the pipe to my other
> > loop/thread. libev will awake and process the incoming connection as if
> > it came directly to it.
>
> It's usually better only to use the pipe as wakeup mechanism, pass only
> a dummy byte and use regular locking.

Interesting. Though I'd then have to lock twice - once to add the connection
to some data structure in the first thread, and once to retrieve the
connection from the data structure. But it certainly is easier than dealing
with making sure I read N bytes, etc.

> One might even be able to not pass anything, depending on how the other
> side(s) work.

I'm not sure I follow.

My targets have shifted from BSD to Linux now, but I was trying to stay away
from a pipe in case I ever wanted to move back to kqueue. But, alas, as you
say, it is hard to stay portable.

> > 4. Simulate a backlog for each loop - just a list of new connections to
> > process. On accept, choose a loop, lock a mutex and add a value to the
> > list of connections to process. On a loop's callback, lock same mutex
> > and check for new connections to process. Could do this in a timer too
> > instead of at every callback.
>
> Yes, one could.
>
> > Any suggestions or recommendations?
>
> It depends very much on your workload - if you have got real cpu to
> burn on requests and not just ultra-quick transaction processing, you
> might also go for a single loop for the I/O and a thread pool for the
> processing. This makes it easy to make intelligent decisions about
> workload (in fact, you don't usually have to at all).

Lots of connections with keep-alive, firing requests every so often. Each
request requires some amount of CPU. Single-threaded without processing, I
handle 35,000 request-responses/second. With processing, it drops to about
5,000 request-responses/second.

Maybe a thread-pool is the way to go. I'd just thought that since I (1) have
keep-alive and (2) the thread-pool has to re-engage the I/O thread to send
the response, it would be more logical to have one loop per thread than to
use a thread-pool.

I may just have to try it different ways. (I'm just not sure I have time
before my wife delivers and I have to stop playing around with this stuff.)


More information about the libev mailing list