Using libev with pthreads and one listening socket

Arlen Cuss celtic at sairyx.org
Mon Mar 14 08:14:00 CET 2011


I've partly resolved my own complaint: instead of round-robining the
watcher, I've started watching the listener fd on *all* loops at once.
It works on Linux at least, though I've yet to try it on my other
target (BSD). I'm aiming for portability, so if it doesn't cut it
there, I'll have to look elsewhere.

I consider this only a partial solution, however, since it appears --
at least on Linux -- that when a connection arrives, about half the
time it triggers one random watcher (ideal), but the other half of the
time it triggers all of them. Whichever thread gets there first wins
the fd (which I don't mind), but it would be nicer to avoid the wasted
work of the other three threads (on a 4-core machine) all calling
accept() for nothing.
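
For reference, this is roughly the shape of the accept callback that
every loop shares -- a sketch only, with handle_client() standing in
for the real per-client setup:

    #include <ev.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>

    /* Stub: the real code would make the fd non-blocking and start its
     * client watchers on this loop; here we just close it. */
    static void handle_client(EV_P_ int fd) {
        close(fd);
    }

    /* Accept callback registered on every loop's copy of the listener
     * watcher.  Whichever thread wakes first wins the fd; the others
     * see EAGAIN and return immediately. */
    static void listener_cb(EV_P_ ev_io *w, int revents) {
        for (;;) {
            int fd = accept(w->fd, NULL, NULL);
            if (fd >= 0) {
                handle_client(EV_A_ fd);
                continue;
            }
            if (errno == EINTR)
                continue;
            /* EAGAIN/EWOULDBLOCK: another thread already took it, or
             * there was nothing to accept -- not an error here. */
            return;
        }
    }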

Any suggestions would be warmly appreciated.

Cheers,
Arlen

On Mon, 2011-03-14 at 11:47 +1100, Arlen Cuss wrote:
> Hi all,
> 
> I apologise if this question's been asked before -- I've had a Google
> around, and haven't found a satisfactory answer (yet).
> 
> I'm writing an HTTP proxy using libev (code here: [1]), and I'm now
> trying to adapt it to run with an arbitrary number of threads,
> ideally so that it can use all the available cores/CPUs on a system.
> 
> The problem I'm having is choosing a method to actually get clients
> onto the different threads. I'm using one event loop per thread, and
> so far the best I've been able to come up with is to "round-robin"
> the ev_io watcher on the listening socket itself [2]:
> 
>     if (!terminating && threads > 1) {
>         ++current_thread;
>         if (current_thread == threads)
>             current_thread = 0;
> 
>         /* Stop w (listener_watcher) on this loop, and signal the next
>          * loop to pick it up. */
>         ev_io_stop(EV_A_ w);
>         ev_async_send(thread_loops[current_thread],
>                       &thread_asyncs[current_thread]);
>     }
> 
> When the watcher for the listening socket is triggered, we accept()
> until there's nothing left to accept, then stop the watcher (in this
> loop and thread) and trigger the ev_async for the next thread, which
> then starts the same watcher on its own loop [3]:
> 
> static void async_handler(EV_P_ ev_async *w, int revents) {
> [...]
> 	ev_io_start(EV_A_ &listener_watcher);
> [...]
> }
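> 
> Put together, the listener callback looks roughly like this -- a
> sketch only, reusing the globals from [2], with accept_connection()
> standing in for the real per-client setup:
> 
>     static void listener_cb(EV_P_ ev_io *w, int revents) {
>         /* Accept until there's nothing left. */
>         for (;;) {
>             int fd = accept(w->fd, NULL, NULL);
>             if (fd < 0)
>                 break;    /* EAGAIN/EWOULDBLOCK: queue is drained */
>             accept_connection(EV_A_ fd);
>         }
> 
>         if (!terminating && threads > 1) {
>             ++current_thread;
>             if (current_thread == threads)
>                 current_thread = 0;
> 
>             /* Hand the listener to the next loop: stop it here and
>              * wake that thread, whose async_handler() restarts it. */
>             ev_io_stop(EV_A_ w);
>             ev_async_send(thread_loops[current_thread],
>                           &thread_asyncs[current_thread]);
>         }
>     }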
> 
> This works, but in the test cases I've come up with so far it
> actually ends up being *slower* than using a single thread. That's
> probably because there isn't much per-client computation yet (it's
> not proxying anything), so the comparison may be artificial -- but if
> not, my guess is that the constant ev_io_stop/ev_io_start, and maybe
> the ev_asyncs (though those will be unavoidable no matter how I do
> the round-robin, I think), are what's making it slower.
> 
> If anyone has any other thoughts on how to divvy up the clients
> between threads, I'd be most interested in hearing them. One option
> might be to leave the listener's watcher on one thread only and feed
> the accepted fds to the other threads: keep a per-thread list of
> waiting fds, append to it (under a mutex) from the accepting thread,
> then trigger that thread's ev_async to tell it to pick the fds up.
> I'm not sure at this stage whether that would be slower, but it's
> probably worth testing.
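> 
> Something along these lines is what I have in mind -- all the names
> here are made up, just to illustrate the shape of it:
> 
>     #include <ev.h>
>     #include <pthread.h>
> 
>     #define QUEUE_MAX 128
> 
>     /* One per worker thread; async.data points back at the struct. */
>     struct thread_ctx {
>         struct ev_loop *loop;
>         ev_async        async;
>         pthread_mutex_t lock;
>         int             pending[QUEUE_MAX];
>         int             npending;
>     };
> 
>     /* Called from the accepting thread after accept(). */
>     static void push_fd(struct thread_ctx *t, int fd) {
>         pthread_mutex_lock(&t->lock);
>         if (t->npending < QUEUE_MAX)
>             t->pending[t->npending++] = fd;   /* else: drop/close it */
>         pthread_mutex_unlock(&t->lock);
>         ev_async_send(t->loop, &t->async);    /* wake the target loop */
>     }
> 
>     /* ev_async callback; runs in the target thread's own loop. */
>     static void drain_fds(EV_P_ ev_async *w, int revents) {
>         struct thread_ctx *t = w->data;
>         pthread_mutex_lock(&t->lock);
>         while (t->npending > 0) {
>             int fd = t->pending[--t->npending];
>             /* set up the per-client watchers for fd on this loop */
>             (void)fd;
>         }
>         pthread_mutex_unlock(&t->lock);
>     }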
> 
> Cheers, and thanks to Marc for a great library and most amusing
> documentation.
> 
> Arlen
> 
> [1] https://github.com/celtic/gadogado2
> [2] https://github.com/celtic/gadogado2/blob/master/src/listener.c#L216
> [3] https://github.com/celtic/gadogado2/blob/master/src/main.c#L38

