ev_async_send() not triggering corresponding async handler in target loop?

Marc Lehmann schmorp at schmorp.de
Wed Dec 30 03:40:46 CET 2009


On Tue, Dec 29, 2009 at 03:53:59PM +0100, Pierre-Yves Kerembellec <py.kerembellec at gmail.com> wrote:
> Thanks for your fast answer. If I understand correctly, the same ev_async watcher that was ev_async_init()-ed and ev_async_start()-ed
> in each worker thread has to be passed to the ev_async_send() call? That means it cannot be a local variable any more and has to be accessible from both the
> worker threads and the main (accept) thread?
> 
> I changed my code a little bit so that the ev_async watchers are now "global" and persistent, and the whole thing seems to work (but keep reading):

Without analysing your code in detail, I think you might be too
entrenched in the special semantics of ev_async.

ev_async, really, is like any other watcher: you ev_async_init it before
use, and then you ev_async_start it with some loop.

When you then want the callback invoked, you ev_async_send it a wakeup.
This can be done at any time, from any other thread.

It probably helps to think of ev_async watchers as a kind of ev_signal
watcher: instead of sending a signal, you ev_async_send.
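
For concreteness, a minimal sketch of that lifecycle (the function and
watcher names are mine, not from your code; error handling omitted):

   #include <ev.h>

   static ev_async async_w;

   /* runs in the thread that owns the loop, some time after
      ev_async_send was called from any other thread */
   static void
   async_cb (struct ev_loop *loop, ev_async *w, int revents)
   {
     /* inspect whatever shared state the sender filled in */
   }

   /* in the loop thread, before running the loop: */
   static void
   setup_async (struct ev_loop *loop)
   {
     ev_async_init (&async_w, async_cb);
     ev_async_start (loop, &async_w);
   }

   /* in any other thread, at any time: */
   static void
   wake_loop (struct ev_loop *loop)
   {
     ev_async_send (loop, &async_w);
   }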

> Most of the time this will work, but sometimes it'll get stuck before the 10k requests are actually done. My guess is I have an overwrite condition while
> passing the accepted file descriptors from the main thread to the worker thread, using the "handle" member of the global CONTEXT structure (one per

It seems to me that this has nothing to do with libev, right?

> So I guess I'm stuck back with my piping queue mechanism in this case, because a simple eventfd counter is not enough to hold a high-rate fd flow from

What's wrong with using, well, a queue (C++ has some, many libraries
have some, and it's easy to make one yourself using a doubly-linked list
etc.) instead of a pipe? That way you can easily pass normal data
structures around inside your process without doing syscalls in the
common case (I mean, you already use threads, so what's wrong with using
a mutex and a queue?)
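
For illustration, a minimal sketch of such a mutex-protected queue using
a singly-linked list (the job type and all names here are made up for
the example; in your case the payload would be the accepted fd):

   #include <pthread.h>

   /* hypothetical job type - carry whatever you need, e.g. an fd */
   typedef struct job
   {
     struct job *next;
     int fd;
   } job;

   static job *q_head, *q_tail;
   static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;

   static void
   q_push (job *j)
   {
     j->next = 0;

     pthread_mutex_lock (&q_lock);
     if (q_tail)
       q_tail->next = j;
     else
       q_head = j;
     q_tail = j;
     pthread_mutex_unlock (&q_lock);
   }

   static job *
   q_pop (void)
   {
     pthread_mutex_lock (&q_lock);
     job *j = q_head;
     if (j && !(q_head = j->next))
       q_tail = 0;
     pthread_mutex_unlock (&q_lock);
     return j;
   }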

> the main thread. As you pointed out in your documentation
> (http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod#Queueing),
> "ev_async does not support queuing of data in any way".

Yes, which is mostly because libev doesn't know about the threading model you
use.

Since you know which threading model you use, it's trivial to create a
queue - ev_async is still the fastest way to wake up an event loop.
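
Putting the two sketches together (again, all names are mine, and this
assumes <ev.h> and <stdlib.h> on top of the headers above): the accept
thread pushes a job and then wakes the worker's loop; the worker's async
callback drains the queue. Note that libev may coalesce several
ev_async_send calls into a single callback invocation, so the callback
must loop until the queue is empty rather than pop a single item:

   /* accept thread: enqueue the fd, then wake the worker's loop */
   static void
   submit_fd (struct ev_loop *worker_loop, ev_async *worker_async, int fd)
   {
     job *j = malloc (sizeof (job));
     j->fd = fd;
     q_push (j);
     ev_async_send (worker_loop, worker_async);
   }

   /* worker thread: drain everything queued so far */
   static void
   worker_async_cb (struct ev_loop *loop, ev_async *w, int revents)
   {
     job *j;

     while ((j = q_pop ()))
       {
         /* e.g. ev_io_init/ev_io_start a watcher on j->fd here */
         free (j);
       }
   }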

> Or I could use multiple ev_async structures per worker thread, but it's
> just an ugly patch and will also eventually fail as the connection rate
> increases.

Yeah, papering over the bug in your design even more will not be helpful.
Just use a queue - it's the right thing for queuing data :)

There are other designs possible, see the other replies. Also, you could
wait for your workers to finish before you give them new jobs, but I
think it's easiest to use a queue. If you are unsure about how to queue
using threads, you can look at libeio, which implements a threadpool that
handles queued requests:

   http://cvs.schmorp.de/libeio/eio.c?view=markup

   - etp_submit submits a request to the pool.
   - etp_poll handles results returned from the pool.
   - etp_proc is the worker thread loop that reads requests.

The manpage (http://pod.tst.eu/http://cvs.schmorp.de/libeio/eio.pod)
briefly explains how ev_async would be plugged into this system.

(It is actually a planned goal of libeio to be split into a reusable
threadpool part and an I/O part, but it's not there yet.)

Now, as a word of advice: multithreading is (imho) extremely complicated;
expect to have to learn a lot.

> accept performance. I could also combine the pipe (for queuing accepted handles) and the eventfd signalling mechanism (indicating that the pipe needs
> to be read for new fds), but it's probably enough to just add the reading side of each pipe to an ev_io watcher in each worker thread (I know this is less
> optimal than an eventfd, but it still seems to deliver great performance: with 4 worker threads on a dual bi-core (4-way) server, I get about 16,000 req/s,
> which is not bad after all).

Using a pipe feels difficult to maintain, especially when you need to
pass more interesting data to your worker threads (but you can always
write the addresses of request structures into your pipe and then read
them back in your worker threads).
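
If you do that, a minimal sketch (struct request and the function names
are made up): a pointer is far smaller than PIPE_BUF, so a single write
of one pointer is atomic and will never be interleaved with another:

   #include <unistd.h>

   struct request { int fd; /* ...whatever else the worker needs */ };

   /* accept thread: write the pointer itself into the pipe */
   static void
   send_request (int pipe_wfd, struct request *req)
   {
     write (pipe_wfd, &req, sizeof (req)); /* check the result in real code */
   }

   /* worker thread, e.g. from an ev_io read callback on pipe_rfd */
   static struct request *
   recv_request (int pipe_rfd)
   {
     struct request *req;

     if (read (pipe_rfd, &req, sizeof (req)) == sizeof (req))
       return req;

     return 0; /* short read / EAGAIN handling omitted */
   }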

> In simple words, each communication pipe acts as a buffering queue between the main thread and the worker threads. In the main thread, I've got
> something like this (in the network accept callback):

*If* I were to use pipes, I would use only a single pipe.

> Is there a chance of seeing a similar generic API directly in libev sometime soon?

The generic API is called ev_async, really. You are probably
overcomplicating this: if you pay the inefficiencies of threads to get a
shared address space, why not use that address space to handle your data?

Keep in mind also that threads are not very scalable (they are meant to
improve performance on a single cpu only, and in general decrease
performance on multiple cores), and since the number of cores will keep
increasing in the near future, they might not be such a good choice.

> It would avoid duplicating this code all over the place, as I think this
> is a common pattern in high-performance network servers where the load
> has to be spread among multiple threads to benefit from multi-core
> servers. Or maybe you know of a different approach?

A pattern is not code. That pattern can be implemented in a myriad of
different ways, and actually has to be implemented differently on many
platforms.

libev already provides all you need in the form of ev_async - anything
else would depend too much on your threading library and other factors.

-- 
                The choice of a       Deliantra, the free code+content MORPG
      -----==-     _GNU_              http://www.deliantra.net
      ----==-- _       generation
      ---==---(_)__  __ ____  __      Marc Lehmann
      --==---/ / _ \/ // /\ \/ /      pcg at goof.com
      -=====/_/_//_/\_,_/ /_/\_\


