sub millisec socket server write

Marc Lehmann schmorp at schmorp.de
Tue Feb 16 09:46:42 CET 2010


Please don't wrap e-mails you receive in the horrible way you do - that
makes your e-mails needlessly hard to read, and it's not a good way to
encourage replies. Especially as your e-mail client creates extremely
long lines of its own, which again makes your mails harder to read.

On Mon, Feb 15, 2010 at 06:18:26PM -0800, Neeraj Rai <rneeraj at yahoo.com> wrote:
> > ev_async doesn't necessarily use a pipe (it uses eventfd if
> > available), and if it uses one, it never writes more than a single
> > octet.
> is this 1 octet per write ? or is it 1 octet till it is consumed and then
> next octet ?

There is usually just a single octet in the pipe, but in general it's
impossible to avoid having multiple octets in there (there is a race when
multiple threads try to write at the same time). The latter should be
very rare, and doesn't affect correctness, so "at most one octet at any
time" is a reasonable expectation.
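
For context, the plain self-pipe wakeup trick looks roughly like the
untested sketch below - this is not libev's actual code; libev
additionally tracks whether a wakeup is already pending (or uses eventfd
instead), which is what normally keeps it to a single octet:

  /* plain self-pipe wakeup - illustrative only, not libev's code */
  #include <fcntl.h>
  #include <unistd.h>

  static int wake_pipe[2];

  static void
  wake_init (void)
  {
    pipe (wake_pipe);
    fcntl (wake_pipe[0], F_SETFL, O_NONBLOCK);
    fcntl (wake_pipe[1], F_SETFL, O_NONBLOCK);
  }

  /* called from any thread that wants the loop to wake up */
  static void
  wake_signal (void)
  {
    char c = 0;

    /* if the pipe is full, the write fails with EAGAIN, which is fine:
       a full pipe already guarantees a wakeup */
    (void)write (wake_pipe[1], &c, 1);
  }

  /* called from the loop thread when wake_pipe[0] becomes readable */
  static void
  wake_drain (void)
  {
    char buf[64];

    while (read (wake_pipe[0], buf, sizeof buf) > 0)
      ; /* consume every octet that accumulated */
  }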

> > Note that pipes are quite slow compared to memory manipulation (and
> > eventfd is even slower if your threads run on different cores, but
> > that should be avoided in general!).
> Sorry, I'm confused by this. So what's the best way to use ev_async ? 

Well, just use it as often as you want/need. ev_async is optimised quite
well, but it still incurs a kernel syscall once per loop iteration, plus
the cost of waking up a loop that is blocking in the kernel.
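
A minimal (untested) sketch of the usual pattern - one ev_async per loop,
started in the loop thread, and ev_async_send called from the producer
threads (ev_run is the libev 4 name; older releases call it ev_loop):

  #include <ev.h>
  #include <pthread.h>

  static struct ev_loop *main_loop;
  static ev_async async_w;

  /* runs in the loop thread after another thread called ev_async_send */
  static void
  async_cb (EV_P_ ev_async *w, int revents)
  {
    /* drain your shared work queue(s) here */
  }

  static void *
  producer (void *arg)
  {
    /* ... enqueue work for the loop thread, then wake it up ... */
    ev_async_send (main_loop, &async_w);
    return 0;
  }

  int
  main (void)
  {
    pthread_t tid;

    main_loop = EV_DEFAULT;

    ev_async_init (&async_w, async_cb);
    ev_async_start (main_loop, &async_w);

    pthread_create (&tid, 0, producer, 0);

    ev_run (main_loop, 0); /* ev_loop (main_loop, 0) in libev 3.x */
    return 0;
  }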

> Or are you hinting that I shouldn't worry about this as it will be slow no
> matter what.

I am saying that you need to keep in mind that event processing is not
free - you still need to ask the kernel for events, and you still need a
pipe write (or eventfd write).

The trade-offs are often not clear - eventfd is way faster than a pipe
when you keep your threads on a single core (which is what threads are
made for), and pipes are much faster than eventfd when the threads are
spread over multiple cores.

YMMV.
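
If you want to pin your threads to one core to test the single-core case,
a rough linux-only sketch (pthread_setaffinity_np is a GNU extension):

  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>

  /* pin a thread to one core; call it for the loop thread and the
     signalling thread with the same core number */
  static int
  pin_to_core (pthread_t tid, int core)
  {
    cpu_set_t set;

    CPU_ZERO (&set);
    CPU_SET (core, &set);
    return pthread_setaffinity_np (tid, sizeof set, &set);
  }

  /* e.g. pin_to_core (pthread_self (), 0); in both threads */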

> One more complication here that I should have mentioned is that a
> single thread handles many connections. Each connection has an input
> queue. These are polled non-blocking (peek and continue) for data. If
> data is not found, the next queue is checked. And this might still work
> well with what you suggest - poll queue and write blocking.

Well, that doesn't sound efficient, but maybe you have to do that to
achieve some latency requirement; that's not clear from what you write.

Achieving low latency (<< 10ms) on a non-rt kernel (linux) is hard in any
case, and libev doesn't even attempt to give you 1ms accuracy. You will
have to experiment a lot.
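
One thing worth experimenting with inside libev are the collect
intervals, which trade latency against fewer wakeups - a minimal sketch:

  #include <ev.h>

  int
  main (void)
  {
    struct ev_loop *loop = EV_DEFAULT;

    /* 0. (the default) means "handle events as soon as possible";
       raising these trades latency for fewer wakeups and syscalls */
    ev_set_io_collect_interval (loop, 0.);
    ev_set_timeout_collect_interval (loop, 0.);

    ev_run (loop, 0);
    return 0;
  }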

> We tried a blocking-writes stress test a while back (and I had even
> less experience then... so it could have been some stupid obvious bug),
> and got a deadlock with the channel full.

Blocking I/O is not viable when you have more than one socket per thread
to tend to.
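
The usual alternative is to make the socket non-blocking and only keep an
EV_WRITE watcher active while you have queued data, so a full send buffer
can never deadlock the thread. A rough sketch (the conn struct, wbuf and
wlen are made-up names for whatever buffering you already have):

  #include <ev.h>
  #include <errno.h>
  #include <fcntl.h>
  #include <string.h>
  #include <unistd.h>

  typedef struct conn
  {
    int fd;
    ev_io wwatch;
    char wbuf[4096]; /* pending outgoing bytes */
    size_t wlen;
  } conn;

  static void
  write_cb (EV_P_ ev_io *w, int revents)
  {
    conn *c = w->data;
    ssize_t n = write (c->fd, c->wbuf, c->wlen);

    if (n > 0)
      {
        c->wlen -= (size_t)n;
        memmove (c->wbuf, c->wbuf + n, c->wlen);
      }
    else if (n < 0 && errno != EAGAIN && errno != EINTR)
      return; /* real error - close and clean up the connection here */

    if (!c->wlen)
      ev_io_stop (EV_A_ w); /* buffer drained, stop polling for writability */
  }

  static void
  conn_setup (conn *c)
  {
    fcntl (c->fd, F_SETFL, O_NONBLOCK); /* never block inside write () */
    ev_io_init (&c->wwatch, write_cb, c->fd, EV_WRITE);
    c->wwatch.data = c;
    /* ev_io_start (loop, &c->wwatch) whenever wlen goes from 0 to >0 */
  }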

-- 
                The choice of a       Deliantra, the free code+content MORPG
      -----==-     _GNU_              http://www.deliantra.net
      ----==-- _       generation
      ---==---(_)__  __ ____  __      Marc Lehmann
      --==---/ / _ \/ // /\ \/ /      schmorp at schmorp.de
      -=====/_/_//_/\_,_/ /_/\_\


