sub millisec socket server write
Neeraj Rai
rneeraj at yahoo.com
Tue Feb 16 03:18:26 CET 2010
--- On Mon, 2/15/10, Marc Lehmann <schmorp at schmorp.de> wrote:
> From: Marc Lehmann <schmorp at schmorp.de>
> Subject: Re: sub millisec socket server write
> To: "Neeraj Rai" <rneeraj at yahoo.com>
> Cc: libev at lists.schmorp.de
> Date: Monday, February 15, 2010, 12:14 AM
> On Sun, Feb 14, 2010 at 05:25:10PM -0800, Neeraj Rai <rneeraj at yahoo.com> wrote:
> > There is a trade-off between latency and CPU usage. If we sleep 1 ms after
> > each queue drain, CPU usage is almost nil when there is no activity, but
> > latency is ~1 ms. If the sleep is omitted, CPU usage is 100%, crowding out
> > other processes on the same box.
> > Q1. Is this the best way to do this?
>
> Depends - the best baseline is to use an event-based approach, and if you
> need better latency, an extra real-time thread. In general, sub-millisecond
> stuff can't really be done on a normal multiuser kernel with high
> confidence - most kernels still work with time resolutions of around 100Hz.
>
> Since you already have a thread for that socket, you should try switching
> to blocking I/O: You already *pay* the thread overhead, so you might as
> well let the kernel do the blocking for you.
That sounds like a good idea; we'll try it. However, I am currently limited
on resources and not sure when we'll get to it.
>
> > Q2. We tried registering for EV_WRITE but had the same issue.
>
> I have no clue what "same" refers to in your mail - what issue did you get?
Sorry ;-) The issue was the same CPU-vs-latency trade-off as with the write in the idle event.
>
> > Q3. ev_async seems to be something that might help - each write to the
> > queue can call ev_async to wake the loop. However, before we knew about
> > ev_async, we had attempted a similar solution (write to a pipe and
> > register EV_READ on the pipe). This sometimes ran into a deadlock when
> > the pipe was full. I think pipes are 4K by default and we probably went
> > over that.
>
> Yeah, the dreaded netscape bug :)
>
> > Does ev_async already take care of the pipe-full case?
>
> ev_async doesn't necessarily use a pipe (it uses eventfd if available),
> and if it uses one, it never writes more than a single octet.
Is this one octet per write? Or one octet until it is consumed, and then the
next octet?
>
> Note that pipes are quite slow compared to memory manipulation (and
> eventfd is even slower if your threads run on different cores, but that
> should be avoided in general!).
Sorry, I'm confused by this. What is the best way to use ev_async, then?
Or are you hinting that I shouldn't worry about this because it will be slow
no matter what?
>
> So if you already pay for a thread and the thread switching, use it by
> letting the thread read a queue and do blocking writes.
One more complication that I should have mentioned: a single thread handles
many connections, and each connection has an input queue. These queues are
polled non-blocking (peek and continue) for data; if no data is found, the
next queue is checked. This might still work with what you suggest - poll
the queue and write blocking.
We tried a blocking-write stress test a while back (I had even less
experience then, so it could have been some stupid obvious bug) and got a
deadlock with the channel full: the write blocked on the sender, while the
receiver had read some data but was itself blocked trying to write more.
Moving to non-blocking I/O solved it, as reads and writes were alternating
and the channel was being cleared when writes would otherwise have blocked.
Thanks for the prompt response.
Neeraj
>
> --
>                 The choice of a       Deliantra, the free code+content MORPG
>       -----==-     _GNU_              http://www.deliantra.net
>       ----==-- _       generation
>       ---==---(_)__  __ ____  __      Marc Lehmann
>       --==---/ / _ \/ // /\ \/ /      schmorp at schmorp.de
>       -=====/_/_//_/\_,_/ /_/\_\
More information about the libev mailing list