Getting timeout even though there is data available
schmorp at schmorp.de
Tue Jul 7 03:34:22 CEST 2009
On Mon, Jul 06, 2009 at 01:04:14PM -0700, Eric Brown <yogieric.list at gmail.com> wrote:
> I'm running libev 3.49 on OS X Leopard 10.5.7. I use libev in both my server
(As a sidenote, I doubt that platform has the necessary timing accuracy.)
> In failure, the server sends its response 2-3ms later (according to its
> logs) but the client does not see it in 50ms and my timeout-callback is
> triggered. I've gone as far as setting libev's flags to use poll and
> modifying libev to dump out what it is calling POLL on -- verifying that it
(Note that poll seems to be implemented in terms of the buggy kqueue on OS X,
so it is not really recommended unless you really only use e.g. TCP sockets.)
> is basically calling poll(1, (<myfd>, 0x1, ...), 50ms). (The problem happens
> with select() too, but its parameters are a bit more difficult to log.) My
> sense is that I'm seeing some issue in OS X's loopback driver, but this is a
> very important piece of code and if anybody has some suggestions one way or
> the other, I'd be very appreciative.
Yes, barring a bug in your code somewhere, I would say that expecting OS X
to achieve such timing accuracy is unreasonable - what if some other process
takes over the CPU for 100ms?
50ms is IMHO far too small a timeout on a non-real-time system, especially
OS X - there are lots of reasons why your process could be delayed that
long, or longer.
--
Marc Lehmann <pcg at goof.com>
The choice of a GNU generation
Deliantra, the free code+content MORPG: http://www.deliantra.net