alternate approach to timer inaccuracy due to cached times
Shaun Lindsay
srlindsay at gmail.com
Wed Oct 12 02:15:54 CEST 2011
Hello,
I've been working with gevent for a while, which uses libev as its
underlying event library. As described here,
http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod#The_special_problem_of_time_updates,
timeouts can become inaccurate when a single dispatch cycle processes either
a large number of small events or a smaller number of expensive ones.
The specific way this manifests for me is in setting a 100ms timeout on a
connection after processing the results of a database query. Because libev
caches the current time, the effective timeout ends up being 100ms minus the
time already spent in processing. If the processing takes longer than 100ms,
or the total time consumed by all the events in that dispatch cycle before
ev_timer_start() is called exceeds 100ms, the timer triggers immediately.
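To make the failure mode concrete, here is a minimal sketch (the stdin
watcher and the usleep() are just stand-ins for real I/O and a slow
database query):

    #include <ev.h>
    #include <unistd.h>

    static ev_timer timeout_watcher;

    static void timeout_cb(EV_P_ ev_timer *w, int revents)
    {
        /* with the stale cached time, this runs on the very next
         * loop iteration instead of ~100ms after io_cb finished */
    }

    static void io_cb(EV_P_ ev_io *w, int revents)
    {
        usleep(150000); /* stand-in for >100ms of query processing */

        /* the "after" argument is relative to ev_now(), which libev
         * cached before io_cb started, so this deadline is already
         * ~50ms in the past by the time the timer is armed */
        ev_timer_init(&timeout_watcher, timeout_cb, 0.100, 0.);
        ev_timer_start(EV_A_ &timeout_watcher);
    }

    int main(void)
    {
        struct ev_loop *loop = EV_DEFAULT;
        ev_io stdin_watcher;

        ev_io_init(&stdin_watcher, io_cb, /* fd = */ 0, EV_READ);
        ev_io_start(loop, &stdin_watcher);
        ev_run(loop, 0);
        return 0;
    }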
As mentioned in the above link, you can force an update of the cached time
via ev_now_update(), but that doesn't practically address the issue. In my
case, the triggering of any of these timeouts constitutes an error, so I'd
need to call ev_now_update() before every timer start. There are no timeouts
for which immediate, nondeterministic expiration is acceptable, so using
ev_now_update() degenerates into refreshing the cached time for every timer
anyway. From a gevent perspective, it is also difficult to expose the update
call to the user in a sane way, since the event architecture is hidden
beneath the greenlet/coroutine abstraction.
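For reference, the documented workaround looks like the following, and as
described above it ends up preceding every single timer start:

    /* refresh libev's cached time so the relative timeout is
     * measured from the real current time rather than from the
     * start of the dispatch cycle */
    ev_now_update(loop);
    ev_timer_init(&timeout_watcher, timeout_cb, 0.100, 0.);
    ev_timer_start(loop, &timeout_watcher);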
Deferring calls to ev_timer_start() until the end of the dispatch cycle and
then starting the timers in a batch would fix this. Instead of starting a
timer the moment a callback requests it, we add it to a queue of pending
timers. Once all the event callbacks have finished, we refresh the cached
time once, then walk the queue and start every timer. The timer disparity is
then limited to the time taken to push the timers onto the heap. A rough
sketch of the idea follows below.
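To illustrate the idea without touching libev internals (the attached patch
does it inside the library instead), here is a rough user-space
approximation built on an ev_prepare watcher, which libev invokes after all
pending event callbacks, just before the loop blocks again. The names
deferred_timer, deferred_head and timer_start_deferred() are mine, not part
of any API:

    #include <ev.h>
    #include <stddef.h>

    /* intrusive list node; the patch adds a *next pointer to
     * ev_timer itself instead */
    struct deferred_timer {
        ev_timer *w;
        struct deferred_timer *next;
    };

    static struct deferred_timer *deferred_head;
    static ev_prepare flush_watcher;

    /* callbacks call this instead of ev_timer_start() */
    static void timer_start_deferred(ev_timer *w,
                                     struct deferred_timer *node)
    {
        node->w = w;
        node->next = deferred_head;
        deferred_head = node;
    }

    /* runs once per loop iteration, after all event callbacks */
    static void flush_cb(EV_P_ ev_prepare *p, int revents)
    {
        ev_now_update(EV_A); /* one refresh for the whole batch */

        for (struct deferred_timer *d = deferred_head; d; d = d->next)
            ev_timer_start(EV_A_ d->w);
        deferred_head = NULL;
    }

    /* once, at startup */
    static void setup_deferred_timers(struct ev_loop *loop)
    {
        ev_prepare_init(&flush_watcher, flush_cb);
        ev_prepare_start(loop, &flush_watcher);
    }

Since prepare watchers run after every other pending callback in the same
iteration, timers queued from within timer callbacks still get flushed
before the loop blocks again.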
One nice advantage of this approach is that under heavy load the timeouts
will fire later than requested rather than earlier, which seems saner from
an application perspective: I'd rather have a 100ms timeout fire at 200ms
than at 0ms.
Perhaps this has already been discussed and rejected for some reason, but it
does seem to be an effective solution to the issue.
Since talk is cheap, I'm attaching a patch that implements the deferred
timer queueing. It does a few things that might not fit the libev model and
will probably need modification to be consistent, such as adding a struct
ev_timer *next member to every ev_timer for use in a linked list. I'm also
not sure I'm starting the deferred timers in the right place in ev_run().
Now that I think about it, timers started from the callbacks of other timers
might not get picked up until the next event cycle, which could be a
problem. Somebody more familiar with the code can probably point me in the
right direction there.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: deferred_timers.patch
Type: application/octet-stream
Size: 3233 bytes
Desc: not available
URL: <http://lists.schmorp.de/pipermail/libev/attachments/20111011/79b685bc/attachment.obj>