schmorp at schmorp.de
Thu Dec 2 06:01:32 CET 2010
On Wed, Dec 01, 2010 at 02:49:29PM -0300, José Micó <jose.mico at gmail.com> wrote:
> >Could you post the benchmark table as a table? As it is now it's very hard
> >to read.
> name                   sockets  create  request
> EV                          16   34.48    13.37
> POE::XS::Loop::Poll         16   34.24    13.40
> POE::XS::Loop::EPoll        16   34.36    13.20
> Perl                        16   44.49    24.44
> Glib                        16   56.12    30.92
> Event                       16  149.61    69.43
> POE                         16  400.60   343.99
> The Perl backend is faster than any other backend, except EV?
Yes, this particular benchmark doesn't depend much on a scalable kernel
interface (those matter when you have few active fds among many inactive
ones), so select() (used by the Perl backend) does quite well.
(One also has to keep in mind that in many real-world situations select()
indeed performs much better than epoll.)
The purpose of this benchmark is to illustrate this in the specific case
of a small-sized server.
- Glib is slower because it's the prime example of how to make even C
dog-slow (it walks all watchers multiple times in each iteration, and
calls a lot of methods that each do very little).
- Event is slower because of the enormous overhead (multiple dynamic
method calls) it is implemented with (but it beats Glib single-handedly
when the number of watchers increases).
- Looking at your POE::XS::* numbers, it's virtually certain that all
you did was benchmark EV under a different name (the results are
basically identical). To see what went wrong, one would need to see how
you changed the benchmark to use those modules.
An obvious problem would be how you keep POE from automatically loading
these modules, so my guess is that POE::XS::* is really EV, while your
POE benchmark is really POE+POE::XS::Loop::Poll/POE::XS::Loop::EPoll.
The choice of a Deliantra, the free code+content MORPG
-----==- _GNU_ http://www.deliantra.net
----==-- _ generation
---==---(_)__ __ ____ __ Marc Lehmann
--==---/ / _ \/ // /\ \/ / schmorp at schmorp.de
More information about the anyevent mailing list