libev / multiple threads / multiple loops
Jamie Doran
jamie.doran81 at gmail.com
Mon Nov 3 16:49:34 CET 2014
Hi John,
I agree with everything you say about the cost of context switching if the
amount of work being done is very small, and the value of keeping the
processing on a single OS thread. But I need to understand how libev can be
used to take advantage of multiple CPU cores and was wondering why my test
didn't work.
I now have a solution which I will post to this thread.
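
In short (a rough sketch of the key change only, not the full program, and
assuming I have the ev++ set(loop) overload right): a watcher constructed
without a loop attaches itself to the default loop, so each worker's timer has
to be attached to the worker's own loop before it is started. Something like:

#include <stdio.h>
#include <pthread.h>
#include <ev++.h>

class Worker {
    int m_id;
    ev::dynamic_loop m_loop;   // this worker's private loop
    ev::timer m_timer;

    void timeout_cb(ev::timer &, int) {
        printf("worker %d: callback on thread %lu\n", m_id, pthread_self());
    }

public:
    Worker(int id) : m_id(id) {
        m_timer.set(m_loop);                             // attach to m_loop, NOT the default loop
        m_timer.set<Worker, &Worker::timeout_cb>(this);  // callback + object
        m_timer.start(0., 1.);                           // immediately, then every second
    }

    void run_event_loop() { m_loop.run(0); }             // callbacks now fire on the calling thread
};

static void *start_worker(void *arg) {
    Worker w((int)(long)arg);
    w.run_event_loop();
    return NULL;
}

int main() {
    pthread_t tid;
    pthread_create(&tid, NULL, start_worker, (void *)0);
    pthread_join(tid, NULL);   // runs until interrupted, like the original test
    return 0;
}

I believe the loop can also be passed to the watcher's constructor instead of
calling set(); either way the association has to happen before start(). The
same should apply to any other watcher (e.g. an ev::io for sockets): attach it
to the worker's loop before starting it and its callback is invoked on
whichever thread runs that loop.
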
Again, thank you for taking the time out to answer my query and for your
valuable insight.
Jamie.
On Mon, Nov 3, 2014 at 1:29 PM, John Newton <jnewton at gmail.com> wrote:
> I am not an authority on libev, but have used it in a socket/proxy program
> that handles millions of simultaneous open sockets. Libev is designed so
> the libev "threads" run on the same O/S thread. Libev is best used where
> the application has a large number of waiting threads, and when a thread is
> ready to execute, it does not need a lot of time to perform its task. The
> balance between the number of waiting threads and the duration of each
> task will determine the "latency" added to a libev "thread" when it is
> ready to execute but other libev threads are running. The point of libev
> is to reduce O/S and C library latency in switching O/S threads to a ready
> thread (in a cross-platform way). For example, if my program had to have 1
> million O/S threads, the memory cost would be significantly increased, and
> the thread-switching cost would be much greater than the actual work time
> per thread. In my application, I have one O/S thread per CPU core, and let
> libev manage all of the waiting sockets on individual libev "threads".
>
> So you are correct, your worker tasks have to cooperate. If your worker
> thread has to do 100 ms of work, all your other waiting libev threads will
> have to wait.
> You have to balance:
> - the amount of CPU workers need when "ready to run"
> - the min/max/avg number of waiting tasks vs. "ready to run" tasks
> - the number of O/S threads you have available for O/S (or library)
> multi-threading between ready tasks (I have multiple cores available)
> - the memory available, and the context-switching cost vs. the amount of
> work a "ready" task will need
>
> I should note that a third reason for libev is its abstraction of the event
> backend (poll, kevent, etc.), while dealing with some O/S and platform differences
> (memory barriers, bugs, etc.).
>
> I am sure there are more reasons libev exists, but these are the ones that
> were important to my application.
>
>
> On Mon, Nov 3, 2014 at 4:39 AM, Jamie Doran <jamie.doran81 at gmail.com>
> wrote:
>
>> Hi,
>>
>> I am looking at using libev in a multi-threaded process and am trying to
>> understand how callbacks are invoked on individual threads.
>>
>> My test program (see below) creates a number of worker threads, each with
>> its own dynamic loop. As a test, I create a periodic timer on each thread
>> and when it expires do some work.
>>
>> Because each worker thread has its own loop instance and polls it (i.e.
>> loop.run(0)), I expected it to operate independently of the main thread,
>> where the default loop is; that is, the callback would be invoked on the
>> "worker" thread. Instead I see that the callbacks for all workers are in
>> fact executed on the main thread, where the default loop is running.
>>
>> Perhaps I am misunderstanding something fundamental about libev (my
>> apologies if so), but I would be grateful if you could point it out to me
>> and explain why it works this way.
>>
>> The attached example exists only to illustrate the point, but ultimately I
>> would like something where a specific worker, on its own thread, could do
>> some socket I/O independently of the main thread.
>>
>> Does this make sense?
>>
>> Thanks,
>> Jamie
>>
>>
>> #include <stdio.h>
>> #include <stdlib.h>
>> #include <ev.h>
>> #include <ev++.h>
>> #include <pthread.h>
>>
>> pthread_barrier_t b;
>>
>> /* CPU-bound busy loop to simulate work inside a callback */
>> void do_some_work(int id)
>> {
>>     printf("%s Id= %d thr_id= %lu\n\n", __FUNCTION__, id, pthread_self());
>>     long data = 230000;
>>     for (int i = 0; i < 2000; i++) {
>>         for (int j = 0; j < 1000; j++) {
>>             for (long z = 0; z < 300; z++) {
>>                 data *= j;
>>                 data = (data << 8) >> 8;
>>             }
>>         }
>>     }
>> }
>>
>> class Worker {
>>     int m_id;
>>     ev::dynamic_loop m_loop;   /* each worker has its own loop instance */
>>     ev::timer m_timer;
>>
>>     void timeout_cb(ev::timer &watcher, int revents) {
>>         printf("Worker: %s : Id= %d <%lu>\n", __FUNCTION__, m_id,
>>                pthread_self());
>>         do_some_work(m_id);
>>     }
>>
>> public:
>>
>>     Worker(int id) : m_id(id) {
>>         m_timer.set<Worker, &Worker::timeout_cb>(this);
>>         m_timer.start(0., 1.);   /* fire immediately, then every second */
>>     }
>>
>>     void run_event_loop() {
>>         printf("Worker: %s : id= %d thr_id= %lu\n", __FUNCTION__, m_id,
>>                pthread_self());
>>         pthread_barrier_wait(&b);
>>
>>         m_loop.run(0);           /* poll this worker's own loop */
>>     }
>> };
>>
>> void *start_worker(void *data)
>> {
>>     long id = (long)data;
>>     Worker *worker_module = new Worker(id);
>>     worker_module->run_event_loop();
>>     return NULL;
>> }
>>
>> int main(int argc, char **argv)
>> {
>>     ev::default_loop loop;       /* default loop, run on the main thread */
>>
>>     int numThreads = 1;
>>     if (argc > 1)
>>         numThreads = strtol(argv[1], NULL, 10);
>>
>>     pthread_barrier_init(&b, NULL, numThreads + 1);
>>
>>     pthread_t *thread_ids = (pthread_t *)malloc(numThreads * sizeof(pthread_t));
>>     for (long i = 0; i < numThreads; i++) {
>>         pthread_create(&thread_ids[i], NULL, start_worker, (void *)i);
>>     }
>>     printf("%s thr_id = %lu\n", __FUNCTION__, pthread_self());
>>     pthread_barrier_wait(&b);
>>     loop.run(0);
>>
>>     for (int i = 0; i < numThreads; i++) {
>>         pthread_join(thread_ids[i], NULL);
>>     }
>>
>>     return 0;
>> }
>>
>> _______________________________________________
>> libev mailing list
>> libev at lists.schmorp.de
>> http://lists.schmorp.de/cgi-bin/mailman/listinfo/libev
>>
>
>