Piggy-backing lock-free queue synchronization on ev_async membar

Konstantin Osipov kostja.osipov at gmail.com
Tue Mar 18 11:48:08 CET 2014


In our program we have two threads, each running an ev_loop, and we
would like to organize a simple producer-consumer message queue from
one thread to the other.
For the purpose of this discussion let's assume a message is a
simple integer.

It seems that we could implement such a data structure
in a completely lock-free manner.

Please consider the following implementation:

enum { QUEUE_SIZE = 100 };

struct queue {
        int q[QUEUE_SIZE];
        int wpos;  /* todo: cacheline aligned */
        int rpos;  /* todo: cacheline aligned */
        struct ev_loop *consumer;
        ev_async async;
};

/* Done in the producer; cb is the consumer's event handler. */
void
queue_init(struct queue *q, struct ev_loop *consumer,
           void (*cb)(struct ev_loop *, ev_async *, int))
{
        q->rpos = q->wpos = 0;
        q->consumer = consumer;
        ev_async_init(&q->async, cb);
        ev_async_start(consumer, &q->async);
}

/* For use in the producer thread only. */
void
queue_put(struct queue *q, int i)
{
        if (q->wpos == QUEUE_SIZE)
                q->wpos = 0;
        q->q[q->wpos++] = i;
        ev_async_send(q->consumer, &q->async);
}

/*
 * For use in the consumer thread only, in the event handler
 * for q->async.
 */
int
queue_get(struct queue *q)
{
        if (q->rpos == QUEUE_SIZE)
                q->rpos = 0;
        return q->q[q->rpos++];
}

Let's put aside the problem of rpos and wpos running over each other,
for simplicity. 
The question is only this: provided that QUEUE_SIZE is sufficient for
our production loads, would the memory barrier built into
ev_async_send be sufficient to ensure correct read ordering
for this queue?

http://tarantool.org - an efficient, extensible in-memory data store
