RPOPLPUSH source destination
Time complexity: O(1)
Atomically returns and removes the last element (tail) of the list stored at source, and pushes the element as the first element (head) of the list stored at destination.
For example: consider source holding the list a,b,c, and destination holding the list x,y,z. Executing RPOPLPUSH results in source holding a,b and destination holding c,x,y,z.
If source does not exist, the value nil is returned and no operation is performed. If source and destination are the same, the operation is equivalent to removing the last element from the list and pushing it back as the first element of the same list, so it can be considered as a list rotation command.
Return value (bulk string reply): the element being popped and pushed.
redis> RPUSH mylist "two"
- (integer) 2
redis> RPUSH mylist "three"
redis> RPOPLPUSH mylist myotherlist
redis> LRANGE mylist 0 -1
redis> LRANGE myotherlist 0 -1
- 1) "three"
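When the same key is given as both source and destination, the command rotates the list in place. Continuing from the state left by the example above (mylist now holds "one", "two"):

redis> RPOPLPUSH mylist mylist
"two"
redis> LRANGE mylist 0 -1
1) "two"
2) "one"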
Redis lists are often used as simple message queues: producers push messages into a list and consumers fetch them with RPOP or BRPOP. However, in this context the obtained queue is not reliable, as messages can be lost, for example if there is a network problem or if the consumer crashes just after the message is received but before it has been processed.
RPOPLPUSH (or BRPOPLPUSH for the blocking variant) offers a way to avoid this problem: the consumer fetches the message and at the same time pushes it into a processing list. It then uses the LREM command to remove the message from the processing list once the message has been processed.
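As an illustration, here is a minimal consumer sketch, assuming the redis-py client and two hypothetical key names, queue and processing:

```python
import redis

r = redis.Redis(decode_responses=True)

QUEUE = "queue"            # hypothetical key names, chosen for this sketch
PROCESSING = "processing"

def process(msg):
    ...                    # application-specific work goes here

while True:
    # BRPOPLPUSH blocks until a message arrives and atomically moves it
    # from the queue onto the processing list.
    msg = r.brpoplpush(QUEUE, PROCESSING, timeout=0)
    process(msg)                 # if this crashes, msg stays in PROCESSING
    r.lrem(PROCESSING, 1, msg)   # acknowledge only after successful processing
```

The acknowledgement (LREM) is issued only after the work is done, so a crash between the fetch and the acknowledgement leaves the message in the processing list rather than losing it.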
An additional client may monitor the processing list for items that remain there for too long, and push those timed-out items onto the queue again if needed.
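One possible sketch of such a monitor, assuming messages are unique strings, the redis-py client, and the same hypothetical key names as above; the client-side first_seen bookkeeping is an illustration, not something the command provides:

```python
import time
import redis

r = redis.Redis(decode_responses=True)

QUEUE = "queue"            # hypothetical key names, as in the consumer sketch
PROCESSING = "processing"
TIMEOUT = 60               # seconds before a pending item is considered stuck

first_seen = {}            # message -> time the monitor first saw it pending

while True:
    pending = r.lrange(PROCESSING, 0, -1)
    now = time.time()
    for msg in pending:
        first_seen.setdefault(msg, now)
        if now - first_seen[msg] > TIMEOUT:
            # Requeue the stuck message and drop it from the processing
            # list in a single MULTI/EXEC transaction.
            pipe = r.pipeline(transaction=True)
            pipe.lrem(PROCESSING, 1, msg)
            pipe.rpush(QUEUE, msg)
            pipe.execute()
            del first_seen[msg]
    # Forget entries whose messages were acknowledged in the meantime.
    first_seen = {m: t for m, t in first_seen.items() if m in pending}
    time.sleep(5)
```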
RPOPLPUSH can also be used to implement a circular list: using it with the same source and destination key, a client can visit all the elements of an N-element list, one after the other, in O(N) without transferring the full list from the server to the client with a single LRANGE operation.
The above pattern works even under the following two conditions:
- There are multiple clients rotating the list: they'll fetch different elements, until all the elements of the list are visited, and the process restarts.
- Other clients are actively pushing new items at the end of the list.
A typical example is a monitoring system where a set of parallel workers rotate a shared list of web sites to check for reachability, with the smallest delay possible. Note that this implementation of workers is trivially scalable and reliable, because even if a message is lost the item is still in the queue and will be processed at the next iteration.
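A sketch of such a worker, again assuming the redis-py client and a hypothetical sites_to_check key:

```python
import redis

r = redis.Redis(decode_responses=True)

SITES = "sites_to_check"   # hypothetical key holding the circular list

# Seed the list once; real code would guard against re-seeding.
if r.llen(SITES) == 0:
    r.rpush(SITES, "https://example.org", "https://example.com")

def check(url):
    ...                    # e.g. issue an HTTP request and record the result

while True:
    # Rotating with the same key as source and destination visits every
    # element in turn, in O(1) per step, even with many parallel workers.
    url = r.rpoplpush(SITES, SITES)
    if url is None:
        break              # the list is empty
    check(url)
```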