multi_socket supports multiple parallel transfers—all done in the same
single thread—and has been used to run several tens of thousands of
transfers in a single application. It is usually the API that makes the most
sense if you do a large number (>100 or so) of parallel transfers.
Event-driven in this case means that your application uses a system-level
library or setup that “subscribes” to a number of sockets, lets your
application know when one of those sockets is readable or writable, and
tells you exactly which one.
This setup allows clients to scale up the number of simultaneous transfers
much higher than with other systems, and still maintain good performance. The
“regular” APIs otherwise waste far too much time scanning through lists of all
the sockets.
There are numerous event-based systems to select from out there, and libcurl
is completely agnostic to which one you use. libevent, libev and libuv are
three popular ones, but you can also go directly to your operating system’s
native solutions such as epoll, kqueue, /dev/poll, pollset, Event Completion
or I/O Completion Ports.
Just like with the regular multi interface, you add easy handles to a multi
handle with curl_multi_add_handle. One easy handle for each transfer you
want to perform.
You can add them at any time while the transfers are running and you can also
similarly remove easy handles at any time using the curl_multi_remove_handle
call. Typically though, you remove a handle only after its transfer is
completed.
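For reference, a minimal sketch of those two calls; it assumes a
`multi_handle` from curl_multi_init and an `easy_handle` from
curl_easy_init:

```c
/* add one easy handle per transfer you want to perform */
curl_multi_add_handle(multi_handle, easy_handle);

/* ...and take it out again, typically once its transfer has completed */
curl_multi_remove_handle(multi_handle, easy_handle);
```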
The application also needs to tell libcurl when its timeout time has expired.
Since the application is in control of driving everything, libcurl cannot do
this by itself, so libcurl must tell the application an updated timeout value,
too.
libcurl informs the application about socket activity to wait for with a
callback set with CURLMOPT_SOCKETFUNCTION. Your application needs to
implement such a function.
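A sketch of what such a callback can look like (the parameter names are
illustrative) and how to install it in the multi handle:

```c
static int socket_callback(CURL *easy,      /* easy handle */
                           curl_socket_t s, /* socket */
                           int what,        /* what to wait for */
                           void *clientp,   /* private callback pointer */
                           void *socketp)   /* private socket pointer */
{
  /* tell your event system to start or stop monitoring socket 's' */
  return 0;
}

/* install the callback in the multi handle */
curl_multi_setopt(multi_handle, CURLMOPT_SOCKETFUNCTION, socket_callback);
```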
Using this, libcurl will set and remove sockets your application should
monitor. Your application tells the underlying event-based system to wait for
the sockets. This callback will be called multiple times if there are multiple
sockets to wait for, and it will be called again when the status changes, for
example when you should switch from waiting for the socket to become writable
to waiting for it to become readable.
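As an illustration, the body of that callback often boils down to a switch
over the `what` argument. The `my_watch_socket` and `my_unwatch_socket`
helpers and the `WATCH_*` flags below are hypothetical stand-ins for whatever
your event library provides; the `CURL_POLL_*` values are libcurl's:

```c
switch(what) {
case CURL_POLL_IN:     /* wait for the socket to become readable */
  my_watch_socket(s, WATCH_READ);
  break;
case CURL_POLL_OUT:    /* wait for the socket to become writable */
  my_watch_socket(s, WATCH_WRITE);
  break;
case CURL_POLL_INOUT:  /* wait for both */
  my_watch_socket(s, WATCH_READ | WATCH_WRITE);
  break;
case CURL_POLL_REMOVE: /* stop monitoring this socket */
  my_unwatch_socket(s);
  break;
}
```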
When one of the sockets that the application is monitoring on libcurl’s behalf
registers that it becomes readable or writable, as requested, you tell libcurl
about it by calling curl_multi_socket_action and passing in the affected
socket and an associated bitmask specifying which socket activity was
registered.
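A sketch of that call; `sockfd` and `ev_bitmask` are assumed to come from
your event loop, with the bitmask built from CURL_CSELECT_IN and/or
CURL_CSELECT_OUT:

```c
int running_handles;

curl_multi_socket_action(multi_handle,
                         sockfd,           /* the socket with activity */
                         ev_bitmask,       /* the specific activity */
                         &running_handles);
```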
The application is in control and will wait for socket activity. But even
without socket activity there will be things libcurl needs to do: handling
timeouts, calling the progress callback, retrying or failing a transfer that
takes too long, and so on. To make that work, the application must also make
sure to handle a single-shot timeout that libcurl sets.
libcurl sets the timeout with the timer_callback, installed with the
CURLMOPT_TIMERFUNCTION option.
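A sketch of such a callback (parameter names are illustrative) and how to
install it:

```c
static int timer_callback(CURLM *multi,    /* multi handle */
                          long timeout_ms, /* timeout in milliseconds,
                                              -1 means delete the timer */
                          void *clientp)   /* private callback pointer */
{
  /* (re)arm a single-shot timer in your event system to fire in
     'timeout_ms' milliseconds */
  return 0;
}

/* install the callback in the multi handle */
curl_multi_setopt(multi_handle, CURLMOPT_TIMERFUNCTION, timer_callback);
```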
When the event system of your choice eventually tells you that the timer has
expired, you need to tell libcurl about it:
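```c
/* pass CURL_SOCKET_TIMEOUT as the "socket" to signal timer expiry */
int running_handles;
curl_multi_socket_action(multi_handle, CURL_SOCKET_TIMEOUT, 0,
                         &running_handles);
```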
…in many cases, this will make libcurl call the timer_callback again and
set a new timeout for the next expiry period.
When you have added one or more easy handles to the multi handle and set the
socket and timer callbacks in the multi handle, you are ready to start the
transfer.
To kick it all off, you tell libcurl it timed out (because all easy handles
start out with a very, very short timeout) which will make libcurl call the
callbacks to set things up and from then on you can just let your event
system drive.
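A sketch of that kick-off. It assumes libevent2 as the event system, with an
`event_base` created earlier; event_base_dispatch is libevent2's loop, but
any event library works the same way:

```c
/* all easy handles and both callbacks are set up at this point */
int running_handles;

curl_multi_socket_action(multi_handle, CURL_SOCKET_TIMEOUT, 0,
                         &running_handles);

/* the callbacks have now reported which sockets and timeout to wait for,
   so hand control over to the event system */
event_base_dispatch(event_base);

/* when the loop exits, the transfers are done (or the loop was stopped) */
```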
The ‘running_handles’ counter returned by curl_multi_socket_action holds the
number of current transfers not completed. When that number reaches zero, we
know there are no transfers going on.
Each time the ‘running_handles’ counter changes, curl_multi_info_read will
return info about the specific transfers that completed.
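A common way to drain that information, sketched here as a loop to run after
each curl_multi_socket_action call:

```c
CURLMsg *msg;
int msgs_left;

while((msg = curl_multi_info_read(multi_handle, &msgs_left))) {
  if(msg->msg == CURLMSG_DONE) {
    /* msg->data.result tells how the transfer ended */
    CURL *easy = msg->easy_handle;

    curl_multi_remove_handle(multi_handle, easy);
    curl_easy_cleanup(easy);
  }
}
```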