backend, frontend: parallel handling of actions
This commit adds a new abstraction named "WorkerManager" which is
able to spawn generic workers/daemons in the background (and collect
their results) according to given parameters (max workers, timeouts,
etc.). This class should be reusable by other queue-oriented logic in
the copr code, namely for build tasks and for import tasks on
dist-git. The benefit of WorkerManager is that once a task is spawned
as a background process, the backend daemon process(es)
(copr-backend.service) can be safely restarted and the tasks
themselves won't be affected at all, unless of course the whole box
is rebooted (in which case the pending jobs are terminated and
re-executed after the reboot).
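The idea can be sketched roughly like this; a minimal, hypothetical
illustration only, the class and method names are ours and not the
actual copr API:

```python
import subprocess
import time

class WorkerManager:
    """Hypothetical sketch: spawn up to max_workers background
    processes and collect their results (not the real copr class)."""

    def __init__(self, max_workers=10, timeout=60):
        self.max_workers = max_workers
        self.timeout = timeout
        self.running = {}   # pid -> (Popen handle, start timestamp)

    def can_start(self):
        return len(self.running) < self.max_workers

    def start(self, cmd):
        # start_new_session detaches the worker from our session,
        # so restarting the manager does not kill it
        proc = subprocess.Popen(cmd, start_new_session=True)
        self.running[proc.pid] = (proc, time.time())
        return proc.pid

    def collect(self):
        """Reap finished workers; kill those past the timeout."""
        finished = {}
        for pid, (proc, started) in list(self.running.items()):
            if proc.poll() is not None:
                finished[pid] = proc.returncode
                del self.running[pid]
            elif time.time() - started > self.timeout:
                proc.kill()
        return finished
```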
So action_dispatcher.py is rewritten to use WorkerManager now, and
for this initial attempt it defaults to a maximum of 10 concurrent
action workers.
Even though there's additional concurrency in action processing now,
after a quick discussion we don't think we need explicit locking at
this point (the action handlers should not collide with each other
dramatically; in the worst case some action can fail because another
action preceded it, e.g. build-delete vs package-delete).
The action dispatcher executes the workers through a new
/bin/copr-backend-process-action script. It is designed to be a
mostly self-standing command which can be executed by WorkerManager
(with --daemon) but also manually by a copr administrator.
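The restart-safety mentioned above relies on the worker living in its
own session, detached from the backend daemon. A hedged sketch of how
such a spawn could look (the helper name is ours, not copr's):

```python
import subprocess

def spawn_detached(cmd):
    """Spawn cmd in its own session (setsid) so it keeps running
    even if the parent daemon is restarted; roughly how a manager
    could launch a worker script such as
    copr-backend-process-action --daemon (illustrative only)."""
    proc = subprocess.Popen(
        cmd,
        start_new_session=True,       # child becomes a session leader
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return proc.pid
```

A session leader's process-group ID equals its PID, which is an easy
way to verify the detachment actually happened.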
This change on the backend required us to change the frontend part as
well: we needed two new routes for the backend, one for getting the
list of pending actions and a second for fetching the concrete action
task.
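For illustration, the data contract of the two routes might look like
this (the paths in the docstrings and the field names are our
assumptions, not the actual copr-frontend API):

```python
import json

# pretend storage; the real frontend reads these from its database
PENDING = {1: {"id": 1, "action_type": "delete"},
           2: {"id": 2, "action_type": "createrepo"}}

def pending_actions_view():
    """GET <frontend>/backend/pending-actions/ (hypothetical path):
    return only the pending action IDs, so the dispatcher can poll
    cheaply."""
    return json.dumps({"actions": sorted(PENDING)})

def get_action_view(action_id):
    """GET <frontend>/backend/action/<id>/ (hypothetical path):
    return the full task for one action, fetched only when a worker
    is actually spawned for it."""
    return json.dumps(PENDING[action_id])
```

Splitting "list the queue" from "fetch one task" keeps the polling
request small even when individual action payloads are large.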
Fixes: #169
Merges: #1007