@@ -109,7 +109,12 @@
 
         self.log.info("BUILDER CMD: "+cmd)
 
-        stdin, stdout, stderr = conn.exec_command(cmd)
+        try:
+            stdin, stdout, stderr = conn.exec_command(cmd)
+        except paramiko.SSHException as err:
+            raise RemoteCmdError("Paramiko failure.",
+                                 cmd, -1, as_root, str(err), "(none)")
+
         rc = stdout.channel.recv_exit_status()  # blocks
 
         out, err = stdout.read(), stderr.read()
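For context, a minimal self-contained sketch of the patched path is below.
The RemoteCmdError signature (message, cmd, return code, as_root, stderr,
stdout) is only inferred from the arguments used in the hunk above, and
run_remote_cmd() is a made-up helper name, not copr's actual method:

import paramiko

class RemoteCmdError(Exception):
    def __init__(self, msg, cmd, rc, as_root, stderr, stdout):
        super(RemoteCmdError, self).__init__(msg)
        self.cmd, self.rc, self.as_root = cmd, rc, as_root
        self.stderr, self.stdout = stderr, stdout

def run_remote_cmd(conn, cmd, as_root=False):
    # conn is assumed to be an already connected paramiko.SSHClient
    try:
        stdin, stdout, stderr = conn.exec_command(cmd)
    except paramiko.SSHException as err:
        # The SSH transport failed (e.g. the freshly spawned VM stopped
        # responding); re-raise as RemoteCmdError so callers deal with one
        # exception type instead of a raw paramiko error.
        raise RemoteCmdError("Paramiko failure.",
                             cmd, -1, as_root, str(err), "(none)")
    rc = stdout.channel.recv_exit_status()  # blocks until the command exits
    out, err = stdout.read(), stderr.read()
    return rc, out, err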
Warning: untested; I'm not running the latest-greatest copr release
(I'm still on the obsolete Ansible Python API).
Sometimes, even when the VM allocation succeeds (spawn_playbook),
SSH may stop responding a few moments later.
If this is not handled properly, the exception leaks out of the
do_job() call, which means that (a) the frontend is not informed
about the build failure and (b) the worker/builder might not be
deallocated.
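Roughly what I'd expect at that level is something like the sketch below
(all names except do_job() and RemoteCmdError are hypothetical; this only
illustrates the intent and is not copr's actual worker code):

def do_job(worker, job):
    # Assumes RemoteCmdError from the sketch above; the worker/job
    # attributes here are made up for illustration.
    try:
        worker.builder.build(job)        # hypothetical build step
        job.status = "succeeded"
    except RemoteCmdError as err:
        # Don't let the exception leak out of do_job(): record the failure.
        worker.log.error("remote command failed: %s", err)
        job.status = "failed"
    finally:
        worker.notify_frontend(job)      # (a) the frontend learns the result
        worker.terminate_instance(job)   # (b) the builder VM gets released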
I observed similar issues when we were still using the Ansible Python
API; previously we did:
.. with an unhandled VmError from check_for_ans_error(). This was
replaced by:
.. without handling paramiko.SSHException, however -- so I suppose the
problem is still there, hence this fix.