#7487 ostree creation on ppc64le fails for Fedora-27-updates(-testing) composes with error "Failed to register machine"
Opened 5 years ago by sinnykumari. Closed 5 years ago: It's all good.

ostree creation from the nightly composes Fedora-27-updates-20180506.0 and Fedora-27-updates-testing-20180506.0 failed for ppc64le.

Looking at root.log, the following error message appears:

DEBUG util.py:522:  Executing command: ['/usr/bin/systemd-nspawn', '-q', '-M', '5172e5707e4842ce8a760ff36fe96cd1', '-D', '/var/lib/mock/f27-build-12352048-906323/root', '-a', '--setenv=TERM=vt100', '--setenv=SHELL=/bin/bash', '--setenv=HOME=/builddir', '--setenv=HOSTNAME=mock', '--setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin', '--setenv=PROMPT_COMMAND=printf "\\033]0;<mock-chroot>\\007"', '--setenv=PS1=<mock-chroot> \\s-\\v\\$ ', '--setenv=LANG=en_US.UTF-8', '/bin/sh', '-c', '{ rm -f /var/lib/rpm/__db*; rm -rf /var/cache/yum/*; set -x; pungi-make-ostree tree --repo=/mnt/koji/compose/atomic/repo --log-dir=/mnt/koji/compose/updates/Fedora-27-updates-20180506.0/logs/ppc64le/Everything/ostree-2 --treefile=/mnt/koji/compose/updates/Fedora-27-updates-20180506.0/work/ostree-2/config_repo/fedora-atomic-host-updates-stable.json --extra-config=/mnt/koji/compose/updates/Fedora-27-updates-20180506.0/work/ostree-2/extra_config.json; } < /dev/null 2>&1 | /usr/bin/tee /builddir/runroot.log; exit ${PIPESTATUS[0]}'] with env {'TERM': 'vt100', 'SHELL': '/bin/bash', 'HOME': '/builddir', 'HOSTNAME': 'mock', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'PROMPT_COMMAND': 'printf "\\033]0;<mock-chroot>\\007"', 'PS1': '<mock-chroot> \\s-\\v\\$ ', 'LANG': 'en_US.UTF-8'} and shell False
DEBUG util.py:296:  Unsharing. Flags: 134217728
DEBUG util.py:439:  Failed to register machine: File exists
DEBUG util.py:439:  Parent died too early
DEBUG util.py:577:  Child return code was: 1

From the error message, it looks like the systemd-nspawn command failed when trying to register the container as a machine.
More details are available at https://pagure.io/dusty/failed-composes/issue/256
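
For anyone hitting this on a builder: a minimal sketch for checking whether a stale machine registration is what nspawn is tripping over. This assumes shell access to the host; the machine name is the one from the log above, and machinectl terminate may or may not clear a registration whose leader process already died:

machinectl list                                           # a leftover entry suggests a dead nspawn that never unregistered
machinectl terminate 5172e5707e4842ce8a760ff36fe96cd1     # try to drop the stale entry before retrying the task

If registration keeps failing, passing --register=no to systemd-nspawn skips systemd-machined entirely.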


Yeah, this looks like a mock/nspawn issue. See also:

https://github.com/rpm-software-management/mock/issues/150
and
https://github.com/systemd/systemd/issues/7236

If we want to try --register=no or the like, we could add that in as a koji patch when using --new-chroot (which is the option we now pass to the runroot plugin to use the nspawn chroot).

In the short term we are going to be updating the builders to f28. I can go ahead and do that today, or at least reboot them, which should hopefully get us back on track for a bit.
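
For the record, the same workaround could probably be carried in mock's site config instead of a koji patch. A sketch only, assuming the mock release on the builders honors config_opts['nspawn_args'] (untested):

# Append to the mock site config so every nspawn invocation skips
# systemd-machined registration. Assumes this mock version supports
# config_opts['nspawn_args']; verify before rolling out to the builders.
cat >> /etc/mock/site-defaults.cfg <<'EOF'
config_opts.setdefault('nspawn_args', []).append('--register=no')
EOF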

Did you reboot ppc64le buildvm-ppc64le-0{1,2}.ppc.fedoraproject.org builders?

ppc64le ostree creation for both Fedora-27-updates and Fedora-27-updates-testing from the next compose (20180507.0) succeeded.

Let's wait for a while and see if this issue appears again in any of our composes, and patch koji later accordingly.

Did you reboot ppc64le buildvm-ppc64le-0{1,2}.ppc.fedoraproject.org builders?

I think they're currently moving them all to an F-28 base, so I suspect they've been rebuilt.

Oh, I thought this would be done in a few days.

Yeah, they all should be re-installed with f28 now. :)

So, should we close this now? Or is it happening again/still?

So far, it's not happening again. We can close it now and reopen if it happens again.

Metadata Update from @sinnykumari (5 years ago):
- Issue close_status updated to: It's all good
- Issue status updated to: Closed (was: Open)
