#1773 Fedora-Rawhide-20190330.n.0 DOOMED
Opened 5 years ago by kevin. Modified 5 years ago

(filing this manually, I think @dustymabe's fedmsg listener is down)

Compose failed due to the KDE live image:
https://koji.fedoraproject.org/koji/taskinfo?taskID=33823987

DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:00,088: Starting installer, one moment...
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:00,088: terminal size detection failed, using default width
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:00,089: anaconda 31.6-1.fc31 for Fedora-KDE-Live Rawhide (pre-release) started.
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:00,089: 09:35:00 Not asking for VNC because of an automated install
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:00,089: 09:35:00 Not asking for VNC because we don't have Xvnc
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:00,266: Processing logs from ('127.0.0.1', 47004)
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,824: Anaconda received signal 11!.
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,824: /usr/lib64/python3.7/site-packages/pyanaconda/_isys.so(+0x15a4)[0x7f555fc2d5a4]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,824: /lib64/libc.so.6(+0x3af80)[0x7f556ee92f80]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,824: /lib64/libcurl.so.4(+0x3783b)[0x7f5556e4f83b]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,824: /lib64/libcurl.so.4(+0x3a079)[0x7f5556e52079]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,824: /lib64/libcurl.so.4(+0x247db)[0x7f5556e3c7db]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,825: /lib64/libcurl.so.4(curl_easy_cleanup+0x49)[0x7f5556e4a509]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,825: /lib64/librepo.so.0(+0xca1b)[0x7f555719ca1b]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,825: /lib64/librepo.so.0(lr_download+0x25a)[0x7f555719ea2a]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,825: /lib64/librepo.so.0(lr_download_single_cb+0xe0)[0x7f55571a0590]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,825: /lib64/librepo.so.0(lr_yum_download_repo+0x93)[0x7f55571b1423]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,825: /lib64/librepo.so.0(lr_yum_perform+0x795)[0x7f55571b1c85]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,825: /lib64/librepo.so.0(lr_handle_perform+0x1ad)[0x7f55571a56cd]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,826: /lib64/libdnf.so.2(_ZN6libdnf4Repo4Impl15lrHandlePerformEP9_LrHandleRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEb+0x2f9)[0x7f55572fefa9]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,826: /lib64/libdnf.so.2(_ZN6libdnf4Repo4Impl5fetchERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEOSt10unique_ptrI9_LrHandleSt14default_deleteISB_EE+0x303)[0x7f55573012b3]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,826: /lib64/libdnf.so.2(_ZN6libdnf4Repo4Impl4loadEv+0x144)[0x7f5557301e24]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,826: /usr/lib64/python3.7/site-packages/libdnf/_repo.so(+0x1a8ab)[0x7f55568258ab]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,826: /lib64/libpython3.7m.so.1.0(_PyMethodDef_RawFastCallKeywords+0x261)[0x7f556ec0ea01]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,826: /lib64/libpython3.7m.so.1.0(_PyCFunction_FastCallKeywords+0x23)[0x7f556ec0eb33]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,826: /lib64/libpython3.7m.so.1.0(+0x13bd72)[0x7f556ec47d72]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,826: /lib64/libpython3.7m.so.1.0(_PyEval_EvalFrameDefault+0x58d2)[0x7f556ec8e572]
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,837: gcore exited with status 256
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,891: Running anaconda failed: process '['unshare', '--pid', '--kill-child', '--mount', '--propagation', 'unchanged', 'anaconda', '--kickstart', '/chroot_tmpdir/koji-image-f31-build-33824021.ks', '--cmdline', '--loglevel', 'debug', '--dirinstall', '--remotelog', '127.0.0.1:35851']' exited with status 1
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,891: Shutting down log processing
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,942: Install failed: novirt_install failed
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,943: Removing bad disk image
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,943: ERROR: Image creation failed: novirt_install failed
DEBUG util.py:554:  BUILDSTDERR: 2019-03-30 09:35:03,944: Image creation failed: novirt_install failed

Sadly, all of libdnf, librepo, curl, and python3 changed, so I am not sure who is at fault here.

I'd untag the offender if I knew it.


I was able to reproduce this in a Rawhide VM, and downgrading librepo to 1.9.5 fixed it.

Just to clarify how I reproduced it: /var/cache/dnf must have a directory with the zchunk repodata files already downloaded, but without any generated solv files.

With librepo-1.9.5, it realizes that the data is already there and only downloads a minimal amount to verify that the files are valid, while 1.9.6 dies with lots of fireworks.
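
For anyone wanting to retrace it, here's a rough sketch of the sequence. Assumptions: dnf's default cache layout under /var/cache/dnf, with the generated solv files sitting next to the per-repo directories; adjust the paths to what you actually see on your system.

```
# in a Rawhide VM with librepo-1.9.6 installed
sudo dnf makecache                                        # downloads the zchunk repodata and generates the solv files
sudo rm -f /var/cache/dnf/*.solv /var/cache/dnf/*.solvx   # drop only the generated solv files, keep the repodata
sudo dnf makecache                                        # 1.9.6 segfaults here; 1.9.5 revalidates the cache and carries on
sudo dnf clean all && sudo dnf downgrade librepo          # recover: a clean cache avoids the crash path, then go back to 1.9.5
```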

Awesome. I'll untag librepo-1.9.5 then.

Can you file a bug on librepo? Or if you like I can do so...

Sorry, librepo-1.9.6 rather.

Unfortunately, this build of librepo has made it into F30 stable, so I think we need to turn off zchunk metadata for updates and updates-testing.

Bummer. Did we get zchunks last night? Looks like yes with f30-updates-testing.

I'll disable it and do a new push now.

Yes, that's how I found it. I'm writing a PSA for fedora-devel on how to work around the problem in F30 until the non-zchunked metadata becomes available.
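
For the record, the gist of the workaround is to tell dnf not to use the zchunk variants of the metadata at all. A sketch, assuming the `zchunk` boolean from dnf.conf(5) is available in the dnf shipped with F30:

```
# one-off, without touching any config:
sudo dnf --setopt=zchunk=False clean metadata
sudo dnf --setopt=zchunk=False update
# or persistently, until fixed metadata/librepo lands:
echo 'zchunk=False' | sudo tee -a /etc/dnf/dnf.conf
```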

Thanks for getting the new push out there.

librepo-1.9.6 has been pushed to stable in F30. Am I right in thinking that there's no way to pull it out?

Correct. We have no way to pull stable updates... and even if we could, people may well have installed it already.

Yeah, that's what I thought. I just wanted to make sure I've got my info right for the PSA.

Hmm, you know, I could just rm -f *.zck now, right? Or would that break any of the other repodata?

Sure, but you'd also need to remove the zchunk entries from repomd.xml. Other than that, it really would be that easy.

Yeah, that won't work, because mirrormanager embeds that repomd.xml checksum in the metalink. If we tamper with the file in any way, it won't match...
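
To spell out why: the metalink that mirrormanager serves pins the exact digests of repomd.xml, and dnf rejects any mirror whose repomd.xml doesn't hash to one of the pinned values. You can eyeball it with something like this (URL shape is from memory):

```
# the metalink pins repomd.xml's digests; editing the file breaks the match
curl -s 'https://mirrors.fedoraproject.org/metalink?repo=updates-released-f30&arch=x86_64' \
    | grep -A4 '<file name="repomd.xml">'
```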

I'll just do a real push as soon as I can.

Ok, well, there goes that idea. Yeah, I guess a real push is the best way to go, then.
