This is similar to the situation on ppc64le in #425. When the s390x qcow2 image is converted to a raw image, it contains no partitions and can't be used under libvirt/KVM to install a VM, as the rootfs can't be found.
```
[sharkcz@rock-kvmlp-fedora ~]$ qemu-img convert -f qcow2 -O raw Fedora-Cloud-Base-Generic.s390x-40-1.14.qcow2 cloud-f40.raw
[sharkcz@rock-kvmlp-fedora ~]$ file cloud-f40.raw
cloud-f40.raw: data
[sharkcz@rock-kvmlp-fedora ~]$ qemu-img convert -f qcow2 -O raw Fedora-Cloud-Base-39-1.5.s390x.qcow2 cloud-f39.raw
[sharkcz@rock-kvmlp-fedora ~]$ file cloud-f39.raw
cloud-f39.raw: DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x0,0,2), end-CHS (0x3ff,255,63), startsector 1, 10485759 sectors
```
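The `file` output can be cross-checked by looking for partition-table signatures in the raw image directly. A minimal sketch (the function name and classification strings are mine, not from any tool):

```python
def check_partition_table(path: str) -> str:
    """Roughly classify the first sectors of a raw disk image."""
    with open(path, "rb") as f:
        lba0 = f.read(512)  # MBR / protective MBR lives in sector 0
        lba1 = f.read(512)  # GPT header lives in sector 1 (512-byte sectors)
    if len(lba0) < 512:
        return "too small"
    if lba1[:8] == b"EFI PART":       # GPT header signature
        return "GPT"
    if lba0[510:512] == b"\x55\xaa":  # MBR boot signature
        return "MBR"
    return "no partition table (what `file` reports as plain data)"
```

The healthy F39 image should classify as MBR (protective MBR of a GPT disk), while the broken F40 image falls through to the last case.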
The boot sequence looks like:
```
...
[  OK  ] Created slice system-modprobe.slice - Slice /system/modprobe.
         Starting modprobe@configfs.service - Load Kernel Module configfs...
[  OK  ] Finished modprobe@configfs.service - Load Kernel Module configfs.
[    1.676345] virtio_scsi virtio0: 1/0/0 default/read/poll queues
[    1.677978] scsi host0: Virtio SCSI HBA
[    1.678377] scsi 0:0:0:0: CD-ROM    QEMU    QEMU CD-ROM    2.5+ PQ: 0 ANSI: 5
[  OK  ] Stopped systemd-vconsole-setup.service - Virtual Console Setup.
         Stopping systemd-vconsole-setup.service - Virtual Console Setup...
[    1.681442] virtio_blk virtio1: 1/0/0 default/read/poll queues
         Starting systemd-vconsole-setup.service - Virtual Console Setup...
[    1.681847] virtio_blk virtio1: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
[    1.725538] sr 0:0:0:0: Power-on or device reset occurred
[    1.726890] sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
[    1.726898] cdrom: Uniform CD-ROM driver Revision: 3.20
[    1.727505] sr 0:0:0:0: Attached scsi generic sg0 type 5
[    1.756603] alg: No test for crc32be (crc32be-vx)
[  OK  ] Finished systemd-vconsole-setup.service - Virtual Console Setup.
         Mounting sys-kernel-config.mount - Kernel Configuration File System...
[  OK  ] Mounted sys-kernel-config.mount - Kernel Configuration File System.
[  138.356333] dracut-initqueue[488]: Warning: dracut-initqueue: timeout, still waiting for following initqueue hooks:
[  138.356827] dracut-initqueue[488]: Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2fdisk\x2fby-uuid\x2fe2d29979-23d3-432e-a363-ab1c099d27e5.sh: "if ! grep -q After=remote-fs-pre.target /run/systemd/generator/systemd-cryptsetup@*.service 2>/dev/null; then
[  138.356880] dracut-initqueue[488]:     [ -e "/dev/disk/by-uuid/e2d29979-23d3-432e-a363-ab1c099d27e5" ]
[  138.356919] dracut-initqueue[488]: fi"
[  138.357841] dracut-initqueue[488]: Warning: dracut-initqueue: starting timeout scripts
[  138.943725] dracut-initqueue[488]: Warning: dracut-initqueue: timeout, still waiting for following initqueue hooks:
[  138.944373] dracut-initqueue[488]: Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2fdisk\x2fby-uuid\x2fe2d29979-23d3-432e-a363-ab1c099d27e5.sh: "if ! grep -q After=remote-fs-pre.target /run/systemd/generator/systemd-cryptsetup@*.service 2>/dev/null; then
[  138.944418] dracut-initqueue[488]:     [ -e "/dev/disk/by-uuid/e2d29979-23d3-432e-a363-ab1c099d27e5" ]
[  138.944464] dracut-initqueue[488]: fi"
...
```
I wonder if the image is created as a dasd image instead of a "scsi" disk image ...
Here's our s390x configuration: https://pagure.io/fedora-kiwi-descriptions/blob/rawhide/f/teams/cloud/cloud.xml#_132-151
Based on the docs, it looks like we are creating a DASD image. Should we do something else?
Ping @osinside
My guess is that we should use `SCSI` instead of `CDL` for the `targettype` to be compatible with the F<=39 images.
You want to send a pull request to make the change? You can also test to see if it fixes it by doing a local build (we don't have CI for s390x or ppc64le images since they aren't available in Zuul CI).
Sure, I can give it a try. What should the `kiwi` command line for a local run look like?
I see your target blocksize is set to 4096 and your boot mode is CDL. This is a typical DASD layout and will not work on any 512-byte block storage.
For use of an image in qemu or on SCSI-type storage, don't set any blocksize or targettype. I think the integration test definition from here should help: https://github.com/OSInside/kiwi/blob/main/build-tests/s390/rawhide/test-image-disk/appliance.kiwi
See the difference between Virtual and Physical.
The conversion of a DASD image into a qcow2 image produces garbage :-)
Hope this helps
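To illustrate the Virtual vs Physical split, the difference boils down to roughly the following in the kiwi description (a sketch based on the advice above, not a copy of the test definition):

```xml
<!-- Virtual: qcow2 / SCSI-backed storage, default 512-byte blocks -->
<bootloader name="zipl" timeout="1"/>

<!-- Physical: real DASD storage, CDL layout, plus
     target_blocksize="4096" on the <type> element -->
<bootloader name="zipl" targettype="CDL" timeout="1"/>
```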
For building on s390 I have access to the IBM self service. Not sure how you can access s390 machines. If you have one, the build should be:
```
sudo dnf install kiwi kiwi-systemdeps
git clone https://pagure.io/fedora-kiwi-descriptions.git
cd fedora-kiwi-descriptions
sudo kiwi-ng --profile Cloud-Base-Generic \
    system build --description `pwd` --target-dir /tmp/myfedora
```
I'm not entirely sure about the correct profile selection, but Neal can help with that
There is also the kiwi-build script in the git repo.
it confirms my suspicion, thanks :-) We want to target the "virtual" environment with the qcow2 image.
I am using `sudo ./kiwi-build --output-dir=/var/tmp/work --image-type=oem --image-profile=Cloud-Base-Generic` as shown in the descriptions repo, but it wasn't clear to me originally what the values for `type` and `profile` should be.
```diff
diff --git a/teams/cloud/cloud.xml b/teams/cloud/cloud.xml
index aa928dd..fac7b40 100644
--- a/teams/cloud/cloud.xml
+++ b/teams/cloud/cloud.xml
@@ -132,12 +132,12 @@
     <preferences profiles="Cloud-Base-Generic" arch="s390x">
         <type image="oem" format="qcow2" filesystem="btrfs"
               btrfs_root_is_subvolume="true" btrfs_set_default_volume="false" fsmountoptions="compress=zstd:1"
-              kernelcmdline="no_timer_check console=tty1 console=ttyS0,115200n8 dasd_mod.dasd=ipldev systemd.firstboot=off"
-              devicepersistency="by-uuid" target_blocksize="4096"
+              kernelcmdline="no_timer_check console=ttyS0 systemd.firstboot=off"
+              devicepersistency="by-uuid"
               bootpartition="true" bootpartsize="1000" bootfilesystem="ext4" rootfs_label="fedora" >
-            <bootloader name="zipl" targettype="CDL" timeout="1"/>
+            <bootloader name="zipl" timeout="1"/>
             <size unit="G">5</size>
             <systemdisk>
                 <volume name="@root=root"/>
```
This gives me a btrfs-based image that's bootable and gets further than the F-40 image, but it fails with:
```
...
[  OK  ] Reached target initrd-switch-root.target - Switch Root.
         Starting initrd-switch-root.service - Switch Root...
[FAILED] Failed to start initrd-switch-root.service - Switch Root.
See 'systemctl status initrd-switch-root.service' for details.
```
Maybe something btrfs related?
Looks like the root device did not appear. How do you boot the image? I normally go with the following qemu call on a KVM-enabled s390 zVM or LPAR (/dev/kvm should exist):
```
sudo qemu-system-s390x -m 4096 --nographic --enable-kvm \
    -drive if=none,id=dr1,file=kiwi-test-image-disk.s390x-1.15.1.qcow2 \
    -device virtio-blk,drive=dr1,bootindex=1
```
If it does not work, rebuild the image and add `rd.debug rd.shell rd.break=pre-mount` to the kernelcmdline. This should drop you into the initrd, and further debugging becomes possible.
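Concretely, the `kernelcmdline` attribute in cloud.xml would grow the debug options like this (a sketch; only the relevant attribute is shown, the others stay unchanged):

```xml
<type image="oem" format="qcow2"
      kernelcmdline="no_timer_check console=ttyS0 systemd.firstboot=off rd.debug rd.shell rd.break=pre-mount"/>
```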
I am using virt-install on an LPAR after copying the qcow2 image to libvirt's store, e.g.:

```
virt-install --name localcloud-41 --memory 4096 --noreboot --graphics none \
    --os-variant detect=on,name=fedora-unknown \
    --cloud-init user-data="/home/sharkcz/cloudinit-user-data.yaml" \
    --disk=size=10,backing_store="/var/lib/libvirt/images/Fedora.s390x-Rawhide.qcow2"
```
First step in https://pagure.io/fedora-kiwi-descriptions/pull-request/62
The rootfs-not-found issue sounds like `rootflags=subvol=root` missing from the kernel parameter line ...
and yes, it is that.
```
umount /sysroot
mount -o subvol=root /dev/vda2 /sysroot
exit
```
Running these in the rescue shell made the boot continue correctly and presented a login prompt at the end.
I'm still debugging why the rootflags are missing...
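One way to spot this on a built image is to check whether the generated BLS entries carry the flag on their `options` line. A small sketch (the helper name is mine; the entry text follows the standard BLS format):

```python
def bls_entry_has_rootflags(entry_text: str, subvol: str = "root") -> bool:
    """Check whether a BLS entry's options line carries rootflags=subvol=<subvol>."""
    for line in entry_text.splitlines():
        if line.startswith("options "):
            # Token-wise match so e.g. "rootflags=subvol=rootfs" doesn't count
            return f"rootflags=subvol={subvol}" in line.split()
    return False
```

On the broken image, the entries under /boot/loader/entries/ would fail this check.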
I found the issue. There is one method in kiwi that was not properly moved into the `bootloader_spec_base` base class. This missing function caused the missing rootflags for the BLS bootloaders that we support in kiwi.
I will create a patch on the kiwi side. Sorry for this one.
https://github.com/OSInside/kiwi/pull/2562
I tested on my s390 machine with the fedora description
Awesome, and successfully tested.
yay, thanks for testing :)
Hi,
This error seems quite similar to the one I am currently facing. When I run the following command with the Fedora 40 s390x cloud image:

```
qemu-system-s390x -M accel=kvm -m 2048 -smp 2 -hda Fedora-Cloud-Base-Generic.s390x-40-1.14.qcow2 -net nic -net user -nographic
```
I receive the same error output as shown in the first message. Fedora 39 and 41 work fine. Do you have any insights into what could be causing this issue?
I think we released the broken F-40 image, as the fix in kiwi was implemented after the F-40 GA. Can you give https://kojipkgs.fedoraproject.org/compose/cloud/Fedora-Cloud-40-20250228.0/compose/Cloud/s390x/images/Fedora-Cloud-Base-Generic.s390x-40-20250228.0.qcow2 a try? It's an updated cloud image.
I tested the "kiwi" image, and it works perfectly. Do you have a timeframe for the next release?
I don't think I understand the question. Updated F-40 images are being released daily at https://kojipkgs.fedoraproject.org/compose/cloud/, the F-40 GA image won't be replaced, and F-40 is slowly reaching its end of life (May 28th 2025). F-41, which is working for you, was released at the end of October last year.