Based on the investigation in this bug, squashfs with xz compression causes a significant CPU hit during decompression, and this is expected behavior.
Use this releng issue to track changes and testing for a switch to something else, possibly zstd, which is known to require far fewer resources for decompression, and to see whether this can be improved on the client side without unacceptable downsides on the compose side.
To be determined. Depends on how testing goes.
Status quo. Clients will continue to experience high CPU load during installations. It's been this way for a while, so it's not the end of the world, but thousands of Fedora users (and openQA VMs) get hit with this every time they do an installation, every day, weeks on end.
Metadata Update from @mohanboddu:
- Issue tagged with: meeting
Looks like the minimum squashfs-tools needed to support this feature is 4.3-21:
Mon Jun 24 2019 Bruno Wolff III email@example.com - 4.3-21
- Add zstd compression support (Sean Purcell via github.com/plougher/squashfs-tools)
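A quick way to confirm this locally is to ask mksquashfs which compressors it was built with. This is a hedged sketch: it assumes the usual squashfs-tools behavior of printing its usage text (including the compressor list) when run with no arguments, and it skips cleanly if mksquashfs isn't installed.

```shell
# Hedged check: does the locally installed mksquashfs support zstd?
# (Fedora's squashfs-tools >= 4.3-21 carries the zstd patch; upstream
# squashfs-tools gained zstd support in 4.4.)
status="not installed"
if command -v mksquashfs >/dev/null 2>&1; then
    # Running mksquashfs with no arguments prints its usage text,
    # including the list of available compressors.
    if mksquashfs 2>&1 | grep -qi zstd; then
        status="supported"
    else
        status="not supported"
    fi
fi
echo "zstd support: $status"
```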
It's possible the nested ext4 image in the squashfs image is contributing to this problem. Opened issue #8646 to separately track the possibility of using plain squashfs images.
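The nesting can be confirmed by listing the squashfs contents. This sketch assumes the usual Fedora live layout, where squashfs.img contains LiveOS/rootfs.img (an ext4 filesystem image), so every read goes through two image layers; the `squashfs.img` path here is illustrative, and the check skips if unsquashfs or the image isn't available.

```shell
# Sketch: detect nested vs. plain squashfs layout. Illustrative paths only.
layout="unknown"
if command -v unsquashfs >/dev/null 2>&1 && [ -f squashfs.img ]; then
    # -ll prints a long listing of the files inside the squashfs
    if unsquashfs -ll squashfs.img | grep -q 'rootfs.img'; then
        layout="nested"    # ext4 image embedded inside the squashfs
    else
        layout="plain"     # files stored directly in the squashfs
    fi
fi
echo "layout: $layout"
```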
Update on the testing here: the summary is that it works, and CPU burn on /dev/loop1 drops to 10-30% instead of 65-85%.
Update to that update. Size change. Still using Fedora-Workstation-Live-x86_64-Rawhide-20190816.n.0.iso as the basis.
rootfs.img (6981419008 bytes) is the ext4 file system image inside squashfs.img; that's what's being zstd-compressed by mksquashfs and compared. All percentages are relative to the original ISO's squashfs.img and represent the change in final ISO size.
1927286784 +0%, squashfs.img on ISO
2138419200 +11% squashfs.img using level 15 (mksquashfs default for zstd)
2430496768 +26% squashfs.img using level 1, ~5x faster than level 15
2372837376 +23% squashfs.img using level 1, plain squashfs (omits embedded ext4 image but contains its payload)
2326519808 +21% squashfs.img using level 1, plain squashfs, 256K block size
2020712448 +5% squashfs.img using level 22, plain squashfs, 256K block size, ~9x slower than level 1
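The percentage deltas above can be rechecked from the raw byte counts with plain shell arithmetic, everything relative to the original xz squashfs.img on the ISO:

```shell
# Recompute the percentage deltas from the raw byte counts in the list above,
# rounded to the nearest whole percent (integer math only).
base=1927286784   # original xz-compressed squashfs.img on the ISO
for size in 2138419200 2430496768 2372837376 2326519808 2020712448; do
    pct=$(( (size * 200 / base + 1) / 2 - 100 ))
    echo "+${pct}%"
done
# Prints: +11% +26% +23% +21% +5%
```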
Next up: some testing to see if a 512K block size helps knock down that remaining 5% without the consumer side taking an excessive memory and CPU hit. I'll also ask the squashfs folks whether it's possible to unlock compression levels above 22. mksquashfs with zstd does compress in parallel, by default using the number of CPUs; I expect the build side to get the same or better compression compared to xz, though it will take more RAM and CPU. On the consumer side, higher compression levels have minimal impact on resource consumption. It's a balancing act.
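For reference, the knobs being compared are all mksquashfs options. This is a sketch with illustrative values, not the settings releng settled on; it needs a zstd-capable mksquashfs and skips cleanly otherwise.

```shell
# Sketch: build a zstd squashfs with an explicit level and block size.
# Values are illustrative; a tiny stand-in payload is used instead of a
# real root filesystem.
result="skipped"
if command -v mksquashfs >/dev/null 2>&1 && mksquashfs 2>&1 | grep -qi zstd; then
    rootdir=$(mktemp -d)
    outdir=$(mktemp -d)
    echo "stand-in payload" > "$rootdir/file.txt"
    # -comp zstd            use zstd instead of the xz default
    # -Xcompression-level   zstd level, 1..22 (mksquashfs default is 15)
    # -b 512K               larger blocks compress better but cost more RAM
    # -processors           mksquashfs compresses in parallel
    mksquashfs "$rootdir" "$outdir/squashfs.img" \
        -comp zstd -Xcompression-level 15 -b 512K \
        -processors "$(nproc)" -no-progress >/dev/null
    [ -f "$outdir/squashfs.img" ] && result="built"
fi
echo "$result"
```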
Question: Does koji need a way to set compression type and level? What amount of control does releng want?
There is a koji issue to add the plain squashfs image type:
But I'm guessing that switching from xz to zstd needs a separate ticket, though it could be tacked onto the work implied by koji issue 1622. Either way, releng should decide what level of control they want and where.
Update: Oh yeah, koji folks will probably prefer squashfs upstream to actually merge zstd support. Right now it's just a set of patches that Fedora is carrying on top of upstream; otherwise koji devs could end up doing a bunch of work that they'll have to revisit depending on what upstream squashfs does differently.
Update 2: Looks like upstream squashfs-tools does have zstd support in 4.4, but we still have 4.3 in koji.
It would be nice to have a way to set those things, yes: type of compression and level. But I guess it's up to what koji upstream will implement.
So... should we keep this ticket open / around? Or wait for koji to implement something?
Is there any proposed actions we can take now?
I'm not sure. The plain squashfs feature for Fedora 34 doesn't completely solve this problem because it retains xz compression, which is always going to be a lot more computationally expensive for decompression compared to zstd.
Closing since the last activity was a year ago. If there are any developments or steps we can take, please reopen.
Metadata Update from @humaton:
- Issue close_status updated to: It's all good
- Issue status updated to: Closed (was: Open)