#160 Test LVM resizing
Closed: Fixed 3 years ago by lruzicka. Opened 3 years ago by adamwill.

A couple of late bugs in Fedora 32 involved LVM resizing. We don't currently test this (I think it wasn't actually supported when we did the resize tests). We probably should! On both 'regular' and 'blivet' custom partitioning paths.


Metadata Update from @lruzicka:
- Issue assigned to lruzicka

3 years ago

I am taking the test and will work on it when we release F32.

I have been thinking about this test and I would like to get your opinion, @adamwill, on what to test exactly. The options are:

  • test resizing inside the LVM (i.e. the logical volumes) - the bug was about this case.
  • test resizing the whole LV partition on the PV? Is it possible in Blivet?
  • test both?

test resizing the whole LV partition on the PV? Is it possible in Blivet?

It seems that it is not possible to resize the whole LV partition on the physical volume in Blivet. At least, the Resize option is greyed out. So that leaves me with the first scenario.

Test plan

  1. createhdds.py has a desktop image with a 20 GiB partition, so we will use that one.
  2. We will use Blivet to resize the current image.
    • The root partition will be shrunk to approximately 13 GiB and mounted as /.
    • A new LV will be created in the freed space and mounted as /home.
    • The primary partition on the disk will be mounted as /boot.
  3. Installation will be carried out using the above settings.

So, currently, in the blivet_reset_resize branch, a working workflow is ready to test the partition resizing for Blivet. I will now implement the same approach using the custom partitioning.

The custom partitioning test is now working on my home machine. I will try to run both tests with a different ISO, prepare the templates, test in staging and open a PR soon.

Thanks. Just as a heads-up, I rebased your branch onto master today as I sent a few changes to master.

So, I was hoping for the best, but it seems that the installations behave a little bit differently, so there are a bunch of bugs that need to get fixed. I am working on it.

Well, the installations do not just behave differently, they are completely different, which will require some changes in approach:

  • The tests are working for MBR x86_64 and failing for everything else.
  • I found out that I was using the MBR backing image, so that explains why UEFI does not work, and probably why aarch64 and ppc64 are failing, too.
  • It should be easy to replace the underlying image for UEFI, because I believe createhdds.py is already making one?
  • How does it work for the rest of the arches - which images do they need? And does it make sense to have the resizing test for them, too?

Let me know, @adamwill , thank you.

Well, aarch64 uses UEFI, ppc64 has its own thing called OFW (Open Firmware). But if you look at the existing shrink tests, they just use a base image which is not bootable but MBR-labelled and contains a single partition (ext4 or NTFS). I hadn't thought about the bootloader implications here, but it seems like we run the existing shrink tests on both aarch64 and ppc64le and they pass, so it seems like this should work - for aarch64 I'm not sure if anaconda is relabelling the disk or if it's just installing UEFI-native on the MBR label, we could take a look at the logs I guess, but it seems to be working anyway.

What base disk image were you trying to use here? How are you doing things differently from the existing shrink tests?

Originally, I thought that people would mostly resize partitions on a Fedora Workstation or KDE system, because on servers things are much more likely to be calculated and decided from the start.

For the x86_64 resize tests (blivet and custom) I am using disk_f%CURRREL%_desktop_4_x86_64.img. At home, I am developing on an x86_64 MBR-based VM, so everything went smoothly. Then I added the tests into the templates and they started failing. I tried and retried several times: at first the original tests were too fast-paced, so I had to slow them down using wait_still_screen; then, when the pace was OK, I realized the tests fail on a wrong partition setup, started to investigate, and realized that the image must be wrong.

So, if there are shrink tests already (and I probably should have noticed), why do we need resize tests? My approach is that there is an old installed system, which gets altered a bit using custom or blivet partitioning, and a new system is installed partly into the existing and partly into the changed disk layout. I think this is what I would do when reinstalling a system if I did not want to "just delete everything".

I will check the shrink tests, too.

The existing shrink tests test resizing of plain ext4 and ntfs partitions, there's no LVM involved.

Ok, the workstation uses LVM, so that was also the reason I chose that image.

The obvious thing to do would be to add a libguestfs base image containing an LVM layout, I guess, if libguestfs supports that. Basically, I was expecting the approach here to be "look at the existing shrink tests and do more or less the same, only for LVM".
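(For reference, libguestfs does support building such a layout. A rough sketch using the libguestfs Python bindings - the image name, partition sizes and volume names below are purely illustrative, not what createhdds actually builds:)

```python
#!/usr/bin/env python3
# Illustrative only: build a non-bootable, MBR-labelled disk image containing
# a small plain partition plus an LVM PV/VG/LV, via the libguestfs Python API.
import guestfs

g = guestfs.GuestFS(python_return_dict=True)
g.disk_create("disk_lvm_resize.img", "raw", 20 * 1024**3)   # 20 GiB sparse image
g.add_drive_opts("disk_lvm_resize.img", format="raw")
g.launch()

g.part_init("/dev/sda", "msdos")                 # MBR label, like the shrink images
g.part_add("/dev/sda", "p", 2048, 2099199)       # ~1 GiB plain partition
g.part_add("/dev/sda", "p", 2099200, -2048)      # rest of the disk for the PV

g.pvcreate("/dev/sda2")
g.vgcreate("fedora", ["/dev/sda2"])
g.lvcreate("root", "fedora", 17 * 1024)          # LV size in megabytes

g.mkfs("ext4", "/dev/sda1")
g.mkfs("ext4", "/dev/fedora/root")

g.shutdown()
g.close()
```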

Well, reclaiming space by resizing the whole LVM partition and leaving some space on the disk is just one thing. Resizing the volumes inside the LVM is another. I believe that both should be tested and we are not doing it; ergo my approach has not been wrong. There will just be more work on it than expected at the beginning, which is fine. The LVM shrink test will need to be created, too. I don't mind.

I do not understand how libguestfs images differ from those created with the createhdds.py script, which is mentioned in the Fedora openQA guide as the way to prepare images for openQA. The workstation image I am using now has a classic Fedora LVM setup (which makes sense to me), and it has enough space to resize the partitions and add new ones without complaining about a lack of space. I don't think I know how to get a libguestfs image into openQA.

libguestfs images are created by createhdds. createhdds can build images using libguestfs or virt-install. If we just need a disk with some partitions on it - but not an actual bootable OS - that's a libguestfs image. All the images in the guestfs dict in hdds.json are of this type.

I see. So it is like the mock Windows NTFS image I added there once. What is the creation policy for those hdds in hdds.json? Which are created for openQA to use - all of them, only manually specified ones, or only those mentioned in the templates? Because disk_f%CURRREL%_desktop_4_x86_64.img clearly exists and can be used by a test. So what is the difference between using the mock ones and the real ones, provided you do not need any special layout that the real ones do not provide? And by real I mean the bootable ones.

Every image specified in the config is built.

The two types of images are just...different, for different purposes. The virt-install ones contain complete bootable operating systems: notably this means they have packages and a bootloader installed. Both of these things are arch and/or platform-specific. The libguestfs images are effectively "noarch": they're just partitions, in some cases a couple of dummy files.

The general approach is, we use virt-install images when we need to start a test with an actual bootable OS in place; we use libguestfs images when we just need a disk with some specific partition layout to do something to.

I suppose you could see this case as either. We could want to start with an actual bootable OS, resize it and install a new one next to it, and then check that both boot. But I don't think that's really how I saw this initially, I was thinking more along the lines of "just do the same as the existing shrink tests, using a new libguestfs base image with a dummy LVM layout".

I do think it would be interesting to add some other tests that follow on from our storage install tests, boot the installer again with the installed hard disk attached and check the installer can recognize and manipulate the installed system...but those would be START_AFTER tests rather than tests using a pre-built base image, and I saw it as something different from this...

START_AFTER tests for Blivet and custom resizing are a good idea. Last night, I also checked that minimal and minimal-uefi can be used as base images for the tests, so no extra space would be needed to create new ones. I guess we can decide where it will run.

I also wanted to check if the tests could be created according to your suggestion, i.e. similarly to the shrink tests, but I realized the following:

  1. A partition can only be shrunk in Automatic mode. In Blivet or Custom mode, you have to manually redesign the partitions.
  2. So yeah, we can try to shrink the LVM using Automatic to see if it works. The result is that we cannot: according to the famous volume shrinker, Anne Conda, it is "Not resizeable". Now that I think about it, it makes sense, because with LVM you would have to shrink the inner volumes first to make some space and only then shrink the entire LVM partition, and without user intervention the program is unable to guess how to recalculate that (see the rough sketch of the manual steps below).
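(For illustration only - not something the test runs - the equivalent manual ordering would look roughly like this; device names and sizes are made up:)

```python
# Sketch of why the shrink order matters: filesystem, then LV, then PV, then partition.
# Purely illustrative; only ever run something like this on a throwaway disk.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["resize2fs", "/dev/fedora/root", "13G"])                      # 1. shrink the filesystem inside the LV
run(["lvreduce", "--yes", "-L", "13G", "/dev/fedora/root"])        # 2. shrink the logical volume itself
run(["pvresize", "--setphysicalvolumesize", "14G", "/dev/sda2"])   # 3. shrink the physical volume
run(["parted", "/dev/sda", "resizepart", "2", "15GiB"])            # 4. finally shrink the partition
```

How far each step can safely go depends on what is inside, which is exactly the decision the installer cannot make automatically.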

I have another ace up my sleeve that I want to try out, but so far, using Custom or Blivet to reformat partitions, reassign them, shrink one of them, add a new one, and reinstall the system seems to me to be the valid option.

So the ace up my sleeve vanished ... in Blivet, I tried to shrink the inner partitions to make some free space and then attempted to shrink the entire LV, but Blivet still does not let me do it. So I guess I will finish what I have and then we will see if something else is needed.

@vtrefny confirmed that one cannot resize the whole LVM volume group in Anaconda or Blivet. I have opened an RFE to make it possible -> https://bugzilla.redhat.com/show_bug.cgi?id=1835689

I also realized that I cannot use the minimal images, because the minimal-uefi one does not use any LVM setup. Here, we have several options:

  • The non-UEFI test can run on the desktop or minimal images (they have LVM).
  • We need another image that has some space to resize and uses LVM. Both desktop and minimal are 20 GiB in size, so that means eating that amount of disk space (which is scarce on staging).
  • To save space, we could convert minimal-uefi to use LVM, too, but I think there might be a reason (other tests, perhaps) why it does not use LVM.
  • We could shrink the desktop, minimal and minimal-uefi images to use less space and free some up for another image.

What do you think, @adamwill ?

Or, I can try to make them START_AFTER tests in the Workstation product, for instance.

Well, after today, I must admit that I am somehow still treading in the muds of Waterloo. I currently have two scripts, disk_custom_resize and disk_custom_blivet_resize, but I cannot manage to fit them into the openQA scheme. There are several ways to put them there, but I was not happy with any of them:

  1. When I put them into the universal product, they need their own base images, because otherwise there is nothing to modify. For x86_64, I am using the desktop image that is available all the time and has a size of 20 GiB, which is what I need, so that there is enough space to resize the LVM partitions and still be able to install into it. For UEFI, I manually created a desktop_uefi image and got one pass, then the image got removed (probably by a new daily run of createhdds.py). I have created another image in hdds.json, but I haven't opened a PR because we have not confirmed whether we want to go this way. Also, having two different base images requires two different test suites to run.
  2. I placed them as START_AFTER tests for Workstation, but the default installed Workstation only has 10 GB, which is terribly little. The solution would be to increase the install_default_upload disk to 20 GiB, but this would eat so much space on the drive that it is probably not possible.
  3. I created another instance of install_default_upload, named it install_default_upload_resize and let it run in the universal product, but the tests fail in HDD_1 handling: "Cannot find HDD_1 asset hdd/disk_universal_64bit.qcow2". It looks like the upload test leaves the asset behind, but the follow-up test does not find it.

I will need to check more thoroughly, but I cannot mimic that locally, so I need to wait until the whole compose testing stops in staging to see the outcomes of the testing, which takes like ages.

I guess I'd say either just do 1) and don't worry about UEFI, or do 2) - 2) may actually be fine, as we use sparse disk allocation and overlays. The file that gets uploaded may not actually be much or any larger just because the image size is 20GB, with sparse allocation. Can you try it both ways and compare the actual image file sizes after upload?
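(A quick way to compare the apparent size with what is actually allocated on disk - the file name here is just a hypothetical placeholder, not a specific asset:)

```python
# Compare the nominal size of a disk image with the space it really occupies on disk.
import os
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "some_uploaded_image.qcow2"

st = os.stat(path)
apparent = st.st_size            # size the file claims to be (e.g. the full 20 GiB)
allocated = st.st_blocks * 512   # blocks actually backed by data

print(f"apparent:  {apparent / 2**30:.2f} GiB")
print(f"allocated: {allocated / 2**30:.2f} GiB")
```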

Hum, so I merged the PR, but now that I see this ticket again, I wonder about option 1) again. Since we ditched the UEFI test, could we look at switching to that?

Do you mean this?

When I put them into the universal product, they need their own base images, because otherwise there is nothing to modify. For x86_64, I am using the desktop image that is available all the time and has a size of 20 GiB, which is what I need, so that there is enough space to resize the LVM partitions and still be able to install into it. For UEFI, I manually created a desktop_uefi image and got one pass, then the image got removed (probably by a new daily run of createhdds.py). I have created another image in hdds.json, but I haven't opened a PR because we have not confirmed whether we want to go this way. Also, having two different base images requires two different test suites to run.

So, put them into universal, run them on the desktop MBR image and do not worry about UEFI? Please confirm and I will make the adjustments.

yeah, that's what I meant. It feels nicer to have the test in universal - that's where it "belongs", for me - and we have the desktop base image for all three arches, so we can potentially run the test on all arches.

This has been fixed and merged into master.

Metadata Update from @lruzicka:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

3 years ago

Metadata
Related Pull Requests
  • #165 Merged 3 years ago