#89 Software RAID on a UEFI system using the GPT partitioning scheme
Opened 2 years ago by pboy. Modified 2 years ago

Since we want to make GPT the default partitioning scheme, and given all the problems with GPT, BiosBoot, and software RAID, we want to ensure that software RAID works on UEFI systems, too.

Testcase 1 (formerly the only test case)

  1. Provide a UEFI boot system with at least 2 connected hard disks
  2. Delete any existing partition table on all hard disks by overwriting the first few MiB of each disk (run once per disk)
    [...]# dd if=/dev/zero of=/dev/sdX bs=1M count=10 status=progress
  3. Boot the installation disk
  4. In the summary screen select "Installation Destination", tick all (at least 2) hard disks, and select custom storage configuration. Select "DONE"
  5. The partitioning form opens.
  6. Tick the "+" sign. A form "Add a new Mount Point" opens
    (a) select /boot/efi as the mount point
    (b) enter 600 MiB as size
    (c) Tick "Add Mount Point"
    A new mount point is created on sda1 as a standard partition. It is displayed on the left side below "New Fedora installation" and SYSTEM
  7. The "Device Type" is shown as "Standard Partition"
    (a) Modify to RAID and select an appropriate RAID level (e.g. 1) in the box that appears to its right
    (b) check that "File System" is still set to EFI System Partition
    (c) Select "update settings"
    The device in the left column changes from sda1 to "boot_efi" (below "New Fedora x installation")
  8. Use the "+" sign to add another Mount Point
    (a) select /boot as the mount point
    (b) enter 1 GiB as size
    (c) Tick "Add Mount Point"
    A new mount point is created on sda2
  9. The "Device Type" is shown as "Standard Partition"
    (a) Modify to RAID and select appropriate RAID level (e.g. 1)
    (b) update settings
    The device in the left column changes from sda2 to "boot" (below "New Fedora x installation")
  10. Use the "+" sign to add another Mount Point
    (a) select / as the mount point
    (b) enter 15 GiB as size (the same as a default installation would choose)
    (c) Tick "Add Mount Point"
    You find a new mount point of device type LVM, a VG fedora_fedora of exactly the size you entered above
  11. Modify Volume Group
    (a) Select Raid level RAID1 (or according to your number of disks)
    (b) leave size policy on "Automatic"
    (c) Select Save to update the choices
    On the left side you find a device "fedora_fedora-root"
  12. Select DONE
    (a) You get a list of upcoming modifications, which looks as intended
    (b) Accept changes
  13. You get back to the summary screen.
    Complete all the other required configurations, e.g. user accounts.
  14. Installation begins and completes without any issue.
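
Once the installation has completed, and before the reboot tests below, the resulting layout can be checked from the installed system with something like the following (a sketch; the md array names are assumptions derived from the mount point names chosen above):

    # All three mount points should sit on RAID1 md arrays
    cat /proc/mdstat
    lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINTS
    # Details for one array, e.g. the /boot/efi mirror
    mdadm --detail /dev/md/boot_efi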

Test 1-1
Reboot the system.
Expected result: The system starts without problems

Test 1-2
Detach each disk one at a time and reboot (simulating a single-disk failure).
Expected result: The system nevertheless starts without issues.
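
Note that after re-attaching a disk the arrays do not necessarily resync on their own. A degraded array can be inspected and the returning member re-added roughly like this (a sketch; the md device and partition names are assumptions):

    # Degraded arrays show [U_] in their status line
    cat /proc/mdstat
    mdadm --detail /dev/md127
    # Re-add the member from the re-attached disk
    mdadm --manage /dev/md127 --add /dev/sdb2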

Testcase 2

  1. Provide a UEFI boot system with at least 2 connected hard disks
  2. Delete any existing partition table on all hard disks by overwriting the first few MiB of each disk (run once per disk)
    [...]# dd if=/dev/zero of=/dev/sdX bs=1M count=10 status=progress
  3. Boot the installation disk
  4. In the summary screen select "Installation Destination", tick all (at least 2) hard disks, and select "Advanced Custom (Blivet-GUI)" storage configuration. Select "DONE"
  5. The Blivet GUI partitioning form opens.
  6. Tick the "+" sign. A partition form opens
    (a) keep sda active
    (b) keep Device Type "Partition"
    (c) Change size to 600 MiB
    (d) Change "Filesystem" to "EFI System Partition"
    (e) Change Label to "efi"
    (f) Change Mountpoint to "/boot/efi"
    (g) Click OK
    A new partition is created on sda1 as standard partition.
  7. Select the other disk (sdb) and
    (a) Fill the entry fields the same way as previously
    (b) On OK an error message shows (mount point already set)
    (c) Leave the mount point field empty
    (d) On OK a partition of Format "efi" is created on sdb as well.
  8. Select sda again and click into the free space area
    (a) Click on the "+" sign
    (b) Select Device Type "Software Raid"
    (c) In the updated form select sda & sdb / raid1
    (d) Set the size to 1 GiB for sda and click into the "Label" field; the size for sdb is adjusted automatically
    (e) Enter boot into field "Label"
    (f) Enter Boot into field "Name"
    (g) Set mountpoint to /boot
    On OK the partitions are created along with a RAID device "boot"
  9. Click into free space area on sda
    (a) select Device Type RAID
    (b) Tick sda & sdb & select raid1
    (c) Leave size as is (the maximum)
    (d) Filesystem "physical volume (LVM)"
    (e) Name "syspv"
    On OK a new RAID array device syspv is created
  10. Click into syspv and select "+" sign
    (a) Leave Device Type at "LVM2 Volume Group" and everything else as it is, except
    (b) Name "sysvg"
    On OK a new LVM device sysvg is created.
  11. Select sysvg and click onto the "+" sign
    (a) Leave Device Type "LVM2 Logical Volume"
    (b) Set size to 15 GiB, the Server default for root
    (c) Label "root"
    (d) Name "root"
    (e) Mountpoint "/"
    "OK" creates a logical volume sysvg-root
  12. A click on Done gives you a warning: "boot loader stage2 device boot is on a multi-disk array, but boot loader stage1 device sda1 is not. A drive failure in boot could render the system unbootable"
  13. Click "Close" and "Done" again.
  14. Accept the Summary of changes (looks good)
  15. Fill out the remaining required configuration options (esp. user) and start installation.
  16. Installation completes without any further warning or error message.
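
For reference, roughly the layout these Blivet-GUI steps produce could also be created by hand. The following is an untested sketch (device names, sizes, and file system choices are assumptions; it also shares the unmirrored-ESP problem discussed in the comments below):

    # GPT partitions on the first disk: ESP, /boot RAID member, LVM RAID member
    sgdisk -n 1:0:+600M -t 1:ef00 /dev/sda
    sgdisk -n 2:0:+1G   -t 2:fd00 /dev/sda
    sgdisk -n 3:0:0     -t 3:fd00 /dev/sda
    # Clone the table to the second disk and randomize its GUIDs
    sgdisk -R /dev/sdb /dev/sda && sgdisk -G /dev/sdb

    # RAID1 arrays for /boot and for the LVM physical volume
    mdadm --create /dev/md/boot  --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mdadm --create /dev/md/syspv --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

    # LVM stack on the second array, 15 GiB root as in step 11
    pvcreate /dev/md/syspv
    vgcreate sysvg /dev/md/syspv
    lvcreate -L 15G -n root sysvg

    # File systems; note the EFI partitions are formatted individually, not mirrored
    mkfs.fat -F 32 /dev/sda1
    mkfs.fat -F 32 /dev/sdb1
    mkfs.ext4 -L boot /dev/md/boot
    mkfs.xfs  -L root /dev/sysvg/root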

Test 2-1
Reboot the system, system boots without warning.
Expected result: The system starts without problems

Test 2-2
Detach disk sdb (the one with the EFI partition without a mount point).
Result: the system starts from sda.

Test 2-3
Detach disk sda (the one with the EFI partition with a mount point).
Result: the system is not bootable ("No Bootable Device Detected! Please reboot and check").

  • Expected (required) result for all tests: The system nevertheless starts without issues.
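
A frequently used manual workaround for this failure mode (not something the installer sets up) is to keep the second ESP in sync by hand and to register it with the firmware, roughly like this (a sketch; device names and the shim path are assumptions):

    # Mirror the contents of the active ESP onto the unused one on sdb
    mount /dev/sdb1 /mnt
    rsync -a --delete /boot/efi/ /mnt/
    umount /mnt
    # Add a firmware boot entry pointing at sdb's ESP
    efibootmgr -c -d /dev/sdb -p 1 -L "Fedora (disk 2)" -l '\EFI\fedora\shimx64.efi'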

Testcase 3

  1. Provide a UEFI boot system with at least 2 connected hard disks
  2. Delete any existing partition table on all hard disks by overwriting the first few MiB of each disk (run once per disk)
    [...]# dd if=/dev/zero of=/dev/sdX bs=1M count=10 status=progress
  3. Boot the installation disk
  4. Select "Installation Destination", tick all (at least 2) HDs, and custom storage configuration. Select "DONE"
  5. The partitioning form opens. Tick the "+" sign and a form "Add a new Mount Point" opens
    1. select /boot/efi as the mount point
    2. enter 600 MiB as size
    3. Tick "Add Mount Point"
      A new mount point is created on sda1
  6. Ensure Device Type is "Standard Partition" and that File System is automatically set to "EFI System Partition"
  7. Tick the "+" sign again and a new form "Add a new Mount Point" opens
    1. select /boot as the mount point
    2. enter 1 GiB as size
    3. Tick "Add Mount Point"
    A new mount point is created on sda2
  8. The "Device Type" is shown as "Standard Partition"
    1. Modify to RAID and select appropriate RAID level (e.g. 1)
    2. update settings
      The device in the left column changes from sda2 to "boot" (below "New Fedora x installation")
  9. Use the "+" sign to add another Mount Point
    1. select / as the mount point
    2. enter 15 GiB as size (the same as a default installation would choose)
    3. Tick "Add Mount Point"
      You find a new mount point of device type LVM, a VG fedora_fedora of exactly the size you entered above
  10. Modify Volume Group
    1. Select Raid level RAID1 (or according to your number of disks)
    2. leave size policy on "Automatic"
    3. update the choices
      On the left side you find a device "fedora_fedora-root"
  11. select DONE
    1. You get a message "boot loader stage2 device boot is on a multi-disk array, but boot loader stage1 device sda1 is not. A drive failure in boot could render the system unbootable"
    2. Close gets you back to configuration screen
    3. Click DONE again
    4. Accept the Summary of changes, everything is as you configured.
  12. select DONE again, you get back to the summary screen.
  13. Complete remaining required configuration steps as usual (esp. user creation).
  14. Begin installation
  15. Installation completes without any additional warnings.

Test 3-1
Reboot the system, system boots without warning.
Expected result: The system starts without problems
* Disk sda includes 3 partitions (/boot/efi; /boot; LVM), sdb 2 partitions (/boot; LVM), as configured.

Test 3-2
Detach sdb (the disk with just 2 partitions).
The system boots without complaints, but needs considerably longer to complete booting.

Test 3-3
Detach sda (the disk with 3 partitions, incl. the EFI partition).
Message: "No Bootable Device Detected"

  • Expected (required) result for all tests: The system nevertheless starts without issues.
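
The asymmetry that the stage1/stage2 warning in step 11 refers to can be made visible on the installed system, e.g. (a sketch; the exact output depends on the machine):

    # The ESP (stage1) lives on one disk only ...
    lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINTS /dev/sda /dev/sdb
    # ... while /boot (stage2) is an md array spanning both disks
    findmnt /boot
    cat /proc/mdstat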

Metadata Update from @pboy:
- Issue tagged with: in progress

2 years ago

Rawhide 2022-06-21
Both tests passed successfully

Fedora 36
Both tests passed successfully

Step 2 is unnecessary; the installer properly wipes prior signatures from all selected disks.

./blivet/formats/__init__.py:571: rc = run_program(["wipefs", "-f", "-a", self.device])
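
Should one still want to pre-clean the disks by hand, the equivalent of what blivet runs would be (destructive; run once per disk):

    [...]# wipefs --force --all /dev/sdX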

Step 7 is not an acceptable way of making an EFI System Partition, per the upstream mdadm developers. First, it makes the partitions' type mdadm RAID member, not EFI System: using the EFI System partition type GUID for them types them wrongly, whereas using the Linux RAID partition type GUID means some firmware could (properly) ignore them. Second, the EFI System Partition can legitimately be written to by the firmware and by EFI programs, and in that case the file systems on the two mirror halves go out of sync; since those writes happen outside of md RAID, it is completely ambiguous which blocks are correct. Doing an md scrub check will show the two partitions are out of sync (check mismatches).
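
The scrub check mentioned above can be reproduced along these lines (a sketch; the md device name is an assumption):

    # Trigger a check pass on the array holding the ESP mirror
    echo check > /sys/block/md127/md/sync_action
    # When it finishes, a non-zero counter means the mirror halves have diverged
    cat /sys/block/md127/md/mismatch_cnt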

It's technically untenable as well as unsupported upstream. The option to create such a layout should be removed from the installer, because it sets the wrong expectation that this is a valid layout. It is actually fragile and won't consistently work as users expect.

I have extended the test cases and gone through all the available variations.

According to the warnings Anaconda issues, Anaconda explicitly expects that if /boot is on a RAID device, the EFI partition should be on a RAID device as well.

Didn't the Anaconda developers read the specs? Rather unlikely.

So the question is, how do we want to proceed?

Metadata Update from @pboy:
- Issue tagged with: meeting

2 years ago

Metadata Update from @pboy:
- Issue untagged with: meeting

2 years ago

Issue tagged with: in progress

2 years ago

