#12 Boot Counting and Success Determination
Opened 5 years ago by lorbus. Closed 5 years ago.

There is a lot of movement in this area so I'd like to bring together people working on all related things here.

GRUB: @jwrdegoede @javierm @pjones
systemd: @lennart
Atomic/CoreOS/IoT: @walters @dustymabe @jlebon @pbrobinson

Summary of current state

Proposals

  • Add a new grubenv var boot_counter (or similar), a decrement function, and logic to switch saved_entry when boot_counter reaches 0, in order to implement boot counting (see the sketch after this list).

  • Use greenboot for determining boot success (setting the boot_success grubenv var to 1 on green boot status)
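
For illustration, a minimal sketch of the grubenv state these two proposals would produce right after staging an update (the attempt count of 3 is an arbitrary example, not a settled value):

boot_counter=3
boot_success=0

boot_counter would then be decremented on each boot attempt, and greenboot would set boot_success=1 once the system comes up healthy.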

Questions

  • Can we agree on not putting the counter in the boot entry filename when using GRUB?

  • Will we be able to reuse systemd's boot-complete.target?

  • Where should the logic to switch saved_entry live?

Please add your corrections of the above as well as any thoughts and questions you have regarding the implementation.
Guidance on any of this is also highly appreciated.


GRUB recently introduced a boot_success grubenv var and an increment function for grubenv vars.

Neat!

Can we agree on not putting the counter in the boot entry filename when using GRUB?

How does resetting the counter work? E.g. with the filename approach, this is done implicitly when creating a new entry. If we keep a single variable separately, would OSTree then need to learn to reset the counter when creating a new deployment?

Would it be possible to keep the counter in the BLS entry file itself instead? Then new files implicitly start from the default count.

Will we be able to reuse systemd's boot-complete.target?

Yeah, it would be nice to standardize on it! It seems like that's mostly orthogonal to the bootloader side of the equation?

Another question via IRC:
<dustymabe> where are grubenv vars stored? and how/when can/do they get modified/accessed ?

Hi All,

Can we agree on not putting the counter in the boot entry filename when using GRUB?

I think we can not only agree on that, I think we must do that, since the GRUB filesystem drivers don't support renames AFAIK.

Will we be able to reuse systemd's boot-complete.target?

I think that is something which should be strived for. The workstation case is a bit special because we want to signal success from within the user session (we want to be sure the display output and keyboard input work, IOW we want to be sure the user can interact with the system). So we have a timer-activated systemd user unit which sets boot_success in the grubenv.
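
For reference, the grubenv update such a unit performs boils down to something like this (a sketch only; the actual Workstation implementation may go through a privileged helper, since an unprivileged user session cannot write to /boot directly):

# mark the current boot as successful, from the user session,
# once display output and keyboard input are known to work
grub2-editenv - set boot_success=1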

But as said, this is a bit special. For non-interactive systems, having some target which, when reached, signals success seems to be the way to go, and standardizing that seems like a good idea.

Where should the logic to switch saved_entry live?

grub.cfg syntax is quite bash-like, so I think you can probably do all of this with a grub.cfg snippet. Currently the grub.cfg file is generated by sh scripts living under /etc/grub.d. We are thinking about moving to doing this at build time though, since once we have BLS there will be no more need to modify grub.cfg.

where are grubenv vars stored? and how/when can/do they get modified/accessed ?

These are stored in /boot/grub2/grubenv, which on EFI systems is a symlink to /boot/efi/EFI/fedora/grubenv. This is a simple text file, but there are some special rules about its contents. It should only be modified through grub2-editenv.
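
For example, typical grub2-editenv usage looks like this ('-' selects the default grubenv location; boot_counter is the variable proposed above):

grub2-editenv - list                  # print all variables in the grubenv
grub2-editenv - set boot_counter=3    # create or update a variable
grub2-editenv - unset boot_counter    # remove a variable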

How does resetting the counter work? E.g. with the filename approach, this is done implicitly when creating a new entry. If we keep a single variable separately, would OSTree then need to learn to reset the counter when creating a new deployment?

That is a good question. I think the counter should be put in the grubenv, and it indeed needs to be reset after installing an update.

Would it be possible to keep the counter in the BLS entry file itself instead? Then new files implicitly start from the default count.

That would require 2 things:

1) Modifying the BLS spec to allow for this
2) Modifying GRUB's BLS parser to support not only reading, but also modifying and writing back the BLS file. Note that GRUB's filesystem code does not allow resizing files, so it would only be able to modify the count if it is already there (i.e. there is pre-existing space for it)

All in all I don't think this is a good idea.

Regards,

Hans

p.s.

where are grubenv vars stored? and how/when can/do they get modified/accessed ?

They can be modified from within grub during boot (e.g. the menu auto-hide support clears the boot_success flag at boot before doing anything else) and from within the running OS using grub2-editenv.

@jwrdegoede @jlebon

Where should the logic to switch saved_entry live?

grub.cfg syntax is quite bash-like, so I think you can probably do all of this with a grub.cfg snippet. Currently the grub.cfg file is generated by sh scripts living under /etc/grub.d. We are thinking about moving to doing this at build time though, since once we have BLS there will be no more need to modify grub.cfg.

Changing grub.cfg is not needed for this, though...or is it?
Doesn't grub.cfg just point to the saved_entry var from grubenv for its current entry?

I'm guessing OSTree should change the entry from userspace, along with resetting the counter when doing the rollback.
And when should the counter decrement happen? At boot, if boot_successful != 1? Or also from userspace?
I think if we do it at boot time, we could in userspace do if [boot_counter=0 && boot_successful=0] then rpm-ostree rollback
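
A minimal sketch of that userspace check (boot_success is the grubenv var name from above; the pseudocode's boot_successful is taken to mean the same thing):

#!/bin/sh
# read the counter and the success flag from grubenv
boot_counter=$(grub2-editenv - list | sed -n 's/^boot_counter=//p')
boot_success=$(grub2-editenv - list | sed -n 's/^boot_success=//p')

# all attempts used up without a successful boot: roll back
if [ "$boot_counter" = "0" ] && [ "$boot_success" = "0" ]; then
    rpm-ostree rollback
fi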

@jwrdegoede @jlebon

Where should the logic to switch saved_entry live?

grub.cfg syntax is quite bash-like, so I think you can probably do all of this with a grub.cfg snippet. Currently the grub.cfg file is generated by sh scripts living under /etc/grub.d. We are thinking about moving to doing this at build time though, since once we have BLS there will be no more need to modify grub.cfg.

Changing grub.cfg is not needed for this, though...or is it?

Even when you plan to reuse the saved_entry variable (or just set default) to fall back to the previous boot entry, you need some logic in grub.cfg to switch to the previous entry in the case of boot_successful=0.

Doesn't grub.cfg just point to the saved_entry var from grubenv for its current entry?

Yes, default is set to saved_entry. But it also depends on other variables set in grubenv and /etc/default/grub when the grub.cfg is generated. You can take a look at its logic in /etc/grub.d/00_header.
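
For reference, the generated logic looks roughly like this (a simplified sketch along the lines of the 00_header output, not a verbatim copy):

# load stored variables from grubenv, if present
if [ -s $prefix/grubenv ]; then
  load_env
fi

# honor a previously saved default entry
if [ "${saved_entry}" ]; then
  set default="${saved_entry}"
else
  set default="0"
fi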

I'm guessing OSTree should change the entry from userspace, along with resetting the counter when doing the rollback.

Currently it doesn't AFAICT, so in this case default is 0, which means that the first entry will be booted.

And when should the counter decrement happen? At boot, if boot_successful != 1? Or also from userspace?

The boot counter decrement should be done by grub before attempting to boot the current default entry; user-space should just reset it in the case of a successful boot.

I think if we do it at boot time, we could in userspace do if [boot_counter=0 && boot_successful=0] then rpm-ostree rollback

You can't do it from user-space if the default entry failed to boot (at least not without user intervention; a user could choose to do it).

Another thing to take into account is that we should probably remove the failed entry, otherwise an ostree upgrade will leave the system with a known-to-not-boot entry and a new one that wasn't tested yet. IOW, the known-to-boot entry will not be available anymore.

So we do if (boot_successful=0 && boot_counter=0) then saved_entry++ at boot time
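
In grub.cfg terms, that boot-time check could look roughly like this (a sketch only: it assumes a decrement command gets added as proposed above, and set default=1 stands in for whatever form the saved_entry switch ends up taking):

# only act while an update is on trial: counter present, no success recorded
if [ -n "${boot_counter}" -a "${boot_success}" = "0" ]; then
  if [ "${boot_counter}" = "0" ]; then
    # attempts exhausted: fall back to the previous, known-good entry
    set default=1
  else
    # use up one more attempt
    decrement boot_counter
  fi
  save_env boot_counter
fi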

Another thing to take into account is that we should probably remove the failed entry, otherwise an ostree upgrade will leave the system with a known-to-not-boot entry and a new one that wasn't tested yet. IOW, the known-to-boot entry will not be available anymore.

@wjt pointed out to me that I was wrong on this. On deploy, ostree doesn't use the last two deployments but the booted one + the new deploy, so the failed BLS won't be present in the new BLS entries directory for the latest deployment.

so the failed BLS won't be present in the new BLS entries directory for the latest deployment.

Right...I'm thinking any time ostree changes the bootloader configuration, the saved entry state should be reset and the first entry should be the default.

One thing that came out of a meeting today is that ostree should probably learn how to parse the grubenv data around boot counting, so we can render it in rpm-ostree status. It's really important to me that rpm-ostree status describes the "truth" of system state. It'd be very confusing if, e.g., when I reboot, the first deployment isn't what I boot into because its try counter reached zero after it failed.

And following up on my previous comment; is grub responsible for resetting the grubenv somehow any time the bootloader configuration changes, or does ostree need to learn how to do that?

And following up on my previous comment; is grub responsible for resetting the grubenv somehow any time the bootloader configuration changes, or does ostree need to learn how to do that?

I think ostree would need to learn how to do that, i.e. if I rpm-ostree upgrade, ostree needs to set up the grubenv appropriately before (or during) reboot. Right?
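
A sketch of what that upgrade-time setup could look like (the attempt count of 3 and the use of grub2-editenv here are assumptions, not settled design):

# arm boot counting for the newly staged deployment
grub2-editenv - set boot_counter=3
grub2-editenv - set boot_success=0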

Right, grub never changes grubenv itself unless you tell it to in grub.cfg.
E.g. for the auto-hide-menu stuff landing in F29 I've added:

set boot_success=0
save_env boot_success

which resets the boot_success variable to 0 on every boot. Changes which only need to happen on an upgrade / update are probably best done from the upgrade scripts. But you could have some logic in grub.cfg detecting that an update has happened (it is a sh-like script) and make grub do it there.

Doing it from the upgrade scripts seems better though.

I drew up a state machine diagram, with 2 boot attempts and 2 boot loader entries.
Legend:
s: boot_success
c: boot_counter
e: saved_entry

Note that the rollback edge is assumed to take place during boot in GRUB here, as opposed to in userspace by rpm-ostree (which AFAICT rewrites the zeroth entry instead of incrementing it).

Edit: Note 2: boot_counter counts down here. Are there any reasons to have a second counter counting up?

grub_boot_counter_states.png

Questions:

  • What has to happen in "reset e"? We want to move entry x (here 1) up to index 0 in the boot loader menu.

  • When attempting to increment e during rollback (in GRUB), how can it be determined how many entries there are, i.e. when to stop?

  • If we add a var in grubenv, e.g. boot_counter=0, up to what number can we increment it without needing to resize the block (which GRUB doesn't support)?

  • What are the fs limitations for grubenv? Any at all? VFAT only?

What has to happen in "reset e"? We want to move entry x (here 1) up to index 0 in the boot loader menu.

I think that point is where we want to have rpm-ostree/ostree do some logic, i.e. perform a rollback (not just boot the old deployment) and fix up the grub env vars.

When attempting to increment e during rollback (in GRUB), how can it be determined how many entries there are, i.e. when to stop?

I think we only want to consider the 0 and 1 entries. 1 is where we were at before the upgrade, so it should boot since it booted before.

If we add a var in grubenv, e.g. boot_counter=0, up to what number can we increment it without needing to resize the block (which GRUB doesn't support)?

don't know. @jwrdegoede @javierm @pjones ?

What are the fs limitations for grubenv? Any at all? VFAT only?

same: @jwrdegoede @javierm @pjones ?

@lorbus - a few questions for you:

  • can you share the source of the state machine? I might propose some edits
  • I think we might need a boot_failed var, or maybe we can derive that information from the other ones.

On 18-07-18 14:42, Dusty Mabe wrote:

If we add a var in grubenv, e.g. boot_counter=0, up to what number can we increment it without needing to resize the block (which GRUB doesn't support)?

don't know. @jwrdegoede @javierm @pjones ?

It gets stored as a text string. grubenv itself is 1024 bytes in size and typically has plenty of free space, so I guess the limit will be the size of the integer used rather than the grubenv size, although the latter could become a problem too if there are lots of variables in there.
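
For context, grubenv is a fixed-size 1024-byte block: a signature line followed by name=value lines, with the remainder padded out with '#' characters, e.g. (values illustrative):

# GRUB Environment Block
boot_success=1
boot_counter=2
####################################################

(the trailing '#' run continues up to exactly 1024 bytes)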

What are the fs limitations for grubenv? Any at all? VFAT only?

same: @jwrdegoede @javierm @pjones ?

ext2/3/4 and vfat should all be fine. I do not know about xfs. Last time I checked, GRUB's btrfs code did not support writing, but that may have changed since.

@dustymabe the diagram source is a LibreOffice Draw .odg file (attached). I tried doing it with graphviz, which would have made changes easier, but I couldn't get it to position the nodes where I wanted them, which made the diagram harder to grasp visually.
grub_boot_counter_states.odg

i think that point is where we want to have rpm-ostree/ostree do some logic: i.e. perform rollback (not just boot old deployment) and fixup the grub env vars.

Can we safely assume we reach userspace on a failed boot, though? If that is the case, we wouldn't even need to increment saved_entry, as ostree will rewrite the 0th entry on rpm-ostree rollback.
The current proposal wouldn't call rpm-ostree rollback at all and would instead call some new logic from userspace to move entry 1 to 0 after the fact.

Pros I see for this:

  • No changes needed in ostree.
  • GRUB rollback function usable on non-ostree platforms.
  • Unsuccessful entry stays in place for potential further failure analysis.

Can we safely assume we reach userspace on a failed boot, though?

I think I was talking about the case where we reach userspace after we've chosen entry 1 because entry 0 failed to boot. This is not the same as a rollback. We would still need to tell ostree to do a rollback, I believe. @jlebon would be able to weigh in.

We discussed this in a meeting today; still, for posterity:

This is not the same as a rollback. We would still need to tell ostree to do a rollback, I believe.

That's correct, we'd need to change the bootloader order to make sure entry 1 was back to being the default again.

Metadata Update from @pbrobinson:
- Issue assigned to lorbus
- Issue tagged with: GSoC

5 years ago

We came up with a simplified state diagram for this. We might be able to use boot_once instead of saved_entry:

Legend:
s: boot_success
c: boot_counter

lorbus_grub_boot_counter_states.png

Metadata Update from @lorbus:
- Issue status updated to: Closed (was: Open)

5 years ago

Hi all,

I realize this is closed now, but I think the following systemd RFE is of interest for those involved in this:
https://github.com/systemd/systemd/issues/9897
"RFE: system-boot-success target"

The purpose here is to offer a standard framework decoupling how boot success is detected from how it is signaled to the boot loader. The idea is that success leads to reaching a new "system-boot-success" target in systemd, and that once that target is reached, a boot-loader-specific oneshot service is run which communicates this to the bootloader.

Please read the above systemd RFE and add any comments you may have there.

Regards,

Hans

Hey @jwrdegoede, thanks. @lorbus, do you mind taking a look?
