As discussed on the mailing list.
Metadata Update from @pboy: - Issue tagged with: pending activity
Metadata Update from @pboy: - Issue tagged with: meeting
Metadata Update from @pboy: - Issue untagged with: meeting
So, I think this was in response to https://lists.fedoraproject.org/archives/list/server@lists.fedoraproject.org/message/7APWI3EDFHJL6ALA7EGNDFJNMPWP7YG3/, right?
Looking at the specific issues there - SBC support was well-discussed, but ultimately we're kinda at the 'mercy' of the ARM maintainers there. They decide which platforms are 'supported', and realistically, they're the ones who have to stand behind that, as outside of that group we don't really have the hardware and expertise to fix problems.
In QA we have a kinda random assortment of SBCs from over the years, but testing them is frankly a pain, and they often go out of support pretty fast. Realistically, most cycles we can only do systematic testing on the Raspberry Pi.
On Cockpit, we absolutely have space to extend what we consider 'release blocking' in terms of specific Cockpit functionality, and extend the test case and openQA testing. Currently it only really does basic checks of the Logs, Services and Updates features (it dates back to a time when Cockpit was much simpler). Upstream does have extremely good tests, but these won't catch cases where something downstream-specific like the Server DVD soft dependency problem breaks stuff.
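Just to give a concrete flavour of the kind of downstream smoke check I mean (separate from the actual openQA test modules, which are written in Perl), here is a minimal sketch. Port 9090 and the `cockpit.socket` unit are Cockpit's defaults; the script itself is hypothetical and not part of the current test suite:

```python
#!/usr/bin/env python3
# Hypothetical downstream smoke test: verify the Cockpit web service is up.
# This is a sketch only, not part of the existing openQA tests.
import subprocess
import ssl
import urllib.request


def cockpit_socket_active() -> bool:
    # cockpit.socket is the systemd unit that activates the web service on demand
    result = subprocess.run(["systemctl", "is-active", "--quiet", "cockpit.socket"])
    return result.returncode == 0


def cockpit_responds(url: str = "https://localhost:9090/") -> bool:
    # Cockpit ships a self-signed certificate by default, so skip verification here
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    try:
        with urllib.request.urlopen(url, context=context, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False


if __name__ == "__main__":
    assert cockpit_socket_active(), "cockpit.socket is not active"
    assert cockpit_responds(), "Cockpit web UI did not respond on port 9090"
    print("basic Cockpit smoke check passed")
```

Checks like this would only cover "is the service up at all"; the more interesting downstream cases (like the Server DVD soft dependency problem) need the richer openQA screen-driven tests.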
I don't recall what the systemd-resolved issues were, but "elaborate use of KVM virtualization on a server" does sound kinda like it might be out of scope for release validation at least. We can't test/catch/block on everything - there does have to be some kinda cutoff.
To go over what we test currently: you can see the release validation tests at https://fedoraproject.org/wiki/Test_Results:Current_Server_Test (also the Server columns at https://fedoraproject.org/wiki/Test_Results:Current_Base_Test ). Those tests are pretty much 100% automated at this point, and run on every compose (in virtualization, none of this is bare metal testing). We run most of the same tests on every update to a package in the critical-path-server group - the Cockpit, FreeIPA, postgresql, Samba AD tests, and most of the 'base' tests.
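For anyone who wants to poke at those results programmatically rather than through the wiki, the openQA instance also exposes job results over its REST API. A minimal sketch, assuming the public instance at https://openqa.fedoraproject.org; the `/api/v1/jobs` endpoint is standard openQA, but the query values below (distri/flavor/latest) are assumptions you may need to adjust against the real instance:

```python
#!/usr/bin/env python3
# Sketch: list recent Fedora Server openQA job results via the public REST API.
# The query values (distri, flavor) are assumptions; check the instance for real ones.
import json
import urllib.parse
import urllib.request

BASE = "https://openqa.fedoraproject.org/api/v1/jobs"


def fetch_jobs(**params) -> list:
    query = urllib.parse.urlencode(params)
    with urllib.request.urlopen(f"{BASE}?{query}", timeout=30) as resp:
        return json.load(resp)["jobs"]


if __name__ == "__main__":
    # latest=1 asks openQA for only the most recent job per test scenario
    jobs = fetch_jobs(distri="fedora", flavor="Server-dvd-iso", latest=1)
    for job in jobs:
        print(f'{job["result"]:>10}  {job["test"]}')
```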
The automated tests follow what is described in the wiki tests pretty closely: if something is mentioned in the wiki tests, you can assume the automated tests cover it; if something isn't mentioned in the wiki tests, you can't assume the automated tests are covering it.