#84 Define install matrix baseline
Closed: Fixed. Opened 13 years ago by jlaska.

= problem =

  • We almost missed [http://bugzilla.redhat.com/578633 RHBZ#578633 - Unable to enter passphrase to unlock encrypted disk partitions]
  • When re-using test results from a previous test run, it is possible to carry forward results for tests that should have been re-tested, allowing a regression to slip into the release.

= analysis =

  • F-13 Beta candidate#3 didn't include the correct version of ''plymouth'' (see RHBZ #578633).
  • Since it was Beta#3 and not much was supposed to change from Beta#2, some test results from the previous Beta#2 candidate were carried forward.
  • Thankfully, QA found the problem before release while running the QA:Testcase_Anaconda_autopart_(encrypted)_install test.

= enhancement recommendation =


This was the scariest part of the F13 validation tests for me. :o I often hesitated over whether certain test results should be carried forward or not. Here are some draft thoughts from me:

    1. Release engineering (or the anaconda team) describes the detailed changes between the two candidates in anaconda and in install/reboot-related packages such as plymouth (we can also diff the package changes ourselves). Only results for tests unaffected by those changes can be carried forward.
    1. A pass result from the previous candidate can be carried forward, but not if the same test failed in an earlier candidate for this release.
    1. In each test area, at least one test must be re-run rather than having its result carried forward.
    1. Tests grouped by the five media types cannot be carried forward. (Or should they be?)
    1. Certain "particular" cases, such as shrink and encrypted installs, should never be carried forward.
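As a sketch, the per-test rules above (rules 1, 2, 4 and 5; rule 3 is a per-area quota rather than a per-test check) could be expressed as a small decision function. All names and data structures here are hypothetical illustrations, not an existing QA tool:

```python
# Hypothetical sketch of the carry-forward rules above; not an existing QA tool.

def can_carry_forward(test, changed_packages, history):
    """Decide whether a test's previous result may be reused for a new candidate.

    test: dict with 'packages' (set of package names the test exercises),
          'media_grouped' (bool, rule 4) and 'particular' (bool, rule 5)
    changed_packages: set of package names that differ between the candidates
    history: past results for this test, newest first, e.g. ["pass", "fail"]
    """
    # Rule 1: only tests unaffected by package changes may be moved.
    if test["packages"] & changed_packages:
        return False
    # Rule 2: only a pass may be moved, and not if it failed in any earlier candidate.
    if not history or history[0] != "pass" or "fail" in history[1:]:
        return False
    # Rule 4: tests grouped by the five media types are always re-run.
    if test["media_grouped"]:
        return False
    # Rule 5: "particular" cases (shrink, encrypted, ...) are always re-run.
    if test["particular"]:
        return False
    return True
```

Under these rules, the encrypted-autopart test that caught RHBZ#578633 would be marked "particular" and would list plymouth among its packages, so its Beta#2 result could never have been carried to Beta#3.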

These are just some thoughts on my mind now. Feel free to correct and add to them. I also think this part should be added to [https://fedoraproject.org/wiki/QA/SOP_Release_Validation_Test_Event SOP_Release_Validation_Test_Event].

While we weren't able to actively pursue this ticket during Fedora 14, some late discussion between rhe and robatino helped identify the core set of tests that must be re-run on any respin.

To summarize, I believe we want to re-run all of the boot-media sections (boot.iso, DVD, CD, Live and PXE), and a '''select''' group of tests from the ''General Tests'' section. Which ''General Tests'' to re-run is a decision left to the installation test leads and should be based on the changes introduced since the previous release candidate. The ''repodiff'' command can be useful for listing the changes between two release candidates.
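The kind of comparison ''repodiff'' performs can be sketched as follows; the package names and versions below are invented for illustration and are not real compose contents:

```python
# Sketch of a repodiff-style comparison: given {name: version-release} maps
# for two release candidates, report added, removed, and updated packages.
# The package data here is invented for illustration.

def compose_diff(old, new):
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    updated = sorted(name for name in set(old) & set(new) if old[name] != new[name])
    return added, removed, updated

beta2 = {"anaconda": "13.42-1", "plymouth": "0.8.1-1", "kernel": "2.6.33-1"}
beta3 = {"anaconda": "13.42-1", "plymouth": "0.8.2-1", "kernel": "2.6.33-1"}

added, removed, updated = compose_diff(beta2, beta3)
print(updated)  # -> ['plymouth']: tests exercising plymouth must be re-run
```

The `updated` list is what the test leads would scan when deciding which ''General Tests'' results may be carried forward and which must be re-run.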

rhe + robatino - does that correctly summarize our findings? If so, what's the best way to resolve this ticket? Would it make sense to update the ''Risks and Contingencies'' section (see the [https://fedoraproject.org/wiki/QA:Fedora_14_Install_Test_Plan#Risks_and_Contingencies F14 install plan]) of the Fedora 15 installation plan with the appropriate wording?

Replying to [comment:2 jlaska]:

> While we weren't able to actively pursue this ticket during Fedora 14, some late discussion between rhe and robatino helped identify the core set of tests that must be re-run on any respin.

> To summarize, I believe we want to re-run all of the boot-media (boot.iso, DVD, CD, Live and PXE) sections,

Agree.

> and a '''select''' group of tests from the ''General Tests''. The choice of tests from ''General Tests'' is a decision left to the installation test leads and should be based on the changes introduced since the previous release candidate. The command ''repodiff'' can be useful for listing changes between two release candidates.

Yeah, I think the most important thing is to measure the differences between the two candidates, especially the changes in anaconda.

> rhe + robatino - does that correctly summarize our findings? If so, what's the best way to resolve this ticket? Would it make sense to update the ''Risks and Contingencies'' (see [https://fedoraproject.org/wiki/QA:Fedora_14_Install_Test_Plan#Risks_and_Contingencies F14 install plan]) section of the Fedora 15 installation plan with the appropriate wording?

''Risks and Contingencies'' briefly describes this situation, but I can rewrite it with more detail. Also, do you think it makes sense to mark the tests whose results were moved from before?

Replying to [comment:3 rhe]:

> Also, do you think it makes sense to mark the tests whose results were moved from before?

Sure, that seems like useful information to provide. Is there a way we can carry forward the results from a previous test run, without influencing the execution metrics? Meaning, we wouldn't want to double count a contribution. Perhaps some notation where we use ''{{result}}'' but remove the username. For example, see https://fedoraproject.org/wiki/User:Jlaska/Draft where I try to demonstrate a previous test result, and new results from users.
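On the results page, that notation might look something like the following (a sketch only; the exact parameters of the ''{{result}}'' template are an assumption based on Fedora QA wiki conventions, not copied from the linked draft):

```
A previous result carried forward (no username): {{result|pass}}
New results from testers on this candidate:      {{result|pass|rhe}} {{result|fail|robatino}}
```

Because the carried-forward entry has no username, it is visibly distinct and would not be attributed to any tester when counting contributions.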

Replying to [comment:4 jlaska]:

> Replying to [comment:3 rhe]:

> > Also do you think it makes sense to mark the tests whose results were moved from before?

> Sure, that seems like useful information to provide. Is there a way we can carry forward the results from a previous test run, without influencing the execution metrics? Meaning, we wouldn't want to double count a contribution. Perhaps some notation where we use ''{{result}}'' but remove the username. For example, see https://fedoraproject.org/wiki/User:Jlaska/Draft where I try to demonstrate a previous test result, and new results from users.

Really nice idea. I can't think of any drawback to this method so far, as long as the testers know about it and add their username each time they contribute their own results. It can be explained in the Key section and in the announcement at the beginning.

Just doing some cleanup: I'd like to close this ticket, since carried-forward results can now be identified by [https://fedoraproject.org/wiki/QA:Fedora_15_Install_Results_Template#Key their format]. Feel free to add comments on it.
