#164 Automate User Switching test case
Opened a year ago by kparal. Modified a year ago

We have a freshly minted user switching criterion. This would be a great candidate for automation. We don't have a manual test case yet, but I requested one in https://pagure.io/fedora-qa/issue/630 .


It is actually mentioned in the desktop login test case - we could probably just expand on that a bit, rather than creating a new test case.

It is also already automated in the openQA version of that test, so I don't think there's anything to do here. @lruzicka wdyt?

> It is actually mentioned in the desktop login test case - we could probably just expand on that a bit, rather than creating a new test case.

It's better to have it separated, both in test cases and automation, I believe.

> It is also automated in the openQA version of that test already, so I don't think there's anything to do here. @lruzicka wdyt?

Once the test case is written, the code should reflect the test case steps, so that we can reliably mark it as passing. So I'd leave the question of whether the code needs adjustments until after the new test case is ready. But what can be done right now is to remove this:

    if ($desktop eq "gnome") {
        # Because KDE at the moment (20200403) is very unreliable concerning switching the users inside
        # the virtual machine, we will skip this part, until situation is better. Switching users will
        # be only tested in Gnome.

We now need to test it on KDE as well.
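To sketch the intended effect (the subroutine and variable names here are placeholders, not the actual desktop_login.pm code), removing the guard means the switching steps simply run unconditionally for every desktop:

```perl
# Illustrative only: with the GNOME-only guard removed, the switching
# steps run regardless of which desktop is under test.
# switch_user(), login_user() and check_desktop() are hypothetical
# helper names, and @test_users is a hypothetical user list.
for my $user (@test_users) {
    switch_user();      # now exercised on both GNOME and KDE
    login_user($user);
    check_desktop();
}
```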

> It's better to have it separated, both in test cases and automation, I believe.

I have always thought that we automate things described in test cases, per "no written test case, no wiki entry". And most of the openQA results end up in the wiki anyway.

> It is also already automated in the openQA version of that test, so I don't think there's anything to do here. @lruzicka wdyt?

Well, I tried to cover everything that I considered important for such an automation. I guess someone else might have other requirements, too.

> Once the test case is written, the code should reflect the test case steps, so that we can reliably mark it as passing.

What I am interested in is who is going to write the test case and how long it will take to create. The wording "once the test case is written" is quite vague here. So, should I read that line as "Kamil is working on it" or as "we are looking for a volunteer to do it"? If the latter, I might apply.

> ... remove this:
>
>     if ($desktop eq "gnome") {
>         # Because KDE at the moment (20200403) is very unreliable concerning switching the users inside
>         # the virtual machine, we will skip this part, until situation is better. Switching users will
>         # be only tested in Gnome.

With big pleasure.

I do not think that user switching needs a standalone openQA test. What user switching means is: log in as a user, hit user switching, and log in as a different user. I agree that we could perhaps do it more thoroughly, but while desktop_login could work without user_switching, the reverse is not true. For user switching, we still need lots of desktop logins, which would basically mean duplicating the content of desktop_login in the user_switching script.

> What I am interested in is who is going to write the test case and how long it will take to create. The wording "once the test case is written" is quite vague here. So, should I read that line as "Kamil is working on it" or as "we are looking for a volunteer to do it"? If the latter, I might apply.

See https://pagure.io/fedora-qa/issue/630 . We're looking for a volunteer.

> I do not think that user switching needs a standalone openQA test.

From the standard test methodology standpoint, it's better to have test cases separated. The reason is that when one of the test scenarios fails, you can still see results for the other scenarios. If you bundle several test scenarios together into a single automated test case, you lose the ability to distinguish failures, and one failure will prevent you from detecting other failures (until the first one is fixed).

Of course, in the real world™, very similar or closely related scenarios are bundled together, either for performance reasons or just because it's easier to write and manage (I'm not talking just about openQA here, but also about standard unit/functional test suites). So in the end, it's up to you how to implement it; you're the ones writing it. I'm just explaining why it's better to have them separated, at least on a theoretical level.

In terms of wiki test cases, I'd rather see them separated even in practice, because it allows us to a) test them in parallel, b) report and view failures separately, and c) keep test cases short and simple instead of overly long and complex beasts (people avoid those, myself included).

> See https://pagure.io/fedora-qa/issue/630 . We're looking for a volunteer.

I have seen that task and already taken it.

> From the standard test methodology standpoint, it's better to have test cases separated. The reason is that when one of the test scenarios fails, you can still see results for the other scenarios. If you bundle several test scenarios together into a single automated test case, you lose the ability to distinguish failures, and one failure will prevent you from detecting other failures (until the first one is fixed).

Point taken.

> In terms of wiki test cases, I'd rather see them separated even in practice, because it allows us to a) test them in parallel, b) report and view failures separately, and c) keep test cases short and simple instead of overly long and complex beasts (people avoid those, myself included).

Yeah, here I agree without further objections. I will keep the test cases separated in the wiki.

BTW, I have removed the KDE lock and currently I am testing how that goes without it.

> From the standard test methodology standpoint, it's better to have test cases separated. The reason is that when one of the test scenarios fails, you can still see results for the other scenarios. If you bundle several test scenarios together into a single automated test case, you lose the ability to distinguish failures, and one failure will prevent you from detecting other failures (until the first one is fixed).

We do have some flexibility here with openQA thanks to the "test modules" concept. A good example is the somewhat-misleadingly-named realmd_join_cockpit test, which actually tests FreeIPA enrolment using Cockpit... and also the FreeIPA web UI and FreeIPA password changes. This is for a similar reason to the one @lruzicka mentioned: splitting these all into separate tests would be pretty inefficient and would involve replicating a lot of work between them. So instead we have them as three separate test 'modules' within the one test: realmd_join_cockpit, freeipa_webui and freeipa_password_change.

openQA gives you useful choices for each test module. A test module can be fatal, which means if it fails, the test is immediately abandoned (no further modules are run) and the overall result is fail. A test module can also be marked ignore_failure, which means if it fails, the rest of the test is still run, and the result of that module is ignored in the overall result calculation (so if all non-ignore_failure modules pass, the test as a whole is considered to have passed). It can also be neither fatal nor ignore_failure, which means if it fails, the rest of the test is still run, but the overall test result will be fail. Also, when you query a completed openQA job via the API, you get the result of each module as well as the overall result, so you can base stuff off those results directly.
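As a sketch of how a module declares these flags (the package and base-class names here are illustrative assumptions, not the actual Fedora test code), an os-autoinst test module returns them from `test_flags`:

```perl
# Sketch of an os-autoinst test module using the flags described above.
# The package name is hypothetical; "basetest" is the generic
# os-autoinst base class.
package freeipa_webui_example;
use base "basetest";
use strict;
use testapi;

sub run {
    my $self = shift;
    # ... the actual test steps would go here ...
}

sub test_flags {
    # ignore_failure => 1: if this module fails, the remaining modules
    # still run and the failure does not affect the overall job result.
    # A fatal module would instead return {fatal => 1}, aborting the
    # job on failure.
    return {ignore_failure => 1};
}

1;
```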

So with realmd_join_cockpit we have the freeipa_webui and freeipa_password_change tests marked as ignore_failure, so even if they both fail, the overall test result is pass - but our wiki result forwarder can actually be configured to consider those individual module results. For reporting the QA:Testcase_FreeIPA_web_ui and QA:Testcase_FreeIPA_password_change results to the wiki, fedora_openqa will not report a pass unless the relevant test module result is 'pass'.

We could conceivably use a similar setup here, if we think it's more efficient than splitting off user switching into a separate openQA test. We don't have to, but it's an option.

If we do decide to go with a separate openQA test, of course we should move the relevant subroutines out of desktop_login.pm and into utils.pm or another shared module so both tests can use them.
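A minimal sketch of that refactor (the helper names are placeholders; only the shared-module pattern itself is the point):

```perl
# lib/utils.pm -- shared helpers exported so that both desktop_login
# and a separate user switching test could reuse them.
# login_user() and switch_user() are hypothetical names, not the
# actual subroutines in desktop_login.pm.
package utils;
use strict;
use base "Exporter";
our @EXPORT = qw(login_user switch_user);

sub login_user {
    # ... shared login steps ...
}

sub switch_user {
    # ... shared user switching steps ...
}

1;
```

Each test would then just `use utils;` and call the shared subroutines.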

Interesting, thanks, Adam, for the explanation.
