#483 Help design a new release validation testing system
Closed: Fixed 2 years ago by duckution. Opened 7 years ago by adamwill.

  • What's your deadline (could be date, could be Fedora release milestone)

No strict deadline.

  • Who's the developer writing the code (IRC nick + email + wiki profile page URL)

For now, me (adamw / adamwill / adamwill@fp.o)

  • If you can, please provide us with example URLs of web designs that are similar to the result you're looking for

Well, the thing I wanna replace is:
https://fedoraproject.org/wiki/Test_Results:Current_Summary
There's nothing precisely similar to what I want instead (which is kinda why we're thinking of making a new thing), but in the same vein, there's Ubuntu's QAtracker:
http://iso.qa.ubuntu.com/
and Moztrap, which we were looking at using for a while:
https://moztrap.mozilla.org/

  • What type of web project is this?

A system for reporting and viewing Fedora release validation testing results.

  • Wireframes or mockups for a website / web application

Will attach my SUPER AWESOME literally-on-the-back-of-an-envelope sketch.

  • Is this for a new or existing site? (if existing, provide URL)
    Do you need CSS/HTML for the design?

This would be an entirely new webapp.

  • Provide a link to the application project page or github page

Don't have one yet.

  • Provide a link to the theming documentation if available

  • Provide a link to the deployment to be themed, if available

Ditto.

  • Set up a test server and provide connection/login information

App doesn't exist yet. :)

So for a long time we (Fedora QA) have been using the wiki for storing validation test results. There is a hilariously complex mess of stuff - clever wiki templates, python-wikitcms, and the relval fedmsg consumer - all conspiring to produce all the wiki validation pages for new Fedora composes when appropriate. We then ask the squishy humans who actually do (some of) the testing either to edit the wiki pages directly or to use the relval report-results command (basically a crappy TUI which knows how to edit the wiki) to report their results.

We don't like this for one really important reason and a few less important ones. The really important reason is that it's a terrible interface for humans to report test results: needlessly hard to understand and easy to get wrong (wiki syntax is awful). The less important reasons are that it needs an awful lot of complicated (and just plain dumb) code to keep it all working, and that it's a really stupid way to store results, which makes pretty much any kind of analysis of those results more work than it ought to be.

So we'd quite like to come up with a completely new way to do release validation testing. We've gone through several versions of this plan in the past and none has quite worked out. The current idea is to write a new webapp from scratch which would be tuned to the release validation workflow and would store the results in ResultsDB (which will make it easy to consolidate them with results from automated test systems in future).

My current very rough idea for approximately how this could look is in the image I'm gonna attach, if you can read it. It's basically somewhat similar to how the wiki pages look, but smarter.

The basic flow would be that you'd pick a deliverable and report results for that deliverable. The 'pick a deliverable' stuff would happen at the top of the page: my first thought is to have two lists (drop-downs?) side by side, one for 'arch' and one for the deliverables; picking an arch would cause the deliverable list to only show deliverables for that arch. In the arch list, release-blocking arches would have clear prominence over non-blocking arches, and similarly in the deliverable list, release-blocking deliverables would have clear prominence over non-blocking ones.
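Just to make that bit concrete, here's a very rough Python sketch of the 'pick an arch, filter the deliverables, blocking stuff first' logic - the class and field names are made up for illustration, nothing here is decided:

```python
# Illustrative sketch only: the Deliverable class and its fields are invented
# for this ticket, not a real API.
from dataclasses import dataclass

@dataclass
class Deliverable:
    name: str       # e.g. "Workstation Live", "Server DVD"
    arch: str       # e.g. "x86_64", "armhfp"
    blocking: bool  # is this deliverable release-blocking?

def deliverables_for_arch(deliverables, arch):
    """Deliverables for the picked arch, with release-blocking ones given prominence."""
    matching = [d for d in deliverables if d.arch == arch]
    return sorted(matching, key=lambda d: (not d.blocking, d.name))

images = [
    Deliverable("Xfce Live", "x86_64", False),
    Deliverable("Workstation Live", "x86_64", True),
    Deliverable("Server DVD", "x86_64", True),
]
print(deliverables_for_arch(images, "x86_64"))  # blocking images first, then Xfce
```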

Once you'd picked a deliverable we'd show a download button with the image size, and show a table of the test cases that can be run with that deliverable.

The default sort for the test cases would prioritize important tests that had not yet been run. There are kinda a few different properties of tests that could be used for sorting; I'm not sure yet exactly how to combine them, or whether to offer any sorting options to the user. But there's the test's 'milestone' - in the current implementation these are Alpha, Beta, Final and Optional, which is basically the effective order of importance - whether the test has been run by anyone else, and the test's 'type' (Installation, Base, Server etc. - we aren't tied to these test types for the new system, but they are actually not a bad concept and could probably stand to stick around).
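To show roughly what I mean by combining those properties, here's one possible default sort key - purely illustrative, the milestone/type orderings and the 'already_run' flag are just assumptions for the example:

```python
# Purely illustrative: combine milestone, run-status and type into one sort key.
MILESTONE_ORDER = {"Alpha": 0, "Beta": 1, "Final": 2, "Optional": 3}
TYPE_ORDER = {"Installation": 0, "Base": 1, "Server": 2}

def default_sort_key(test):
    """Important tests that nobody has run yet float to the top of the table."""
    return (
        test["already_run"],                         # not-yet-run tests first
        MILESTONE_ORDER.get(test["milestone"], 99),  # Alpha before Beta before Final...
        TYPE_ORDER.get(test["type"], 99),            # ...then grouped by test type
    )

tests = [
    {"name": "FreeIPA realmd join", "milestone": "Beta", "type": "Server", "already_run": True},
    {"name": "Default boot and install", "milestone": "Alpha", "type": "Installation", "already_run": False},
]
for test in sorted(tests, key=default_sort_key):
    print(test["name"])  # "Default boot and install" comes out first
```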

One more concept we'd need to keep in mind is that running every relevant test for every image is likely impossible, so we need to be smart about saying 'as long as this test has been run for any live image, we're OK' or things along those lines. In the wiki system we mess around with the result columns to achieve this - if you look at the titles of the columns where results go, they flip around constantly: sometimes we use arch, sometimes 'product', there's all kinds. In the new system I want the user to only have to worry about what ISO they're testing, but for the admins and people involved in the release process, we'll need it to be possible to specify 'groups' of images for each test case. So say for a single test case we'd set up four groups of images and say 'as long as the test has been run with at least one image from each group, we're covered'. The way this would be significant to the user is that it'd be less important for them to run a given test on their chosen ISO if its 'group' was already covered, so we'd ideally indicate that somehow.
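A tiny sketch of how that coverage check could work, just to pin the idea down (the data shapes and image names are invented for illustration):

```python
# Rough sketch of the coverage check for admin-defined image groups.
def test_is_covered(groups, images_with_results):
    """groups: {group name: set of image filenames}, defined per test case by admins.
    images_with_results: set of images this test has already been reported against.
    Covered means at least one image from *each* group has a result."""
    return all(group & images_with_results for group in groups.values())

groups = {
    "live":    {"Workstation-Live.iso", "KDE-Live.iso"},
    "netinst": {"Server-netinst.iso"},
}
print(test_is_covered(groups, {"KDE-Live.iso"}))                        # False: netinst group empty
print(test_is_covered(groups, {"KDE-Live.iso", "Server-netinst.iso"}))  # True: every group hit
```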

I figured it would be good to get some help with the design before we run off and start coding stuff, which is why I'm opening this ticket; I'm basically hoping you folks can help us think intelligently about what we're going to build before we start, and come up with some nice blueprints for us to work off (nicer than my envelope...)

Thanks!


my extremely rough initial design sketch
blocker.png

oh, in my sketch, the image selection dropdowns have percentages. These would be indicators of how complete testing for that arch/image already is, the idea being to help testers pick a deliverable that needs testing the most.

er...also, the words 'deliverable', 'image' and 'ISO' are basically interchangeable in the description. sorry about that.

Hopping on here as I'll be doing the actual write.

Just an update - shall have mockups completed within a week or so. Shall post them here when I have put them together. I am using the Fedora bootstrap theme for consistency with other Fedora applications. Please get in touch if you want to discuss anything. :)

I haven't had much of a chance to work on the mockup due to other commitments. It's still a work in progress. I shall get it completed in the next fortnight if that is ok.

Kindest regards
Kathryn

On 4 Nov 2016 12:58 AM, Máirín Duffy <pagure@pagure.io> wrote:

@kathryng hey any updates?


@kathryng Totally cool! It just came up in our triage meeting this week.

hey this just came up in triage, let us know if there are any updates or if you'd like to pass on to another contributor!

Hello, I have put together some mockups reflecting what was asked in the rough sketches. Any feedback greatly appreciated.

I have also included 2 images which briefly outline the design decisions made.

Mockup 1. Image tab selected

Mockup 2. Group tab selected + Sort dropdown expanded

Design notes 1. Image tab selected

Design notes 2. Group tab selected

My sincerest apologies for the delay in completing this task.

Thanks a bunch for the mockups! I'll take a detailed look at them tomorrow. One immediate note: we're going to need to at least display the identifier for the current compose - so people who already have images downloaded can check if they have the right one.

I'm not 100% sure if we want to allow reporting results against anything but the current compose, but I think we probably will. If we're going to load every single new compose into this system, we're probably going to want to allow people to report results against the last few days' worth of composes, so people who want to grab an image and test it over a couple of days can do so. It doesn't seem nice to always require testers to have the absolute latest nightly. So, we'll need to have some kind of compose selection interface as well. Not sure exactly how it should relate to the 'flavor' selection.

Thank you for the feedback @adamwill
For clarification, what do you mean by the identifier for the compose?
The compose would be the Arch + Image name, correct? I just want to make sure I am understanding things correctly before continuing on with additional mockups for you.
And by flavor do you mean the Fedora version (e.g. Fedora 24)?

No, I mean like 20161120.n.0 or RC-1.2, the 'version' being tested. Every day we get a new set of images - a new 'compose' - generated automatically, and a few times each cycle we get a 'candidate' compose - an Alpha-1.2 or a Beta-1.1 or an RC-1.4 or whatever.

In the wiki-based system we have a tool that automatically creates validation events, but not one for every single compose because it'd get a bit overwhelming in some areas of the wiki system. For this new system I think we could just load in the information for every single compose that happens. Then whenever you access this system, it'll default to entering results for either the most recent nightly compose, or the most recent candidate compose. (We'll still need some kinda heuristic to decide when to switch from a candidate compose back to showing nightlies, but that's an implementation detail you don't have to worry about). My concern is that if we don't offer any way of entering results for composes from the previous two or three days, we're kinda requiring people to download new images for testing an awful lot - you can't download an image and then enter several results for it over the next few days.
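Purely for illustration, something like this is the kind of rule I have in mind for which composes to offer and which one to default to - the data shape and the three-day window are assumptions, not decisions:

```python
# Hypothetical sketch of the 'which composes do we offer, and which is the default' rule.
from datetime import datetime, timedelta

def selectable_composes(composes, days=3):
    """Offer composes from the last few days; default to the newest candidate if
    there is one, otherwise the newest nightly.
    Each compose is a dict like {"id": "...", "date": datetime(...), "candidate": bool}."""
    cutoff = datetime.utcnow() - timedelta(days=days)
    recent = [c for c in composes if c["date"] >= cutoff]
    if not recent:
        return None, []
    candidates = [c for c in recent if c["candidate"]]
    default = max(candidates or recent, key=lambda c: c["date"])
    return default, recent
```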

While writing this comment I had a few thoughts about how this could look, so I'll do another quick envelope mockup for you in a bit :)

Oh ok I understand what you mean by compose now and the validation process is clearer. Some quick mockups would be great. I will spend some more time looking at the validation process in the meantime.

So basically what I'm imagining is there'd be a choice at the 'identify image to be tested' step. One choice is what we already mocked up - choose an arch, choose an image, get a download link. The other choice is 'I already have an image and want to submit more tests' (or however we want to word it).

These would be alternatives, however we want to present that - whichever you pick, the other is inactive or hidden, you can't do anything with it.

The 'already have an image' interface would just be a text box (maybe also a file picker? hm..) where you could enter the name of your existing image, and we would determine what it is. If it's too old we'd show a message and require you to enter a new 'existing image' filename, or switch to the 'download an image for testing' workflow.
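Roughly, the backend side of that could be as dumb as this (illustrative sketch only - the shape of the known-image list and the age cutoff are made up):

```python
# Illustrative only: work out which known image the tester already has, purely
# from its filename.
import os

def identify_image(path, known_images, max_age_days=3):
    """known_images maps filename -> info the system already has about that image."""
    name = os.path.basename(path.strip())
    image = known_images.get(name)
    if image is None or image["age_days"] > max_age_days:
        return None  # unknown or too old: ask for another name, or offer the download flow
    return image
```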

Once you pick an image either way, we'd show you the image filename, description, and checksums (ideally). Show a download link for sure on the 'download an image' flow; if we came from 'already have an image', maybe we still should show it in case you check the checksum of your downloaded copy and it's wrong?

There are obviously a few choices for how to present this...could be all as one basically always-visible interface with bits appearing and getting inactivated, could be a sort of 'revelatory' flow where more stuff shows up as you work through the stages, could just be a two-stage thing where you do 'image selection' then bonk a button and move on to 'test result entry'.

Is that clear enough? can you work it in? Thanks!

So one thing I notice in the mockups: I don't think we want users to interact directly with the 'groups' concept. There should not be a 'group' picker. Users strictly pick an image to test (well, there's probably going to have to be a fudge for tests which aren't actually related to an image, but we'll probably deal with that later).

The groups are going to be defined strictly by admins and that can be in some kind of backend UI (or may not even have a UI). They're not likely to change very much or very often. The only point where the concept becomes relevant to the user is that we want to communicate which of these states a test they could submit a result for (with their chosen image) is in:

  1. No results yet submitted for this image or any other in the relevant image group for the test
  2. Result(s) not yet submitted for this image, but submitted for another image in the relevant group
  3. Result(s) already submitted for this image

Because when a test is in state 1, we really really need the user to run it. When it's in state 2, we don't need it, but it'd be kinda cool. When it's in state 3, meh, more data is always good. That's what the 'group' concept is for.
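In code terms, the state of a test for a chosen image could be worked out with something as simple as this (sketch only, invented data shapes):

```python
# Sketch only: which of the three states is this test in, for the image the user picked?
def test_state(image, group_images, results_by_image):
    """results_by_image: {image filename: list of results for this test case}."""
    if results_by_image.get(image):
        return 3  # result(s) already submitted for this image
    if any(results_by_image.get(other) for other in group_images if other != image):
        return 2  # no result for this image, but its group is covered
    return 1      # nothing at all yet - this is the one we really need run
```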

I guess I might be being a bit too prescriptive here in terms of exactly what we want to signal to the user. There may be better ways of doing it, especially since there's a lot of information that feeds into the 'which tests do we most need this person to run right now' calculus that it's quite hard to convey in a small space.

The real high-level requirement is: we want to make sure testers run the tests we most badly need for them to run, and ultimately, we want the system to help us ensure that we get the coverage that the test admins decide we need, with the 'groups' concept taken into consideration. Hope that's not too fuzzy...

I can definitely work on this. It may take another week or so to put together. I hope that's ok.


Aha, that's much clearer, especially about the groups. Should I include indicators next to each image that show what state (group) the test is in or colour code them? Or would you prefer the groups were excluded from this interface? Apologies for my confusion.


I'm honestly not sure what would be best. My initial thought is something like 'green for all good, yellow for mixed, red for bad', with heavier colors if the information we have covers the specific image, lighter colors if it only covers the group. But there's really a lot of information we could theoretically try to pack into a small space there, and it might take some experimentation to make it as useful as possible. I mean, what if we have a pass for the same test with the same image, but two fails for the same test with another image in the group? How do we color code that? It's a bit tricky. Maybe we split the box and have two indicators, one for 'image', one for 'group'? I dunno.

Having two indicators - one for the image and one for the group - would probably be best; each could open a page of detailed test results for that image or group. I will have a fiddle and see what else works.
Mockups will be completed in the first week of December.

I have noticed quite a few things I need to fix in my latest designs, shall have updated designs uploaded in a day or so.

Hello,

Here are some more mockups which I have revised over the past week, hopefully more along the lines of what you are looking for. The example in 3 outlines the steps to downloading and viewing results for the compose RC-1.2. I included the compose names in the 'select an image' dropdown - I hope I have understood this correctly - happy to make changes as necessary.

The results table now only appears once a pre-existing compose image is selected, and the steps 2-3 should show only after each previous selection.

Feedback greatly appreciated.

  1. Landing

  2. Choose an image
    Notification

  3. Run more tests - initial view
    Run more tests - 1
    Run more tests - 2
    Run more tests - 3
    Run more tests - tooltip, dropdowns, notes

Thanks a lot Kathryn! Sorry I didn't reply to your previous question, I've been working on some other stuff lately. I'll take a detailed look at these when I get through a few other things. Does anyone else have thoughts? I know some other folks are following the ticket.

No worries @adamwill. Fyi, I have based the design off the Fedora Bootstrap 4 design.

Will this issue involve coding also? And other pages needing a design?
I am just trying to work out the scope of this task. Regardless of its length, happy to help.

Hello @adamwill @kathryng
@kathryng, thank you for working on the mockups.
I have some ideas for new mockup designs:
The mockups are great; we can add a few things:

  • We can keep the first page as the FAS login page.

  • The main page
    1. We can add a profile bar displaying the number of validation tests done by the user (maybe we can also display something like a game-style level indicator - '3 more tests to earn your next badge').
    2. Instead of showing completion in a drop-down as in [1], there could be a list of images with the % of completion (with a filter option), arranged in the order of last uploaded, and we can add a 'sort by' option where sorting can be done by priority, last uploaded, milestone or group - something like [2].
    3. Instead of a complete page, we can add a bar to check the currently downloaded image, with a 'Verify' button to verify the image by checksum and a 'Test' button which is activated once the image is verified and needs testing.
    4. We can add a way to show notifications of events like 'XYZ test day is coming up in 2 days', 'Congrats, we have completed the tests for image ABC', 'A new image QRS is ready to test', etc., which can encourage testers to test images that need testing and also remind them about test days.

[1]Run more tests - 2
[2]https://moztrap.mozilla.org/results/runs/

These are rough ideas. I'm not sure how they would look exactly, though… hopefully someone can give some more specific feedback.

I'd rather we start with a minimal design and implementation which only does what we actually need - allow people to report results - rather than baking in nice-but-not-necessary stuff like gamification from the start. We never, ever, ever have enough time to do everything we want to do; it's very important to keep the amount of work as small as possible to achieve the most important desired result.

I think the names of 'Choose an image' and 'Run more tests' are reversed? "Choose an image" actually looks like the "I've already got an image and I just want to enter its name so the system knows which image it is and which tests to show me" workflow. 'Run more tests' actually looks like the "Pick an image to download and run tests on" workflow. If that's the case, then I agree with @a2batic that in "Run more tests - 2" the drop down should not allow choosing between compose versions - when the user wants to download an image to test, we should always only show them images for the most recent compose.

Basically I think there's a bit of confusion arising from the fact that the 'workflow streams' seem to have gotten crossed at some point :)

Oh, to answer your questions - absolutely there's going to be coding involved here, but that's kind of our job to do, I think. I guess we may ask for help with UI coding, depending on what kind of expertise we have for that. I think the current scope is great, my basic idea was that we'd get mockups from you then use them to work from while actually implementing the system.

Thanks for your feedback and clarification. I shall revise my mockups in a week or so. Sorry for the confusion, it's much clearer now. Thank you for your patience.

Thanks for the update! Sorry for the late reply, the holiday break happened. The new mockups look good, thanks for all the work so far. I think next step we'll want to start working on the code a bit, then any further design work we need should become apparent as we go along...

Sounds like a plan. :thumbsup:

@kathryng Hey, this ticket came up in triage today and I wanted to give you some feedback on the mockups from another UX eye and perhaps some new ideas to explore if you'd like.

  • First off, I love the logo! it matches the logo style we're trying to roll out to fedora apps. a few minor nits - we usually have the logomark in shades of grey instead of blue, and i think there's something off in the type, kerning or something. but easily fixed, and i have a template i can send you if it's helpful.

  • im not sure about the visual design of the nested horizontal navs. nested horizontal navs with the fedora-bootstrap/pagure look is something i wrestled a lot with for fedora hubs. what i ended up settling on is one top level horiz nav, and a left sidebar. looks like this... and this is the style i'm going to recommend for 2 layer nav in the fedora bootstrap style guide:

Screenshot_from_2017-01-26_11-01-45.png

Okay so now I'm going to make a high level suggestion.... bubble wrap as an interface. :) Bubble wrap kind of invites you to pop the bubbles, you see them all laid out, which ones are still filled with air which ones are not, and it's hard to resist popping the ones just sitting there filled with air :)

So what if we took a bubble wrap approach here, and when you first loaded this system, you would get, per release (I'm assuming by default would be the latest development release) a chart similar to the wiki page, but maybe more nicely visually designed, with higher priority cases somehow visually made to stand out: https://fedoraproject.org/wiki/Test_Results:Fedora_26_Rawhide_20170125.n.0_Summary?rd=Test_Results:Current_Summary)

Tests that have already been performed would be visually de-emphasized... eg if there is a minimum requirement that a given image for that test be tested on one platform,then once that minimum req has been met, that one becomes less visually pronounced. (wouldn't grey it out, but something to deemphasize it just a smidge.) Tests that have no platforms tested for it would be made to stand out - maybe a red border, maybe some kind of highlight / bright color, bold text, something like that (think gestalt principles.) As the tester, you have your choice of platform (across the cols) to run that test. you click on the table cell of the platform x test you want to run, and that starts you on the workflow for testing and reporting your results. (perhaps when you click on the cell you get a popover that asks, would you like to use your own image or download one?) or something like that.

Is this too inside out or does it make sense?

Quick note there: 'upload an image' isn't really the right terminology there, we're not uploading anything. All we're going to do is read the filename and figure out what image you have (by comparing it with the filenames we know about in the system's own list of images).

on another topic: part of the point of this system was to avoid having the 'row of cells' thing from the wiki system: my naive original idea was that if we know what image you have, we always know what your test environment is, so there's always only one test per 'row' in the interface. This doesn't quite work out perfectly, though - for instance sometimes the environment is 'BIOS' vs. 'UEFI', and you use the same image to test in both those environments. So we're gonna have to address that...somehow. I should probably think about it a bit more, but maybe rows of environments is the best choice after all, in whatever cases remain where we need it.

I guess one thing I really ought to do to help both the designers and @a2batic (who is now tasked with doing an initial implementation of the web UI, with some bits of data faked in) would be to go through the existing test cases and decide which ones are actually going to be in this system; part of the plan here is that this system will not include any tests we are happy with having only automated testing for, so quite a lot of the test cases that are in the wiki but are tested by openQA will not be in this system.

Do you have any ideas, Mo, for solving the 'different kinds of priority' conundrum? We kind of have two major different inputs to the topic of 'what test is it most important to do next?' ...

1) When was the test (where 'test' is, I guess, a single environment for a single test case, if we're keeping the environment concept) most recently run...
i) on this medium?
ii) on a related/similar medium? (the whole 'groups' thing)
iii) on any medium?

2) How important is the test inherently, at this stage in the release cycle?

so I'm not sure how best to handle that (either by somehow combining them into a single decision for which tests to emphasize, or to somehow indicate both angles with different emphasis...)
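Just to make the trade-off concrete, here's one hypothetical way of folding all of that into a single 'how badly do we need this result' score - the weights are completely made up and this is absolutely not a decision, just an illustration of the shape of the calculation:

```python
# Completely hypothetical: combine inherent importance with the three levels of
# recency/coverage into one 'need' score. Weights are invented for illustration.
MILESTONE_WEIGHT = {"Alpha": 3.0, "Beta": 2.0, "Final": 1.5, "Optional": 1.0}

def need_score(test, ran_on_this_medium, ran_on_group, ran_anywhere):
    """Higher score = emphasize this test more in the UI."""
    importance = MILESTONE_WEIGHT.get(test["milestone"], 1.0)
    if not ran_anywhere:
        coverage = 1.0   # never run at all: maximum need
    elif not ran_on_group:
        coverage = 0.6   # run somewhere, but not on a related/similar medium
    elif not ran_on_this_medium:
        coverage = 0.3   # the group is covered, this particular medium is not
    else:
        coverage = 0.1   # already covered here; more data is still nice
    return importance * coverage
```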

another thing to emphasize (sorry if I mentioned this already, I can't remember) is that this system isn't intended as a viewer for the results, unlike the wiki system, where the same wiki page serves as both the submission and the primary 'viewing' interface. I think we're inevitably going to wind up indicating the existing results somehow, but the actual requirement is "indicate to the user which tests are most important to run" (and maybe "indicate known failures", which can be important to a tester too). This will probably inevitably wind up in us exposing information on existing results to some extent, but it's only a consequence of the requirement. I'm expecting there to be a different interface/dashboard/something for viewing the results (which will pull from ResultsDB and synthesize the results that come from this system with results coming from other systems).

oh, sorry, missed a bit - Mo, in some ways I like your idea, but I do have a bit of a problem with it: I was kinda envisaging that the list of tests shown would be based on the 'flavor' of image being tested (we show only the tests relevant to the image the user actually has / chooses to test). Not all tests are going to be relevant to all images; for instance, we don't want to ask people to run the FreeIPA tests on the Workstation images, or the browser test on the Server images.

If we go to your idea of doing it the other way around...hum...I suppose we can show all the tests, and when the user picks one, we can deal with the 'what image' question...

this is stream of consciousness, but the app could actually keep track of what images the user has? okay, so...when you click on a test, we get the list of image 'flavors' the test is relevant for, and show a list of images...all images we 'know' the user has that are less than a week old are at the top, and below that, all the relevant images from the most recent compose that aren't in the 'user has this image' list, and if they pick one of those, we show a download link? something like that?
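As a sketch, that 'which images do we offer for this test' ordering could look something like this (again, the structures and the one-week cutoff are invented for illustration):

```python
# Sketch of the image-offering order described above: the user's own recent,
# relevant images first, then downloadable ones from the latest compose.
def images_for_test(test_flavors, user_images, latest_compose_images, max_age_days=7):
    have = [img for img in user_images
            if img["flavor"] in test_flavors and img["age_days"] <= max_age_days]
    have_names = {img["filename"] for img in have}
    download = [img for img in latest_compose_images
                if img["flavor"] in test_flavors and img["filename"] not in have_names]
    return have + download  # 'download' entries would get a download link in the UI
```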

I guess fundamentally it comes down to: do users think more "I want to pick a test to run, then an environment to run it in" or "I want to pick an image to test, then a relevant test to run"? I honestly don't know which is more common. We could ask people!

@kathryng Hey, this ticket came up in triage today and I wanted to give you some feedback on the mockups from another UX eye and perhaps some new ideas to explore if you'd like.

No worries @duffy, I always appreciate feedback. I am a newbie to design in the Fedora team so it is really useful to hear from people with a much better idea of what is required.

First off, I love the logo! it matches the logo style we're trying to roll out to fedora apps. a few minor nits - we usually have the logomark in shades of grey instead of blue, and i think there's something off in the type, kerning or something. but easily fixed, and i have a template i can send you if it's helpful.

Thanks for the feedback, I would love a template to work with

im not sure about the visual design of the nested horizontal navs. nested horizontal navs with the fedora-bootstrap/pagure look is something i wrestled a lot with for fedora hubs. what i ended up settling on is one top level horiz nav, and a left sidebar. looks like this... and this is the style i'm going to recommend for 2 layer nav in the fedora bootstrap style guide:

Aha, understood. I will make sure to use this layout.

the "download image workflow" text in this screen (https://pagure.io/design/issue/raw/files/30aece16b3486679f76d483ee676cd99ace97ab78969f3ae675bd275b7637bbc-test-1-0.png) i think might be a bit too jargony. Maybe something simpler like, "If you'd rather not upload an image, we have a library you can download from (Go there now.)" or something like that?

Good idea. I will try to use simpler language.

Okay so now I'm going to make a high level suggestion.... bubble wrap as an interface. :) Bubble wrap kind of invites you to pop the bubbles, you see them all laid out, which ones are still filled with air which ones are not, and it's hard to resist popping the ones just sitting there filled with air :)
So what if we took a bubble wrap approach here, and when you first loaded this system, you would get, per release (I'm assuming by default would be the latest development release) a chart similar to the wiki page, but maybe more nicely visually designed, with higher priority cases somehow visually made to stand out: https://fedoraproject.org/wiki/Test_Results:Fedora_26_Rawhide_20170125.n.0_Summary?rd=Test_Results:Current_Summary)
Tests that have already been performed would be visually de-emphasized... eg if there is a minimum requirement that a given image for that test be tested on one platform,then once that minimum req has been met, that one becomes less visually pronounced. (wouldn't grey it out, but something to deemphasize it just a smidge.) Tests that have no platforms tested for it would be made to stand out - maybe a red border, maybe some kind of highlight / bright color, bold text, something like that (think gestalt principles.) As the tester, you have your choice of platform (across the cols) to run that test. you click on the table cell of the platform x test you want to run, and that starts you on the workflow for testing and reporting your results. (perhaps when you click on the cell you get a popover that asks, would you like to use your own image or download one?) or something like that.
Is this too inside out or does it make sense?

That makes sense and is a great idea. I shall mock something up soon to make sure we are on the same page :)

@adamwill, thanks for the feedback and your patience,

Quick note there: 'upload an image' isn't really the right terminology there, we're not uploading anything. All we're going to do is read the filename and figure out what image you have (by comparing it with the filenames we know about in the system's own list of images).

Understood – select an image might be better unless you have a better suggestion

on another topic: part of the point of this system was to avoid having the 'row of cells' thing from the wiki system: my naive original idea was that if we know what image you have, we always know what your test environment is, so there's always only one test per 'row' in the interface. This doesn't quite work out perfectly, though - for instance sometimes the environment is 'BIOS' vs. 'UEFI', and you use the same image to test in both those environments. So we're gonna have to address that...somehow. I should probably think about it a bit more, but maybe rows of environments is the best choice after all, in whatever cases remain where we need it.

@duffy 's idea should resolve a lot of these issues, i.e. an interface with tests appearing as styled bubbles aligned by platform.

I guess one thing I really ought to do to help both the designers and @a2batic (who is now tasked with doing an initial implementation of the web UI, with some bits of data faked in) would be to go through the existing test cases and decide which ones are actually going to be in this system; part of the plan here is that this system will not include any tests we are happy with having only automated testing for, so quite a lot of the test cases that are in the wiki but are tested by openQA will not be in this system.

Good idea. I am happy to use dummy content in my designs at the moment

another thing to emphasize (sorry if I mentioned this already, I can't remember) is that this system isn't intended as a viewer for the results, unlike the wiki system, where the same wiki page serves as both the submission and the primary 'viewing' interface. I think we're inevitably going to wind up indicating the existing results somehow, but the actual requirement is "indicate to the user which tests are most important to run" (and maybe "indicate known failures", which can be important to a tester too). This will probably inevitably wind up in us exposing information on existing results to some extent, but it's only a consequence of the requirement. I'm expecting there to be a different interface/dashboard/something for viewing the results (which will pull from ResultsDB and synthesize the results that come from this system with results coming from other systems).

Understood - it's for working out what tests to run and does not need to show the test results. I appreciate all the feedback.

@adamwill

this is stream of consciousness, but the app could actually keep track of what images the user has? okay, so...when you click on a test, we get the list of image 'flavors' the test is relevant for, and show a list of images...all images we 'know' the user has that are less than a week old are at the top, and below that, all the relevant images from the most recent compose that aren't in the 'user has this image' list, and if they pick one of those, we show a download link? something like that?

I like this approach as you can make the system seem like it is doing some of the thinking for the user and keep track of past images.

I guess fundamentally it comes down to: do users think more "I want to pick a test to run, then an environment to run it in" or "I want to pick an image to test, then a relevant test to run"? I honestly don't know which is more common. We could ask people!

I would be interested to know what approach is more common for users. Are you able to gather any information on this? Or direct me to the right person to ask?

I shall mock up some revised designs in the next week or two if that is ok with you. If you need it sooner, don't hesitate to ask.

I have a comment on the original sketch which I don't see addressed anywhere further on... in the future, at least as I've heard from Dennis (and it seems reasonable to me), archs may not be release-blocking overall, but per image. For example, Server might want aarch64 to be release blocking, but Workstation not. (So, if GNOME is totally borked on that arch for some reason, it wouldn't stop the release overall, since GNOME is not release-blocking for Server.)

So I mailed test@ to ask which order people usually go with - pick image then pick test, or pick test then pick image. So far the feedback is 100% 'image, then test'.

@mattdm and @adamwill, thanks for the clarification

Apologies for not providing any updates recently. I have been much busier than expected. I may take another week or two to produce mockups. If there is any urgency, just tell me and I will rearrange a few things.

No, it's totally fine, we have pretty much the same problem on our end :)

@adamwill, mockups are coming within the next two weeks, just got another design ticket to work on first. Sketches are done, just need to put together a polished design mockup.

Updated mockups

Ok, so here's my first take on the bubble concept. The current image with its relevant tests appears in one column. Tests for other images are faded, and tests that have been completed appear as ‘popped’ or flat bubbles. Tests surrounded with a red ring are high priority as no one has tested them yet. Please note that some images do not need tests run.

  1. Initial screen - select image or go to list of provided images
  2. Test grid
  3. Test grid - with example tooltips/active elements
  4. Key for icons - since tooltips aren't available in mockup :P

A few notes

  • I was unsure whether this would be featured in Hubs or if it would just use a similar design.
  • Added tabs so users can also see a list of images they have and also a log of tests run. I know this is beyond what has been asked but I thought it may be beneficial for users to have access to this information.
  • The description under the heading here is used in the mockups and could probably do with some work
  • The tests and flavours are just provided as examples and are inaccurate.

... any feedback at all?

Thanks @kathryng, the designs look great!
I think we can make the grid page a bit simpler; the design as it is now looks good, but it will take time for testers to understand it.
@adamwill and @duffy will be able to provide more detailed feedback.

@kathryng hey some feedback from this morning's triage meeting (sorry it's been so long :( we're catching up as we've gotten the flock material designs mostly sorted):

  • the app definitely doesn't need to be in hubs. the reason i brought up hubs was more to suggest the model of using a left nav bar + tabs to do 2-level navigation. but you don't need to bring the rest of hubs in with it, if that makes sense. I will sketch out what i meant using your latest mockups later today and post here.

  • I like the grid a lot, although i would like to see @adamwill 's feedback on functionally how it would work for him. visually we talked about the bubble styles in the design meeting - the general thought is that they are a bit large, maybe not quite fedora styled? but this is something we can help you with. i like how you've chunked the large list into more digestible sections using color coding - the color codes might be better reinforced / readable if maybe you coded the title that applied to the color with that color too? (EG make the "user interface" title text purple or a shade of purple)

  • I've asked @maryshak1996 to play around with some vis design ideas for the bubbles to help out.

Hope this helps, thank you for your patience @kathryng we are very impressed with your work on this and other tickets!

Hi @kathryng , I put together a set of circular/bubble icons that might be a little bit more intuitive in terms of what each of the icons is representing. If you post the SVG of the Test Grid that you posted, I can try plugging my new icons in so you can get a feel for this alternative :)
Here's the kind of "guide" or "key" to the icons I made... Hope they can be of some use!

releasevalidation_bubbles1.png
releasevalidation_bubbles1.svg

@duffy

@kathryng hey some feedback from this morning's triage meeting (sorry it's been so long :( we're catching up as we've gotten the flock material designs mostly sorted):

No worries, I understand that the team must be flat out.

the app definitely doesn't need to be in hubs. the reason i brought up hubs was more to suggest the model of using a left nav bar + tabs to do 2-level navigation. but you don't need to bring the rest of hubs in with it, if that makes sense. I will sketch out what i meant using your latest mockups later today and post here.

Understood. Shall update my mockups. No need to provide sketches unless you really want to.

I like the grid a lot, although i would like to see @adamwill 's feedback on functionally how it would work for him. visually we talked about the bubble styles in the design meeting - the general thought is that they are a bit large, maybe not quite fedora styled? but this is something we can help you with. i like how you've chunked the large list into more digestible sections using color coding - the color codes might be better reinforced / readable if maybe you coded the title that applied to the color with that color too? (EG make the "user interface" title text purple or a shade of purple)

I shall update the text colour coding

I've asked @maryshak1996 to play around with some vis design ideas for the bubbles to help out.

Saw these, shall respond to her post.

Hope this helps, thank you for your patience @kathryng we are very impressed with your work on this and other tickets!

Happy to help.

Hi @kathryng , I put together a set of circular/bubble icons that might be a little bit more intuitive in terms of what each of the icons is representing.

Wow @maryshak1996, these look fantastic. Thank you for your work on this. I did have concerns about the lack of variety in my icons, which you have resolved, great work.

If you post the SVG of the Test Grid that you posted, I can try plugging my new icons in so you can get a feel for this alternative :)

Sure, I will have to play with the sketch file a bit and export it. Should have this done within 24 hours.

Here's the kind of "guide" or "key" to the icons I made... Hope they can be of some use!

Thanks, looks great and is very clear.

@maryshak1996, here is the svg file. I hope this is what you are after.

fedora-testing-template.svg

If not, just leave a comment and I will update the image.

Here is the above template broken up into PNGs; you will notice that the grid has the icons I created removed.

Landing page

Test list

Tooltips - test types and button state

Also, I am unsure what to put in the navigation. @duffy: could you provide some information on the navigation (i.e. other pages) and the point of entry?

Hi @kathryng, thanks for posting your SVGs! I mocked up the grid with my icons so you can see what it would look like. I made a couple of very minor changes to the icons I previously posted (to make them a little bit easier to see), made the column widths for the grid fit the icons well, and added an outline that could make it a little easier to tell which column represents the "current image" (but that's just an idea!). So below I have a mockup with just the filled-in grid, a mockup with the grid and the 'active element' info boxes (which I think look really awesome), an updated version of the bubble/icons on their own, as well as a mockup that highlights any changes I made so you can see the updates :)

releasevalidation_grid_1.svg
releasevalidation_grid_1.png
releasevalidation_bubbles2.png
releasevalidation_grid_feedback.png
releasevalidation_grid_activeelements_1.png

Hi folks! Thanks a lot for all the work on this; I've been sidetracked with other tasks lately, I don't know if @a2batic has been looking at them. Once F26 is done (hopefully next week) I should be able to get back to this.

@maryshak1996 that looks great. thanks for your hard work.

Hey @maryshak1996 @kathryng, thanks for working on it, they are looking amazing. I really like the colors used. I have tried to make some mockups of other parts of the project to give an idea. Since I am familiar with Patternfly, I used that.
I have some queries:
1. Does fedora-bootstrap have graph components?
2. If yes, how effective is it compared with the mockups?
3. If no, can we make a library for that?

Following are mockups for testing images:
1. Image_test_page.png
2. Image_test_page_warning_popup.png
3. Image_test_page_warning_popup-filled.png
4. New_image_test.png
5. New_image_test-collapsed-menu.png

Looking forward to your thoughts. Thanks!

@a2batic
No, fedora-bootstrap uses the bootstrap 4 library which does not include graphs by default.
The mockups you have created use Patternfly which is based off Bootstrap 3, with the additional C3.js library. See the Code tab on the Patternfly website here for code examples.

Another library that is available, that is very popular is Chart.js

A comparison of the two libraries can be found here. And here - this one also mentions D3, which is the foundation of the C3 library, but has a steeper learning curve.

I think that since Patternfly uses C3, that we should look at using that if Chart.js isn't appropriate.

So to answer your third question, libraries exist to create graphs and we can use these instead of writing something from scratch.

Metadata Update from @duckution:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

2 years ago
