relval is a CLI tool, built on python-wikitcms, which helps create the wiki pages used to track the results of Fedora release validation events, generates statistics (such as the 'heroes of Fedora' testing statistics and test coverage statistics), and can also report test results. If you're interested in relval, you may also be interested in testdays, which is to Test Day pages as relval is to release validation pages.
Put simply, you can run
relval compose --release 25 --milestone Final --compose 1.1 and all the wiki pages needed for the Fedora 25 Final 1.1 release validation test event will be created (if they weren't already). The
user-stats and testcase-stats sub-commands handle statistics generation, and
report-results can report test results.
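In shell terms, the common invocations look like this (illustrative only; relval must be installed and able to log in to the wiki):

```shell
# Create all validation pages for the Fedora 25 Final 1.1 event:
relval compose --release 25 --milestone Final --compose 1.1

# Generate contributor and test coverage statistics:
relval user-stats --release 25 --milestone Final
relval testcase-stats --release 25 --milestone Final

# Report test results interactively:
relval report-results
```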
relval is packaged in the official Fedora and EPEL 7+ repositories. To install on Fedora, run
dnf install relval; on RHEL / CentOS with EPEL enabled, run
yum install relval. You may need to enable the updates-testing repository to get the latest version. To install on other distributions, you can run
python setup.py install.
You can use the relval CLI from the tarball without installing it, by running
./relval.py from the root of the tarball. You will need all its dependencies, which are listed in
You can file issues and pull requests on Pagure.
The validation event SOP provides the correct invocation of relval to use when you simply wish to create the pages for a new compose (the most common use case).
The following applies to all commands that require login (that is, anything that writes to the wiki).
Since early 2018, the Fedora wikis use OpenID Connect-based authentication. When you first use any of the commands that require login, a browser window will open and walk you through the authentication process; this will create a login token that is valid for a while, and subsequent use of these commands will work transparently. After a while the token will expire, and the next time you try to use one of these commands, you will go through the authentication process again.
The --password argument, and the
~/.fedora/credentials file which used to be available for storing your username and password for 'non-interactive' login, no longer do anything. It would be a good idea to remove any remaining credentials files, as they are now only a potential security risk. For long-term non-interactive use of the wiki via relval or any other system, you must request a permanent auth token from the wiki administrators.
All sub-commands honor the option
--test, to operate on the staging wiki instead of the production wiki, which can be useful for testing. Please use this option if you are experimenting with the result page creation or result reporting sub-commands, especially if you also pass
All options mentioned here have short names (e.g. -r for
--release), but the long names are given here for clarity. Usually the short name is the first letter of the long name. The help pages (
relval <sub-command> -h) list all options with both their long and short names.
For validation event page creation, use the
relval compose sub-command. You must pass either the parameters
--compose (and optionally
--release) or the parameter
--cid to identify the compose for which pages will be created. When using
--compose you may also pass
--release to specify the release to operate on; otherwise, relval will attempt to discover the 'next' release, and use that.
You may pass
--testtype to specify a particular 'test type' (e.g. Base or Desktop); if you pass a test type, only the page for that type (and the summary page and category pages) will be written, while if you do not, the pages for all test types will be written. You may pass
--no-current to specify that the Test_Results:Current redirect pages should not be updated to point to the newly-created pages (by default, they will). You may pass
--force to force the creation of pages that already exist: this applies to the result pages, the category pages, and the summary page, but not to the Current redirects, which will always be written if page creation succeeds (unless
--no-current is passed). You may pass
--download-only to specify that only the Download template (which provides the table included in the instructions section of all the results pages) should be written; this is handy if you need to create or update the Download page for an existing event.
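For example, these invocations combine the options described above (the compose ID shown is a made-up example):

```shell
# Write only the Base page (plus the summary and category pages) for a
# compose identified by compose ID, without updating the Current
# redirects; the compose ID here is hypothetical:
relval compose --cid Fedora-25-20161115.0 --testtype Base --no-current

# Re-create pages that already exist for an event:
relval compose --release 25 --milestone Final --compose 1.1 --force

# Refresh just the Download template for an existing event:
relval compose --release 25 --milestone Final --compose 1.1 --download-only
```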
For user statistics generation, use
relval user-stats. It has no required options.
You may pass
--release to specify the release to operate on; otherwise, relval will attempt to discover the 'next' release, and use that. You may optionally specify a milestone to operate against, with
--milestone (Alpha|Beta|Final) (it does not accept Branched or Rawhide, but if you do not pass
--milestone at all, Branched and Rawhide result pages will be included). You may also pass the
--filter option as many times as you like. If passed, only pages whose name matches any of the
--filter parameters will be included. For instance,
relval user-stats --release 21 --milestone Beta --filter TC3 --filter Desktop will operate against all Fedora 21 Beta pages with "TC3" or "Desktop" in their names. You may pass
--bot to include 'bot' results (those from automated test systems) in the statistics; by default they are excluded.
The result is the source of a simple HTML page, printed directly to the console, containing statistics on the users who contributed results to the chosen set of pages; you can save it or paste it into, for example, a blog post.
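Since the HTML is printed to standard output, the usual pattern is to redirect it to a file (heroes.html is an arbitrary name):

```shell
# Generate contributor statistics for the Fedora 21 Beta pages,
# including 'bot' results, and save the HTML source to a file:
relval user-stats --release 21 --milestone Beta --bot > heroes.html
```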
For test coverage statistics generation, use
relval testcase-stats. The parameters are the same as those for
user-stats. The output will be an entire directory of HTML pages in
/tmp with a top-level
index.html that links to summary pages for each "test type", and detailed pages for each "unique test" that are linked from the summary pages. You can also pass
--out to specify an output directory, which will be deleted if it already exists. You can simply place the entire directory on your web server in a sensible location. Note that the top-level directory will have 0700 permissions by default and you may have to change this before the content will be visible on the server.
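The permissions note above can be demonstrated on a stand-in directory (stats_out is a placeholder name; relval itself is not invoked here):

```shell
# relval creates the top-level output directory with 0700 permissions,
# so a web server running as another user cannot read the content.
mkdir -p stats_out && chmod 0700 stats_out   # what relval leaves behind
chmod 0755 stats_out                         # open it up for the web server
stat -c '%a' stats_out
```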
report-results lets you... report results. It edits the result pages in the wiki for you. Why yes, a hacky TUI that pretends MediaWiki is a structured data store is a deeply ridiculous thing, thank you for asking.
You may pass
--testtype if you like. If you don't fully specify a compose version, it will first attempt to detect the 'current' compose and offer to let you report results against that; if you want to report against a different compose, it will prompt you for the details.
Once you've chosen a compose to report against one way or another, it will then ask you which page section to report a result in, and then which test to report a result for, then what type of result to submit, then whether you want to specify associated bug IDs and/or a comment. And then it will submit the result. Once you're done, you can submit another result for the same section, page, or test type (avoiding the need to re-input those choices).
Please do keep an eye on the actual result wiki pages and make sure the tool edited them correctly.
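A cautious way to get familiar with the tool is to point it at the staging wiki first, using the --test option described above:

```shell
# Start an interactive reporting session against the staging wiki,
# so experiments do not touch the production result pages:
relval report-results --test

# Optionally narrow the session to one test type up front:
relval report-results --test --testtype Desktop
```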
size-check checks the size of the image files for a given compose, and reports the results to the wiki.
You may pass compose-identifying options (such as
--compose) to specify the compose to operate on. If you pass none of them, relval will check the 'current' compose; if you pass only some, wikitcms will try to guess which compose you meant, and the command will fail if it cannot.
You may also pass
--bugzilla, which will report bugs to Bugzilla for oversize images. If
--test is also passed, the bugs will be reported to partner-bugzilla.redhat.com (which is effectively a sandbox instance); otherwise they will be reported to bugzilla.redhat.com, so please do not do this unless you're really sure it's necessary. This uses python-bugzilla: please see its documentation for information on authentication. If you do not provide some form of authentication information in a python-bugzilla configuration file and no valid tokens are stored locally from a recent successful login, you will be prompted to enter a username and password interactively.
Note that there is now automation in place to run
size-check automatically when validation events are created, so it is unusual for it to be necessary to run it manually any more.
The user-stats and testcase-stats sub-commands are re-implementations of work originally done by Kamil Paral and Josef Skladanka, and incorporate sections of the original implementations, which can be found in the history of the qa-stats git repository.
relval is released under the GPL, version 3 or later.