This project started to answer a question: could we take the screenshots coming out of Fedora's OpenQA instance and use them for meaningful automated analysis?

The data associated with this project can be downloaded here.

There is currently no documentation on how to set up and run this project; it is planned but not yet written.

Dependencies

The code should work on any reasonably modern Linux; it has not been tested on Windows or OS X, but it should work there as well.

At a minimum, Python > 3.10 and python-virtualenv are needed.
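
Since the commands below install packages with pip, a typical first step is to create and activate a virtual environment. This is a generic sketch using the virtualenv tool mentioned above; the .venv directory name is just an example, not something the project prescribes:

# create and activate a virtual environment (the .venv name is an example)
virtualenv .venv
source .venv/bin/activate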

# install CPU-only PyTorch if you don't know what accelerator you have
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu

# install the remaining requirements
pip install -r requirements.txt
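
As an optional sanity check (not part of the project's documented setup), the installed PyTorch build can be confirmed from the command line; torch.__version__ and torch.cuda.is_available() are standard PyTorch attributes:

# optional: print the installed torch version and whether a CUDA device is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"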

Documentation

A small amount of documentation can be found in doc/fedora_quickstart.md.

General Notes

All Python code in this repository is formatted using the Black code style.
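
To match that style when contributing, Black can be installed and run from the repository root; this is generic Black usage, not a project-specific workflow, and the repository may pin a particular Black version:

# install Black and reformat the code in place (generic invocation)
pip install black
black .

Running black --check . instead reports which files would be reformatted without modifying them.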