============
libtaskotron
============

libtaskotron is a library for running automated tasks as part of the
Taskotron_ system. The initial objective is (but is not limited to)
automating selected package checks in Fedora.

The project is not yet considered stable - the features it contains are
generally functional, but we will likely continue to make major changes to
how libtaskotron works.

For more information and API documentation, please look at the `Taskotron
project page`__.

Please direct questions and comments to either *#fedora-qa* on freenode or the
`qa-devel mailing list <https://admin.fedoraproject.org/mailman/listinfo/qa-devel>`_.

.. _Taskotron: https://taskotron.fedoraproject.org
__ Taskotron_

Running ansiblized (standard interface) tasks
=============================================

A COPR repo with the ansiblized libtaskotron package is available at:
https://copr.fedorainfracloud.org/coprs/kparal/taskotron-ansiblize/

Here is a short overview of the most important changes related to the switch
of the libtaskotron runner to its ansiblized version and its support of the
test `standard interface`_:

* The basic command line is now::

      runtask --item ITEM --type TYPE DIRECTORY

  where ``DIRECTORY`` contains ``tests*.yml`` (as defined in the SI_).

* The tasks expect to be run as ``root`` and can make arbitrary changes to
  the system. Therefore you should never run them locally on your production
  machine; use the ``--ssh`` or ``--libvirt`` command line option instead.

  When you use ``--ssh``, you need to have an existing system running with
  password-less ssh login configured and the `required packages
  <https://fedoraproject.org/wiki/CI/Tests#Setting_up>`_ installed.

  When you use ``--libvirt``, you need to create an OS image using the
  `taskotron_cloud
  <https://pagure.io/taskotron/base_images/raw/master/f/trigger_build/kickstarts/taskotron_cloud.ks>`_
  kickstart and set ``imageurl=file:///path/to/image.qcow2`` in
  ``/etc/taskotron/taskotron.yml``.
  The system version must always match the ``ITEM`` version tested.

* There are two types of tasks: SI_ and *generic*. *SI* tasks are stored in
  distgit and their purpose is to test just a single specific package (or
  module) - e.g. the gzip test suite, the firefox test suite, etc.
  *Generic* tasks are SI tests extended with Taskotron-specific
  functionality: they need to know extra input information and they want to
  generate custom results to be sent to ResultsDB. Examples are
  task-rpmlint, task-rpmdeplint, task-upgradepath, etc.

* All tasks are considered *SI* by default. If a task wants to be considered
  *generic*, it needs to include the ``taskotron_generic_task: true``
  variable in the first play of each ``tests*.yml`` playbook. Generic tasks
  do not receive the ``subjects`` variable as defined by the SI_; they
  receive a ``taskotron_item`` variable instead. The item/subject is not
  downloaded and installed automatically as mandated by the SI, and the
  task's exit code is not used to generate a ResultsDB entry automatically.
  Instead, generic tasks are expected to create a
  ``{{artifacts}}/taskotron/results.yml`` file in the ResultYAML_ format
  themselves.

* After execution, all important files are expected to be in the
  ``{{artifacts}}`` directory (printed by libtaskotron at the end of the
  execution). At minimum, ``ansible.log`` and ``test.log`` should be there
  according to the SI_.

* The format of ResultYAML_ changed slightly: ``checkname`` is no longer
  just the task name with the namespace left out (e.g. just ``rpmlint``
  without ``dist.``), but the full test case name (e.g. ``dist.rpmlint``).
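For illustration, a minimal ``{{artifacts}}/taskotron/results.yml`` produced
by a generic task might look like this (the field values here are made up
for the example; see the ResultYAML_ documentation for the full set of
supported fields)::

    results:
      - item: htop-2.0.2-1.fc25
        type: koji_build
        checkname: dist.rpmlint
        outcome: PASSED
        note: 5 errors, 10 warnings
        artifact: artifacts/rpmlint.log

Note that ``checkname`` contains the full test case name
(``dist.rpmlint``), as described above.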
A simple example of running an SI_ task::

    git clone https://upstreamfirst.fedorainfracloud.org/gzip.git
    runtask --item gzip-1.8-1.fc25 --type koji_build gzip/  # add --ssh or --libvirt

A simple example of running a generic task::

    git clone https://pagure.io/taskotron/task-rpmlint.git --branch feature/ansiblize
    runtask --item htop-2.0.2-1.fc25 --type koji_build task-rpmlint/  # add --ssh or --libvirt

So far, the following generic tasks have been converted to the ansiblized
libtaskotron version (look at the ``feature/ansiblize`` branch):

* task-rpmlint
* task-rpmgrill
* task-abicheck
* taskotron-python-versions
* task-rpmdeplint
* check_modulemd
* task-mtf-containers

.. _standard interface: https://fedoraproject.org/wiki/Changes/InvokingTests
.. _SI: `standard interface`_
.. _ResultYAML: https://qa.fedoraproject.org/docs/libtaskotron/latest/resultyaml.html

Installing a Development Environment
====================================

Please consider whether you really need a libtaskotron development
environment. Maybe you simply want to develop tests *using* libtaskotron?
In that case, please follow the `libtaskotron install instructions
<https://docs.qa.fedoraproject.org/libtaskotron/latest/docs/#install-libtaskotron>`_
instead. If you really want to develop libtaskotron itself, please continue.

For the moment, libtaskotron can't be fully installed by either pip or rpm
and needs a bit of both. On your Fedora system, install the necessary
packages::

    sudo dnf install \
        createrepo \
        gcc \
        git \
        koji \
        libtaskotron-config \
        libvirt-python \
        mash \
        python-doit \
        python-hawkey \
        python-pip \
        python-rpmfluff \
        python-virtualenv \
        rpm-build \
        rpm-python

On Fedora 27+, install ``python2-koji`` instead of ``koji``.
If you have not yet cloned the repository, do it now::

    git clone https://pagure.io/taskotron/libtaskotron.git
    cd libtaskotron

Then, set up the virtualenv::

    virtualenv --system-site-packages env_taskotron
    source env_taskotron/bin/activate
    pip install -r requirements.txt

If you encounter any installation issues, it's possible that you don't have
``gcc`` and the necessary C development headers installed to compile C
extensions from PyPI. Either install those based on the error messages, or
install the necessary packages directly to your system. See
``requirements.txt`` to learn how.

Finally, you should install libtaskotron in editable mode. This way you
don't need to reinstall the project every time you make changes to it; the
code changes are reflected immediately::

    pip install -e .

Before running any task, you also need to manually create a few required
directories. First, create a ``taskotron`` group if you don't have it
already, and add your user to it (you'll need to re-login afterwards)::

    getent group taskotron || sudo groupadd taskotron
    sudo usermod -aG taskotron <user>

Now create the directories with proper permissions::

    sudo install -d -m 775 -g taskotron /var/tmp/taskotron /var/log/taskotron \
        /var/cache/taskotron /var/lib/taskotron /var/lib/taskotron/artifacts \
        /var/lib/taskotron/images

Configuration
=============

The ``libtaskotron-config`` package installs config files with default
values into ``/etc/taskotron``. If you need to change those default values,
you can either change the files in ``/etc/taskotron`` or you can create
config files inside your checkout with::

    cp conf/taskotron.yaml.example conf/taskotron.yaml
    cp conf/yumrepoinfo.conf.example conf/yumrepoinfo.conf

The configuration files in ``conf/`` take precedence over anything in
``/etc``, so make sure that you're editing the correct file if you create
local copies.
In the development environment, it's also useful to have taskotron-generated
files cleaned up automatically, so that they don't occupy disk space in
vain. There is a tmpfiles.d template prepared for you; look into
``conf/tmpfiles.d``.

Running a Task
==============

A relatively simple example task is `rpmlint
<https://pagure.io/taskotron/task-rpmlint>`_. The task requires the rpmlint
tool to be installed, so be sure to run::

    sudo dnf install rpmlint

To run that task against a koji build with NVR ``<nvr>``, do the following::

    git clone https://pagure.io/taskotron/task-rpmlint.git
    runtask -i <nvr> -t koji_build task-rpmlint/runtask.yml

This will download the ``<nvr>`` build from koji into a temp directory under
``/var/tmp/taskotron/``, run rpmlint on the downloaded rpms, and print
output in YAML format to stdout. Example::

    runtask -i htop-2.0.2-1.fc24 -t koji_build task-rpmlint/runtask.yml

Using Disposable Clients for Task Execution
===========================================

When executing a task on the local machine is not desirable, you can use the
disposable client feature: a new virtual machine is spawned from a base
image and the task is executed in the VM. Note that you need to set up
Testcloud first; refer to `Setting up Testcloud`_.

To use the feature, just add the ``--libvirt`` parameter to the runtask
command::

    runtask --libvirt -i htop-2.0.2-1.fc24 -t koji_build task-rpmlint/runtask.yml

By default, the base image defined by the ``imageurl`` option in the
``taskotron.yaml`` config file is used. You can provide your own base image
by changing the ``imageurl`` value (use a ``file://`` URL to point to a
local file). We also provide updated base images at
https://tflink.fedorapeople.org/taskotron/taskotron-cloud/images/ (note that
the image needs to be uncompressed before use).
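As a sketch, pointing the runner at a locally downloaded base image might
look like this in ``conf/taskotron.yaml`` (assuming the usual YAML
``key: value`` syntax; the image path is only an example)::

    # base image used to spawn disposable clients
    imageurl: file:///var/lib/taskotron/images/fedora-24-cloud.qcow2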
If you store these images in ``/var/lib/taskotron/images``, adhere to their
naming conventions, and set ``force_imageurl=False`` in ``taskotron.yaml``,
libtaskotron will automatically find the latest available image for you, so
you don't need to update the ``imageurl`` option regularly.

Setting up Testcloud
--------------------

Configure an ssh key for Testcloud to use (the private key must be
passwordless). Edit ``~/.config/testcloud/settings.py`` or
``/etc/testcloud/settings.py`` and insert the respective public key in place
of the ``$SSH PUBKEY HERE$`` string::

    USER_DATA = """#cloud-config
    users:
        - default
        - name: root
          password: %s
          chpasswd: { expire: False }
          ssh-authorized-keys:
            - $SSH PUBKEY HERE$
    """

Running the Test Suite
======================

You can run the included test suite of unit and functional tests. With the
virtualenv active, execute the following from the root of the checkout::

    py.test

A nice HTML-based coverage report is available if you add the
``--cov-report=html`` command line parameter. If you write new tests, be
sure to run this to see whether the code is sufficiently covered by your
tests.

Building Documentation
======================

Libtaskotron's documentation is written in `reStructuredText
<http://docutils.sourceforge.net/rst.html>`_ and built using `Sphinx
<http://sphinx-doc.org/>`_. The documentation is easy to build if you have
followed the instructions to set up a development environment. To actually
build the documentation::

    doit builddocs

Build Automation
================

Several development-related tasks are at least somewhat automated using
`doit <http://pydoit.org/>`_. After either installing the doit package
(``python-doit``, ``python3-doit``) or installing it via pip
(``pip install doit``), you can see a list of available tasks and their
short descriptions by running ``doit list``.
Some of the available tasks are:

* ``buildsrpm`` takes a snapshot of the current git repo and uses the
  in-repo spec file to build a matching srpm in the ``builds/<version>``
  directory. Note that if a snapshot already exists for a given version, a
  new snapshot will not be generated until the existing one is deleted.

* ``chainbuild`` uses mockchain and the existing COPR repo to build a noarch
  binary rpm from the latest srpm.

* ``builddocs`` builds the documentation from the current git sources.

By default, the tool is pretty quiet. If you would like to see more verbose
output, add ``--verbosity 2`` to the doit command and all stdout/stderr
output will be shown.