#400 enable docker deployment
Closed 6 years ago by jskladan. Opened 7 years ago by lnie.
taskotron/ lnie/libtaskotron run-on-docker  into  feature/ansiblize

file removed
-7
@@ -1,7 +0,0 @@ 

- {

-     "project_id" : "libtaskotron",

-     "conduit_uri" : "https://phab.qa.fedoraproject.org",

-     "arc.land.onto.default" : "develop",

-     "arc.feature.start.default" : "develop",

-     "unit.engine" : "PytestTestEngine"

- }

file removed
-12
@@ -1,12 +0,0 @@ 

- {

-   "linters": {

-     "flake8": {

-       "type": "flake8",

-       "include": "(\\.py$)",

-       "severity.rules": {

-         "(^E)": "warning",

-         "(^F)": "error"

-       }

-     }

-   }

- }

file modified
+88 -2
@@ -20,6 +20,90 @@ 

  __ Taskotron_

  

  

+ Running ansiblized (standard interface) tasks

+ =============================================

+ 

+ A COPR repo for the ansiblized libtaskotron package is available at:

+ https://copr.fedorainfracloud.org/coprs/kparal/taskotron-ansiblize/

+ 

+ Here is short documentation of the most important changes related to the

+ switch of the libtaskotron runner to its ansiblized version and its support

+ of the test `standard interface`_:

+ 

+ * The basic command line is now::

+ 

+     runtask --item ITEM --type TYPE DIRECTORY

+ 

+   where ``DIRECTORY`` contains ``tests.yml`` (as defined in SI_).

+ 

+ * The tasks expect to be run as ``root`` and can make arbitrary changes to the

+   system. Therefore you should never run them locally on your production

+   machine, but instead use the ``--ssh`` or ``--libvirt`` command line option.

+ 

+   When you use ``--ssh``, you need to have an existing system running with

+   password-less ssh login configured and `required packages

+   <https://fedoraproject.org/wiki/CI/Tests#Setting_up>`_ installed.

+ 

+   When you use ``--libvirt``, you need to create an OS image using

+   `taskotron_cloud

+   <https://pagure.io/taskotron/base_images/raw/master/f/trigger_build/kickstarts/taskotron_cloud.ks>`_

+   kickstart and set ``imageurl=file:///path/to/image.qcow2`` in

+   ``/etc/taskotron/taskotron.yml``.

+ 
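
For instance, after building the image from the kickstart, the relevant line of ``/etc/taskotron/taskotron.yml`` could look like this (the image path is illustrative, not a required location):

```yaml
# /etc/taskotron/taskotron.yml (excerpt) -- point libtaskotron at the local image
imageurl: file:///home/user/images/taskotron_cloud-f27.qcow2
```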

+   The system version must always match the ``ITEM`` version tested.

+ 

+ * There are two types of tasks: SI_ and *generic*.

+ 

+   *SI* tasks are stored in distgit and their purpose is to test just a single

+   specific package (or module), e.g. the gzip test suite, the firefox test

+   suite, etc.

+ 

+   *Generic* tasks are SI tests extended with Taskotron-specific functionality.

+   They need extra input information and generate custom results to be sent

+   to ResultsDB. Examples are task-rpmlint, task-rpmdeplint, task-upgradepath,

+   etc.

+ 

+ * All tasks are considered *SI* by default. If a task wants to be considered

+   a *generic* task, it needs to include the ``taskotron_generic_task: true``

+   variable in the first play of the ``tests.yml`` playbook.

+ 
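
As a sketch, the first play of such a ``tests.yml`` might declare the variable like this (the play body and check command are illustrative, not part of any real task):

```yaml
# Hypothetical tests.yml of a generic task
- hosts: localhost
  vars:
    taskotron_generic_task: true
  tasks:
    - name: Run the check and store its log
      shell: rpmlint {{ taskotron_item }} > {{ artifacts }}/test.log
```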

+   Generic tasks do not receive the ``subjects`` variable as defined by the

+   SI_, but receive the ``taskotron_item`` variable instead. The item/subject

+   is not downloaded and installed automatically as mandated by the SI, and

+   the task's exit code is not used for generating a ResultsDB entry

+   Instead, they're expected to create ``{{artifacts}}/taskotron/results.yml``

+   file in ResultYAML_ format themselves.

+ 

+ * After execution, all important files are expected to be in the

+   ``{{artifacts}}`` directory (printed by libtaskotron at the end of the

+   execution). At minimum the ``ansible.log`` and ``test.log`` should be there

+   according to the SI_.

+ 

+ * The format of ResultYAML_ changed slightly. ``checkname`` is no longer just

+   the task name part with namespace left out (e.g. just ``rpmlint``

+   without ``dist.``), but the full testcase name (e.g. ``dist.rpmlint``).

+ 
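
Putting the last two points together, a hand-written ``{{artifacts}}/taskotron/results.yml`` could look roughly like this (the keys follow the ResultYAML docs; the concrete values are illustrative):

```yaml
results:
  - item: htop-2.0.2-1.fc25
    type: koji_build
    outcome: PASSED
    checkname: dist.rpmlint   # full testcase name, namespace included
```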

+ A simple example for running an SI_ task is::

+ 

+   git clone https://upstreamfirst.fedorainfracloud.org/gzip.git

+   runtask --item gzip-1.8-1.fc25 --type koji_build gzip/  # add --ssh or --libvirt

+ 

+ A simple example for running a generic task is::

+ 

+   git clone https://pagure.io/taskotron/task-rpmlint.git --branch feature/ansiblize

+   runtask --item htop-2.0.2-1.fc25 --type koji_build task-rpmlint/  # add --ssh or --libvirt

+ 

+ So far the following generic tasks have been converted to the ansiblized

+ libtaskotron version (look at the ``feature/ansiblize`` branch):

+ 

+ * task-rpmlint

+ 

+ 

+ .. _standard interface: https://fedoraproject.org/wiki/Changes/InvokingTests

+ .. _SI: `standard interface`_

+ .. _ResultYAML: https://qa.fedoraproject.org/docs/libtaskotron/latest/resultyaml.html

+ 

+ 

  Installing a Development Environment

  ====================================

  
@@ -50,6 +134,8 @@ 

    rpm-build              \

    rpm-python

  

+ On Fedora 27+, install ``python2-koji`` instead of ``koji``.

+ 

  If you have not yet cloned the repository, do it now::

  

    git clone https://pagure.io/taskotron/libtaskotron.git
@@ -109,7 +195,7 @@ 

  Running a Task

  ==============

  

- A relatively simple example task is `rpmlint <https://bitbucket.org/fedoraqa/task-rpmlint>`_.

+ A relatively simple example task is `rpmlint <https://pagure.io/taskotron/task-rpmlint>`_.

  

  The task requires the rpmlint tool to be installed, so be sure to run::

  
@@ -117,7 +203,7 @@ 

  

  To run that task against a koji build with NVR ``<nvr>``, do the following::

  

-   git clone https://bitbucket.org/fedoraqa/task-rpmlint.git

+   git clone https://pagure.io/taskotron/task-rpmlint.git

    runtask -i <nvr> -t koji_build task-rpmlint/runtask.yml

  

  This will download the ``<nvr>`` from koji into a temp directory under

file removed
-24
@@ -1,24 +0,0 @@ 

- The linter's behaviour is configured in the .arclint file.

- 

- As the .arclint file parser does not support YAML comments, some documentation

- can be put here if needed. Generic arcanist linter documentation is here:

- https://secure.phabricator.com/book/phabricator/article/arcanist_lint/

- 

- Flake8

- ======

- All PEP8 error codes (E* codes) are considered warnings in Phabricator, so that

- only changes to the modified lines are displayed by default (unlike errors,

- which are always displayed regardless of which lines were modified).

- 

- All PyFlakes error codes (F* codes) are considered errors, because they are

- often introduced when modifying other lines than the one in question, and we

- want to be notified of those.

- 

- Additional Flake8 configuration is stored in tox.ini - it contains configuration

- which is not possible to do in .arclint file.

- 

- If you want to ignore a specific source code line, use '# noqa' comment. If you

- want to ignore the whole file, add '# flake8: noqa' comment. Read more

- documentation about flake8 at:

- https://flake8.readthedocs.org/

- 

file modified
+13 -4
@@ -5,8 +5,17 @@ 

  # A list of git repos that are allowed to post a result into a particular namespace

  namespaces_whitelist:

      dist:

-         - git@bitbucket.org:fedoraqa/task-rpmlint.git

-         - git@bitbucket.org:fedoraqa/task-depcheck.git

-         - git@bitbucket.org:fedoraqa/task-upgradepath.git

+         - https://pagure.io/taskotron/task-rpmlint.git

+         - https://pagure.io/taskotron/task-upgradepath.git

+         - https://pagure.io/task-abicheck.git

+         - https://pagure.io/taskotron/task-rpmgrill.git

+         - https://github.com/fedora-python/taskotron-python-versions.git

+         - https://github.com/fedora-modularity/check_modulemd.git

+         - https://pagure.io/taskotron/task-rpmdeplint.git

+         - https://pagure.io/taskotron/task-upstream-atomic.git

+         - https://pagure.io/taskotron/task-fedora-cloud-tests.git

+         - https://pagure.io/taskotron/task-modularity-testing-framework.git

      pkg:

-         - git://pkgs.fedoraproject.org/rpms-checks/

+         - git://pkgs.fedoraproject.org/test-rpms/

+         - git://pkgs.fedoraproject.org/test-modules/

+         - git://pkgs.fedoraproject.org/test-docker/

file modified
+20 -25
@@ -36,15 +36,31 @@ 

  [rawhide]

  url = %(rawhideurl)s

  path = development/rawhide

- tag = f27

+ tag = f28

  release_status = rawhide

  

- # Fedora 26

- [f26]

+ # Fedora 27

+ [f27]

  url = %(rawhideurl)s

- path = development/26

+ path = development/27

  release_status = branched

  

+ [f27-updates]

+ url = %(updatesurl)s

+ path = 27

+ parent = f27

+ 

+ [f27-updates-testing]

+ url = %(updatesurl)s

+ path = testing/27

+ parent = f27-updates

+ 

+ # Fedora 26

+ [f26]

+ url = %(goldurl)s

+ path = 26

+ release_status = stable

+ 

  [f26-updates]

  url = %(updatesurl)s

  path = 26
@@ -77,24 +93,3 @@ 

  primary_arches = armhfp, i386, x86_64

  alternate_arches = aarch64, ppc64, ppc64le, s390x

  

- # Fedora 24

- [f24]

- url = %(goldurl)s

- path = 24

- release_status = stable

- primary_arches = armhfp, i386, x86_64

- alternate_arches = aarch64, ppc64, ppc64le, s390x

- 

- [f24-updates]

- url = %(updatesurl)s

- path = 24

- parent = f24

- primary_arches = armhfp, i386, x86_64

- alternate_arches = aarch64, ppc64, ppc64le, s390x

- 

- [f24-updates-testing]

- url = %(updatesurl)s

- path = testing/24

- parent = f24-updates

- primary_arches = armhfp, i386, x86_64

- alternate_arches = aarch64, ppc64, ppc64le, s390x

@@ -0,0 +1,5 @@ 

+ [defaults]

+ # Make task output "pretty printed" (structured)

+ # https://serverfault.com/a/846232

+ stdout_callback = debug

+ 

@@ -0,0 +1,168 @@ 

+ #!/usr/bin/python

+ # Make coding more python3-ish

+ from __future__ import (absolute_import, division)

+ __metaclass__ = type

+ 

+ import os

+ import ast

+ 

+ from ansible.module_utils.basic import AnsibleModule

+ 

+ try:

+     import libtaskotron.exceptions as exc

+     from libtaskotron.ext.fedora.koji_utils import KojiClient

+     from libtaskotron.ext.fedora import rpm_utils

+ except ImportError:

+     libtaskotron_found = False

+ else:

+     libtaskotron_found = True

+ 

+ # these will need to be handled better but this is just a PoC

+ WORKDIR = os.path.abspath('./taskotron-workdir')

+ 

+ def main():

+     mod = AnsibleModule(

+         argument_spec=dict(

+             action=dict(required=True),

+             arch=dict(required=False, default=['noarch']),

+             workdir=dict(required=False, default="/tmp/firstmod"),

+             arch_exclude=dict(required=False),

+             build_log=dict(required=False, default=False, type="bool"),

+             debuginfo=dict(required=False, default=False, type="bool"),

+             koji_build=dict(required=False),

+             koji_tag=dict(required=False),

+             src=dict(required=False, default=False, type="bool"),

+             target_dir=dict(required=False, default='.')

+         )

+     )

+ 

+     # TODO: check args for completeness

+     if not libtaskotron_found:

+         mod.fail_json(msg="The libtaskotron python module is required")

+ 

+     try:

+         kojidirective = KojiDirective()

+         data = kojidirective.process(mod)

+     except exc.TaskotronError as e:

+         mod.fail_json(msg=str(e))

+ 

+     subjects = ' '.join(data['downloaded_rpms'])

+ 

+     mod.exit_json(msg="worky!", changed=True, subjects=subjects)

+ 

+ class KojiDirective(object):

+ 

+     def __init__(self, koji_session=None):

+         super(KojiDirective, self).__init__()

+         if koji_session is None:

+             self.koji = KojiClient()

+         else:

+             self.koji = koji_session

+ 

+     def process(self, mod):

+         # process params

+         valid_actions = ['download', 'download_tag', 'download_latest_stable']

+         action = mod.params['action']

+         if action not in valid_actions:

+             raise exc.TaskotronDirectiveError('%s is not a valid action for koji '

+                                                 'directive' % action)

+ 

+         # use register to save/return information

+         if 'target_dir' not in mod.params:

+             target_dir = WORKDIR

+         else:

+             target_dir = mod.params['target_dir']

+ 

+         if 'arch' not in mod.params:

+             detected_args = ', '.join(mod.params.keys())

+             raise exc.TaskotronDirectiveError(

+                 "The koji directive requires 'arch' as an argument. Detected "

+                 "arguments: %s" % detected_args)

+ 

+         # this is supposedly safe enough to use on raw input but should be double checked

+         # http://stackoverflow.com/questions/1894269/convert-string-representation-of-list-to-list-in-python

+         # the default is already a list; only parse the value when a string

+         # was passed on the command line

+         arches = mod.params['arch']

+         if not isinstance(arches, list):

+             arches = ast.literal_eval(arches)

+         if not isinstance(arches, list):

+             raise exc.TaskotronError("arches must be a list")

+ 

+         if arches and ('all' not in arches) and ('noarch' not in arches):

+             arches.append('noarch')

+ 

+         arch_exclude_string = mod.params.get('arch_exclude', None)

+         if arch_exclude_string is None:

+             arch_exclude = []

+         else:

+             arch_exclude = ast.literal_eval(mod.params['arch_exclude'])

+ 

+         debuginfo = mod.params.get('debuginfo', False)

+         src = mod.params.get('src', False)

+         build_log = mod.params.get('build_log', False)

+ 

+         if not isinstance(arch_exclude, list):

+             print("arch_exclude: {}".format(type(arch_exclude)))

+             raise Exception("arch_exclude must be a list")

+         # download files

+         output_data = {}

+ 

+         if action == 'download':

+             if 'koji_build' not in mod.params:

+                 detected_args = ', '.join(mod.params.keys())

+                 raise exc.TaskotronDirectiveError(

+                     "The koji directive requires 'koji_build' for the 'download' "

+                     "action. Detected arguments: %s" % detected_args)

+ 

+             nvr = rpm_utils.rpmformat(mod.params['koji_build'], 'nvr')

+             output_data['downloaded_rpms'] = self.koji.get_nvr_rpms(

+                 nvr, target_dir, arches=arches, arch_exclude=arch_exclude,

+                 debuginfo=debuginfo, src=src)

+ 

+         elif action == 'download_tag':

+             if 'koji_tag' not in mod.params:

+                 detected_args = ', '.join(mod.params.keys())

+                 raise exc.TaskotronDirectiveError(

+                     "The koji directive requires 'koji_tag' for the 'download_tag' "

+                     "action. Detected arguments: %s" % detected_args)

+ 

+             koji_tag = mod.params['koji_tag']

+ 

+             output_data['downloaded_rpms'] = self.koji.get_tagged_rpms(

+                 koji_tag, target_dir, arches=arches, arch_exclude=arch_exclude,

+                 debuginfo=debuginfo, src=src)

+ 

+         elif action == 'download_latest_stable':

+             if 'koji_build' not in mod.params:

+                 detected_args = ', '.join(mod.params.keys())

+                 raise exc.TaskotronDirectiveError(

+                     "The koji directive requires 'koji_build' for the 'download_latest_stable' "

+                     "action. Detected arguments: %s" % detected_args)

+ 

+             name = rpm_utils.rpmformat(mod.params['koji_build'], 'n')

+             disttag = rpm_utils.get_dist_tag(mod.params['koji_build'])

+             # we need to do 'fc22' -> 'f22' conversion

+             tag = disttag.replace('c', '')

+ 

+             # first we need to check updates tag and if that fails, the latest

+             # stable nvr is in the base repo

+             tags = ['%s-updates' % tag, tag]

+             nvr = self.koji.latest_by_tag(tags, name)

+ 

+             output_data['downloaded_rpms'] = self.koji.get_nvr_rpms(

+                 nvr, target_dir, arch_exclude=arch_exclude,

+                 arches=arches, debuginfo=debuginfo, src=src)

+ 

+         # download build.log if requested

+         if build_log:

+             if action in ('download', 'download_latest_stable'):

+                 ret_log = self.koji.get_build_log(

+                         nvr, target_dir, arches=arches, arch_exclude=arch_exclude)

+                 output_data['downloaded_logs'] = ret_log['ok']

+                 output_data['log_errors'] = ret_log['error']

+             else:

+                 #log.warn("Downloading build logs is not supported for action '%s', ignoring.",

+                 #         action)

+                 print("Downloading build logs is not supported for action '%s', ignoring." % action)

+ 

+         return output_data

+ 

+ if __name__ == '__main__':

+     main()
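
The ``'fc22' -> 'f22'`` conversion and tag lookup order used by ``download_latest_stable`` above can be sketched standalone (the helper name is ours, not part of the module):

```python
def stable_lookup_tags(disttag):
    """Given an rpm dist tag such as 'fc25', return the koji tags that are
    searched for the latest stable build, updates tag first."""
    tag = disttag.replace('c', '')  # 'fc25' -> 'f25'
    return ['%s-updates' % tag, tag]

print(stable_lookup_tags('fc25'))  # ['f25-updates', 'f25']
```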

file renamed
+27 -19
@@ -1,25 +1,36 @@ 

- - hosts: all

+ - hosts: "{{ ip_select }}"

    remote_user: root

    vars:

      local: false

+     docker: "{{ docker }}"

    tasks:

-     - name: Clean taskdir

+     - name: Clean client taskdir

        file:

          path: "{{ client_taskdir }}"

          state: absent

+       when: not docker

  

-     - name: Create taskdir

+     - name: Create client taskdir

        file:

          path: "{{ client_taskdir }}"

          state: directory

+       when: not docker 

  

-     - name: Upload taskdir

+     - name: Upload taskdir to client taskdir

        synchronize:

          src: "{{ taskdir }}/"

          dest: "{{ client_taskdir }}"

  

-     - name: create artifacts dir

-       command: mkdir -p "{{ artifacts }}"

+     - name: Create artifacts dir

+       file:

+         path: "{{ artifacts }}"

+         state: directory

+       when: not local

+ 

+     - name: Create artifacts/taskotron subdir

+       file:

+         path: "{{ artifacts }}/taskotron"

+         state: directory

        when: not local

  

      - name: Install required packages
@@ -29,21 +40,12 @@ 

        with_items:

          - ansible

          - libselinux-python

+         - standard-test-roles

+         - python2-dnf

        when: not local

  

-     - name: Run the task

-       become: yes

-       become_user: root

-       command: ansible-playbook "{{ client_taskdir }}/{{ taskfile }}" --inventory=localhost, --connection=local -e artifacts="{{ artifacts }}" -e subjects="{{ subjects }}"

-       register: output

- 

-     - name: Dump task output

-       debug: var=output.stdout_lines

- 

-     - name: Save ansible.log

-       copy:

-         dest: "{{ artifacts }}/ansible.log"

-         content: "{{ output.stdout }}"

+     - name: Include either generic or SI execution tasks

+       include: "{{ exec_tasks }}"

  

      - name: Collect logs

        synchronize:
@@ -51,3 +53,9 @@ 

          src: "{{ artifacts }}/*"

          dest: "{{ artifacts }}"

        when: not local

+ 

+     - name: Warn about failed task

+       debug:

+         msg: The executed task failed. Inspect 'ansible.log' and 'test.log' in

+              the artifacts directory to learn more.

+       when: task.rc != 0

@@ -0,0 +1,17 @@ 

+ - name: Run tests.yml

+   become: yes

+   become_user: root

+   shell: >

+     ansible-playbook "{{ client_taskdir }}/tests.yml"

+     --inventory=localhost,

+     --connection=local

+     -e artifacts="{{ artifacts }}"

+     -e taskotron_item="{{ taskotron_item }}"

+     &> "{{ artifacts }}/ansible.log"

+   environment:

+     TEST_ARTIFACTS: "{{ artifacts }}"

+     # Make task output "pretty printed" (structured)

+     # https://serverfault.com/a/846232

+     ANSIBLE_STDOUT_CALLBACK: 'debug'

+   ignore_errors: yes

+   register: task

@@ -0,0 +1,54 @@ 

+   # The item is specified on the command line (e.g. a koji_build); this module

+   # downloads artifacts related to it and returns subject(s) (path strings)

+   # for the STI tests.yml.

+ - name: Download subjects (only rpms atm)

+   taskotron_koji:

+     action: "download"

+     koji_build: "{{ taskotron_item }}"

+     arch: ['x86_64', 'noarch'] #FIXME

+     target_dir: "{{ client_taskdir }}" #FIXME?

+     src: False

+   register: koji_output

+ 

+ - debug: var=koji_output

+ 

+ - name: Run tests.yml

+   become: yes

+   become_user: root

+   # FIXME add context tags

+   shell: >

+     ansible-playbook "{{ client_taskdir }}/tests.yml"

+     --inventory="{{ sti_inventory }}"

+     --connection=local

+     -e artifacts="{{ artifacts }}"

+     -e subjects="{{ koji_output['subjects'] }}"

+     &> "{{ artifacts }}/ansible.log"

+   environment:

+     TEST_SUBJECTS: "{{ koji_output['subjects'] }}"

+     TEST_ARTIFACTS: "{{ artifacts }}"

+     # Make task output "pretty printed" (structured)

+     # https://serverfault.com/a/846232

+     ANSIBLE_STDOUT_CALLBACK: 'debug'

+   ignore_errors: yes

+   register: task

+ 

+ - name: Save exit code

+   copy:

+     content: "{{ task.rc }}"

+     dest: "{{ artifacts }}/taskotron/test.rc"

+ 

+ - name: Set outcome based on exit code

+   set_fact: outcome={{ (task.rc == 0) | ternary('PASSED', 'FAILED') }}

+ 

+ - name: Generate ResultsDB result file

+   # FIXME: type of result

+   # FIXME: checkname

+   shell: >

+     taskotron_result

+     -f "{{ artifacts }}/taskotron/results.yml"

+     -i "{{ taskotron_item }}"

+     -o "{{ outcome }}"

+     -t koji_build

+     -a "{{ artifacts }}/test.log"

+     -c "pkg.{{ taskotron_item }}"

+   args:

+     creates: "{{ artifacts }}/taskotron/results.yml"

data/report_templates/html.j2 libtaskotron/report_templates/html.j2
file renamed
file was moved with no change to the file
file modified
+29 -135
@@ -11,23 +11,18 @@ 

  

  Libtaskotron is written mostly in `Python <https://www.python.org/>`_.

  

- Source code

- -----------

- 

- The source code for libtaskotron is available at:

- https://pagure.io/taskotron/libtaskotron

- 

- If you submit patches, please use the process of :ref:`submitting-code`.

  

  .. _taskotron-bugs:

  

- Bugs, issues and tasks

- ----------------------

+ Source code, bugs, tasks

+ ------------------------

+ 

+ The source code for libtaskotron is available at:

+ https://pagure.io/taskotron/libtaskotron

  

- We use `Phabricator <http://phabricator.org/>`_ to track issues and facilitate

- code reviews for several projects related to libtaskotron and Fedora QA.

+ This project is also used to track bugs and tasks.

  

- Our phabricator instance can be found at https://phab.qa.fedoraproject.org/

+ If you submit patches, please use the process of :ref:`submitting-code`.

  

  

  .. _running-tests:
@@ -36,7 +31,7 @@ 

  ---------------------------------

  

  We place a high value on having decent test coverage for the libtaskotron code.

- In general, tests are written using `pytest <http://pytest.org/>` and are broken

+ In general, tests are written using `pytest <http://pytest.org/>`_ and are broken

  up into two types:

  

    * **Unit Tests** test the core logic of code. They do not touch the filesystem
@@ -47,13 +42,9 @@ 

      tests are often much slower than unit tests but they offer coverage which is

      not present in the unit tests.

  

- To run the unit tests::

- 

-   py.test testing/

+ To run the unit tests, execute::

  

- To run the functional and unit tests::

- 

-   py.test -F testing/

+   py.test

  

  

  Continuous integration
@@ -82,52 +73,6 @@ 

  files automatically cleaned up, so that they don't occupy disk space in vain.

  There is a tmpfiles.d template prepared for you, look into ``conf/tmpfiles.d``.

  

- Support tools

- -------------

- 

- There are several tools that, while not required, make the process of developing

- for libtaskotron significantly easier.

- 

- 

- .. _installing-arcanist:

- 

- Arcanist

- ^^^^^^^^

- 

- `Arcanist <https://secure.phabricator.com/book/phabricator/article/arcanist/>`_

- is a command line interface to Phabricator which can be used to submit code

- reviews, download/apply code under review among other useful functions.

- 

- As Arcanist is an interface to Phabricator, we **strongly recommend** that you

- install it using our packages instead of from the upstream git repos (as

- described in upstream documentation). That way, there is no question that the

- Arcanist version used locally is compatible with our Phabricator instance.

- 

- To add our dnf repository containing Phabricator related packages, run::

- 

-   sudo curl https://repos.fedorapeople.org/repos/tflink/phabricator/fedora-phabricator.repo \

-   -o /etc/yum.repos.d/fedora-phabricator.repo

- 

- Once the repository has been configured, install Arcanist using::

- 

-   sudo dnf -y install arcanist

- 

- Arcanist is written in PHP and installing it will pull in several PHP packages

- as dependencies.

- 

- 

- .. _installing-gitflow:

- 

- gitflow

- ^^^^^^^

- 

- The `gitflow plugin for git <https://github.com/nvie/gitflow>`_ is another

- useful, but not required tool that is available in the Fedora repositories.

- 

- To install the gitflow plugin, run::

- 

-   sudo dnf -y install gitflow

- 

  

  .. _submitting-code:

  
@@ -138,9 +83,8 @@ 

  branching model and with the exception of hotfixes, all changes made should be

  against the ``develop`` branch.

  

- While not required, using the `gitflow plugin for git <https://github.com/nvie/gitflow>`_

- is recommended as it makes the process significantly easier. See

- :ref:`installing-gitflow` for instructions on installing gitflow on Fedora.

+ If you want to use the `gitflow plugin for git <https://github.com/nvie/gitflow>`_

+ to make this process more user-friendly, simply install the ``gitflow`` package.

  

  

  Start a new feature
@@ -148,9 +92,13 @@ 

  

  To start work on a new feature, use::

  

-   git flow feature start TXXX-short-description

+   git checkout -b feature/XXX-short-description develop

  

- Where ``TXXX`` is the issue number in Phabricator and ``short-description`` is a

+ or if you want to use gitflow, use::

+ 

+   git flow feature start XXX-short-description

+ 

+ where ``XXX`` is the issue number in Pagure and ``short-description`` is a

  short, human understandable description of the change contained in this branch.

  

  In general, short reviews are better than long reviews. If you can, please break
@@ -160,36 +108,10 @@ 

  Submitting code for review

  ^^^^^^^^^^^^^^^^^^^^^^^^^^

  

- .. note::

- 

-   Make sure to run all unit and functional tests before submitting code for

-   review. Any code that causes test failure will receive requests to fix the

-   offending code or will be rejected. See :ref:`running-tests` for information

-   on running unit and functional tests.

- 

- Code reviews are done through Phabricator, not pull requests. While it is possible

- to submit code for review through the web interface, :ref:`installing-arcanist`

- is recommended.

- 

- You do not need to push code anywhere public before submitting a review. Unless

- there is a good reason to do so (and there are very few), pushing a feature

- branch to origin is frowned on as it makes repository maintenance more difficult

- for no real benefit.

- 

- To submit code for review, make sure that your code has been updated with respect

- to ``origin/develop`` and run the following from your checked-out feature branch::

- 

-   arc diff develop

- 

- The first time that you use Arcanist, it will ask for an API key which can be

- retrieved from a link contained in that prompt.

- 

- Arcanist will create a new code review on our Phabricator instance and prompt

- you for information about the testing which has been done, a description of the

- code under review and people to whom the review should be assigned. If you're

- not clear on who should review your code, leave the ``reviewers`` section blank

- and someone will either review your code or assign the review task to someone

- who will.

+ Make sure to run all unit and functional tests before submitting code for

+ review. Any code that causes test failure will receive requests to fix the

+ offending code or will be rejected. See :ref:`running-tests` for information

+ on running unit and functional tests.

  

  

  Updating code reviews
@@ -199,13 +121,6 @@ 

  the requested changes have been made in your feature branch, commit them and

  make sure that your branch is still up to date with respect to ``origin/develop``.

  

- To update the existing review, use::

- 

-   arc diff develop --update DXXX

- 

- Where ``DXXX`` is the Differential ID assigned to your review when it was

- originally created.

- 

  

  Pushing code

  ^^^^^^^^^^^^
@@ -226,46 +141,27 @@ 

  before starting the merge process, else messy commits and merges may ensue.

  Once ``develop`` is up-to-date, the basic workflow to use is::

  

-     git checkout feature/TXXX-some-feature

+     git checkout feature/XXX-some-feature

      git rebase develop

  

  To merge the code into develop, use one of two commands. If the feature can be

  reasonably expressed in one commit (most features), use::

  

-     git flow feature finish --squash TXXX-some-feature

+     git flow feature finish --squash XXX-some-feature

  

  Else, if the Feature is larger and should cover multiple commits (less common),

  use::

  

-     git flow feature finish TXXX-some-feature

+     git flow feature finish XXX-some-feature

  

- After merging the code, please inspect log messages in case they need to be

- shortened (Phabricator likes to make long commit messages). Groups of commits

- should at least have a short description of their content and a link to the

- revision in differential. Once the feature is ready, push to origin::

+ After merging the code, please inspect git commit description and make it

+ prettier if needed. Groups of commits should at least have a short description

+ of their content and a link to the issue in Pagure. Once the feature is ready,

+ push to origin::

  

      git push origin develop

  

  

- 

- Reviewing code

- --------------

- 

- To review code, use `Phabricator's web interface <https://phab.qa.fedoraproject.org/>`_

- to submit comments, request changes or accept reviews.

- 

- If you want to look at the code under review locally to run tests or test

- suggestions prior to posting them, use Arcanist to apply the review code.

- 

- Make sure that your local repo is at the same base revision as the code under

- review (usually origin/develop) and run the following command::

- 

-   arc patch DXXX

- 

- Where ``DXXX`` is the review id that you're interested in. Arcanist will grab the

- code under review and apply it to a local branch named ``arcpatch-DXXX``. You can

- then look at the code locally or make modifications.

- 

  Writing directives

  ==================

  
@@ -308,11 +204,9 @@ 

  Sphinx has several built-in info fields which should be used to document

  function/method arguments and return data.

  

- 

  The following is an excerpt from `the Sphinx documentation

  <http://sphinx-doc.org/domains.html#info-field-lists>`_

  

- 

  Inside Python object description directives, reST field lists with these fields

  are recognized and formatted nicely:

  

file modified
+11 -4
@@ -12,8 +12,8 @@ 

  a little light on the practical creation of tasks. :doc:`writingtasks`

  and some existing tasks are also good references:

  

- * `rpmlint <https://bitbucket.org/fedoraqa/task-rpmlint>`_

- * `examplebodhi <https://bitbucket.org/fedoraqa/task-examplebodhi>`_

+ * `rpmlint <https://pagure.io/taskotron/task-rpmlint>`_

+ * `task examples <https://pagure.io/taskotron/task-examples>`_

  

  Task description

  ================
@@ -73,8 +73,8 @@ 

  ------------

  

  A task may also require the presence of other code to support execution. Those

- dependencies are specified as part of the environment description. Anything that

- ``dnf install`` supports as an argument on the command line is supported.

+ dependencies are specified as part of the environment description. It is

+ recommended to only use package names or file paths.

  

  .. code-block:: yaml

  
@@ -82,6 +82,13 @@ 

        rpm:

            - python-solv

            - python-librepo

+           - /usr/bin/xz

+ 

+ .. note::

+ 

+   You might also use advanced syntax like package version comparisons or group

+   names (as long as it's supported by ``dnf install``), but such tasks might

+   not work properly when executed under a non-root user.
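The note above recommends plain package names or file paths over advanced `dnf install` syntax. A rough way to sanity-check a formula's rpm list against this recommendation can be sketched in Python (a pure illustration; the heuristic and function name are invented and not part of libtaskotron):

```python
def is_simple_dep(dep):
    '''Heuristic: True for plain package names or file paths, False for
    advanced dnf syntax (version comparisons, group names) that might not
    work when a task runs under a non-root user.'''
    if dep.startswith('/'):   # file path dep, e.g. /usr/bin/xz
        return True
    if dep.startswith('@'):   # group name, e.g. @c-development
        return False
    # version comparisons contain operators or whitespace, e.g. 'foo >= 1.2'
    return not any(op in dep for op in ('<', '>', '=', ' '))

print([is_simple_dep(d) for d in
       ['python-solv', '/usr/bin/xz', 'foo >= 1.2', '@c-development']])
```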

  

  .. note::

  

file modified
+3 -4
@@ -103,7 +103,7 @@ 

  ^^^^^^^^^^^^

  

  A very simple task from which you can learn all the basics. See its

- `source code <https://bitbucket.org/fedoraqa/task-rpmlint>`_.

+ `source code <https://pagure.io/taskotron/task-rpmlint>`_.

  

  Run the task like this (replace the example NVR with the build you want to

  test):
@@ -120,11 +120,10 @@ 

  * `task-abicheck <https://pagure.io/task-abicheck/>`_

  * `task-python-versions

    <https://github.com/fedora-python/task-python-versions>`_

- * `task-rpmgrill <https://bitbucket.org/fedoraqa/task-rpmgrill>`_

+ * `task-rpmgrill <https://pagure.io/taskotron/task-rpmgrill>`_

  

  You can also check our

- `example tasks <https://bitbucket.org/account/user/fedoraqa/projects/TEX>`_

+ `task examples <https://pagure.io/taskotron/task-examples>`_

  (warning, some of them might be out of date) or all the rest of the tasks on

- `bitbucket <https://bitbucket.org/fedoraqa/>`_ and

  `pagure <https://pagure.io/group/taskotron>`_ (projects starting with

  *task-*).

file modified
+1 -1
@@ -46,7 +46,7 @@ 

  

  Refer to the :ref:`quick start <quick-run>` guide for the basics.

  

- Using `task-rpmlint <https://bitbucket.org/fedoraqa/task-rpmlint>`_ as an

+ Using `task-rpmlint <https://pagure.io/taskotron/task-rpmlint>`_ as an

  example, the task repository contains the following important files::

  

    task-rpmlint/

file modified
+5 -5
@@ -15,12 +15,12 @@ 

  Examples

  ========

  

- examplebodhi

- ------------

+ bodhi example

+ -------------

  

- `examplebodhi task git repository <https://bitbucket.org/fedoraqa/task-examplebodhi>`_

+ `bodhi example files <https://pagure.io/taskotron/task-examples/blob/master/f/bodhi>`_

  

- The examplebodhi task is a trivial example which takes an update and determines

+ The bodhi example task is a trivial example which takes an update and determines

  a PASS/FAIL results in a pseudo-random fashion. This task wasn't designed to be

  used in production but is useful as a starting point and for development

  purposes.
@@ -28,7 +28,7 @@ 

  rpmlint

  -------

  

- `rpmlint task git repository <https://bitbucket.org/fedoraqa/task-rpmlint>`_

+ `rpmlint task git repository <https://pagure.io/taskotron/task-rpmlint>`_

  

  rpmlint is the simplest of the production tasks. It takes a koji build and runs

  rpmlint on it, reporting results as result-YAML.

file modified
+3 -3
@@ -18,10 +18,10 @@ 

  ################################################################################

  

  NAME = 'libtaskotron'

- MOCKENV = 'fedora-24-x86_64'

- TARGETDIST = 'fc24'

+ MOCKENV = 'fedora-26-x86_64'

+ TARGETDIST = 'fc26'

  BUILDARCH = 'noarch'

- COPRREPOS = ['https://copr-be.cloud.fedoraproject.org/results/tflink/taskotron/fedora-24-x86_64/']

+ COPRREPOS = ['https://copr-be.cloud.fedoraproject.org/results/tflink/taskotron/fedora-26-x86_64/']

  # virtualenv name to use for operations which require it

  VENVNAME = 'env_testing'

  

file modified
+64 -11
@@ -1,7 +1,7 @@ 

  Name:           libtaskotron

  # NOTE: if you update version, *make sure* to also update `libtaskotron/__init__.py`

- Version:        0.5.0

- Release:        1%{?dist}

+ Version:        0.4.99.1

+ Release:        3%{?dist}

  Summary:        Taskotron Support Library

  

  License:        GPLv3
@@ -33,6 +33,7 @@ 

  Requires:       python-setuptools

  Requires:       python-xunitparser >= 1.3.3

  Requires:       PyYAML >= 3.11

+ BuildRequires:  grep

  BuildRequires:  python2-devel

  BuildRequires:  python-configparser >= 3.5.0b2

  BuildRequires:  python-dingus >= 0.3.4
@@ -46,6 +47,7 @@ 

  BuildRequires:  python2-sphinx_rtd_theme >= 0.1.9

  BuildRequires:  python-xunitparser >= 1.3.3

  BuildRequires:  PyYAML >= 3.11

+ BuildRequires:  sed

  

  %description -n libtaskotron-core

  The minimal, core parts of libtaskotron that are needed to run tasks
@@ -62,14 +64,22 @@ 

  

  Requires:       createrepo

  Requires:       dnf >= 0.6.4

+ %if 0%{?fedora} >= 27

+ Requires:       python2-koji >= 1.10.0

+ %else

  Requires:       koji >= 1.10.0

+ %endif

  Requires:       libtaskotron-core = %{version}-%{release}

  Requires:       mash

  Requires:       python-fedora >= 0.8.0

  Requires:       python-hawkey >= 0.4.13-1

  Requires:       python-munch >= 2.0.2

  Requires:       rpm-python

+ %if 0%{?fedora} >= 27

+ BuildRequires:  python2-koji >= 1.10.0

+ %else

  BuildRequires:  koji >= 1.10.0

+ %endif

  BuildRequires:  mash

  BuildRequires:  python-fedora >= 0.8.0

  BuildRequires:  python-hawkey >= 0.4.13-1
@@ -89,21 +99,24 @@ 

  %description -n libtaskotron-disposable

  Module for libtaskotron which enables the use of disposable clients

  

- 

  %pre core

  getent group taskotron >/dev/null || groupadd taskotron

  

  %prep

  %setup -q

  

- %check

- %{__python} setup.py test

- 

  %build

+ # testing needs to occur here instead of %%check section, because we need to

+ # patch config files after testing is done, but before py[co] files are built

+ # (so that they match the source files)

+ #%{__python} setup.py test

+ # adjust data path in config

+ sed -i "/_data_dir/s#_data_dir = '../data'#_data_dir = '%{_datarootdir}/libtaskotron'#" libtaskotron/config_defaults.py

+ grep -Fq  "_data_dir = '%{_datarootdir}/libtaskotron'" libtaskotron/config_defaults.py

+ # build files

  %{__python} setup.py build

  

  %install

- rm -rf %{buildroot}

  %{__python} setup.py install -O1 --skip-build --root %{buildroot}

  

  # configuration files
@@ -127,6 +140,10 @@ 

  # images dir

  install -d %{buildroot}/%{_sharedstatedir}/taskotron/images

  

+ # data files

+ mkdir -p %{buildroot}%{_datarootdir}/libtaskotron

+ cp -a data/* %{buildroot}%{_datarootdir}/libtaskotron

+ 

  %files

  

  %files -n libtaskotron-core
@@ -135,13 +152,11 @@ 

  %{python2_sitelib}/libtaskotron/*.py*

  %{python2_sitelib}/libtaskotron/directives/*.py*

  %{python2_sitelib}/libtaskotron/ext/*.py*

- %{python2_sitelib}/libtaskotron/report_templates/*.j2

  %{python2_sitelib}/*.egg-info

  

  %dir %{python2_sitelib}/libtaskotron/ext/fedora

  %dir %{python2_sitelib}/libtaskotron/ext

  %dir %{python2_sitelib}/libtaskotron/directives

- %dir %{python2_sitelib}/libtaskotron/report_templates

  

  %attr(755, root, root) %{_bindir}/runtask

  %attr(755, root, root) %{_bindir}/taskotron_result
@@ -149,9 +164,10 @@ 

  %dir %attr(2775, root, taskotron) %{_tmppath}/taskotron

  %dir %attr(2775, root, taskotron) %{_localstatedir}/log/taskotron

  %dir %attr(2775, root, taskotron) %{_localstatedir}/cache/taskotron

- %dir %attr(2775, root, taskotron) %{_sharedstatedir}/taskotron

+ %dir %attr(0775, root, taskotron) %{_sharedstatedir}/taskotron

  %dir %attr(2775, root, taskotron) %{_sharedstatedir}/taskotron/artifacts

  %dir %attr(2775, root, taskotron) %{_sharedstatedir}/taskotron/images

+ %{_datarootdir}/libtaskotron

  

  %files -n libtaskotron-config

  %dir %{_sysconfdir}/taskotron
@@ -168,9 +184,27 @@ 

  %{python2_sitelib}/libtaskotron/ext/disposable/*

  

  %changelog

- * Wed May 10 2017 Martin Krizek <mkrizek@fedoraproject.org> - 0.5.0-1

+ * Tue Aug 29 2017 Martin Krizek <mkrizek@fedoraproject.org> - 0.4.99.1-1

  - Support Ansible style tasks (D1195)

  

+ * Fri Jul 14 2017 Kamil Páral <kparal@redhat.com> - 0.4.24-1

+ - do not use --cacheonly for dnf operations

+ 

+ * Wed Jul 12 2017 Kamil Páral <kparal@redhat.com> - 0.4.23-1

+ - fix python2-koji dep on F27+

+ - fix broken test suite

+ 

+ * Wed Jul 12 2017 Kamil Páral <kparal@redhat.com> - 0.4.22-1

+ - mark Fedora 26 as stable in yumrepoinfo

+ - remove check for installed packages because it was problematic

+ 

+ * Fri Jun 30 2017 Kamil Páral <kparal@redhat.com> - 0.4.21-1

+ - documentation improvements

+ - DNF_REPO item type removed

+ - default task artifact now points to artifacts root dir instead of task log

+ - fix rpm deps handling via dnf on Fedora 26 (but only support package names

+   and filepaths as deps in task formulas)

+ 

  * Tue Apr 4 2017 Martin Krizek <mkrizek@fedoraproject.org> - 0.4.20-1

  - Add module_build item type (D1184)

  - taskformula: replace vars in dictionary keys (D1176)
@@ -179,9 +213,28 @@ 

  - argparse: change --arch to be a single value instead of a string (D1171)

  - yumrepoinfo: specify all primary and alternate arches (D1172)

  

+ * Fri Mar 24 2017 Tim Flink <tflink@fedoraproject.org> - 0.4.19-4

+ - bumping revision to test package-specific testing again

+ 

+ * Fri Mar 17 2017 Tim Flink <tflink@fedoraproject.org> - 0.4.19-3

+ - bumping revision to test package-specific testing again

+ 

+ * Fri Mar 17 2017 Tim Flink <tflink@fedoraproject.org> - 0.4.19-2

+ - bumping revision to test package-specific testing

+ 

+ * Fri Mar 17 2017 Tim Flink <tflink@fedoraproject.org> - 0.4.19-1

+ - updating yumrepoinfo for F26

+ - improved support for secondary architectures

+ 

+ * Thu Mar 16 2017 Tim Flink <tflink@fedoraproject.org> - 0.4.18-3

+ - bumping revision to test package-specific testing

+ 

  * Fri Feb 17 2017 Kamil Páral <kparal@redhat.com> - 0.4.18-4

  - require koji >= 1.10.0 because of T910

  

+ * Fri Feb 10 2017 Fedora Release Engineering <releng@fedoraproject.org> - 0.4.18-2

+ - Rebuilt for https://fedoraproject.org/wiki/Fedora_26_Mass_Rebuild

+ 

  * Fri Feb 10 2017 Kamil Páral <kparal@redhat.com> - 0.4.18-3

  - add python-pytest-cov builddep because the test suite now needs it,

    and python-rpmfluff because we're missing it

file modified
+1 -1
@@ -4,4 +4,4 @@ 

  # See the LICENSE file for more details on Licensing

  

  from __future__ import absolute_import

- __version__ = '0.5.0'

+ __version__ = '0.4.99.1'

file modified
+8 -1
@@ -86,6 +86,7 @@ 

      config = _load_defaults(env_profile)

      if config.profile == ProfileName.TESTING:

          log.debug('Testing profile, not loading config files from disk')

+         _customize_values(config)

          return config

  

      # load config files
@@ -111,7 +112,7 @@ 

  

      # set config filename used, this is set after merging

      # so it doesn't get overridden

-     config.config_filename = filename

+     config._config_filename = filename

  

      return config

  
@@ -258,6 +259,12 @@ 

      # for each user so we don't delete other user's tmp files.

      config.tmpdir = os.path.join(config.tmpdir, getpass.getuser())

  

+     # set full path to the data dir

+     if not os.path.isabs(config._data_dir):

+         config._data_dir = os.path.abspath(

+             os.path.join(os.path.dirname(libtaskotron.__file__),

+                          config._data_dir))

+ 

  

  def _create_dirs(config):

      '''Create directories in the local file system for appropriate config

@@ -50,7 +50,11 @@ 

         default values. (If no config file is found, this is going to

         stay empty). *Do not* set this value manually in a config file

         itself - it is for internal use only.'''

-     config_filename = ''                                                    #:

+     _config_filename = ''                                                   #:

+     '''Path to the library data files. Always converted to an absolute path

+     after initialization. *Do not* set this value manually, it's for internal

+     use only.'''

+     _data_dir = '../data'                                                   #:

  

      profile = ProfileName.DEVELOPMENT                                       #:

  

@@ -82,6 +82,7 @@ 

  

  from libtaskotron import check

  from libtaskotron import file_utils

+ from libtaskotron import config

  from libtaskotron.exceptions import TaskotronDirectiveError, TaskotronValueError

  from libtaskotron.logger import log

  
@@ -128,8 +129,8 @@ 

              template_fname = os.path.join(os.path.dirname(arg_data['task']),

                                            params['template'])

          else:

-             template_fname = os.path.join(os.path.dirname(os.path.abspath(__file__)),

-                                           '../report_templates/html.j2')

+             template_fname = os.path.join(config.get_config()._data_dir,

+                                           'report_templates/html.j2')

  

          try:

              file_utils.makedirs(os.path.dirname(report_fname))

@@ -5,6 +5,18 @@ 

  

  from __future__ import absolute_import

  

+ import os

+ import configparser

+ import pprint

+ 

+ from libtaskotron.directives import BaseDirective

+ from libtaskotron import check

+ from libtaskotron import config

+ from libtaskotron.exceptions import TaskotronDirectiveError, TaskotronValueError

+ from libtaskotron.logger import log

+ from libtaskotron.ext.fedora import rpm_utils

+ import resultsdb_api

+ 

  DOCUMENTATION = """

  module: resultsdb_directive

  short_description: send task results to ResultsDB or check ResultYAML format
@@ -96,21 +108,9 @@ 

       'summary': 'RPMLINT PASSED for xchat-tcl-2.8.8-21.fc20.x86_64.rpm'}>

  """

  

- import os

- import configparser

- 

- from libtaskotron.directives import BaseDirective

- 

- from libtaskotron import check

- from libtaskotron import config

- from libtaskotron.exceptions import TaskotronDirectiveError, TaskotronValueError

- from libtaskotron.logger import log

- from libtaskotron.ext.fedora import rpm_utils

- 

- import resultsdb_api

- 

  directive_class = 'ResultsdbDirective'

  

+ 

  class ResultsdbDirective(BaseDirective):

  

      def __init__(self, resultsdb = None):
@@ -244,9 +244,8 @@ 

                  if not checkname.startswith(pkg_ns):

                      raise TaskotronDirectiveError

          except TaskotronDirectiveError:

-             raise TaskotronDirectiveError("%s is not allowed to post results into %s "

-                                           "namespace. Not posting results." %

-                                           (arg_data['checkname'], arg_data['namespace']))

+             raise TaskotronDirectiveError("This repo is not allowed to post results into %s "

+                                           "namespace. Not posting results." % checkname)

  

      def process(self, params, arg_data):

          # checking if reporting is enabled is done after importing yaml which
@@ -274,23 +273,21 @@ 

          conf = config.get_config()

          if not conf.report_to_resultsdb:

              log.info("Reporting to ResultsDB is disabled. Once enabled, the "

-                      "following would get reported:\n%s" % params['results'])

+                      "following would get reported:\n%s", params['results'])

              return check.export_YAML(check_details)

  

-         checkname = '%s.%s' % (arg_data['namespace'], arg_data['checkname'])

          artifactsdir_url = '%s/all/%s' % (self.artifacts_baseurl, arg_data['uuid'])

  

-         # find out if the task is allowed to post results into the namespace

-         if config.get_config().profile == config.ProfileName.PRODUCTION:

-             self.check_namespace(checkname, arg_data)

- 

          # for now, we're creating the resultsdb group at reporting time

-         group_data = self.create_resultsdb_group(uuid=arg_data['uuid'], name=checkname)

+         group_data = self.create_resultsdb_group(uuid=arg_data['uuid'])

  

          log.info('Posting %s results to ResultsDB...' % len(check_details))

          for detail in check_details:

-             checkname = '%s.%s' % (arg_data['namespace'],

-                                    detail.checkname or arg_data['checkname'])

+             checkname = detail.checkname

+ 

+             # find out if the task is allowed to post results into the namespace

+             # if config.get_config().profile == config.ProfileName.PRODUCTION:

+             self.check_namespace(checkname, arg_data)

  

              self.ensure_testcase_exists(checkname)

              result_log_url = artifactsdir_url
@@ -311,6 +308,7 @@ 

                      item=detail.item,

                      type=detail.report_type,

                      **detail.keyvals)

+                 log.debug('Result saved in ResultsDB:\n%s', pprint.pformat(result))

                  detail._internal['resultsdb_result_id'] = result['id']

  

              except resultsdb_api.ResultsDBapiException, e:

file modified
+80 -11
@@ -2,18 +2,22 @@ 

  # Copyright 2009-2017, Red Hat, Inc.

  # License: GPL-2.0+ <http://spdx.org/licenses/GPL-2.0+>

  # See the LICENSE file for more details on Licensing

+ # Authors:

+ # Mike Ruckman  <roshi@fedora-qa>

  

  from __future__ import absolute_import

  

  import os

  import os.path

  import subprocess

+ import yaml

  

  from libtaskotron import config

  from libtaskotron import image_utils

  from libtaskotron import os_utils

  from libtaskotron.logger import log

  from libtaskotron import exceptions as exc

+ from libtaskotron.directives import resultsdb_directive

  

  try:

      from libtaskotron.ext.disposable import vm
@@ -34,6 +38,7 @@ 

      def __init__(self, arg_data):

          self.arg_data = arg_data

          self.task_vm = None

+         self.task_container = None

          self.run_remotely = False

  

      def _spawn_vm(self, uuid):
@@ -54,6 +59,19 @@ 

                    self.task_vm.ipaddr)

  

          return self.task_vm.ipaddr

+ 

+     def _spawn_container(self):

+         '''Spawn a docker container for the task and return the port used

+         to connect to it.'''

+         taskdir = os.path.dirname(self.arg_data['task'])

+         docker_task = os.path.abspath(taskdir)

+         uuid = self.arg_data['uuid']

+         container_name = 'taskotron-worker' + str(uuid)

+         self.task_container = docker.DockerClient(name=container_name)

+         self.task_container.create_container(artifacts=self.arg_data['artifactsdir'],

+                                             docker_taskdir=docker_task)

+         port = self.task_container.port

+         log.info('Running task on a container on port {}.'.format(port))

+         return port

  

      def _get_client_ipaddr(self):

          '''Get an IP address of the machine the task is going to be executed on.
@@ -85,7 +103,14 @@ 

              self.run_remotely = True

              persistent = True

  

+         elif self.arg_data['docker']:

+             self.run_remotely = True

+             persistent = False

+             docker = True

+             log.debug('Forcing execution on docker (option --docker)')

+ 

          log.debug('Execution mode: %s', 'remote' if self.run_remotely else 'local')

+         

  

          ipaddr = '127.0.0.1'

          if self.run_remotely:
@@ -93,47 +118,91 @@ 

  

          return ipaddr

  

-     def _run_ansible_playbook(self, ipaddr):

+     def _run_ansible_playbook(self, ipaddr, d_select):

          '''Run the ansible-playbook command to execute given playbook containing the task.

  

          :param str ipaddr: IP address of the machine the task will be run on

          '''

  

-         taskdir = os.path.dirname(self.arg_data['task'])

-         taskfile = os.path.basename(self.arg_data['task'])

+         if os.path.isfile(os.path.join(self.arg_data['taskdir'], 'inventory')):

+             sti_inventory = os.path.join(config.get_config().client_taskdir, "inventory")

+         else:

+             sti_inventory = "/usr/share/ansible/inventory"

+ 

+         ansible_dir = os.path.join(config.get_config()._data_dir, 'ansible')

+         #FIXME taskdir without tests.yml

+         with open(os.path.join(self.arg_data['taskdir'], 'tests.yml'), 'r') as playbook_file:

+             # only the first play in tests.yml is inspected for its vars

+             playbook = yaml.safe_load(playbook_file.read())[0]

+             if playbook.get('vars', {}).get('taskotron_generic_task', False):

+                 exec_tasks = 'tasks_generic.yml'

+             else:

+                 exec_tasks = 'tasks_si.yml'

+ 

  

          cmd = [

              'ansible-playbook', 'runner.yml',

-             '--inventory=%s,' % ipaddr,

              '-e', 'artifacts=%s' % self.arg_data['artifactsdir'],

-             '-e', 'subjects=%s' % self.arg_data['item'],

-             '-e', 'taskdir=%s' % taskdir,

-             '-e', 'taskfile=%s' % taskfile,

+             '-e', 'taskotron_item=%s' % self.arg_data['item'],

+             '-e', 'taskdir=%s' % self.arg_data['taskdir'],

              '-e', 'client_taskdir=%s' % config.get_config().client_taskdir,

+             '-e', 'sti_inventory=%s' % sti_inventory,

+             '-e', 'exec_tasks=%s' % exec_tasks,

          ]

  

+         taskdir = os.path.dirname(self.arg_data['task'])

+         docker_task = os.path.abspath(taskdir)

+ 

          if self.run_remotely:

-             cmd.extend(['--private-key=%s' % self.arg_data['ssh_privkey']])

+             if self.arg_data['ssh_privkey']:

+                 cmd.extend(['--private-key=%s' % self.arg_data['ssh_privkey']])

          else:

-             cmd.extend(['--ask-become-pass', '--connection=local', '-e', 'local=true'])

+             cmd.extend(['--become', '--connection=local', '-e', 'local=true'])

+         if self.arg_data['docker']:

+             cmd.extend(['-i', 'dockerinventory', '-e', 'ip_select=%s' % d_select,

+                         '-e', 'docker=True',

+                         '-e', 'client_taskdir=%s' % docker_task])

+         else:

+             cmd.extend(['--inventory=%s,' % ipaddr, '-e', 'ip_select=all'])

+             cmd.extend(['-e', 'client_taskdir=%s' % config.get_config().client_taskdir])

+ 

+         if self.arg_data['debug']:

+             cmd.extend(['-vv'])

  

          log.debug('Running ansible playbook %s', ' '.join(cmd))

          try:

-             os_utils.popen_rt(cmd, stderr=subprocess.STDOUT)

+             os_utils.popen_rt(cmd, cwd=ansible_dir)

          except subprocess.CalledProcessError, e:

              log.error('ansible-playbook ended with %d return code', e.returncode)

              log.debug(e.output)

              raise exc.TaskotronError(e.output)

  

+     def _report_results(self):

+         results_file = os.path.join(self.arg_data['artifactsdir'], 'taskotron',

+                                     'results.yml')

+         if os.path.exists(results_file):

+             rdb = resultsdb_directive.ResultsdbDirective()

+             rdb.process(params={"file": results_file}, arg_data=self.arg_data)

+         else:

+             #FIXME change to exception?

+             log.info("Results file %s does not exist" % results_file)

+ 

      def execute(self):

          ipaddr = self._get_client_ipaddr()

          if ipaddr is None:

              ipaddr = self._spawn_vm(self.arg_data['uuid'])

+         # select the right docker host from the list in the inventory

+         d_select = None

+         if self.arg_data['docker']:

+             port = self._spawn_container()

+             d_select = 'task' + str(port)

+             with open('dockerinventory', 'a') as f:

+                 f.write('[%s]\n\t\t %s ansible_user=root ansible_port=%s ansible_password=passw0rd '

+                         'ansible_ssh_common_args="-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"\n'

+                         % (d_select, ipaddr, port))

  

          log.info('Running task on machine %s', ipaddr)

  

          try:

-             self._run_ansible_playbook(ipaddr)

+             self._run_ansible_playbook(ipaddr, d_select)

+             self._report_results()

          finally:

              if self.task_vm is not None:

                  self.task_vm.teardown()
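The flag handling in `_run_ansible_playbook` differs between the docker path (generated inventory file plus a group selector) and the plain path (inline host list). A simplified model of just that branching, with hypothetical values, can be exercised in isolation:

```python
def build_cmd(item, taskdir, docker, ipaddr=None, d_select=None):
    '''Simplified model of the flag selection in _run_ansible_playbook;
    the real method adds more -e variables and ssh options.'''
    cmd = ['ansible-playbook', 'runner.yml',
           '-e', 'taskotron_item=%s' % item,
           '-e', 'taskdir=%s' % taskdir]
    if docker:
        # docker hosts live in a generated inventory file and are picked
        # by group name (e.g. task2222, derived from the mapped port)
        cmd.extend(['-i', 'dockerinventory', '-e', 'ip_select=%s' % d_select])
    else:
        # the trailing comma makes ansible treat the value as an inline host list
        cmd.extend(['--inventory=%s,' % ipaddr, '-e', 'ip_select=all'])
    return cmd

print(build_cmd('foo-1.0-1.fc26', '/tmp/task', docker=True, d_select='task2222'))
```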

@@ -0,0 +1,212 @@ 

+ import os

+ import random

+ import subprocess as sp

+ 

+ from libtaskotron.logger import log

+ 

+ 

+ def get_images():

+     '''Return a list of image dicts.'''

+ 

+     template = "{{.ID}}\t{{.Repository}}"

+     raw_output = sp.check_output(['docker',

+                                     'images',

+                                     '--format',

+                                     template])

+     if not raw_output:

+         # no images on the host: return an empty list instead of None

+         return []

+     images = raw_output.split('\n')

+     attrs = ['id', 'name']

+     # drop the last, empty entry caused by the trailing newline

+     images = [image.split('\t') for image in images[:-1]]

+     images = [dict(zip(attrs, image)) for image in images]

+     return images

+ 

+ def _check_mounts(container_name):

+     '''Check whether an existing container was created by taskotron.'''

+ 

+     raw_output = sp.check_output(['docker',

+                                     'inspect',

+                                     '--format',

+                                     "{{.HostConfig.Binds}}",

+                                     container_name])

+     mounts = raw_output.split(':')[:-1]

+     for check in mounts:

+         base_dir = check.split('/')[-1] 

+         if container_name.endswith(base_dir):

+             return True

+ 

+ def get_containers():

+     '''Return a list of all containers on the host.'''

+ 

+     # This is a Go template to clean up docker ps output

+     template = "{{.ID}}\t{{.Command}}\t{{.Status}}\t{{.Ports}}\t{{.Names}}"

+ 

+     raw_output = sp.check_output(['docker',

+                                       'ps',

+                                       '-a',

+                                       '--format',

+                                       template])

+ 

+     containers = raw_output.split('\n')

+ 

+     # Clean up the output a bit, also remove the last, empty, entry

+     containers = [x.split('\t') for x in containers][:-1]

+ 

+     attrs = ['id', 'command', 'status', 'ports', 'names']

+     containers = [dict(zip(attrs, container)) for container in containers]

+ 

+     return containers
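The `--format` template above yields one tab-separated row per container; the parsing in `get_containers` can be checked without a docker daemon by feeding it sample output (the sample row below is made up):

```python
def parse_ps_output(raw_output):
    '''Parse `docker ps --format` tab-separated output into dicts the way
    get_containers() does.'''
    containers = raw_output.split('\n')
    # split columns, dropping the last, empty entry from the trailing newline
    containers = [x.split('\t') for x in containers][:-1]
    attrs = ['id', 'command', 'status', 'ports', 'names']
    return [dict(zip(attrs, c)) for c in containers]

sample = ('abc123\t"/usr/sbin/sshd -D"\tUp 2 minutes\t'
          '0.0.0.0:2222->22/tcp\ttaskotron-worker1\n')
print(parse_ps_output(sample))
```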

+ 

+ 

+ def find_container(search_term):

+     '''Find a container based on either name or id.'''

+     containers = get_containers()

+     for container in containers:

+         if search_term in container.values():