#399 fix results error and give some thoughts
Closed 5 years ago by jskladan. Opened 6 years ago by lnie.
taskotron / lnie/libtaskotron: handle-taskotron-result into feature/ansiblize

file removed
-7
@@ -1,7 +0,0 @@ 

- {

-     "project_id" : "libtaskotron",

-     "conduit_uri" : "https://phab.qa.fedoraproject.org",

-     "arc.land.onto.default" : "develop",

-     "arc.feature.start.default" : "develop",

-     "unit.engine" : "PytestTestEngine"

- }

file removed
-12
@@ -1,12 +0,0 @@ 

- {

-   "linters": {

-     "flake8": {

-       "type": "flake8",

-       "include": "(\\.py$)",

-       "severity.rules": {

-         "(^E)": "warning",

-         "(^F)": "error"

-       }

-     }

-   }

- }

file modified
+4 -2
@@ -50,6 +50,8 @@ 

    rpm-build              \

    rpm-python

  

+ On Fedora 27+, install ``python2-koji`` instead of ``koji``.

+ 

  If you have not yet cloned the repository, do it now::

  

    git clone https://pagure.io/taskotron/libtaskotron.git
@@ -109,7 +111,7 @@ 

  Running a Task

  ==============

  

- A relatively simple example task is `rpmlint <https://bitbucket.org/fedoraqa/task-rpmlint>`_.

+ A relatively simple example task is `rpmlint <https://pagure.io/taskotron/task-rpmlint>`_.

  

  The task requires the rpmlint tool to be installed, so be sure to run::

  
@@ -117,7 +119,7 @@ 

  

  To run that task against a koji build with NVR ``<nvr>``, do the following::

  

-   git clone https://bitbucket.org/fedoraqa/task-rpmlint.git

+   git clone https://pagure.io/taskotron/task-rpmlint.git

    runtask -i <nvr> -t koji_build task-rpmlint/runtask.yml

  

  This will download the ``<nvr>`` from koji into a temp directory under

file removed
-24
@@ -1,24 +0,0 @@ 

- The linter's behaviour is configured in the .arclint file.

- 

- As the .arclint file parser does not support YAML comments, some documentation

- can be put here if needed. Generic arcanist linter documentation is here:

- https://secure.phabricator.com/book/phabricator/article/arcanist_lint/

- 

- Flake8

- ======

- All PEP8 error codes (E* codes) are considered warnings in Phabricator, so that

- only changes to the modified lines are displayed by default (unlike errors,

- which are always displayed regardless of which lines were modified).

- 

- All PyFlakes error codes (F* codes) are considered errors, because they are

- often introduced when modifying other lines than the one in question, and we

- want to be notified of those.

- 

- Additional Flake8 configuration is stored in tox.ini - it contains configuration

- which is not possible to do in .arclint file.

- 

- If you want to ignore a specific source code line, use '# noqa' comment. If you

- want to ignore the whole file, add '# flake8: noqa' comment. Read more

- documentation about flake8 at:

- https://flake8.readthedocs.org/

- 

file modified
+13 -4
@@ -5,8 +5,17 @@ 

  # A list of git repos that are allowed to post a result into a particular namespace

  namespaces_whitelist:

      dist:

-         - git@bitbucket.org:fedoraqa/task-rpmlint.git

-         - git@bitbucket.org:fedoraqa/task-depcheck.git

-         - git@bitbucket.org:fedoraqa/task-upgradepath.git

+         - https://pagure.io/taskotron/task-rpmlint.git

+         - https://pagure.io/taskotron/task-upgradepath.git

+         - https://pagure.io/task-abicheck.git

+         - https://pagure.io/taskotron/task-rpmgrill.git

+         - https://github.com/fedora-python/taskotron-python-versions.git

+         - https://github.com/fedora-modularity/check_modulemd.git

+         - https://pagure.io/taskotron/task-rpmdeplint.git

+         - https://pagure.io/taskotron/task-upstream-atomic.git

+         - https://pagure.io/taskotron/task-fedora-cloud-tests.git

+         - https://pagure.io/taskotron/task-modularity-testing-framework.git

      pkg:

-         - git://pkgs.fedoraproject.org/rpms-checks/

+         - git://pkgs.fedoraproject.org/test-rpms/

+         - git://pkgs.fedoraproject.org/test-modules/

+         - git://pkgs.fedoraproject.org/test-docker/

file modified
+20 -25
@@ -36,15 +36,31 @@ 

  [rawhide]

  url = %(rawhideurl)s

  path = development/rawhide

- tag = f27

+ tag = f28

  release_status = rawhide

  

- # Fedora 26

- [f26]

+ # Fedora 27

+ [f27]

  url = %(rawhideurl)s

- path = development/26

+ path = development/27

  release_status = branched

  

+ [f27-updates]

+ url = %(updatesurl)s

+ path = 27

+ parent = f27

+ 

+ [f27-updates-testing]

+ url = %(updatesurl)s

+ path = testing/27

+ parent = f27-updates

+ 

+ # Fedora 26

+ [f26]

+ url = %(goldurl)s

+ path = 26

+ release_status = stable

+ 

  [f26-updates]

  url = %(updatesurl)s

  path = 26
@@ -77,24 +93,3 @@ 

  primary_arches = armhfp, i386, x86_64

  alternate_arches = aarch64, ppc64, ppc64le, s390x

  

- # Fedora 24

- [f24]

- url = %(goldurl)s

- path = 24

- release_status = stable

- primary_arches = armhfp, i386, x86_64

- alternate_arches = aarch64, ppc64, ppc64le, s390x

- 

- [f24-updates]

- url = %(updatesurl)s

- path = 24

- parent = f24

- primary_arches = armhfp, i386, x86_64

- alternate_arches = aarch64, ppc64, ppc64le, s390x

- 

- [f24-updates-testing]

- url = %(updatesurl)s

- path = testing/24

- parent = f24-updates

- primary_arches = armhfp, i386, x86_64

- alternate_arches = aarch64, ppc64, ppc64le, s390x

file modified
+29 -135
@@ -11,23 +11,18 @@ 

  

  Libtaskotron is written mostly in `Python <https://www.python.org/>`_.

  

- Source code

- -----------

- 

- The source code for libtaskotron is available at:

- https://pagure.io/taskotron/libtaskotron

- 

- If you submit patches, please use the process of :ref:`submitting-code`.

  

  .. _taskotron-bugs:

  

- Bugs, issues and tasks

- ----------------------

+ Source code, bugs, tasks

+ ------------------------

+ 

+ The source code for libtaskotron is available at:

+ https://pagure.io/taskotron/libtaskotron

  

- We use `Phabricator <http://phabricator.org/>`_ to track issues and facilitate

- code reviews for several projects related to libtaskotron and Fedora QA.

+ This project is also used to track bugs and tasks.

  

- Our phabricator instance can be found at https://phab.qa.fedoraproject.org/

+ If you submit patches, please use the process of :ref:`submitting-code`.

  

  

  .. _running-tests:
@@ -36,7 +31,7 @@ 

  ---------------------------------

  

  We place a high value on having decent test coverage for the libtaskotron code.

- In general, tests are written using `pytest <http://pytest.org/>` and are broken

+ In general, tests are written using `pytest <http://pytest.org/>`_ and are broken

  up into two types:

  

    * **Unit Tests** test the core logic of code. They do not touch the filesystem
@@ -47,13 +42,9 @@ 

      tests are often much slower than unit tests but they offer coverage which is

      not present in the unit tests.

  

- To run the unit tests::

- 

-   py.test testing/

+ To run the unit tests, execute::

  

- To run the functional and unit tests::

- 

-   py.test -F testing/

+   py.test

  

  

  Continuous integration
@@ -82,52 +73,6 @@ 

  files automatically cleaned up, so that they don't occupy disk space in vain.

  There is a tmpfiles.d template prepared for you, look into ``conf/tmpfiles.d``.

  

- Support tools

- -------------

- 

- There are several tools that, while not required, make the process of developing

- for libtaskotron significantly easier.

- 

- 

- .. _installing-arcanist:

- 

- Arcanist

- ^^^^^^^^

- 

- `Arcanist <https://secure.phabricator.com/book/phabricator/article/arcanist/>`_

- is a command line interface to Phabricator which can be used to submit code

- reviews, download/apply code under review among other useful functions.

- 

- As Arcanist is an interface to Phabricator, we **strongly recommend** that you

- install it using our packages instead of from the upstream git repos (as

- described in upstream documentation). That way, there is no question that the

- Arcanist version used locally is compatible with our Phabricator instance.

- 

- To add our dnf repository containing Phabricator related packages, run::

- 

-   sudo curl https://repos.fedorapeople.org/repos/tflink/phabricator/fedora-phabricator.repo \

-   -o /etc/yum.repos.d/fedora-phabricator.repo

- 

- Once the repository has been configured, install Arcanist using::

- 

-   sudo dnf -y install arcanist

- 

- Arcanist is written in PHP and installing it will pull in several PHP packages

- as dependencies.

- 

- 

- .. _installing-gitflow:

- 

- gitflow

- ^^^^^^^

- 

- The `gitflow plugin for git <https://github.com/nvie/gitflow>`_ is another

- useful, but not required tool that is available in the Fedora repositories.

- 

- To install the gitflow plugin, run::

- 

-   sudo dnf -y install gitflow

- 

  

  .. _submitting-code:

  
@@ -138,9 +83,8 @@ 

  branching model and with the exception of hotfixes, all changes made should be

  against the ``develop`` branch.

  

- While not required, using the `gitflow plugin for git <https://github.com/nvie/gitflow>`_

- is recommended as it makes the process significantly easier. See

- :ref:`installing-gitflow` for instructions on installing gitflow on Fedora.

+ If you want to use the `gitflow plugin for git <https://github.com/nvie/gitflow>`_

+ to make this process more user-friendly, simply install the ``gitflow`` package.

  

  

  Start a new feature
@@ -148,9 +92,13 @@ 

  

  To start work on a new feature, use::

  

-   git flow feature start TXXX-short-description

+   git checkout -b feature/XXX-short-description develop

  

- Where ``TXXX`` is the issue number in Phabricator and ``short-description`` is a

+ or if you want to use gitflow, use::

+ 

+   git flow feature start XXX-short-description

+ 

+ where ``XXX`` is the issue number in Pagure and ``short-description`` is a

  short, human understandable description of the change contained in this branch.

  

  In general, short reviews are better than long reviews. If you can, please break
@@ -160,36 +108,10 @@ 

  Submitting code for review

  ^^^^^^^^^^^^^^^^^^^^^^^^^^

  

- .. note::

- 

-   Make sure to run all unit and functional tests before submitting code for

-   review. Any code that causes test failure will receive requests to fix the

-   offending code or will be rejected. See :ref:`running-tests` for information

-   on running unit and functional tests.

- 

- Code reviews are done through Phabricator, not pull requests. While it is possible

- to submit code for review through the web interface, :ref:`installing-arcanist`

- is recommended.

- 

- You do not need to push code anywhere public before submitting a review. Unless

- there is a good reason to do so (and there are very few), pushing a feature

- branch to origin is frowned on as it makes repository maintenance more difficult

- for no real benefit.

- 

- To submit code for review, make sure that your code has been updated with respect

- to ``origin/develop`` and run the following from your checked-out feature branch::

- 

-   arc diff develop

- 

- The first time that you use Arcanist, it will ask for an API key which can be

- retrieved from a link contained in that prompt.

- 

- Arcanist will create a new code review on our Phabricator instance and prompt

- you for information about the testing which has been done, a description of the

- code under review and people to whom the review should be assigned. If you're

- not clear on who should review your code, leave the ``reviewers`` section blank

- and someone will either review your code or assign the review task to someone

- who will.

+ Make sure to run all unit and functional tests before submitting code for

+ review. Any code that causes test failure will receive requests to fix the

+ offending code or will be rejected. See :ref:`running-tests` for information

+ on running unit and functional tests.

  

  

  Updating code reviews
@@ -199,13 +121,6 @@ 

  the requested changes have been made in your feature branch, commit them and

  make sure that your branch is still up to date with respect to ``origin/develop``.

  

- To update the existing review, use::

- 

-   arc diff develop --update DXXX

- 

- Where ``DXXX`` is the Differential ID assigned to your review when it was

- originally created.

- 

  

  Pushing code

  ^^^^^^^^^^^^
@@ -226,46 +141,27 @@ 

  before starting the merge process, else messy commits and merges may ensue.

  Once ``develop`` is up-to-date, the basic workflow to use is::

  

-     git checkout feature/TXXX-some-feature

+     git checkout feature/XXX-some-feature

      git rebase develop

  

  To merge the code into develop, use one of two commands. If the feature can be

  reasonably expressed in one commit (most features), use::

  

-     git flow feature finish --squash TXXX-some-feature

+     git flow feature finish --squash XXX-some-feature

  

  Else, if the Feature is larger and should cover multiple commits (less common),

  use::

  

-     git flow feature finish TXXX-some-feature

+     git flow feature finish XXX-some-feature

  

- After merging the code, please inspect log messages in case they need to be

- shortened (Phabricator likes to make long commit messages). Groups of commits

- should at least have a short description of their content and a link to the

- revision in differential. Once the feature is ready, push to origin::

+ After merging the code, please review the commit messages and tidy them up

+ if needed. Groups of commits should at least have a short description of

+ their content and a link to the issue in Pagure. Once the feature is ready,

+ push to origin::

  

      git push origin develop

  

  

- 

- Reviewing code

- --------------

- 

- To review code, use `Phabricator's web interface <https://phab.qa.fedoraproject.org/>`_

- to submit comments, request changes or accept reviews.

- 

- If you want to look at the code under review locally to run tests or test

- suggestions prior to posting them, use Arcanist to apply the review code.

- 

- Make sure that your local repo is at the same base revision as the code under

- review (usually origin/develop) and run the following command::

- 

-   arc patch DXXX

- 

- Where ``DXXX`` is the review id that you're interested in. Arcanist will grab the

- code under review and apply it to a local branch named ``arcpatch-DXXX``. You can

- then look at the code locally or make modifications.

- 

  Writing directives

  ==================

  
@@ -308,11 +204,9 @@ 

  Sphinx has several built-in info fields which should be used to document

  function/method arguments and return data.

  

- 

  The following is an excerpt from `the Sphinx documentation

  <http://sphinx-doc.org/domains.html#info-field-lists>`_

  

- 

  Inside Python object description directives, reST field lists with these fields

  are recognized and formatted nicely:

  

file modified
+11 -4
@@ -12,8 +12,8 @@ 

  a little light on the practical creation of tasks. :doc:`writingtasks`

  and some existing tasks are also good references:

  

- * `rpmlint <https://bitbucket.org/fedoraqa/task-rpmlint>`_

- * `examplebodhi <https://bitbucket.org/fedoraqa/task-examplebodhi>`_

+ * `rpmlint <https://pagure.io/taskotron/task-rpmlint>`_

+ * `task examples <https://pagure.io/taskotron/task-examples>`_

  

  Task description

  ================
@@ -73,8 +73,8 @@ 

  ------------

  

  A task may also require the presence of other code to support execution. Those

- dependencies are specified as part of the environment description. Anything that

- ``dnf install`` supports as an argument on the command line is supported.

+ dependencies are specified as part of the environment description. It is

+ recommended to only use package names or file paths.

  

  .. code-block:: yaml

  
@@ -82,6 +82,13 @@ 

        rpm:

            - python-solv

            - python-librepo

+           - /usr/bin/xz

+ 

+ .. note::

+ 

+   You might also use advanced syntax like package version comparisons or group

+   names (as long as it's supported by ``dnf install``), but such tasks might

+   not work properly when executed under a non-root user.

  

  .. note::

  

file modified
+3 -4
@@ -103,7 +103,7 @@ 

  ^^^^^^^^^^^^

  

  A very simple task from which you can learn all the basics. See its

- `source code <https://bitbucket.org/fedoraqa/task-rpmlint>`_.

+ `source code <https://pagure.io/taskotron/task-rpmlint>`_.

  

  Run the task like this (replace the example NVR with the build you want to

  test):
@@ -120,11 +120,10 @@ 

  * `task-abicheck <https://pagure.io/task-abicheck/>`_

  * `task-python-versions

    <https://github.com/fedora-python/task-python-versions>`_

- * `task-rpmgrill <https://bitbucket.org/fedoraqa/task-rpmgrill>`_

+ * `task-rpmgrill <https://pagure.io/taskotron/task-rpmgrill>`_

  

  You can also check our

- `example tasks <https://bitbucket.org/account/user/fedoraqa/projects/TEX>`_

+ `task examples <https://pagure.io/taskotron/task-examples>`_

  (warning, some of them might be out of date) or all the rest of the tasks on

- `bitbucket <https://bitbucket.org/fedoraqa/>`_ and

  `pagure <https://pagure.io/group/taskotron>`_ (projects starting with

  *task-*).

file modified
+1 -1
@@ -46,7 +46,7 @@ 

  

  Refer to the :ref:`quick start <quick-run>` guide for the basics.

  

- Using `task-rpmlint <https://bitbucket.org/fedoraqa/task-rpmlint>`_ as an

+ Using `task-rpmlint <https://pagure.io/taskotron/task-rpmlint>`_ as an

  example, the task repository contains the following important files::

  

    task-rpmlint/

file modified
+5 -5
@@ -15,12 +15,12 @@ 

  Examples

  ========

  

- examplebodhi

- ------------

+ bodhi example

+ -------------

  

- `examplebodhi task git repository <https://bitbucket.org/fedoraqa/task-examplebodhi>`_

+ `bodhi example files <https://pagure.io/taskotron/task-examples/blob/master/f/bodhi>`_

  

- The examplebodhi task is a trivial example which takes an update and determines

+ The bodhi example task is a trivial example which takes an update and determines

  a PASS/FAIL result in a pseudo-random fashion. This task wasn't designed to be

  used in production but is useful as a starting point and for development

  purposes.
@@ -28,7 +28,7 @@ 

  rpmlint

  -------

  

- `rpmlint task git repository <https://bitbucket.org/fedoraqa/task-rpmlint>`_

+ `rpmlint task git repository <https://pagure.io/taskotron/task-rpmlint>`_

  

  rpmlint is the simplest of the production tasks. It takes a koji build and runs

  rpmlint on it, reporting results as result-YAML.

file added
+171
@@ -0,0 +1,171 @@ 

+ #!/usr/bin/python

+ # Make coding more python3-ish

+ from __future__ import (absolute_import, division)

+ __metaclass__ = type

+ 

+ import os

+ import ast

+ import json

+ 

+ from ansible.module_utils.basic import AnsibleModule

+ from ansible.module_utils.urls import open_url

+ 

+ #from . import TaskotronError

+ try:

+     import libtaskotron.exceptions as exc

+     from libtaskotron.ext.fedora.koji_utils import KojiClient

+     from libtaskotron.ext.fedora import rpm_utils

+ except ImportError:

+     libtaskotron_found = False

+ else:

+     libtaskotron_found = True

+ 

+ # these will need to be handled better but this is just a PoC

+ WORKDIR = os.path.abspath('./taskotron-workdir')

+ 

+ def main():

+     mod = AnsibleModule(

+         argument_spec=dict(

+             action=dict(required=True),

+             arch=dict(required=False, default=['noarch']),

+             workdir=dict(required=False, default="/tmp/firstmod"),

+             arch_exclude=dict(required=False),

+             build_log=dict(required=False, default=False),

+             debuginfo=dict(required=False, default=False),

+             koji_build=dict(required=False),

+             koji_tag=dict(required=False),

+             src=dict(required=False, default=False),

+             target_dir=dict(required=False, default='.')

+         )

+     )

+ 

+     # TODO: check args for completeness

+     if not libtaskotron_found:

+         mod.fail_json(msg="The libtaskotron python module is required")

+ 

+     try:

+         kojidirective = KojiDirective()

+         data = kojidirective.process(mod)

+     except exc.TaskotronError as e:

+         mod.fail_json(msg=e)

+ 

+     subjects = ' '.join(data['downloaded_rpms'])

+ 

+     mod.exit_json(msg="worky!", changed=True, subjects=subjects)

+ 

+ class KojiDirective(object):

+ 

+     def __init__(self, koji_session=None):

+         super(KojiDirective, self).__init__()

+         if koji_session is None:

+             self.koji = KojiClient()

+         else:

+             self.koji = koji_session

+ 

+     def process(self, mod):

+         # process params

+         valid_actions = ['download', 'download_tag', 'download_latest_stable']

+         action = mod.params['action']

+         if action not in valid_actions:

+             raise exc.TaskotronDirectiveError('%s is not a valid action for koji '

+                                                 'directive' % action)

+ 

+         # use register to save/return information

+         if 'target_dir' not in mod.params:

+             target_dir = WORKDIR

+         else:

+             target_dir = mod.params['target_dir']

+ 

+         if 'arch' not in mod.params:

+             detected_args = ', '.join(mod.params.keys())

+             raise exc.TaskotronDirectiveError(

+                 "The koji directive requires 'arch' as an argument. Detected "

+                 "arguments: %s" % detected_args)

+ 

+         # this is supposedly safe enough to use on raw input but should be double checked

+         # http://stackoverflow.com/questions/1894269/convert-string-representation-of-list-to-list-in-python

+         arches = ast.literal_eval(mod.params['arch'])

+         if not isinstance(arches, list):

+             raise exc.TaskotronError("arches must be a list")

+ 

+         if arches and ('all' not in arches) and ('noarch' not in arches):

+             arches.append('noarch')

+ 

+         arch_exclude_string = mod.params.get('arch_exclude', None) 

+         if arch_exclude_string is None:

+             arch_exclude = []

+         else:

+             arch_exclude = ast.literal_eval(mod.params['arch_exclude'])

+ 

+         debuginfo = mod.params.get('debuginfo', False)

+         src = mod.params.get('src', False)

+         build_log = mod.params.get('build_log', False)

+ 

+         if not isinstance(arch_exclude, list):

+             print("arch_exclude: {}".format(type(arch_exclude)))

+             raise Exception("arch_exclude must be a list")

+         # download files

+         output_data = {}

+ 

+         if action == 'download':

+             if 'koji_build' not in mod.params:

+                 detected_args = ', '.join(mod.params.keys())

+                 raise exc.TaskotronDirectiveError(

+                     "The koji directive requires 'koji_build' for the 'download' "

+                     "action. Detected arguments: %s" % detected_args)

+ 

+             nvr = rpm_utils.rpmformat(mod.params['koji_build'], 'nvr')

+             output_data['downloaded_rpms'] = self.koji.get_nvr_rpms(

+                 nvr, target_dir, arches=arches, arch_exclude=arch_exclude,

+                 debuginfo=debuginfo, src=src)

+ 

+         elif action == 'download_tag':

+             if 'koji_tag' not in mod.params:

+                 detected_args = ', '.join(mod.params.keys())

+                 raise exc.TaskotronDirectiveError(

+                     "The koji directive requires 'koji_tag' for the 'download_tag' "

+                     "action. Detected arguments: %s" % detected_args)

+ 

+             koji_tag = mod.params['koji_tag']

+ 

+             output_data['downloaded_rpms'] = self.koji.get_tagged_rpms(

+                 koji_tag, target_dir, arches=arches, arch_exclude=arch_exclude,

+                 debuginfo=debuginfo, src=src)

+ 

+         elif action == 'download_latest_stable':

+             if 'koji_build' not in mod.params:

+                 detected_args = ', '.join(mod.params.keys())

+                 raise exc.TaskotronDirectiveError(

+                     "The koji directive requires 'koji_build' for the 'download_latest_stable' "

+                     "action. Detected arguments: %s" % detected_args)

+ 

+             name = rpm_utils.rpmformat(mod.params['koji_build'], 'n')

+             disttag = rpm_utils.get_dist_tag(mod.params['koji_build'])

+             # we need to do 'fc22' -> 'f22' conversion

+             tag = disttag.replace('c', '')

+ 

+             # first we need to check updates tag and if that fails, the latest

+             # stable nvr is in the base repo

+             tags = ['%s-updates' % tag, tag]

+             nvr = self.koji.latest_by_tag(tags, name)

+ 

+             output_data['downloaded_rpms'] = self.koji.get_nvr_rpms(

+                 nvr, target_dir, arch_exclude=arch_exclude,

+                 arches=arches, debuginfo=debuginfo, src=src)

+ 

+         # download build.log if requested

+         if build_log:

+             if action in ('download', 'download_latest_stable'):

+                 ret_log = self.koji.get_build_log(

+                         nvr, target_dir, arches=arches, arch_exclude=arch_exclude)

+                 output_data['downloaded_logs'] = ret_log['ok']

+                 output_data['log_errors'] = ret_log['error']

+             else:

+                 #log.warn("Downloading build logs is not supported for action '%s', ignoring.",

+                 #         action)

+                 print("Downloading build logs is not supported for action '%s', ignoring." % action)

+ 

+         return output_data

+ 

+ if __name__ == '__main__':

+     main() 

\ No newline at end of file
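The arch handling in the new Ansible module above parses a string-encoded list with ``ast.literal_eval`` and always appends ``noarch`` unless ``all`` is requested. A minimal, self-contained sketch of that logic (the helper name ``normalize_arches`` is illustrative, not part of the module):

```python
import ast

def normalize_arches(arch_param):
    """Parse a string-encoded arch list, as the module receives it from Ansible."""
    arches = ast.literal_eval(arch_param)
    if not isinstance(arches, list):
        raise ValueError("arches must be a list")
    # Mirror the module's behaviour: noarch packages are always wanted,
    # unless the list is empty or 'all' already covers everything.
    if arches and ('all' not in arches) and ('noarch' not in arches):
        arches.append('noarch')
    return arches

print(normalize_arches("['x86_64']"))  # ['x86_64', 'noarch']
```

This also shows why the module validates the result of ``literal_eval``: a caller could pass any Python literal, not just a list.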

file modified
+47 -2
@@ -1,6 +1,6 @@ 

  Name:           libtaskotron

  # NOTE: if you update version, *make sure* to also update `libtaskotron/__init__.py`

- Version:        0.5.0

+ Version:        0.4.99.1

  Release:        1%{?dist}

  Summary:        Taskotron Support Library

  
@@ -62,14 +62,22 @@ 

  

  Requires:       createrepo

  Requires:       dnf >= 0.6.4

+ %if 0%{?fedora} >= 27

+ Requires:       python2-koji >= 1.10.0

+ %else

  Requires:       koji >= 1.10.0

+ %endif

  Requires:       libtaskotron-core = %{version}-%{release}

  Requires:       mash

  Requires:       python-fedora >= 0.8.0

  Requires:       python-hawkey >= 0.4.13-1

  Requires:       python-munch >= 2.0.2

  Requires:       rpm-python

+ %if 0%{?fedora} >= 27

+ BuildRequires:  python2-koji >= 1.10.0

+ %else

  BuildRequires:  koji >= 1.10.0

+ %endif

  BuildRequires:  mash

  BuildRequires:  python-fedora >= 0.8.0

  BuildRequires:  python-hawkey >= 0.4.13-1
@@ -168,9 +176,27 @@ 

  %{python2_sitelib}/libtaskotron/ext/disposable/*

  

  %changelog

- * Wed May 10 2017 Martin Krizek <mkrizek@fedoraproject.org> - 0.5.0-1

+ * Tue Aug 29 2017 Martin Krizek <mkrizek@fedoraproject.org> - 0.4.99.1-1

  - Support Ansible style tasks (D1195)

  

+ * Fri Jul 14 2017 Kamil Páral <kparal@redhat.com> - 0.4.24-1

+ - do not use --cacheonly for dnf operations

+ 

+ * Wed Jul 12 2017 Kamil Páral <kparal@redhat.com> - 0.4.23-1

+ - fix python2-koji dep on F27+

+ - fix broken test suite

+ 

+ * Wed Jul 12 2017 Kamil Páral <kparal@redhat.com> - 0.4.22-1

+ - mark Fedora 26 as stable in yumrepoinfo

+ - remove check for installed packages because it was problematic

+ 

+ * Fri Jun 30 2017 Kamil Páral <kparal@redhat.com> - 0.4.21-1

+ - documentation improvements

+ - DNF_REPO item type removed

+ - default task artifact now points to artifacts root dir instead of task log

+ - fix rpm deps handling via dnf on Fedora 26 (but only support package names

+   and filepaths as deps in task formulas)

+ 

  * Tue Apr 4 2017 Martin Krizek <mkrizek@fedoraproject.org> - 0.4.20-1

  - Add module_build item type (D1184)

  - taskformula: replace vars in dictionary keys (D1176)
@@ -179,9 +205,28 @@ 

  - argparse: change --arch to be a single value instead of a string (D1171)

  - yumrepoinfo: specify all primary and alternate arches (D1172)

  

+ * Fri Mar 24 2017 Tim Flink <tflink@fedoraproject.org> - 0.4.19-4

+ - bumping revision to test package-specific testing again

+ 

+ * Fri Mar 17 2017 Tim Flink <tflink@fedoraproject.org> - 0.4.19-3

+ - bumping revision to test package-specific testing again

+ 

+ * Fri Mar 17 2017 Tim Flink <tflink@fedoraproject.org> - 0.4.19-2

+ - bumping revision to test package-specific testing

+ 

+ * Fri Mar 17 2017 Tim Flink <tflink@fedoraproject.org> - 0.4.19-1

+ - updating yumrepoinfo for F26

+ - improved support for secondary architectures

+ 

+ * Thu Mar 16 2017 Tim Flink <tflink@fedoraproject.org> - 0.4.18-3

+ - bumping revision to test package-specific testing

+ 

  * Fri Feb 17 2017 Kamil Páral <kparal@redhat.com> - 0.4.18-4

  - require koji >= 1.10.0 because of T910

  

+ * Fri Feb 10 2017 Fedora Release Engineering <releng@fedoraproject.org> - 0.4.18-2

+ - Rebuilt for https://fedoraproject.org/wiki/Fedora_26_Mass_Rebuild

+ 

  * Fri Feb 10 2017 Kamil Páral <kparal@redhat.com> - 0.4.18-3

  - add python-pytest-cov builddep because the test suite now needs it,

    and python-rpmfluff because we're missing it

file modified
+1 -1
@@ -4,4 +4,4 @@ 

  # See the LICENSE file for more details on Licensing

  

  from __future__ import absolute_import

- __version__ = '0.5.0'

+ __version__ = '0.4.99.1'

@@ -5,6 +5,18 @@ 

  

  from __future__ import absolute_import

  

+ import os

+ import configparser

+ import pprint

+ 

+ from libtaskotron.directives import BaseDirective

+ from libtaskotron import check

+ from libtaskotron import config

+ from libtaskotron.exceptions import TaskotronDirectiveError, TaskotronValueError

+ from libtaskotron.logger import log

+ from libtaskotron.ext.fedora import rpm_utils

+ import resultsdb_api

+ 

  DOCUMENTATION = """

  module: resultsdb_directive

  short_description: send task results to ResultsDB or check ResultYAML format
@@ -96,21 +108,9 @@ 

       'summary': 'RPMLINT PASSED for xchat-tcl-2.8.8-21.fc20.x86_64.rpm'}>

  """

  

- import os

- import configparser

- 

- from libtaskotron.directives import BaseDirective

- 

- from libtaskotron import check

- from libtaskotron import config

- from libtaskotron.exceptions import TaskotronDirectiveError, TaskotronValueError

- from libtaskotron.logger import log

- from libtaskotron.ext.fedora import rpm_utils

- 

- import resultsdb_api

- 

  directive_class = 'ResultsdbDirective'

  

+ 

  class ResultsdbDirective(BaseDirective):

  

      def __init__(self, resultsdb = None):
@@ -274,7 +274,7 @@ 

          conf = config.get_config()

          if not conf.report_to_resultsdb:

              log.info("Reporting to ResultsDB is disabled. Once enabled, the "

-                      "following would get reported:\n%s" % params['results'])

+                      "following would get reported:\n%s", params['results'])

              return check.export_YAML(check_details)

  

          checkname = '%s.%s' % (arg_data['namespace'], arg_data['checkname'])
@@ -311,6 +311,7 @@ 

                      item=detail.item,

                      type=detail.report_type,

                      **detail.keyvals)

+                 log.debug('Result saved in ResultsDB:\n%s', pprint.pformat(result))

                  detail._internal['resultsdb_result_id'] = result['id']

  

              except resultsdb_api.ResultsDBapiException, e:

file modified
+10 -6
@@ -99,21 +99,25 @@ 

          :param str ipaddr: IP address of the machine the task will be run on

          '''

  

-         taskdir = os.path.dirname(self.arg_data['task'])

Yes, we will probably need this variable in the future. However, I would choose a different name, probably 'taskotron_item_type', since 'item' is already taken by ansible.

-         taskfile = os.path.basename(self.arg_data['task'])

+         if os.path.isfile(os.path.join(self.arg_data['taskdir'], 'inventory')):

+             sti_inventory = os.path.join(config.get_config().client_taskdir, "inventory")

+         else:

+             sti_inventory = "/usr/share/ansible/inventory"

  

          cmd = [

              'ansible-playbook', 'runner.yml',

              '--inventory=%s,' % ipaddr,

              '-e', 'artifacts=%s' % self.arg_data['artifactsdir'],

-             '-e', 'subjects=%s' % self.arg_data['item'],

-             '-e', 'taskdir=%s' % taskdir,

-             '-e', 'taskfile=%s' % taskfile,

+             '-e', 'taskotron_item=%s' % self.arg_data['item'],

+             '-e', 'item_type=%s' % self.arg_data['type'],

+             '-e', 'taskdir=%s' % self.arg_data['taskdir'],

              '-e', 'client_taskdir=%s' % config.get_config().client_taskdir,

+             '-e', 'sti_inventory=%s' % sti_inventory,

          ]

  

          if self.run_remotely:

-             cmd.extend(['--private-key=%s' % self.arg_data['ssh_privkey']])

+             if self.arg_data['ssh_privkey']:

+                 cmd.extend(['--private-key=%s' % self.arg_data['ssh_privkey']])

          else:

              cmd.extend(['--ask-become-pass', '--connection=local', '-e', 'local=true'])
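Putting the hunk above together, here is a rough self-contained sketch (a hypothetical `build_cmd` helper, trimmed to the flags visible in this diff) of how the command line gets assembled, including the `sti_inventory` fallback:

```python
import os

def build_cmd(arg_data, client_taskdir, run_remotely, ipaddr):
    """Assemble the ansible-playbook invocation (simplified sketch)."""
    # Prefer an inventory shipped inside the task dir; otherwise fall back
    # to the system-wide Standard Test Interface inventory.
    if os.path.isfile(os.path.join(arg_data['taskdir'], 'inventory')):
        sti_inventory = os.path.join(client_taskdir, 'inventory')
    else:
        sti_inventory = '/usr/share/ansible/inventory'

    cmd = [
        'ansible-playbook', 'runner.yml',
        '--inventory=%s,' % ipaddr,
        '-e', 'taskotron_item=%s' % arg_data['item'],
        '-e', 'item_type=%s' % arg_data['type'],
        '-e', 'taskdir=%s' % arg_data['taskdir'],
        '-e', 'client_taskdir=%s' % client_taskdir,
        '-e', 'sti_inventory=%s' % sti_inventory,
    ]

    if run_remotely:
        # only pass a key when one was actually provided
        if arg_data.get('ssh_privkey'):
            cmd.append('--private-key=%s' % arg_data['ssh_privkey'])
    else:
        cmd.extend(['--ask-become-pass', '--connection=local', '-e', 'local=true'])
    return cmd
```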

  

@@ -108,10 +108,6 @@ 

  def install(pkgs):

      '''Install packages from system repositories using DNF. Either root or sudo access required.

  

-     Note: This is first tested with ``--cacheonly`` to avoid downloading metadata and speed up the

-     process. If that command fails for whatever reason, we run it again, this time without

-     ``--cacheonly``.

- 

      :param pkgs: packages to be installed, e.g. ``['pidgin']``, or any other argument supported by

                   ``dnf install`` command

      :type pkgs: list of str
@@ -131,111 +127,21 @@ 

      cmd = ['dnf', '--assumeyes', 'install']

      cmd.extend(pkgs)

  

-     if dnf_cache_available():

-         cmd.insert(1, '--cacheonly')

-     else:

-         log.warn('No DNF cache available, metadata will be need to be downloaded. If your DNF '

-                  "cache doesn't persist across task executions (e.g. on a disposable VM), "

-                  'consider creating the cache in your base system image to speed up execution.')

- 

      if not os_utils.is_root():  # we must have sudo at this point, don't test again needlessly

          cmd = ['sudo', '--non-interactive'] + cmd

  

-     while True:  # we need to call the command twice, if the first run with --cacheonly fails

-         log.debug('Running: %s', ' '.join([pipes.quote(c) for c in cmd]))

-         try:

-             output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)

-         except subprocess.CalledProcessError as e:

-             if '--cacheonly' in cmd:  # maybe just outdated cache

-                 log.debug(u'✘ Package installation failed, DNF output was:\n%s', e.output.rstrip())

-                 log.debug("Trying again with forced metadata refresh...")

-                 cmd = list(cmd)  # make a copy to allow unit tests to intercept both calls easily

-                 cmd.remove('--cacheonly')

-                 cmd.append('--refresh')

-                 continue

- 

-             log.error(u'✘ Package installation failed. We tried to install following packages:\n%s'

-                       '\nDNF returned exit code %d and output:\n%s', pkglist, e.returncode,

-                       e.output.rstrip())

-             raise exc.TaskotronError("Unable to install packages: %s" % pkglist)

-         else:

-             log.debug(u'✔ Package installation completed successfully. DNF output was:\n%s',

-                       output.rstrip())

-             return

- 

-         # safeguard against infinite loop, we should never reach this code

-         log.critical("Infinite loop detected in 'install()'")

-         assert False, "Infinite loop detected in 'install()'"

-         return

- 

- 

- def is_installed(pkgs):

-     '''Check if packages are installed using DNF. No elevated permissions needed.

- 

-     :param pkgs: packages to be checked whether they are installed, e.g. ``['pidgin']`` or any

-                  other argument supported by ``dnf install`` command

-     :type pkgs: list of str

-     :return: True if all specified packages are installed, False otherwise

-     :rtype: bool

-     '''

-     if not pkgs:

-         return True

- 

-     log.info('Checking installed state of %d packages...', len(pkgs))

-     cmd = ['dnf', '--assumeno', '--disableplugin=noroot', 'install']

-     cmd.extend(pkgs)

- 

-     if dnf_cache_available():

-         cmd.insert(1, '--cacheonly')

-     else:

-         log.warn('No DNF cache available, metadata will be need to be downloaded. If your DNF '

-                  "cache doesn't persist across task executions (e.g. on a disposable VM), "

-                  'consider creating the cache in your base system image to speed up execution.')

- 

-     if not os_utils.is_root() and os_utils.has_sudo():

-         cmd = ['sudo', '--non-interactive'] + cmd

- 

      log.debug('Running: %s', ' '.join([pipes.quote(c) for c in cmd]))

      try:

-         subprocess.check_output(cmd, stderr=subprocess.STDOUT)

+         output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)

      except subprocess.CalledProcessError as e:

-         log.debug(u'✘ Some of specified packages are not installed. DNF returned exit code %d and '

-                   'output:\n%s', e.returncode, e.output.rstrip())

-         return False

+         log.error(u'✘ Package installation failed. We tried to install the following packages:\n%s'

+                   '\nDNF returned exit code %d and output:\n%s', pkglist, e.returncode,

+                   e.output.rstrip())

+         raise exc.TaskotronError("Unable to install packages: %s" % pkglist)

      else:

-         log.debug(u'✔ All specified packages are installed')

-         return True

- 

- 

- def dnf_cache_available():

-     '''Determine whether DNF cache is available.

- 

-     If it is, we can run read-only actions solely from the cache, and try to use it even for

-     ``install`` action (provided the repos haven't changed much). This way we avoid contacting

-     DNF repos and downloading new metadata if they changed.

- 

-     This tests the system cache (if running as root or with sudo access) or the user cache

-     (otherwise). Since DNF has no direct way of testing this, we simply run an action that should

-     always pass (``dnf repolist``) with ``--cacheonly`` option and look at the exit code.

- 

-     :return: bool whether DNF cache is currently available and can be used (system or user cache,

-              depending on your current admin privileges)

-     '''

-     cmd = ['dnf', '--cacheonly', 'repolist']

- 

-     if not os_utils.is_root() and os_utils.has_sudo():

-         cmd = ['sudo', '--non-interactive'] + cmd

- 

-     log.debug('Deciding whether DNF cache is available. Running: %s', ' '.join(cmd))

-     try:

-         subprocess.check_output(cmd, stderr=subprocess.STDOUT)

-     except subprocess.CalledProcessError as e:

-         log.debug(u'✘ DNF cache is not available. Received exit code %d and output:\n%s',

-                   e.returncode, e.output.rstrip())

-         return False

-     else:

-         log.debug(u'✔ DNF cache is available.')

-         return True

+         log.debug(u'✔ Package installation completed successfully. DNF output was:\n%s',

+                   output.rstrip())

+         return

  

  

  def get_dist_tag(rpmstr):
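With the `--cacheonly` retry loop removed, `install()` reduces to a single `check_output` attempt. A generic sketch of that error-handling shape (hypothetical `run_once` helper, not the project's actual API):

```python
import subprocess

def run_once(cmd):
    """Run a command once; raise on failure instead of retrying.

    Mirrors the simplified install() flow above: no cache probing, no
    second attempt, just one run whose combined stdout/stderr is captured.
    """
    try:
        output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as e:
        # surface the exit code and captured output to the caller
        raise RuntimeError('command %r failed with exit code %d:\n%s'
                           % (cmd, e.returncode, e.output))
    return output
```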

file modified
+7 -1
@@ -65,7 +65,7 @@ 

          pass

  

      parser = argparse.ArgumentParser(epilog=ITEM_TYPE_DOCS, formatter_class=CustomFormatter)

-     parser.add_argument("task", help="task playbook to run")

+     parser.add_argument("taskdir", help="taskdir with STI playbook to run")

      parser.add_argument("-a", "--arch",

                          choices=["i386", "x86_64", "armhfp", "noarch"], default='noarch',

                          help="architecture specifying the item to be checked. 'noarch' value "

I just have a feeling that the executor should only execute the task, not do the results reporting. But I don't have a strong opinion on the placement. Any thoughts @jskladan @kparal ?

@@ -185,6 +185,12 @@ 

      # start execution

      executor = Executor(arg_data)

      executor.execute()

+     # TODO: perhaps these lines should move into executor.py once resultsdb

+     # is imported after being modified accordingly; main.py should stay as

+     # concise as possible

+     with open(os.path.join(arg_data['artifactsdir'], 'results.yml')) as results_file:

+         results_yaml = results_file.read()

+         log.info('WIP, this will be reported to ResultsDB:\n%s\n', results_yaml)

  

      # finalize

      log.info('Execution finished at: %s. Task artifacts were saved in: %s',

@@ -21,7 +21,8 @@ 

      parser.add_argument("-k", "--keyval", action="append", default=[], metavar='KEY=VALUE',

                          help="all key-value pairs in this dictionary are stored "

                               "in ResultsDB as 'extra data'")

-     parser.add_argument("-c", "--checkname", help="name of the check", default=None)

+     parser.add_argument("-c", "--checkname",

+                         help="name of the check (don't include the namespace here)", default=None)

      parser.add_argument("-a", "--artifact", help="file or directory placed in the artifacts dir",

                          default=None)
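The `-k/--keyval` option above collects repeated `KEY=VALUE` strings. A hypothetical minimal parser (not the actual taskotron_result code) showing how they could become the 'extra data' dict:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-k', '--keyval', action='append', default=[],
                    metavar='KEY=VALUE',
                    help="key-value pairs stored in ResultsDB as 'extra data'")
parser.add_argument('-c', '--checkname', default=None,
                    help="name of the check (don't include namespace here)")

args = parser.parse_args(['-c', 'rpmlint', '-k', 'arch=x86_64', '-k', 'host=vm1'])

# split only on the first '=' so values may themselves contain '='
extradata = dict(kv.split('=', 1) for kv in args.keyval)
print(extradata)
```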

  

file modified
+40 -4
@@ -3,17 +3,17 @@ 

    vars:

      local: false

    tasks:

-     - name: Clean taskdir

+     - name: Clean client taskdir

tests.yml is the entry point specified by the Standard Interface

        file:

          path: "{{ client_taskdir }}"

          state: absent

  

-     - name: Create taskdir

+     - name: Create client taskdir

        file:

          path: "{{ client_taskdir }}"

          state: directory

  

-     - name: Upload taskdir

+     - name: Upload taskdir to client taskdir

        synchronize:

          src: "{{ taskdir }}/"

Sure, why not...

          dest: "{{ client_taskdir }}"
@@ -29,12 +29,31 @@ 

        with_items:

          - ansible

          - libselinux-python

+         - standard-test-roles

+         - python2-dnf

        when: not local

  

+     # item is specified on command line (e.g. koji_build), this module should download

Does "local_action: shell..." support "args: creates=..."? The plan is that we are not going to generate results.yml every time. If the task creates the file (e.g. our generic task like rpmlint), we use that instead.

+     # artifacts related to the item and return subject(s) (path strings) for STI tests.yml

+     - name: Download subjects (only rpms atm)

+       koji: 

+         action: "download"

+         koji_build: "{{ taskotron_item }}" 

+         arch: ['x86_64'] #FIXME

+         target_dir: "{{ client_taskdir }}" #FIXME?

+       register: koji_output

+ 

+     - debug: var=koji_output

+ 

      - name: Run the task

        become: yes

        become_user: root

-       command: ansible-playbook "{{ client_taskdir }}/{{ taskfile }}" --inventory=localhost, --connection=local -e artifacts="{{ artifacts }}" -e subjects="{{ subjects }}"

+       # how can we get tests.yml here using the above "koji module"? I may be wrong, but I don't think we can make koji work that way

+       command: ansible-playbook "{{ client_taskdir }}/tests.yml" --inventory="{{ sti_inventory }}", --connection=local -e artifacts="{{ artifacts }}" -e subjects="{{ koji_output['subjects'] }}"

+       environment:

+         TEST_SUBJECTS: "{{ koji_output['subjects'] }}"

+         TEST_ARTIFACTS: "{{ artifacts }}"

+       ignore_errors: yes

        register: output

  

      - name: Dump task output
@@ -45,9 +64,26 @@ 

          dest: "{{ artifacts }}/ansible.log"

          content: "{{ output.stdout }}"

  

+     - name: RC of test

+       shell: echo "{{ output.rc }}" > "{{ artifacts }}/test.rc"

+ 

+     - name: Set outcome to PASSED if RC == 0

+       set_fact: outcome="PASSED"

+       when: output.rc == 0

+ 

+     - name: Set outcome to FAILED if RC != 0

+       set_fact: outcome="FAILED"

+       when: output.rc != 0
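The two `set_fact` tasks above amount to a one-line mapping from the playbook's return code to an outcome; expressed in Python:

```python
def outcome_from_rc(rc):
    # rc 0 means the STI playbook passed; any other code is a failure
    return 'PASSED' if rc == 0 else 'FAILED'

print(outcome_from_rc(0))  # prints PASSED
```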

+ 

+     # we can do this first, as we already have everything we need from the minion machine

      - name: Collect logs

        synchronize:

          mode: pull

          src: "{{ artifacts }}/*"

          dest: "{{ artifacts }}"

        when: not local

+ 

+     - name: Generate resultsdb result file

+       # this should be executed on the local machine, as taskotron is not required to be installed on the remote machine

+       local_action: shell taskotron_result -f "{{ artifacts }}/results.yml" -i "{{ taskotron_item }}" -o "{{ outcome }}" -t "{{ item_type }}" -a "{{ artifacts }}/test.log"

+ 

file modified
+2 -143
@@ -8,9 +8,8 @@ 

  import pytest

  import subprocess

  

- from libtaskotron.ext.fedora.rpm_utils import (cmpNEVR, dnf_cache_available, get_dist_tag, install,

-                                                is_installed, rpmformat)

- from libtaskotron.ext.fedora import rpm_utils

+ from libtaskotron.ext.fedora.rpm_utils import (cmpNEVR, get_dist_tag, install,

+                                                rpmformat)

  from libtaskotron import exceptions as exc

  from libtaskotron import os_utils

  
@@ -142,8 +141,6 @@ 

          monkeypatch.setattr(os_utils, 'is_root', self.mock_is_root)

          self.mock_has_sudo = mock.Mock(return_value=True)

          monkeypatch.setattr(os_utils, 'has_sudo', self.mock_has_sudo)

-         self.mock_dnf_cache = mock.Mock(return_value=True)

-         monkeypatch.setattr(rpm_utils, 'dnf_cache_available', self.mock_dnf_cache)

  

          self.mock_check_output = mock.Mock(return_value='')

          monkeypatch.setattr(subprocess, 'check_output', self.mock_check_output)
@@ -189,144 +186,6 @@ 

          call_args = self.mock_check_output.call_args[0][0]

          assert all([pkg in call_args for pkg in pkgs])

  

-     def test_run_once(self):

-         '''The command should be executed only once if it passes'''

-         install(['foo'])

-         assert self.mock_check_output.call_count == 1

- 

-     def test_run_once_fail(self):

-         '''If there's no cache and install fails, it should not try again'''

-         self.mock_dnf_cache.return_value = False

-         self.mock_check_output.side_effect = self.err

- 

-         with pytest.raises(exc.TaskotronError) as excinfo:

-             install(['foo'])

- 

-         assert self.mock_check_output.call_count == 1

-         # direct comparison here, because isinstance() would also accept subclasses we throw for

-         # different issues (missing permissions)

-         assert type(excinfo.value) is exc.TaskotronError

- 

-     def test_run_twice_on_cache_fail(self):

-         '''If cache is used and install fails, it should run again without cache'''

-         self.mock_check_output.side_effect = self.err

-         with pytest.raises(exc.TaskotronError):

-             install(['foo'])

- 

-         assert self.mock_check_output.call_count == 2

-         assert '--cacheonly' in self.mock_check_output.call_args_list[0][0][0]

-         assert '--cacheonly' not in self.mock_check_output.call_args_list[1][0][0]

- 

-     def test_run_twice_second_ok(self):

-         '''If installation is tried twice and second one succeeds, the whole operation should

-         succeed.'''

-         self.mock_check_output.side_effect = (self.err, mock.DEFAULT)

-         install(['foo'])

-         assert self.mock_check_output.call_count == 2

- 

- 

- @pytest.mark.usefixtures('setup')

- class TestIsInstalled(object):

-     '''Test rpm_utils.is_installed()'''

- 

-     @pytest.fixture

-     def setup(self, monkeypatch):

-         self.mock_is_root = mock.Mock(return_value=True)

-         monkeypatch.setattr(os_utils, 'is_root', self.mock_is_root)

-         self.mock_has_sudo = mock.Mock(return_value=True)

-         monkeypatch.setattr(os_utils, 'has_sudo', self.mock_has_sudo)

-         self.mock_dnf_cache = mock.Mock(return_value=True)

-         monkeypatch.setattr(rpm_utils, 'dnf_cache_available', self.mock_dnf_cache)

- 

-         self.mock_check_output = mock.Mock(return_value='')

-         monkeypatch.setattr(subprocess, 'check_output', self.mock_check_output)

-         self.err = subprocess.CalledProcessError(1, 'cmd', output='')

- 

-     def test_installed(self):

-         assert is_installed(['foo']) is True

- 

-     def test_not_installed(self):

-         self.mock_check_output.side_effect = self.err

-         assert is_installed(['foo']) is False

- 

-     def test_no_permissions_is_ok(self):

-         self.mock_is_root.return_value = False

-         self.mock_has_sudo.return_value = False

-         assert is_installed(['foo']) is True

- 

-     def test_add_sudo(self):

-         self.mock_is_root.return_value = False

-         is_installed(['foo'])

-         assert self.mock_check_output.call_args[0][0].index('sudo') == 0

- 

-     def test_dont_add_sudo(self):

-         is_installed(['foo'])

-         assert 'sudo' not in self.mock_check_output.call_args[0][0]

- 

-     def test_no_pkgs(self):

-         assert is_installed([]) is True

-         assert self.mock_check_output.call_count == 0

- 

-     def test_special_args(self):

-         '''Make sure args like 'rpmlint > 1.0' are passed in correctly'''

-         pkgs = ['foo', 'bar >= 1.0', '@group']

-         is_installed(pkgs)

- 

-         call_args = self.mock_check_output.call_args[0][0]

-         assert all([pkg in call_args for pkg in pkgs])

- 

-     def test_cache_exists(self):

-         is_installed(['foo'])

-         assert '--cacheonly' in self.mock_check_output.call_args_list[0][0][0]

- 

-     def test_cache_doesnt_exist(self):

-         self.mock_dnf_cache.return_value = False

-         is_installed(['foo'])

-         assert '--cacheonly' not in self.mock_check_output.call_args_list[0][0][0]

- 

- 

- @pytest.mark.usefixtures('setup')

- class TestDnfCacheAvailable(object):

-     '''Test rpm_utils.dnf_cache_available()'''

- 

-     @pytest.fixture

-     def setup(self, monkeypatch):

-         self.mock_is_root = mock.Mock(return_value=True)

-         monkeypatch.setattr(os_utils, 'is_root', self.mock_is_root)

-         self.mock_has_sudo = mock.Mock(return_value=True)

-         monkeypatch.setattr(os_utils, 'has_sudo', self.mock_has_sudo)

- 

-         self.mock_check_output = mock.Mock(return_value='')

-         monkeypatch.setattr(subprocess, 'check_output', self.mock_check_output)

-         self.err = subprocess.CalledProcessError(1, 'cmd', output='')

- 

-     def test_cache_exists(self):

-         assert dnf_cache_available() is True

- 

-     def test_cache_exists_user(self):

-         self.mock_is_root.return_value = False

-         self.mock_has_sudo.return_value = False

-         assert dnf_cache_available() is True

- 

-     def test_cache_doesnt_exist(self):

-         self.mock_check_output.side_effect = self.err

-         assert dnf_cache_available() is False

- 

-     def test_cache_doesnt_exist_user(self):

-         self.mock_is_root.return_value = False

-         self.mock_has_sudo.return_value = False

-         self.mock_check_output.side_effect = self.err

-         assert dnf_cache_available() is False

- 

-     def test_add_sudo(self):

-         self.mock_is_root.return_value = False

-         dnf_cache_available()

-         assert self.mock_check_output.call_args[0][0].index('sudo') == 0

- 

-     def test_dont_add_sudo(self):

-         dnf_cache_available()

-         assert 'sudo' not in self.mock_check_output.call_args[0][0]

- 

  

  class TestGetDistTag(object):

      def test_nvrs(self):

file modified
+4 -1
@@ -1,7 +1,10 @@ 

  # This is a common file where different test suites/linters can be configured.

- # Phabricator uses this file when running `arc unit` or `arc lint`.

  

  [flake8]

+ # If you want to ignore a specific source code line, use '# noqa' comment. If

+ # you want to ignore the whole file, add '# flake8: noqa' comment. Read more

+ # documentation about flake8 at:

+ # https://flake8.readthedocs.org/

  max-line-length=99

  

  [pep8]

Fix one potential error and give some thoughts in comments.
I may be wrong, but I think it's better to say it out loud.


> I just have a feeling that the executor should only execute the task, not do the results reporting. But I don't have a strong opinion on the placement. Any thoughts @jskladan @kparal ?

Actually, in the non-ansible branch we do call "resultsdb_directive" in executor.py with the help of runtask.yml, right? In the ansible branch we could use almost the same logic in our runner.yml if we had a resultsdb module, but we don't.

> tests.yml is the entry point specified by the Standard Interface

Yeah, I know it's the entry point. It's just that your code makes me feel that you are going to fetch tests.yml using the koji command, and I don't think we can get tests.yml that way. I may be wrong, though; please correct me if I am.

> Does "local_action: shell..." support "args: creates=..."? The plan is that we are not going to generate results.yml every time. If the task creates the file (e.g. our generic task like rpmlint), we use that instead.

Ah, so that's the plan. I didn't get much of a clue, since only ansible.log and rpmlint_output.log are in the artifacts dir. I'm not so familiar with the whole project: is there an easy way to get results.yml without using taskotron_result? If there isn't, why don't we only create results.yml here instead? Anyway, local_action does support "args: creates".

rebased onto af984b6

5 years ago

Pull-Request has been closed by jskladan

5 years ago