#413 variables for env matching added
Merged 5 years ago by kparal. Opened 5 years ago by lbrabec.

file modified
+6 -27
@@ -7,28 +7,7 @@ 

    remote_user: root

    gather_facts: no

    vars:

-     become_root: true  # whether to run playbooks as root

-     exec_tasks: tasks_generic.yml  # path to taskotron tasks playbook (generic or STI)

-     heartbeat_file: "{{artifacts_root}}/taskotron/heartbeat.log"  # heartbeat will appear in this file

-     heartbeat_interval: 120  # add line to heartbeat_file every x seconds (2 minutes)

-     keepalive_minutes: 0  # how long should heartbeat process run (0 to disable)

-     local: false  # running on local machine (overlord), no remote connection

-     # These variables are also available:

-     # artifacts - path to the playbook-specific artifacts directory (on overlord and minion)

-     # artifacts_root - path to the root artifacts directory (on overlord and minion)

-     # client_taskdir - path to directory with test suite (on minion)

-     # minion_repos - a list of repos (strings) to install on the minion

-     # taskdir - path to directory with test suite (on overlord)

-     # taskotron_arch - architecture of taskotron_item to be tested

-     # taskotron_item - item under test

-     # taskotron_item_type - item type under test

-     # taskotron_supported_arches - list of base architectures supported by Taskotron (e.g. 'armhfp')

-     # taskotron_supported_binary_arches - list of base+binary architectures supported by Taskotron (e.g. 'armhfp, armv7hl')

-     # test_playbook - path to playbook to execute inside client_taskdir

-     #                 (usually tests.yml)

-     # sti_inventory - path to inventory file to be used with STI tests (local

-     #                inside client_taskdir or global)

-     # varsfile - name of the ansible vars file inside artifacts/taskotron/ to forward to task

+     # Available variables are documented at executor.py:_create_playbook_vars()

    tasks:

      - name: Install required packages

        dnf:
@@ -73,7 +52,7 @@ 

        file:

          path: "{{ artifacts_root }}/taskotron"

          state: directory

-       when: keepalive_minutes|int > 0

+       when: taskotron_keepalive_minutes|int > 0

        delegate_to: localhost

  

      - name: Set up extra DNF repositories (minion_repos)
@@ -95,10 +74,10 @@ 

      - name: Start heartbeat

        command: >

          ./heartbeat.sh start {{ heartbeat_file }} {{ heartbeat_interval }}

-         {{ keepalive_minutes | int * 60 }}

-       async: "{{ keepalive_minutes | int * 60 + 60 }}"

+         {{ taskotron_keepalive_minutes | int * 60 }}

+       async: "{{ taskotron_keepalive_minutes | int * 60 + 60 }}"

        poll: 0

-       when: keepalive_minutes|int > 0

+       when: taskotron_keepalive_minutes|int > 0

        delegate_to: localhost

  

      - name: Include either generic or STI execution tasks
@@ -108,7 +87,7 @@ 

  

      - name: Kill heartbeat

        command: ./heartbeat.sh stop {{ heartbeat_file }}

-       when: keepalive_minutes|int > 0

+       when: taskotron_keepalive_minutes|int > 0

        delegate_to: localhost

        tags:

          - failsafe

@@ -182,5 +182,17 @@ 

      specified number of minutes (the keepalive time) and only then the standard

      timeout counter will be started.

  

+ ``taskotron_match_host_arch``

+     (bool) Set to ``True`` if your task needs to be executed on a machine of

+     the same architecture as provided by ``taskotron_arch``.

+ 

+ ``taskotron_match_host_distro``

+     (bool) Set to ``True`` if your task needs to be executed on the same

+     distribution (e.g. Fedora, RHEL) as indicated by ``taskotron_item``.

+ 

+ ``taskotron_match_host_release``

+     (bool) Set to ``True`` if your task needs to be executed on the same

+     distribution release version (e.g. Fedora 27) as indicated by

+     ``taskotron_item``.

  

  .. _ResultsDB: https://fedoraproject.org/wiki/ResultsDB

file modified
+212 -111
@@ -40,20 +40,44 @@ 

      :ivar bool run_remotely: whether the task is run on a remote machine

      '''

  

+     #: vars retrieved from tests.yml

+     ACCEPTED_VARS = [

+         # if you adjust this, also adjust writingtasks.rst

+         'taskotron_generic_task',

+         'taskotron_keepalive_minutes',

+         'taskotron_match_host_arch',

+         'taskotron_match_host_distro',

+         'taskotron_match_host_release',

+     ]

+ 

+     #: list of all vars to be exposed in the task playbook

+     FORWARDED_VARS = [

+         # if you adjust this, also adjust writingtasks.rst

+         'artifacts',

+         'taskotron_arch',

+         'taskotron_item_type',

+         'taskotron_item',

+         'taskotron_supported_arches',

+         'taskotron_supported_binary_arches',

+     ]

+ 

      def __init__(self, arg_data):

          self.arg_data = arg_data

          self.task_vm = None

          self.run_remotely = False

+         self.ipaddr = self._get_client_ipaddr()

  

-     def _spawn_vm(self, uuid):

+     def _spawn_vm(self, uuid, playbook_vars):

          '''Spawn a virtual machine using testcloud.

  

          :param str uuid: unicode string uuid for the task being executed

+         :param dict playbook_vars: a vars dict created by

+             :meth:`_create_playbook_vars`

          :returns: str ip address of spawned vm

          '''

          log.info('Spawning disposable client')

  

-         env = image_utils.devise_environment(self.arg_data)

+         env = image_utils.devise_environment(self.arg_data, playbook_vars)

          self.task_vm = vm.TestCloudMachine(uuid)

  

          retries = config.get_config().spawn_vm_retries
@@ -137,132 +161,197 @@ 

                        e)

              raise exc.TaskotronPlaybookError(e)

  

-     def _run_playbook(self, test_playbook, ipaddr, root=True):

-         '''Run the ansible-playbook command to execute given playbook

-         containing the task.

+     def _load_playbook_vars(self, test_playbook):

+         '''Load accepted playbook vars from a playbook file and return them

+         as a dict.

  

          :param str test_playbook: name of the playbook, relative to the task

              directory

-         :param str ipaddr: IP address of the machine the task will be run on

-         :param bool root: whether to run as ``root`` for local execution mode

-         :return: a tuple of ``(str, bool)``. The first item is stream output of

-             the ansible-playbook command (stdout and stderr merged together).

-             The second item marks whether this playbook is a Taskotron generic

-             task (``True``) or a plain STI task (``False``).

-         :rtype: tuple

-         :raise TaskotronPlaybookError: when the playbook is not syntactically

-             correct

+         :return: a dict with keyvals from the playbook which are allowed to be

+             loaded (see :attr:`ACCEPTED_VARS`).

          '''

-         # syntax check

-         self._check_playbook_syntax(

-             os.path.join(self.arg_data['taskdir'], test_playbook))

+         vars_ = {}

+         with open(os.path.join(self.arg_data['taskdir'], test_playbook), 'r') \

+         as playbook_file:

+             playbook_yaml = yaml.safe_load(playbook_file.read())

+             # we only consider variables in the first play

+             playbook_vars = playbook_yaml[0].get('vars', {})

+         for var, val in playbook_vars.items():

+             if var in self.ACCEPTED_VARS:

+                 vars_[var] = val

+         return vars_

+ 

+     def _create_playbook_vars(self, test_playbook):

+         '''Create and return a dictionary containing all variables to be used

+         with our ansible playbook.

  

-         # compute variables

-         ansible_dir = os.path.join(config.get_config()._data_dir, 'ansible')

-         artifacts_subdir = os.path.join(self.arg_data['artifactsdir'],

-                                         test_playbook)

+         :param str test_playbook: name of the playbook, relative to the task

+             directory

+ 

+         :return: A dictionary containing these keys:

+ 

+             artifacts_root

+                 path to the root artifacts directory (on overlord and minion)

+             artifacts

+                 path to the playbook-specific artifacts directory (on overlord

+                 and minion)

+             become_root

+                 whether to run playbooks as root

+             client_taskdir

+                 path to directory with test suite (on minion)

+             exec_tasks

+                 path to taskotron tasks playbook (generic or STI)

+             heartbeat_file

+                 heartbeat will appear in this file

+             heartbeat_interval

+                 add line to heartbeat_file every x seconds (2 minutes)

+             local

+                 running on local machine (overlord), no remote connection

+             minion_repos

+                 a list of repos (strings) to install on the minion

+             sti_inventory

+                 path to inventory file to be used with STI tests (local inside

+                 client_taskdir or global)

+             taskdir

+                 path to directory with test suite (on overlord)

+             taskotron_arch

+                 architecture of taskotron_item to be tested

+             taskotron_generic_task

+                 whether to run a generic or STI task

+             taskotron_item_type

+                 item type under test

+             taskotron_item

+                 item under test

+             taskotron_keepalive_minutes

+                 how long should heartbeat process run (0 to disable)

+             taskotron_match_host_arch

+                 whether VM guest arch has to match taskotron_arch

+             taskotron_match_host_distro

+                 whether VM guest distro has to match taskotron_item

+             taskotron_match_host_release

+                 whether VM guest release has to match taskotron_item

+             taskotron_supported_arches

+                 list of base architectures supported by Taskotron (e.g.

+                 'armhfp')

+             taskotron_supported_binary_arches

+                 list of base+binary architectures supported by Taskotron (e.g.

+                 'armhfp, armv7hl')

+             test_playbook

+                 path to playbook to execute inside client_taskdir (usually

+                 tests.yml)

+             varsfile

+                 name of the ansible vars file inside ``artifacts/taskotron/``

+                 to forward to task

+         '''

  

-         if os.path.isfile(os.path.join(self.arg_data['taskdir'], 'inventory')):

-             sti_inventory = os.path.join(config.get_config().client_taskdir,

-                                         'inventory')

+         vars_ = {}

+         cfg = config.get_config()

+ 

+         # default values

+         vars_['become_root'] = True

+         vars_['taskotron_generic_task'] = False

+         vars_['heartbeat_interval'] = 120

+         vars_['taskotron_keepalive_minutes'] = 0

+         vars_['taskotron_match_host_arch'] = False

+         vars_['taskotron_match_host_distro'] = False

+         vars_['taskotron_match_host_release'] = False

+         vars_['varsfile'] = 'task_vars.json'

+ 

+         # load all allowed vars from tests.yml

+         loaded_vars = self._load_playbook_vars(test_playbook)

+         vars_.update(loaded_vars)

+ 

+         # compute vars

+         vars_['artifacts'] = os.path.join(self.arg_data['artifactsdir'],

+             test_playbook)

+         vars_['artifacts_root'] = self.arg_data['artifactsdir']

+         vars_['client_taskdir'] = cfg.client_taskdir

+         vars_['heartbeat_file'] = os.path.join(vars_['artifacts_root'],

+             'taskotron', 'heartbeat.log')

+         vars_['local'] = not self.run_remotely

+         vars_['minion_repos'] = cfg.minion_repos

+         vars_['taskdir'] = self.arg_data['taskdir']

+         vars_['taskotron_arch'] = self.arg_data['arch']

+         vars_['taskotron_item'] = self.arg_data['item']

+         vars_['taskotron_item_type'] = self.arg_data['type']

+         vars_['taskotron_supported_arches'] = cfg.supported_arches

+         vars_['taskotron_supported_binary_arches'] = [binarch for arch in

+             cfg.supported_arches for binarch in arch_utils.Arches.binary[arch]]

+         vars_['test_playbook'] = test_playbook

+ 

+         if vars_['taskotron_generic_task']:

+             vars_['exec_tasks'] = 'tasks_generic.yml'

          else:

-             sti_inventory = "/usr/share/ansible/inventory"

+             vars_['exec_tasks'] = 'tasks_sti.yml'

  

-         supported_arches = config.get_config().supported_arches

-         supported_binary_arches = [binarch for arch in supported_arches

-             for binarch in arch_utils.Arches.binary[arch]]

+         if os.path.isfile(os.path.join(vars_['taskdir'], 'inventory')):

+             vars_['sti_inventory'] = os.path.join(vars_['client_taskdir'],

+                 'inventory')

+         else:

+             vars_['sti_inventory'] = '/usr/share/ansible/inventory'

  

-         minion_repos = config.get_config().minion_repos

+         return vars_

  

-         # load taskotron vars from playbook and compute variables

-         with open(os.path.join(self.arg_data['taskdir'], test_playbook), 'r') \

-             as playbook_file:

-             # playbook should contain at least one play as the file passed the

-             # syntax check

-             # TODO: should we look for 'taskotron_generic_task' in all plays,

-             # not just in the first?

-             playbook = yaml.load(playbook_file.read())[0]

-             if playbook.get('vars',{}).get('taskotron_generic_task', False):

-                 generic = True

-                 exec_tasks = 'tasks_generic.yml'

-                 log.debug('Playbook %s is a generic task', test_playbook)

-             else:

-                 generic = False

-                 exec_tasks = 'tasks_sti.yml'

-                 log.debug('Playbook %s is an STI task', test_playbook)

-             keepalive = playbook.get('vars', {}).get(

-                 'taskotron_keepalive_minutes', None)

- 

-         # set up variables to load into playbooks.

-         # separate them into the ones to forward, and the ones to use

-         # internally

- 

-         # if you adjust forwarded variables, don't forget to adjust docs

-         # as well

-         ansible_vars = {

-             'artifacts': artifacts_subdir,

-             'taskotron_arch': self.arg_data['arch'],

-             'taskotron_item': self.arg_data['item'],

-             'taskotron_item_type': self.arg_data['type'],

-             'taskotron_supported_arches': supported_arches,

-             'taskotron_supported_binary_arches': supported_binary_arches,

-         }

-         varsfile = os.path.join(artifacts_subdir, 'taskotron',

-                                 'ansible_vars.json')

- 

-         ansible_vars_internal = {

-             'artifacts_root': self.arg_data['artifactsdir'],

-             'client_taskdir': config.get_config().client_taskdir,

-             'exec_tasks': exec_tasks,

-             'minion_repos': minion_repos,

-             'taskdir': self.arg_data['taskdir'],

-             'test_playbook': test_playbook,

-             'sti_inventory': sti_inventory,

-             'varsfile': os.path.basename(varsfile),

-         }

-         varsfile_internal = os.path.join(artifacts_subdir, 'taskotron',

-                                          'ansible_vars_internal.json')

- 

-         if keepalive:

-             ansible_vars_internal['keepalive_minutes'] = keepalive

+     def _run_playbook(self, test_playbook, ipaddr, playbook_vars):

+         '''Run the ansible-playbook command to execute given playbook

+         containing the task.

+ 

+         :param str test_playbook: name of the playbook, relative to the task

+             directory

+         :param str ipaddr: IP address of the machine the task will be run on

+         :param dict playbook_vars: vars dict created by

+             :meth:`_create_playbook_vars`

+         :return: stream output of the ansible-playbook command (stdout and

+             stderr merged together)

+         :rtype: str

+         :raise TaskotronPlaybookError: when the playbook is not syntactically

+             correct

+         '''

+         # save forwarded variables, so that task playbook can load them

+         fwdvars = {}

+         for fwdvar in self.FORWARDED_VARS:

+             fwdvars[fwdvar] = playbook_vars[fwdvar]

+         file_utils.makedirs(os.path.join(playbook_vars['artifacts'],

+             'taskotron'))

+         varsfile = os.path.join(playbook_vars['artifacts'], 'taskotron',

+                 playbook_vars['varsfile'])

+         with open(varsfile, 'w') as vf:

+             vars_str = json.dumps(fwdvars, indent=2, sort_keys=True)

+             vf.write(vars_str)

+             log.debug('Saved task vars file %s with contents:\n%s',

+                       varsfile, vars_str)

  

+         # also save all runner playbook variables, for debugging

+         allvarsfile = os.path.join(playbook_vars['artifacts'], 'taskotron',

+             'internal_vars.json')

+         with open(allvarsfile, 'w') as vf:

+             vars_str = json.dumps(playbook_vars, indent=2, sort_keys=True)

+             vf.write(vars_str)

+             log.debug('Saved internal ansible vars file %s with contents:\n%s',

+                       allvarsfile, vars_str)

+ 

+         ansible_dir = os.path.join(config.get_config()._data_dir, 'ansible')

          # figure out the ansible-playbook command

          cmd = [

              'ansible-playbook', 'runner.yml',

              '--inventory=%s,' % ipaddr,  # the ending comma is important

-             '--extra-vars=@%s' % varsfile,

-             '--extra-vars=@%s' % varsfile_internal,

+             '--extra-vars=@%s' % allvarsfile,

          ]

  

+         if playbook_vars['become_root']:

+             cmd.append('--become')

+ 

          if self.run_remotely:

              if self.arg_data['ssh_privkey']:

                  cmd.append('--private-key=%s' % self.arg_data['ssh_privkey'])

          else:

              cmd.append('--connection=local')

-             ansible_vars_internal['local'] = True

-             if root:

-                 cmd.append('--become')

-                 ansible_vars_internal['become_root'] = True

-             else:

-                 ansible_vars_internal['become_root'] = False

  

          if self.arg_data['debug']:

              cmd.append('-vv')

  

-         # store the variables in json files, so that ansible can load them

-         file_utils.makedirs(os.path.join(artifacts_subdir, 'taskotron'))

-         with open(varsfile, 'w') as vf:

-             vars_str = json.dumps(ansible_vars, indent=2, sort_keys=True)

-             vf.write(vars_str)

-             log.debug('Saved ansible vars file %s with contents:\n%s',

-                       varsfile, vars_str)

-         with open(varsfile_internal, 'w') as vf:

-             vars_str = json.dumps(ansible_vars_internal, indent=2,

-                                   sort_keys=True)

-             vf.write(vars_str)

-             log.debug('Saved internal ansible vars file %s with contents:\n%s',

-                       varsfile_internal, vars_str)

- 

          log.debug('Running ansible playbook %s', ' '.join(cmd))

          try:

              # during playbook execution, handle system signals asking us to
@@ -271,7 +360,7 @@ 

              signal.signal(signal.SIGTERM, self._interrupt_handler)

  

              output, _ = os_utils.popen_rt(cmd, cwd=ansible_dir)

-             return (output, generic)

+             return output

          except subprocess.CalledProcessError as e:

              log.error('ansible-playbook ended with %d return code',

                        e.returncode)
@@ -340,15 +429,27 @@ 

  

          failed = []

          for test_playbook in test_playbooks:

-             ipaddr = self._get_client_ipaddr()

-             if ipaddr is None:

-                 ipaddr = self._spawn_vm(self.arg_data['uuid'])

- 

-             log.info('Running playbook %s on machine: %s', test_playbook, ipaddr)

- 

              try:

-                 _, generic = self._run_playbook(test_playbook, ipaddr)

-                 if generic:

+                 # syntax check

+                 self._check_playbook_syntax(os.path.join(

+                     self.arg_data['taskdir'], test_playbook))

+ 

+                 # compute variables

+                 playbook_vars = self._create_playbook_vars(test_playbook)

+ 

+                 # spawn VM if needed

+                 ipaddr = self.ipaddr

+                 if ipaddr is None:

+                     ipaddr = self._spawn_vm(self.arg_data['uuid'],

+                         playbook_vars)

+ 

+                 # execute

+                 log.info('Running playbook %s on machine: %s', test_playbook,

+                     ipaddr)

+                 self._run_playbook(test_playbook, ipaddr, playbook_vars)

+ 

+                 # report results

+                 if playbook_vars['taskotron_generic_task']:

                      self._report_results(test_playbook)

              except exc.TaskotronInterruptError as e:

                  log.error('Caught system interrupt during execution of '

@@ -46,7 +46,7 @@ 

          #: hostname to use for spawned instance - based on username of current user

          self.hostname = 'taskotron-%s' % getpass.getuser()

  

-     def _prepare_image(self, distro=None, release=None, flavor=None, arch=None):

+     def _prepare_image(self, distro, release, flavor, arch):

          '''Use testcloud to prepare an image for local booting

          :param str distro: Distro to use in image discovery

          :param str release: Distro's release to use in image discovery
@@ -62,11 +62,6 @@ 

              if config.get_config().force_imageurl:

                  img_url = config.get_config().imageurl

              else:

-                 distro = distro or config.get_config().default_disposable_distro

-                 release = release or config.get_config().default_disposable_release

-                 flavor = flavor or config.get_config().default_disposable_flavor

-                 arch = arch or config.get_config().default_disposable_arch

- 

                  log.debug("Looking for image with DISTRO: %s, RELEASE: %s, FLAVOR: %s, ARCH: %s" %

                            (distro, release, flavor, arch))

  
@@ -125,7 +120,7 @@ 

                                                 "already defined".format(self.instancename))

          return existing_instance

  

-     def prepare(self, distro=None, release=None, flavor=None, arch=None):

+     def prepare(self, distro, release, flavor, arch):

          '''Prepare a virtual machine for running tasks.

          :param str distro: Distro to use in image discovery

          :param str release: Distro's release to use in image discovery

file modified
+31 -38
@@ -9,6 +9,7 @@ 

  

  from libtaskotron import exceptions as exc

  from libtaskotron.logger import log

+ from libtaskotron import config

  

  try:

      from libtaskotron.ext.fedora import rpm_utils
@@ -16,44 +17,40 @@ 

      raise exc.TaskotronImportError(e)

  

  

- def devise_environment(arg_data):

-     '''Takes an input item and type, and returns a required run-environment (i.e.

-     distro, arch, fedora release, and base-image flavor), based on item and type.

+ def devise_environment(arg_data, playbook_vars):

+     '''Takes an input item and type, and returns a required run-environment,

+     or a default one if the task doesn't require anything specific.

  

-     :param dict formula: parsed formula file (or dict with equivalent structure)

-     :param dict arg_data: parsed command-line arguments. item, type and arch

-                           are used in this method

-     :return: dict containing distro, release, flavor, arch. Each either set, or None

-     :raise TaskotronValueError: when environment can't be parsed from the formula

+     :param dict arg_data: parsed command-line arguments (item, type and arch

+         are used in this method)

+     :param dict playbook_vars: vars dict created by

+         :meth:`executor._create_playbook_vars`

+     :return: dict containing ``distro``, ``release``, ``flavor`` and ``arch``.

+         Each either set, or ``None``.

      '''

  

      env = {'distro': None, 'release': None, 'flavor': None, 'arch': None}

  

-     item = arg_data.get('item', None)

-     item_type = arg_data.get('type', None)

+     item = arg_data['item']

+     item_type = arg_data['type']

+     arch = arg_data['arch']

  

-     if not env['distro']:

+     if playbook_vars['taskotron_match_host_distro']:

          if item_type == 'koji_build':

              # FIXME: find a way to make this not Fedora-specific

              # For `xchat-2.8.8-21.fc20` disttag is `fc20` for example

              try:

                  distro = rpm_utils.get_dist_tag(item)[:2]

-                 env['distro'] = {'fc': 'fedora'}.get(distro, None)

+                 env['distro'] = {'fc': 'fedora'}.get(distro)

              except exc.TaskotronValueError:

-                 env['distro'] = None

+                 log.debug('Failed to parse distro from koji build %s, using '

+                     'default', item)

  

          elif item_type == 'koji_tag':

              if re.match(r'^f[0-9]{2}-.*', item):

                  env['distro'] = 'fedora'

  

-         if env['distro']:

-             log.debug("Environment/distro overriden with ENVVAR from %r:%r to %r",

-                       item_type, item, env['distro'])

-         else:

-             log.debug("Environment/distro can not be inferred from %r:%r. Using default.",

-                       item_type, item)

- 

-     if not env['release']:

+     if playbook_vars['taskotron_match_host_release']:

          if item_type == 'koji_build':

              # FIXME: find a way to make this not Fedora-specific

              # Last two characters in rpm's disttag are the Fedora release.
@@ -61,28 +58,24 @@ 

              try:

                  env['release'] = rpm_utils.get_dist_tag(item)[-2:]

              except exc.TaskotronValueError:

-                 env['release'] = None

+                 log.debug('Failed to parse release from koji build %s, using '

+                     'default', item)

  

          elif item_type == 'koji_tag':

              if re.match(r'^f[0-9]{2}-.*', item):

                  env['release'] = item[1:3]

  

-         if env['release']:

-             log.debug("Environment/release inferred from %r:%r to %r",

-                       item_type, item, env['release'])

-         else:

-             log.debug("Environment/release can not be inferred from %r:%r. Using default.",

-                       item_type, item)

- 

-     if not env['flavor']:

-         log.debug("Environment/flavor not specified. Using default")

- 

-     if not env['arch']:

-         arch = arg_data.get('arch')

-         if not arch or arch == 'noarch':

-             log.warn("Environment/arch can not be inferred from %r. Using default.", arch)

-         else:

+     if playbook_vars['taskotron_match_host_arch']:

+         if arch != 'noarch':

              env['arch'] = arch

-             log.debug("Environment/arch inferred from %r to %r", arch, env['arch'])

+ 

+     log.debug('Forced environment values: %s', env)

+ 

+     env['distro'] = env['distro'] or config.get_config().default_disposable_distro

+     env['release'] = env['release'] or config.get_config().default_disposable_release

+     env['flavor'] = env['flavor'] or config.get_config().default_disposable_flavor

+     env['arch'] = env['arch'] or config.get_config().default_disposable_arch

+ 

+     log.debug('Environment to be used: %s', env)

  

      return env

file modified
+10 -4
@@ -41,6 +41,9 @@ 

              'type': 'koji_build',

              'arch': 'noarch',

              'debug': False,

+             'local': True,

+             'libvirt': False,

+             'ssh': False,

              'ssh_privkey': None,

          }

          self.playbook_name = 'tests.yml'
@@ -48,6 +51,8 @@ 

          self.playbook.write(PLAYBOOK)

          self.ipaddr = '127.0.0.1'

          self.executor = executor.Executor(self.arg_data)

+         self.playbook_vars = self.executor._create_playbook_vars(

+             self.playbook_name)

  

          monkeypatch.setattr(config, '_config', None)

          self.conf = config.get_config()
@@ -65,13 +70,14 @@ 

  

      def test_run_playbook_simple(self, monkeypatch):

          '''Execute a very simple playbook whether everything works'''

+         # don't override Ctrl+C during testing

          mock_signal = mock.Mock()

          monkeypatch.setattr(signal, 'signal', mock_signal)

+         # it's non-default, but we can't run the test suite as root

+         self.playbook_vars['become_root'] = False

  

-         # the tradeoff here is that we need to run with root=False

-         # (non-default)

-         output, _ = self.executor._run_playbook(self.playbook_name,

-             self.ipaddr, root=False)

+         output = self.executor._run_playbook(self.playbook_name,

+             self.ipaddr, self.playbook_vars)

          # FIXME: We currently have no idea whether the test playbook passed

          # or failed, because we ignore the inner playbook's exit status. So

          # we can't really verify here whether everything worked ok.

file modified
+141 -34
@@ -11,6 +11,7 @@ 

  import os

  import json

  import subprocess

+ import yaml

  

  from libtaskotron import executor

  import libtaskotron.exceptions as exc
@@ -66,6 +67,8 @@ 

          self.playbook.write(PLAYBOOK)

          self.ipaddr = '127.0.0.1'

          self.executor = executor.Executor(self.arg_data)

+         self.playbook_vars = self.executor._create_playbook_vars(

+             self.playbook_name)

  

          monkeypatch.setattr(config, '_config', None)

          self.conf = config.get_config()
@@ -118,32 +121,26 @@ 

  

      def test_run_playbook(self, monkeypatch):

          '''A standard invocation of ansible-playbook'''

-         mock_check_syntax = mock.Mock()

-         monkeypatch.setattr(executor.Executor, '_check_playbook_syntax',

-             mock_check_syntax)

          mock_signal = mock.Mock()

          monkeypatch.setattr(signal, 'signal', mock_signal)

          mock_popen = mock.Mock(return_value=('fake output', None))

          monkeypatch.setattr(os_utils, 'popen_rt', mock_popen)

  

-         output, _ = self.executor._run_playbook(self.playbook_name,

-             self.ipaddr)

+         output = self.executor._run_playbook(self.playbook_name,

+             self.ipaddr, self.playbook_vars)

  

          # must return playbook output

          assert output == 'fake output'

  

-         # must check syntax

-         mock_check_syntax.assert_called_once()

- 

          # must mask signals

          assert mock_signal.call_count == 4  # 2 signals masked, then reset

  

          # must export ansible vars

          varsfile = os.path.join(self.artifactsdir.strpath, self.playbook_name,

-             'taskotron', 'ansible_vars.json')

-         varsfile_internal = os.path.join(self.artifactsdir.strpath,

-             self.playbook_name, 'taskotron', 'ansible_vars_internal.json')

-         for vf in [varsfile, varsfile_internal]:

+             'taskotron', 'task_vars.json')

+         allvarsfile = os.path.join(self.artifactsdir.strpath,

+             self.playbook_name, 'taskotron', 'internal_vars.json')

+         for vf in [varsfile, allvarsfile]:

              assert os.path.isfile(vf)

              with open(vf, 'r') as f:

                  json.load(f)
@@ -153,16 +150,12 @@ 

          cmd_args = mock_popen.call_args[0][0]

          assert cmd_args[0] == 'ansible-playbook'

          assert '--inventory={},'.format(self.ipaddr) in cmd_args

-         assert '--extra-vars=@{}'.format(varsfile) in cmd_args

-         assert '--extra-vars=@{}'.format(varsfile_internal) in cmd_args

+         assert '--extra-vars=@{}'.format(allvarsfile) in cmd_args

          assert '--become' in cmd_args

          assert '--connection=local' in cmd_args

  

      def test_run_playbook_failed(self, monkeypatch):

          '''Should raise when ansible-playbook fails'''

-         mock_check_syntax = mock.Mock()

-         monkeypatch.setattr(executor.Executor, '_check_playbook_syntax',

-             mock_check_syntax)

          mock_signal = mock.Mock()

          monkeypatch.setattr(signal, 'signal', mock_signal)

          cpe = subprocess.CalledProcessError(returncode=99, cmd='fake')
@@ -170,7 +163,8 @@ 

          monkeypatch.setattr(os_utils, 'popen_rt', mock_popen)

  

          with pytest.raises(exc.TaskotronError):

-             self.executor._run_playbook(self.playbook_name, self.ipaddr)

+             self.executor._run_playbook(self.playbook_name, self.ipaddr,

+                 self.playbook_vars)

  

          # must unmask signals even when playbook failed

          assert mock_signal.call_count == 4  # 2 signals masked, then reset
@@ -180,9 +174,6 @@ 

  

      def test_run_playbook_interrupted(self, monkeypatch):

          '''Should try failsafe stop when interrupted'''

-         mock_check_syntax = mock.Mock()

-         monkeypatch.setattr(executor.Executor, '_check_playbook_syntax',

-             mock_check_syntax)

          mock_signal = mock.Mock()

          monkeypatch.setattr(signal, 'signal', mock_signal)

          error = exc.TaskotronInterruptError(15, 'SIGTERM')
@@ -190,7 +181,8 @@ 

          monkeypatch.setattr(os_utils, 'popen_rt', mock_popen)

  

          with pytest.raises(exc.TaskotronInterruptError):

-             self.executor._run_playbook(self.playbook_name, self.ipaddr)

+             self.executor._run_playbook(self.playbook_name, self.ipaddr,

+                 self.playbook_vars)

  

          # must unmask signals even when playbook failed

          assert mock_signal.call_count == 4  # 2 signals masked, then reset
@@ -202,9 +194,7 @@ 

  

      def test_run_playbook_failsafe_error(self, monkeypatch):

          '''When failsafe stop gives an error, it should be ignored'''

-         mock_check_syntax = mock.Mock()

-         monkeypatch.setattr(executor.Executor, '_check_playbook_syntax',

-             mock_check_syntax)

+         # don't override Ctrl+C during testing

          mock_signal = mock.Mock()

          monkeypatch.setattr(signal, 'signal', mock_signal)

          error = exc.TaskotronInterruptError(15, 'SIGTERM')
@@ -213,10 +203,72 @@ 

          monkeypatch.setattr(os_utils, 'popen_rt', mock_popen)

  

          with pytest.raises(exc.TaskotronInterruptError):

-             self.executor._run_playbook(self.playbook_name, self.ipaddr)

+             self.executor._run_playbook(self.playbook_name, self.ipaddr,

+                 self.playbook_vars)

+ 

+     def test_run_playbook_forwarded_vars(self, monkeypatch):

+         '''Only certain vars should get forwarded to the task playbook, no others'''

+         # don't override Ctrl+C during testing

+         mock_signal = mock.Mock()

+         monkeypatch.setattr(signal, 'signal', mock_signal)

+         mock_popen = mock.Mock(return_value=('fake output', None))

+         monkeypatch.setattr(os_utils, 'popen_rt', mock_popen)

+ 

+         self.executor._run_playbook(self.playbook_name,

+             self.ipaddr, self.playbook_vars)

+         varsfile = os.path.join(self.artifactsdir.strpath, self.playbook_name,

+             'taskotron', 'task_vars.json')

+ 

+         assert os.path.isfile(varsfile)

+         with open(varsfile, 'r') as vf:

+             vars_ = json.load(vf)

+ 

+         for var in executor.Executor.FORWARDED_VARS:

+             assert var in vars_

+         assert len(vars_) == len(executor.Executor.FORWARDED_VARS)

+ 

+     def test_get_client_ipaddr_local(self):

+         '''Local execution'''

+         self.arg_data['local'] = True

+         self.arg_data['libvirt'] = False

+         self.arg_data['ssh'] = False

+         self.executor = executor.Executor(self.arg_data)

+ 

+         ipaddr = self.executor._get_client_ipaddr()

+ 

+         assert ipaddr == '127.0.0.1'

+         assert self.executor.run_remotely == False

+ 

+     def test_get_client_ipaddr_libvirt(self):

+         '''Libvirt execution'''

+         self.arg_data['local'] = False

+         self.arg_data['libvirt'] = True

+         self.arg_data['ssh'] = False

+         self.executor = executor.Executor(self.arg_data)

+ 

+         ipaddr = self.executor._get_client_ipaddr()

+ 

+         assert ipaddr == None

+         assert self.executor.run_remotely == True

+ 

+     def test_get_client_ipaddr_ssh(self):

+         '''Ssh execution'''

+         self.arg_data['local'] = False

+         self.arg_data['libvirt'] = False

+         self.arg_data['ssh'] = True

+         self.arg_data['machine'] = '127.0.0.2'

+         self.executor = executor.Executor(self.arg_data)

+ 

+         ipaddr = self.executor._get_client_ipaddr()

+ 

+         assert ipaddr == '127.0.0.2'

+         assert self.executor.run_remotely == True

  

      def test_execute_local(self, monkeypatch):

          '''Execution using local mode'''

+         mock_check_syntax = mock.Mock()

+         monkeypatch.setattr(executor.Executor, '_check_playbook_syntax',

+             mock_check_syntax)

          mock_spawn_vm = mock.Mock()

          monkeypatch.setattr(executor.Executor, '_spawn_vm', mock_spawn_vm)

          mock_run_playbook = mock.Mock(return_value=(None, True))
@@ -225,11 +277,13 @@ 

          mock_report_results = mock.Mock()

          monkeypatch.setattr(executor.Executor, '_report_results',

              mock_report_results)

-         assert self.executor.arg_data['local'] == True

+         self.executor.ipaddr = '127.0.0.1'

+         self.executor.run_remotely = False

  

          success = self.executor.execute()

  

          assert success == True

+         mock_check_syntax.assert_called_once()

          mock_spawn_vm.assert_not_called()

          mock_run_playbook.assert_called_once()

          assert mock_run_playbook.call_args[0][1] == '127.0.0.1'
@@ -237,6 +291,9 @@ 

  

      def test_execute_ssh(self, monkeypatch):

          '''Execution using ssh mode'''

+         mock_check_syntax = mock.Mock()

+         monkeypatch.setattr(executor.Executor, '_check_playbook_syntax',

+             mock_check_syntax)

          mock_spawn_vm = mock.Mock()

          monkeypatch.setattr(executor.Executor, '_spawn_vm', mock_spawn_vm)

          mock_run_playbook = mock.Mock(return_value=(None, True))
@@ -245,13 +302,13 @@ 

          mock_report_results = mock.Mock()

          monkeypatch.setattr(executor.Executor, '_report_results',

              mock_report_results)

-         self.executor.arg_data['local'] = False

-         self.executor.arg_data['ssh'] = True

-         self.executor.arg_data['machine'] = '127.0.0.2'

+         self.executor.ipaddr = '127.0.0.2'

+         self.executor.run_remotely = True

  

          success = self.executor.execute()

  

          assert success == True

+         mock_check_syntax.assert_called_once()

          mock_spawn_vm.assert_not_called()

          mock_run_playbook.assert_called_once()

          assert mock_run_playbook.call_args[0][1] == '127.0.0.2'
@@ -259,6 +316,9 @@ 

  

      def test_execute_libvirt(self, monkeypatch):

          '''Execution using libvirt mode'''

+         mock_check_syntax = mock.Mock()

+         monkeypatch.setattr(executor.Executor, '_check_playbook_syntax',

+             mock_check_syntax)

          mock_spawn_vm = mock.Mock(return_value='127.0.0.3')

          monkeypatch.setattr(executor.Executor, '_spawn_vm', mock_spawn_vm)

          mock_run_playbook = mock.Mock(return_value=(None, True))
@@ -269,12 +329,13 @@ 

              mock_report_results)

          mock_task_vm = mock.Mock()

          self.executor.task_vm = mock_task_vm

-         self.executor.arg_data['local'] = False

-         self.executor.arg_data['libvirt'] = True

+         self.executor.ipaddr = None

+         self.executor.run_remotely = True

  

          success = self.executor.execute()

  

          assert success == True

+         mock_check_syntax.assert_called_once()

          mock_spawn_vm.assert_called_once()

          mock_run_playbook.assert_called_once()

          assert mock_run_playbook.call_args[0][1] == '127.0.0.3'
@@ -283,6 +344,9 @@ 

  

      def test_execute_no_playbooks(self, monkeypatch):

          '''Should raise when there are no playbooks'''

+         mock_check_syntax = mock.Mock()

+         monkeypatch.setattr(executor.Executor, '_check_playbook_syntax',

+             mock_check_syntax)

          mock_run_playbook = mock.Mock()

          monkeypatch.setattr(executor.Executor, '_run_playbook',

              mock_run_playbook)
@@ -294,11 +358,15 @@ 

          with pytest.raises(exc.TaskotronError):

              self.executor.execute()

  

+         mock_check_syntax.assert_not_called()

          mock_run_playbook.assert_not_called()

          mock_report_results.assert_not_called()

  

      def test_execute_more_playbooks(self, monkeypatch):

          '''Should execute all found playbooks'''

+         mock_check_syntax = mock.Mock()

+         monkeypatch.setattr(executor.Executor, '_check_playbook_syntax',

+             mock_check_syntax)

          mock_run_playbook = mock.Mock(return_value=(None, True))

          monkeypatch.setattr(executor.Executor, '_run_playbook',

              mock_run_playbook)
@@ -310,6 +378,7 @@ 

          success = self.executor.execute()

  

          assert success == True

+         assert mock_check_syntax.call_count == 2

          assert mock_run_playbook.call_count == 2

          playbooks = [

              mock_run_playbook.call_args_list[0][0][0],
@@ -327,6 +396,9 @@ 

  

      def test_execute_error(self, monkeypatch):

          '''Should raise on playbook errors'''

+         mock_check_syntax = mock.Mock()

+         monkeypatch.setattr(executor.Executor, '_check_playbook_syntax',

+             mock_check_syntax)

          mock_run_playbook = mock.Mock(side_effect=exc.TaskotronError)

          monkeypatch.setattr(executor.Executor, '_run_playbook',

              mock_run_playbook)
@@ -337,11 +409,15 @@ 

          success = self.executor.execute()

  

          assert success == False

+         mock_check_syntax.assert_called_once()

          mock_run_playbook.assert_called_once()

          mock_report_results.assert_not_called()

  

      def test_execute_more_playbooks_error(self, monkeypatch):

          '''Should execute all found playbooks even if some has error'''

+         mock_check_syntax = mock.Mock()

+         monkeypatch.setattr(executor.Executor, '_check_playbook_syntax',

+             mock_check_syntax)

          mock_run_playbook = mock.Mock(side_effect=[

              exc.TaskotronError,

              (None, True)])
@@ -355,6 +431,7 @@ 

          success = self.executor.execute()

  

          assert success == False

+         assert mock_check_syntax.call_count == 2

          assert mock_run_playbook.call_count == 2

          playbooks = [

              mock_run_playbook.call_args_list[0][0][0],
@@ -371,6 +448,9 @@ 

      def test_execute_sti_no_report(self, monkeypatch):

          '''Shouldn't try to report results for STI tasks'''

          self.playbook.write(PLAYBOOK_STI)

+         mock_check_syntax = mock.Mock()

+         monkeypatch.setattr(executor.Executor, '_check_playbook_syntax',

+             mock_check_syntax)

          mock_spawn_vm = mock.Mock()

          monkeypatch.setattr(executor.Executor, '_spawn_vm', mock_spawn_vm)

          mock_run_playbook = mock.Mock(return_value=(None, False))
@@ -383,6 +463,7 @@ 

          success = self.executor.execute()

  

          assert success == True

+         mock_check_syntax.assert_called_once()

          mock_spawn_vm.assert_not_called()

          mock_run_playbook.assert_called_once()

          assert mock_run_playbook.call_args[0][1] == '127.0.0.1'
@@ -390,6 +471,9 @@ 

  

      def test_execute_interrupted(self, monkeypatch):

          '''Should halt execution and return when interrupted'''

+         mock_check_syntax = mock.Mock()

+         monkeypatch.setattr(executor.Executor, '_check_playbook_syntax',

+             mock_check_syntax)

          error = exc.TaskotronInterruptError(15, 'SIGTERM')

          mock_run_playbook = mock.Mock(side_effect=error)

          monkeypatch.setattr(executor.Executor, '_run_playbook',
@@ -402,6 +486,7 @@ 

          success = self.executor.execute()

  

          assert success == False

+         mock_check_syntax.assert_called_once()

          mock_run_playbook.assert_called_once()

          mock_report_results.assert_not_called()

  
@@ -416,7 +501,7 @@ 

          vm_instance.ipaddr = '10.11.12.13'

          monkeypatch.setattr(vm, 'TestCloudMachine', mock_vm)

  

-         ipaddr = self.executor._spawn_vm(None)

+         ipaddr = self.executor._spawn_vm(None, self.playbook_vars)

          assert ipaddr == '10.11.12.13'

          assert mock_vm_prepare.call_count == 2

  
@@ -428,6 +513,28 @@ 

          monkeypatch.setattr(vm, 'TestCloudMachine', mock_vm)

  

          with pytest.raises(exc.TaskotronMinionError):

-             self.executor._spawn_vm(None)

+             self.executor._spawn_vm(None, self.playbook_vars)

  

          assert mock_vm_prepare.call_count == self.conf.spawn_vm_retries

+ 

+     def test_load_playbook_accepted_vars(self):

+         '''All accepted vars must be loaded and none other'''

+         playbook = yaml.safe_load(PLAYBOOK)[0]

+         playbook['vars'].clear()

+         # add all accepted

+         for var in executor.Executor.ACCEPTED_VARS:

+             playbook['vars'][var] = 'This is ' + var

+         # add unaccepted starting with taskotron_

+         playbook['vars']['taskotron_unaccepted'] = 'ignore me'

+         # add unaccepted generic

+         playbook['vars']['unaccepted'] = 'ignore me'

+         self.playbook.remove()

+         self.playbook.write(yaml.safe_dump([playbook]))

+ 

+         vars_ = self.executor._load_playbook_vars(self.playbook_name)

+ 

+         for var in executor.Executor.ACCEPTED_VARS:

+             assert var in vars_

+         assert 'taskotron_unaccepted' not in vars_

+         assert 'unaccepted' not in vars_

+         assert len(vars_) == len(executor.Executor.ACCEPTED_VARS)

file modified
+69 -12
@@ -5,36 +5,93 @@ 

  

  '''Unit tests for libtaskotron/image_utils.py'''

  

+ import pytest

+ 

  from libtaskotron.image_utils import devise_environment

+ from libtaskotron import config

  

  class TestDeviseEnvironment:

  

      def setup_method(self, method):

          self.arg_data = {

-             'item': 'htop-2.0.2-4.fc27',

+             'item': 'htop-2.0.2-4.fc20',

              'type': 'koji_build',

-             'arch': 'noarch',

+             'arch': 'i686',

+         }

+         self.playbook_vars = {

+             'taskotron_match_host_distro': True,

+             'taskotron_match_host_release': True,

+             'taskotron_match_host_arch': True,

          }

+         self.cfg = config.get_config()

  

      def test_koji_build(self):

-         env = devise_environment(self.arg_data)

+         env = devise_environment(self.arg_data, self.playbook_vars)

          assert env['distro'] == 'fedora'

-         assert env['release'] == '27'

-         assert env['arch'] == None

+         assert env['release'] == '20'

+         assert env['arch'] == self.arg_data['arch']

  

      def test_koji_tag(self):

          self.arg_data = {

-             'item': 'f27-updates-pending',

+             'item': 'f20-updates-pending',

              'type': 'koji_tag',

-             'arch': 'x86_64',

+             'arch': 'i686',

          }

-         env = devise_environment(self.arg_data)

+         env = devise_environment(self.arg_data, self.playbook_vars)

          assert env['distro'] == 'fedora'

-         assert env['release'] == '27'

+         assert env['release'] == '20'

          assert env['arch'] == self.arg_data['arch']

  

      def test_unknown_distro(self):

          self.arg_data['item'] = 'htop-2.0.2-4.el7'

-         env = devise_environment(self.arg_data)

-         assert env['distro'] == None

-         assert env['release'] == None

+         env = devise_environment(self.arg_data, self.playbook_vars)

+         assert env['distro'] == self.cfg.default_disposable_distro

+         assert env['release'] == self.cfg.default_disposable_release

+         assert env['arch'] == self.arg_data['arch']

+ 

+     def test_match_host_arch_true(self):

+         env = devise_environment(self.arg_data, self.playbook_vars)

+         assert env['distro'] == 'fedora'

+         assert env['release'] == '20'

+         assert env['arch'] == self.arg_data['arch']

+ 

+     def test_match_host_arch_false(self):

+         self.playbook_vars['taskotron_match_host_arch'] = False

+         env = devise_environment(self.arg_data, self.playbook_vars)

+         assert env['distro'] == 'fedora'

+         assert env['release'] == '20'

+         assert env['arch'] == self.cfg.default_disposable_arch

+ 

+     def test_match_host_release_true(self):

+         env = devise_environment(self.arg_data, self.playbook_vars)

+         assert env['distro'] == 'fedora'

+         assert env['release'] == '20'

+         assert env['arch'] == self.arg_data['arch']

+ 

+     def test_match_host_release_false(self):

+         self.playbook_vars['taskotron_match_host_release'] = False

+         env = devise_environment(self.arg_data, self.playbook_vars)

+         assert env['distro'] == 'fedora'

+         assert env['release'] == self.cfg.default_disposable_release

+         assert env['arch'] == self.arg_data['arch']

+ 

+     @pytest.mark.parametrize('match_arch', [True, False])

+     def test_noarch_to_default(self, match_arch):

+         self.arg_data['arch'] = 'noarch'

+         self.playbook_vars['taskotron_match_host_arch'] = match_arch

+         env = devise_environment(self.arg_data, self.playbook_vars)

+         assert env['distro'] == 'fedora'

+         assert env['release'] == '20'

+         assert env['arch'] == self.cfg.default_disposable_arch

+ 

+     def test_no_disttag(self):

+         self.arg_data['item'] = 'htop-2.0.2-4'

+         env = devise_environment(self.arg_data, self.playbook_vars)

+         assert env['distro'] == 'fedora'

+         assert env['release'] == self.cfg.default_disposable_release

+         assert env['arch'] == self.arg_data['arch']

+ 

+     def test_flavor(self):

+         '''Flavor is always the default one'''

+         env = devise_environment(self.arg_data, self.playbook_vars)

+         assert env['flavor'] == self.cfg.default_disposable_flavor

file modified
+4 -2
@@ -28,7 +28,8 @@ 

          test_vm = vm.TestCloudMachine(self.ref_uuid)

  

          with pytest.raises(exc.TaskotronImageError):

-             test_vm._prepare_image()

+             test_vm._prepare_image(distro=None, release=None, flavor=None,

+                 arch=None)

  

      def should_behave_on_success(self, monkeypatch):

          stub_image = MagicMock()
@@ -38,7 +39,8 @@ 

  

          test_vm = vm.TestCloudMachine(self.ref_uuid)

  

-         test_vm._prepare_image()

+         test_vm._prepare_image(distro=None, release=None, flavor=None,

+             arch=None)

  

  

  class TestvmInstancePrepare(object):

Playbooks can now contain these variables for the devise_environment function:
- taskotron_match_host_distro
- taskotron_match_host_release
- taskotron_match_host_arch
If not present, the values default to False and the default env/{distro,release,arch} values are used instead.
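
For illustration, here is a condensed sketch of how these flags are meant to drive devise_environment, based on the image_utils.py hunks above. The disttag-parsing branches are elided into comments; config.get_config() and the default_disposable_* options are the real libtaskotron ones shown in the diff.

    from libtaskotron import config

    def devise_environment(arg_data, playbook_vars):
        '''Sketch: pick distro/release/arch for the disposable client, falling
        back to the configured defaults when matching is not requested.'''
        env = {'distro': None, 'release': None, 'flavor': None, 'arch': None}

        if playbook_vars['taskotron_match_host_distro']:
            pass  # infer env['distro'] from arg_data['item'], e.g. disttag 'fc20' -> 'fedora'
        if playbook_vars['taskotron_match_host_release']:
            pass  # infer env['release'] from the disttag, e.g. 'fc20' -> '20'
        if playbook_vars['taskotron_match_host_arch'] and arg_data['arch'] != 'noarch':
            env['arch'] = arg_data['arch']

        # anything not forced above falls back to the configured defaults
        cfg = config.get_config()
        env['distro'] = env['distro'] or cfg.default_disposable_distro
        env['release'] = env['release'] or cfg.default_disposable_release
        env['flavor'] = env['flavor'] or cfg.default_disposable_flavor
        env['arch'] = env['arch'] or cfg.default_disposable_arch
        return env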

Please have a look at the "Implement match_host_* in libtaskotron" section of the design document I shared with you via google-drive, and change the code accordingly. This is a good start, though.

rebased onto 6ccbcc2

5 years ago

Why not use a function that returns a dict() instead? Seems like a cannon to kill a fly. Not that there is something functionally wrong with it.

I'd rather see the playbook.get('vars', {}) part separated out into something like:

    vars = playbook.get('vars', {})

Honestly this whole thing is a bit repetitive, and could be done like:

    vars = playbook.get('vars', {})
    for var in ['taskotron_match_host_distro', 'taskotron_match_host_release', ...]:
        setattr(self, var, vars.get(var, False))
        log.info("Variable %s is %s", var, getattr(self, var))

Not saying this is necessarily better, though

Please also remove the default None values from the method signature.

I'm fairly certain we could just get rid of all the log.debug calls, and just replace all of those with something like:

log.debug('Inferred data: %s', env)

Here and

log.debug('Devised environmnet: %s', env)

just before the return statement

1 new commit added

  • rework lbrabec's patch
5 years ago

rework lbrabec's patch

It uses dict for data management now. I used the opportunity to
restructure it a bit, all available variables are now documented in
executor.py. The variables are saved into a file for debugging, including
the new ones. There's a new method that takes care of parsing variables
and populating the dictionary, keeping the logic in one place. The
number of debug printouts was reduced where they seemed unnecessary.
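
To sketch that flow as one standalone piece (not the verbatim patch: the helper name and the Python 3 makedirs call are mine; the real code lives in _run_playbook and uses file_utils.makedirs):

    import json
    import os

    def save_playbook_vars(playbook_vars, forwarded_vars):
        '''Write the forwarded subset for the task playbook (task_vars.json by
        default) and the full dict for debugging (internal_vars.json).'''
        vars_dir = os.path.join(playbook_vars['artifacts'], 'taskotron')
        os.makedirs(vars_dir, exist_ok=True)  # the patch uses file_utils.makedirs

        # only the whitelisted subset is exposed to the task playbook
        varsfile = os.path.join(vars_dir, playbook_vars['varsfile'])
        with open(varsfile, 'w') as vf:
            json.dump({var: playbook_vars[var] for var in forwarded_vars},
                      vf, indent=2, sort_keys=True)

        # the full dict is kept for the runner playbook and for debugging
        allvarsfile = os.path.join(vars_dir, 'internal_vars.json')
        with open(allvarsfile, 'w') as vf:
            json.dump(playbook_vars, vf, indent=2, sort_keys=True)

        return varsfile, allvarsfile

runner.yml then receives the full dict via --extra-vars=@<internal_vars.json>, while the task playbook only ever sees the forwarded file.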

A bit unrelated change is removing root=bool support to run the playbook
as a standard user. It was a non-complete feature (not hooked up to
cmdline args), never used, very probably broken. I didn't want to spend
time to hook it up properly to the updated workflow. We can add it back
later if needed.

Please note that testsuite is broken at the moment, will fix it when I
get ack on the patch.

Please move the comments to the end of the line. This is really unreadable. I also do not really understand the change. Why move the "docs" from the location where one would actually look for them to some random source file?

IMO this also makes it a fair bit harder to reconfigure these. Previously, this was a 'static file' provided by the libtaskotron package. Now the changes (e.g. heartbeat_interval) need to be made in a much nastier way, by hacking the source code, should one want to change them.

It's long, yes, but it made little sense to me to have all the vars defaults in executor.py, and then have most of them (but not all of them) duplicated in runner.yml together with their descriptions. It would get out of sync all the time. I can move it to the end of the file, if you prefer, sure.

I don't follow the "changes need to be done nastier" comment. The heartbeat_interval wasn't configurable before, and it still isn't. If we want it configurable, we can expose it in taskotrong config. I doubt it's useful, though.

Some of the "default" values are also absolutely disregarded (caught varsfile on the first sight, decided not to go check all the others).

Good catch, will fix varsfile.

1 new commit added

  • move vars template to end of file, use varsfile default
5 years ago

move vars template to end of file, use varsfile default

@kparal - yes, the value was not in config.py, but if you wanted to change it, the default was not "hidden" inside the source code, but in a pretty manageable place in /usr/share/libtaskotron/ansible. Say we wanted to change some of the default values in our deployment. The only way would be either releasing a new version, or hot-fixing executor.py. In the previous state, one could simply use ansible+template to change the runner.yml file.

Please also check all the other variables, while you are at it.

Not sure which are the "allowed" taskotron_* values in the playbook, but it might make sense to do this after the default values are set.

Please add a debug log which contains only the actually "devised" data, before the defaults are applied.

Why not move this whole thing to the top, and just set the distro/release/flavor/arch from the defaults to begin with?

I'd like to see the possible KeyError handled.

I don't really agree with anything in /usr/share being "configurable". If you need to hotpatch something, you can do it the same way with a yaml file or a python file. I don't really see a difference; it's a source code change in both cases, just a different file format. Also, the only thing you could really do this way previously was change the heartbeat/keepalive values (everything else got overridden). Is it that important to be able to hotfix the heartbeat default in yaml instead of python to discuss this topic?

It doesn't raise KeyError; None is the default.

Is it that important to be able to hotfix the heartbeat default in yaml instead of python to discuss this topic?

It is a poster child for why I don't like the change. So, yes it is important to discuss it :)

Currently you see a printout every time a value is forced ("devised", if you want to call it that). In the end, anything that hasn't been forced is a default value. It seemed quite clear to me. Is it not?

Also, I'm confused by your previous comment. If I move the 4 lines to the top and set them as default values right away, how do you want me to print the values before the defaults are applied?

Sorry, this comment is displayed at the top of vm.py. I have no idea which method you're referring to.

Not sure which are the "allowed" taskotron_* values in the playbook, but it might make sense to do this after the default values are set.

Does this refer to the beginning of _create_playbook_vars? I even added a comment:

        # load all provided taskotron_* vars first, so that they don't override
        # our logic later on

This needs to be done as the first thing, otherwise people could override some vars we expose to them later. See
https://qa.fedoraproject.org/docs/libtaskotron/latest/writingtasks.html#special-task-variables

The alternative approach is to list which variables we read from tests.yml and read only those, instead of taskotron_*. But that means having yet another list to maintain, and this seemed easier and pretty safe.
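
For the record, the merged version of the patch went with exactly such an explicit list (ACCEPTED_VARS in the executor.py hunks above). A minimal standalone sketch of that filtering, assuming PyYAML:

    import yaml

    ACCEPTED_VARS = [
        'taskotron_generic_task',
        'taskotron_keepalive_minutes',
        'taskotron_match_host_arch',
        'taskotron_match_host_distro',
        'taskotron_match_host_release',
    ]

    def load_playbook_vars(playbook_path):
        '''Return only the whitelisted vars from the first play of tests.yml.'''
        with open(playbook_path) as playbook_file:
            playbook = yaml.safe_load(playbook_file)
        play_vars = playbook[0].get('vars', {})  # only the first play is considered
        return {var: val for var, val in play_vars.items()
                if var in ACCEPTED_VARS}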

It is a poster child for why I don't like the change. So, yes it is important to discuss it :)

Well then tell me what you'd like to see changed, in a generic fashion. I don't understand it currently. You're not complaining about not being able to change e.g. arg_data in the middle of the execution. Why should you be able to change ansible data? And why is moving some (a few!) of the defaults into the playbook file worth the increased complexity (more logic), worse readability (definitions scattered in several places) and increased maintenance (vars going out of sync in different places)?

1 new commit added

  • restructure per review comments
5 years ago

1 new commit added

  • moar changes
5 years ago

5 new commits added

  • moar changes
  • restructure per review comments
  • move vars template to end of file, use varsfile default
  • rework lbrabec's patch
  • variables for env matching added
5 years ago

Since we both ended up dissatisfied, I think this is a worthy compromise :-D

Fine to merge, once the testsuite is updated.

1 new commit added

  • add tests. Also add back option to run non-root, since tests need it
5 years ago

1 new commit added

  • trivial tests improvements
5 years ago

Functionally, this probably works as intended. Code looks good. Vast amounts of yaks could be shaved, but I've been unreasonable enough for a patch of this insignificance already.

THX for all the relevant changes!

Commit 1f8884b fixes this pull-request

Pull-Request has been merged by kparal

5 years ago

And this is now live on dev, and working :-)