#214 qcow2 inventory seems to break local_action
Closed: Fixed 5 years ago Opened 5 years ago by till.

The qcow2 inventory script creates a localhost group and puts the VM in there (not sure why). This seems to break the local_action plugin, since it makes Ansible run the action on the VM instead of locally. local_action seems to be needed to reboot a VM and check its availability afterwards. I need this to write a test that configures something in the system, reboots it and then checks if the change was properly applied.

 cat > 'reboot.yml' <<EOF
---
- hosts: all
  tasks:
  - shell: sleep 1 && /sbin/shutdown -r now "Ansible system package upgraded" && sleep 1
    async: 1
    poll: 0
    ignore_errors: yes

  - name: wait for server to come back
    local_action: wait_for
    args:
      host: "{{ ansible_host }}"
      port: "{{ ansible_port }}"
      state: started
      delay: 30
      timeout: 300

  - shell: id

EOF

Try delegate_to: localhost like this:

  tasks:
    - name: echo hostname localhost
      command: hostname
      delegate_to: localhost

For me, this prints my hostname, instead of the one of the test VM.

It does not work here:

cat > 'local_action.yml' <<EOF
---
- hosts: all
  tasks:
    - command: hostname
      delegate_to: localhost
      register: hostname_cmd

    - debug:
        var: hostname_cmd.stdout
EOF
ansible-playbook -i /tmp/inventory-cloudlzSNvU/inventory ./local_action.yml 

PLAY [all] ********************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [localhost]

TASK [command] ****************************************************************************************************************************************************************************************************
changed: [localhost -> 127.0.0.3]

TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
    "hostname_cmd.stdout": "ibm-p8-kvm-03-guest-02"
}

PLAY RECAP ***********************************************************************
cat /tmp/inventory-cloudlzSNvU/inventory
[subjects]
localhost ansible_ssh_common_args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' ansible_port='3721' ansible_ssh_private_key_file='/tmp/inventory-cloudlzSNvU/identity' ansible_python_interpreter='/usr/bin/python2' ansible_ssh_pass='foobar' ansible_user='root' ansible_host='127.0.0.3'

The inventory file is from the qcow2 inventory.

So I guess the debug inventory is broken since the actual inventory is more complex:

{
    "_meta": {
        "hostvars": {
            "/home/till/cloud-images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2": {
                "ansible_host": "127.0.0.3", 
                "ansible_port": "3460", 
                "ansible_python_interpreter": "/usr/bin/python3", 
                "ansible_ssh_common_args": "-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no", 
                "ansible_ssh_pass": "foobar", 
                "ansible_ssh_private_key_file": "/tmp/inventory-cloudTFHNTo/identity", 
                "ansible_user": "root"
            }
        }
    }, 
    "all": {
        "children": [
            "localhost", 
            "subjects", 
            "ungrouped"
        ]
    }, 
    "localhost": {
        "hosts": [
            "/home/till/cloud-images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2"
        ]
    }, 
    "subjects": {
        "hosts": [
            "/home/till/cloud-images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2"
        ]
    }, 
    "ungrouped": {}
}
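
To illustrate the group layout, here is a minimal standalone Python sketch (abridged hostvars; not part of the project) that checks which groups the VM belongs to in the JSON above. The point is that one of those groups is literally named `localhost`:

```python
import json

# Abridged copy of the dynamic inventory JSON emitted by the qcow2 script.
inventory = json.loads("""
{
    "_meta": {"hostvars": {
        "/home/till/cloud-images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2": {
            "ansible_host": "127.0.0.3"
        }
    }},
    "all": {"children": ["localhost", "subjects", "ungrouped"]},
    "localhost": {"hosts": ["/home/till/cloud-images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2"]},
    "subjects": {"hosts": ["/home/till/cloud-images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2"]},
    "ungrouped": {}
}
""")

vm = "/home/till/cloud-images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2"

# Collect every group whose "hosts" list contains the VM.
groups_with_vm = [name for name, body in inventory.items()
                  if name != "_meta" and vm in body.get("hosts", [])]
print(groups_with_vm)  # → ['localhost', 'subjects']
```

So the VM sits in a group named `localhost` in the dynamic inventory, while the debug inventory above instead names the host itself `localhost`, which is what shadows the implicit localhost that delegation targets.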

I have standard-test-roles-2.8-1.fc27 and ansible-2.5.2-1.fc27

So I guess the debug inventory is broken since the actual inventory is more complex:

What do you mean by "the debug inventory"?

With TEST_DEBUG=1 in the environment, the inventory script does not kill the VM but prints some instructions, including the path of an inventory file in /tmp. This inventory is different from the inventory that is created dynamically.

Yes, I can reproduce your issue by running the test with TEST_DEBUG=1 and then re-running it with the inventory generated in the first step. Thanks for the clarification.

@till you can use this approach:

- name: Add executor host
  add_host:
    name: real_localhost
    ansible_connection: local
    ansible_host: 127.0.0.1
    ansible_ssh_connection: local

Next, use:

tasks:
    - name: echo hostname localhost
      command: hostname
      delegate_to: real_localhost

or just, if you use beakerlib/basic role:

delegate_to: "{{ test_runner_inventory_name }}"
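
As a sketch only (assuming the `real_localhost` name from the add_host workaround above, and adapting the reboot playbook from the issue description), the whole flow could look like this:

```yaml
---
- hosts: all
  tasks:
    - name: Register a host that really is the executor
      add_host:
        name: real_localhost
        ansible_connection: local

    - shell: sleep 1 && /sbin/shutdown -r now "Ansible system package upgraded" && sleep 1
      async: 1
      poll: 0
      ignore_errors: yes

    - name: wait for server to come back
      wait_for:
        host: "{{ ansible_host }}"
        port: "{{ ansible_port }}"
        state: started
        delay: 30
        timeout: 300
      delegate_to: real_localhost

    - shell: id
```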

@astepano thank you. It seems to work with the dynamic inventory but not with the debug inventory. #215 fixes it for me. It creates the debug inventory like the dynamic one.

- name: Add executor host
  add_host:
    name: real_localhost

The dynamic inventory defines a localhost group, not a localhost host, so the inventory created by TEST_DEBUG should do the same. Having a different environment for debugging kind of defeats the purpose.
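
For comparison, this is a hypothetical sketch of what a debug inventory shaped like the dynamic one would look like (variables abridged; based on the two inventories shown above, not on the actual #215 output):

```ini
# Hypothetical debug inventory matching the dynamic inventory's shape:
# the qcow2 path is the host name, and "localhost" is a group, not a host.
[subjects]
/home/till/cloud-images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2 ansible_host='127.0.0.3' ansible_port='3721' ansible_user='root'

[localhost]
/home/till/cloud-images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2
```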

(Why does there have to be a localhost group, by the way? I don't see anything in the standard test interface that requires this, and even if there were, it would IMO be a bad design and should be changed.)
