How do I register a variable to persist between plays in Ansible?


Deployment Problem Overview


I have an ansible playbook, where I'd like a variable I register on one machine to be available on another.

In my case, I'd like to run a command on localhost, in this case git rev-parse --abbrev-ref HEAD, so I can make a note of the current git branch and SHA1, and register this output so I can refer to it later when working with any machine in the main group, in the second play.

However, it's not clear to me how to register a variable on localhost so that I can access it from main. When I try to access the variable in the second play, I get this message:

    TASK: [debug msg={{ app_git_sha1.stdout }}] ***********************************
    fatal: [main] => One or more undefined variables: 'app_git_sha1' is undefined

Here's the playbook I'm using. Is there anything obvious I should be doing differently?

    ---
    - hosts: localhost
      connection: local
      gather_facts: no
      tasks:
        - name: register current branch
          command: git rev-parse --abbrev-ref HEAD
          register: git_branch
          sudo: no
          when: vagrant
          tags:
            - debugsha

        - debug: msg={{ git_branch.stdout }}
          tags:
            - debugsha

        - name: register the SHA1 of the branch being deployed
          command: git rev-parse origin/{{ git_branch.stdout }}
          register: app_git_sha1
          sudo: no
          tags:
            - slack
            - debugsha

        - debug: msg={{ app_git_sha1.stdout }}
          tags:
            - debugsha



    - hosts: main
      sudo: yes
      roles:
        - role: productscience.deploy_user
        # TODO reprovision using these roles, for consistency
        # - role: app.essentials
        # - role: zenoamaro.postgresql
        - role: productscience.papertrailapp
        - role: jdauphant.nginx
      tasks:
        - include: setup.yml
        # - include: db.yml

        - name: checkout source control when deploying to remote servers
          include: source.yml
          when: not vagrant
          tags:
              - deploy

        - include: django.yml
          tags:
              - deploy


        - name: include vagrant specific dependencies for local development
          include: vagrant.yml
          when: vagrant

      handlers:
        - name: restart postgres
          sudo: yes
          service: name=postgresql state=restarted
        - name: start restart uwsgi
          sudo: yes
          service: name={{ app }} state=restarted

    - hosts: localhost
      connection: local
      gather_facts: no
      tasks:
        - name: register the SHA1 of the branch being deployed
          when: not vagrant
          command: git rev-parse origin/{{ git_branch }}
          register: git_sha
          tags:
            - slack

        - name: Send notification message via Slack all options
          when: not vagrant
          tags:
            - slack
          local_action:
            module: slack
            token: "{{ wof_slack_token }}"
            msg: "Deployment of `{{ git_branch }}` to {{ app_url }} completed with sha `{{ git_sha.stdout }}`"
            channel: "#wof"
            username: "Ansible deploy-o-tron"

Deployment Solutions


Solution 1 - Deployment

The problem you're running into is that you're trying to reference the facts/variables of one host from another host. You need to keep in mind that in Ansible the variable app_git_sha1 assigned to the host localhost is distinct from the variable app_git_sha1 assigned to the host main or any other host. If you want to access one host's facts/variables from another host then you need to explicitly reference them via the hostvars variable. There's a bit more discussion of this in a related Stack Overflow question.

Suppose you have a playbook like this:

- hosts: localhost
  tasks:   
    - command: /bin/echo "this is a test"
      register: foo


- hosts: localhost
  tasks:
    - debug: var=foo

This will work because you're referencing the host localhost and localhost's instance of the variable foo in both plays. The output of this playbook is something like this:

PLAY [localhost] **************************************************************

GATHERING FACTS ***************************************************************
ok: [localhost]

TASK: [command /bin/echo "this is a test"] ************************************
changed: [localhost]

PLAY [localhost] **************************************************************

GATHERING FACTS ***************************************************************
ok: [localhost]

TASK: [debug var=foo] *********************************************************
ok: [localhost] => {
    "var": {
        "foo": {
            "changed": true,
            "cmd": [
                "/bin/echo",
                "this is a test"
            ],
            "delta": "0:00:00.004585",
            "end": "2015-11-24 20:49:27.462609",
            "invocation": {
                "module_args": "/bin/echo \"this is a test\"",
                "module_complex_args": {},
                "module_name": "command"
            },
            "rc": 0,
            "start": "2015-11-24 20:49:27.458024",
            "stderr": "",
            "stdout": "this is a test",
            "stdout_lines": [
                "this is a test"
            ],
            "warnings": []
        }
    }
}

If you modify this playbook slightly to run the first play on one host and the second play on a different host, you'll get the error that you encountered. The solution is to use Ansible's built-in hostvars variable to have the second host explicitly reference the first host's variables. So modify the first example like this:

- hosts: localhost
  tasks:

    - command: /bin/echo "this is a test"
      register: foo


- hosts: anotherhost
  tasks:
    - debug: var=foo
      when: foo is defined

    - debug: var=hostvars['localhost']['foo']
      when: hostvars['localhost']['foo'] is defined

The output of this playbook shows that the first task is skipped because foo is not defined on the host anotherhost, but the second task succeeds because it's explicitly referencing localhost's instance of the variable foo:

TASK: [debug var=foo] *********************************************************
skipping: [anotherhost]

TASK: [debug var=hostvars['localhost']['foo']] **************************
ok: [anotherhost] => {
    "var": {
        "hostvars['localhost']['foo']": {
            "changed": true,
            "cmd": [
                "/bin/echo",
                "this is a test"
            ],
            "delta": "0:00:00.005950",
            "end": "2015-11-24 20:54:04.319147",
            "invocation": {
                "module_args": "/bin/echo \"this is a test\"",
                "module_complex_args": {},
                "module_name": "command"
            },
            "rc": 0,
            "start": "2015-11-24 20:54:04.313197",
            "stderr": "",
            "stdout": "this is a test",
            "stdout_lines": [
                "this is a test"
            ],
            "warnings": []
        }
    }
}

So, in a nutshell, you want to modify the variable references in your main playbook to reference the localhost variables in this manner:

{{ hostvars['localhost']['app_git_sha1'] }}
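
Applied to the playbook in the question, the debug task that failed on main would be rewritten like this (a sketch reusing the variable names from the question; note that the SHA itself lives in the registered result's stdout field):

- hosts: main
  tasks:
    # app_git_sha1 was registered on localhost in the first play,
    # so reach it through localhost's hostvars entry
    - debug: msg="{{ hostvars['localhost']['app_git_sha1'].stdout }}"
      tags:
        - debugsha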

Solution 2 - Deployment

Use a dummy host and its variables

For example, to pass the K8S token and hash from the master to the workers. The add_host module creates a "dummy" host that exists only in memory and never has to appear in the inventory; any variables attached to it can then be read from any later play via hostvars.

On master

- name: "Cluster token"
  shell: kubeadm token list | cut -d ' ' -f1 | sed -n '2p'
  register: K8S_TOKEN

- name: "CA Hash"
  shell: openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
  register: K8S_MASTER_CA_HASH

- name: "Add K8S Token and Hash to dummy host"
  add_host:
    name:   "K8S_TOKEN_HOLDER"
    token:  "{{ K8S_TOKEN.stdout }}"
    hash:   "{{ K8S_MASTER_CA_HASH.stdout }}"

- debug:
    msg: "[Master] K8S_TOKEN_HOLDER K8S token is {{ hostvars['K8S_TOKEN_HOLDER']['token'] }}"

- debug:
    msg: "[Master] K8S_TOKEN_HOLDER K8S Hash is {{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}"

On worker

- debug:
    msg: "[Worker] K8S_TOKEN_HOLDER K8S token is {{ hostvars['K8S_TOKEN_HOLDER']['token'] }}"

- debug:
    msg: "[Worker] K8S_TOKEN_HOLDER K8S Hash is {{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}"

- name: "Kubeadmn join"
  shell: >
    kubeadm join --token={{ hostvars['K8S_TOKEN_HOLDER']['token'] }}
    --discovery-token-ca-cert-hash sha256:{{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}
    {{K8S_MASTER_NODE_IP}}:{{K8S_API_SERCURE_PORT}}
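
The same pattern distilled to a minimal sketch (host1, host2, VALUE_HOLDER, and myvalue are placeholder names, not taken from the answer above):

- hosts: host1
  tasks:
    - command: /bin/echo "value worth sharing"
      register: result

    # Stash the value on an in-memory dummy host
    - add_host:
        name: "VALUE_HOLDER"
        myvalue: "{{ result.stdout }}"

- hosts: host2
  tasks:
    # Read the value back from the dummy host in a later play
    - debug:
        msg: "{{ hostvars['VALUE_HOLDER']['myvalue'] }}"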

Solution 3 - Deployment

I have had similar issues, even with the same host, but across different plays. The thing to remember is that it is facts, not variables, that persist across plays. Here is how I get round the problem.

#!/usr/local/bin/ansible-playbook --inventory=./inventories/ec2.py
---
- name: "TearDown Infrastructure !!!!!!!"
  hosts: localhost
  gather_facts: no
  vars:
    aws_state: absent
  vars_prompt:
    - name: "aws_region"
      prompt: "Enter AWS Region:"
      default: 'eu-west-2'
  tasks:
    - name: Make vars persistent
      set_fact:
        aws_region: "{{aws_region}}"
        aws_state: "{{aws_state}}"




- name: "TearDown Infrastructure hosts !!!!!!!"
  hosts: monitoring.ec2
  connection: local
  gather_facts: no
  tasks:
    - name: set the facts per host
      set_fact:
        aws_region: "{{hostvars['localhost']['aws_region']}}"
        aws_state: "{{hostvars['localhost']['aws_state']}}"


    - debug:
        msg: "state {{ aws_state }} region {{ aws_region }} id {{ ec2_id }}"

- name: last few bits
  hosts: localhost
  gather_facts: no
  tasks:
    - debug:
        msg: "state {{ aws_state }} region {{ aws_region }}"

results in

Enter AWS Region: [eu-west-2]:


PLAY [TearDown Infrastructure !!!!!!!] ***************************************************************************************************************************************************************************************************

TASK [Make vars persistent] **************************************************************************************************************************************************************************************************************
ok: [localhost]

PLAY [TearDown Infrastructure hosts !!!!!!!] *********************************************************************************************************************************************************************************************

TASK [set the facts per host] ************************************************************************************************************************************************************************************************************
ok: [XXXXXXXXXXXXXXXXX]

TASK [debug] *****************************************************************************************************************************************************************************************************************************
ok: [XXXXXXXXXXX] => {
    "changed": false,
    "msg": "state absent region eu-west-2 id i-0XXXXX1 "
}

PLAY [last few bits] *********************************************************************************************************************************************************************************************************************


TASK [debug] *****************************************************************************************************************************************************************************************************************************
ok: [localhost] => {
    "changed": false,
    "msg": "state absent region eu-west-2 "
}

PLAY RECAP *******************************************************************************************************************************************************************************************************************************
XXXXXXXXXXXXX              : ok=2    changed=0    unreachable=0    failed=0
localhost                  : ok=2    changed=0    unreachable=0    failed=0

Solution 4 - Deployment

You can use a known Ansible behaviour: the group_vars folder, from which Ansible loads variables for your playbook. It is intended to be used together with inventory groups, but it also works as a place for global variable declarations. If you put a file or folder in there with the same name as the group in which you want a variable to be present, Ansible will make sure it happens!

For example, let's create a file called all and put a timestamp variable in it. Then, whenever you need it, you can call that variable, which will be available to every host declared in any play inside your playbook.

I usually do this to update the timestamp once in the first play and then use the value to write files and folders with the same timestamp.

I'm using the lineinfile module to change the line starting with timestamp:

Check if it fits your purpose.

On your group_vars/all:

timestamp: t26032021165953

On the playbook, in the first play:

hosts: localhost
gather_facts: no

tasks:
  - name: Set timestamp on group_vars
    lineinfile:
      path: "{{ playbook_dir }}/group_vars/all"
      insertafter: EOF
      regexp: '^timestamp:'
      line: "timestamp: t{{ lookup('pipe','date +%d%m%Y%H%M%S') }}"
      state: present
  

On the playbook, in the second play:

hosts: any_hosts
gather_facts: no

tasks:
  - name: Check if timestamp is there
    debug:
      msg: "{{ timestamp }}"

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type             | Original Author | Original Content on Stackoverflow
-------------------------|-----------------|----------------------------------
Question                 | Chris Adams     | View Question on Stackoverflow
Solution 1 - Deployment  | Bruce P         | View Answer on Stackoverflow
Solution 2 - Deployment  | mon             | View Answer on Stackoverflow
Solution 3 - Deployment  | krad            | View Answer on Stackoverflow
Solution 4 - Deployment  | guistela        | View Answer on Stackoverflow