Question

Currently I am switching from Puppet to Ansible and I am a bit confused about some concepts, or at least about how Ansible works.

Some info on the setup:

I am using the examples from Ansible Best Practices and have structured my project similarly, with several roles and so on.

I am using Vagrant for provisioning and the box is Saucy64 VBox.

Where the confusion comes in:

When I provision and Ansible runs, the tasks execute first, and then the stack of notifications fires.

Example:

Last task:

TASK: [mysql | delete anonymous MySQL server user for localhost] ************** 
<127.0.0.1> REMOTE_MODULE mysql_user user='' state=absent 
changed: [default] => {"changed": true, "item": "", "user": ""}

Then first notification:

NOTIFIED: [timezone | update tzdata] ****************************************** 
<127.0.0.1> REMOTE_MODULE command /usr/sbin/dpkg-reconfigure --frontend noninteractive tzdata
changed: [default] => {"changed": true, "cmd": ["/usr/sbin/dpkg-reconfigure", "--frontend", "noninteractive", "tzdata"], "delta": "0:00:00.224081", "end": "2014-02-03 22:34:48.508961", "item": "", "rc": 0, "start": "2014-02-03 22:34:48.284880", "stderr": "\nCurrent default time zone: 'Europe/Amsterdam'\nLocal time is now:      Mon Feb  3 22:34:48 CET 2014.\nUniversal Time is now:  Mon Feb  3 21:34:48 UTC 2014.", "stdout": ""}

Now this is all fine. As the roles increase, more and more notifications stack up.

Now here comes the problem.

When a notification fails, the provisioning stops as usual. But then the notification stack is emptied! This means that all notifications queued after the faulty one will not be executed!

If that is so, then if you changed a vhost setting for Apache and had a notification to reload the Apache service, that reload would get lost.
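
In playbook terms that pattern would look something like this (a minimal sketch; the file names are made up for illustration). Note that the handler is only queued when the task reports a change:

- name: Deploy vhost config
  template: src=vhost.conf.j2 dest=/etc/apache2/sites-available/mysite.conf   # queues the handler only on "changed"
  notify: Reload Apache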

Let's give an example (pseudo lang):

- name: Install Apache Modules
  command: a2enmod rewrite   # illustrative
  notify: Restart Apache

- name: Enable Vhosts
  command: a2ensite mysite   # illustrative
  notify: Reload Apache

- name: Install PHP
  command: GGGGGG # throws an error
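
For reference, the handlers those notify lines point at would be defined separately, something like this (a sketch; the handler names match the example above):

handlers:
  - name: Restart Apache
    service: name=apache2 state=restarted   # a notified handler runs at most once, at the end of the play

  - name: Reload Apache
    service: name=apache2 state=reloaded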

When the above executes:

  1. Apache modules are installed
  2. Vhosts are enabled
  3. PHP tries to install and fails
  4. Script exits
  5. (Where are the notifications?)

Now at this point all seems logical, but again Ansible tries to be clever (no!*): it stacks the notifications, so the reload and restart of Apache would be collapsed into a single Apache restart run at the end of provisioning. And since the run aborted, none of the notifications run at all!

Now, up to here, for some people this is fine as well. They will say: hey, just re-run the provisioning and the notifications will fire, so Apache will finally be reloaded and the site will be up again. This is not the case.

On the second run of the script, after the code for installing PHP is corrected, the notifications will not run, by design. Why?

This is why: Ansible marks the tasks that executed successfully as "Done/Green", and a task only registers its notifications when it reports a change; on the re-run these tasks report no change, so no notifications are queued. The provisioning will be successful, and in order to trigger the notification, and thus the Apache restart, you can do one of the following:

  1. Run a direct command against the server via Ansible or SSH (see the ad-hoc example after this list)
  2. Edit the script to trigger the task
  3. Add a separate task for that
  4. Destroy the box instance and reprovision
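
For option 1, a one-off restart through Ansible's ad-hoc mode could look like this (the inventory name is a placeholder):

ansible all -i my_inventory -m service -a "name=apache2 state=restarted"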

This is quite frustrating because it requires a total cleanup of the box. Or do I not understand something correctly about Ansible?

Is there another way to 'reclaim'/replay/force the notifications to execute?

  • Clever would be either to mark the task as incomplete and then re-run the notifications, or to keep a separate queue with the notifications as tasks of their own.*

Solution

Yeah, that's one of the shortcomings of Ansible compared to, say, Puppet. Puppet is declarative and doesn't error out the way Ansible (or Chef, for that matter) does. It has its positives and negatives; for example, Puppet takes a little while before it starts running because it needs to compile its catalog.

So, you are right: if your Ansible script errors out, your notification updates won't happen. The only way we've gotten around it is by using conditional statements. In your playbook you can do something like this:

- name: My cool playbook
  hosts: all

  vars:
    force_tasks: false

  tasks:
    - name: Apache install
      apt: pkg={{ item }} state=latest
      with_items:
        - apache2
        - apache2-mpm-prefork

    - name: Restart apache
      service: name=apache2 state=restarted   # the service module expects "restarted", not "restart"
      when: force_tasks | bool                # extra vars arrive as strings, so cast to a boolean

Then when you run your playbook you can pass force_tasks as an extra variable:

ansible-playbook -i my_inventory -e "force_tasks=True" my_ansible_playbook.yml

You can accomplish this in similar fashion with tags.
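
For example, tag the restart task and then limit the run to that tag (a sketch along the lines of the playbook above; the tag name is arbitrary):

    - name: Restart apache
      service: name=apache2 state=restarted
      tags:
        - force-restart

ansible-playbook -i my_inventory my_ansible_playbook.yml --tags "force-restart"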

OTHER TIPS

Run ansible-playbook with the --force-handlers flag. This tells Ansible to run any queued handlers even if a task fails and further processing stops. It can also be set globally in the ansible.cfg file so it doesn't have to be remembered on every run.
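
For example:

ansible-playbook --force-handlers -i my_inventory my_ansible_playbook.yml

The global equivalent in ansible.cfg (also available as a per-play force_handlers keyword):

[defaults]
force_handlers = True

Relatedly, a meta: flush_handlers task runs any handlers queued up to that point in the play, which is another way to keep a notification from being lost to a later failure:

tasks:
  - name: Enable Vhosts
    command: a2ensite mysite
    notify: Reload Apache

  # run the queued handlers right here instead of waiting for the end of the play
  - meta: flush_handlers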

Licensed under: CC-BY-SA with attribution