Ansible Architecture and Capabilities
In this lesson we’ll take a look at Ansible’s architecture and capabilities, with a brief overview of its operation, both in a standard deployment and when interfacing with Juniper devices. The first thing I would like to do is get a sense of where Ansible falls within our automation stack, so let’s bring up the diagram of the Junos automation stack:

As the diagram shows, Ansible resides at the top, along with Salt, Puppet, and Chef. Ansible is written in Python, and it utilizes the PyEZ library for interfacing with Juniper devices.
PyEZ, in turn, interfaces with our devices over NETCONF, which talks to the XML API.
The XML API is hosted by the mgd process, which runs on our Juniper hardware. Ansible is written in Python, and it tries to abstract away much of the coding involved in automating our devices. This can make it more approachable, allowing quicker development and an easier start. Our Ansible plays, which describe the tasks to be performed in an automated fashion, are written in YAML. You are not actually writing any Python yourself. You don’t need to know Python at all in order to use Ansible; however, it is very handy if you do, and we will be covering some Python basics later in this course.
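To make the YAML format concrete, here is a hedged sketch of a minimal play; the inventory group name and the package are hypothetical placeholders, not part of this lesson’s lab:

```yaml
---
# Hypothetical play: the inventory group "webservers" and the package
# name are placeholders used only to show the YAML structure of a play.
- name: Ensure NTP is installed on managed servers
  hosts: webservers
  become: yes
  tasks:
    - name: Install the ntp package (skipped if already present)
      package:
        name: ntp
        state: present
```

Each play names the hosts it targets and lists its tasks; the tasks call modules (here the generic package module), and the playbook author never writes a line of Python.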

So let’s take a look at some of Ansible’s features, and why we might want to use Ansible.
Ansible accelerates the deployment and modification of infrastructure. It was originally created to manage compute devices or servers, but it has now been extended to support networking devices as well. It is used to automate our configuration management, and it provides idempotent operation.
Now, what does idempotent mean? It means that changes are made only once, and only if they need to be made. This is where our declarative structure comes into play. We, as network engineers and network automation engineers, declare to Ansible what we want the final configuration state to be, and Ansible figures out the finer details of how to make that happen. When it does, it will not commit a configuration change unless one actually needs to be made.
If we create a playbook with a task to “add our lo0 interface to our OSPF configuration”, that change is made only if the interface isn’t already in the OSPF configuration. If it is already there, no commit occurs, because no changes were needed.
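As a hedged sketch of what such a task could look like using the juniper.junos role discussed later in this lesson (the inventory group, OSPF area, and interface values here are hypothetical examples):

```yaml
---
# Hypothetical playbook: "junos_devices" and the OSPF values are
# placeholders. juniper_junos_config loads the desired state and
# commits only if the device's configuration actually differs.
- name: Ensure lo0.0 is in OSPF area 0
  hosts: junos_devices
  connection: local
  gather_facts: no
  roles:
    - juniper.junos
  tasks:
    - name: Load the desired OSPF state
      juniper_junos_config:
        load: set
        lines:
          - set protocols ospf area 0.0.0.0 interface lo0.0
      register: result

    - name: Report whether a change (and therefore a commit) occurred
      debug:
        msg: "Changed: {{ result.changed }}"
```

If lo0.0 is already in area 0, the module reports changed: false and no commit takes place.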
That’s something that’s a little unique and special about Ansible. It is not just running a set of instructions in an automated fashion, it is actually taking the state that we are providing, and determining what or if any changes need to be made to achieve that state.
So let’s take a look at how Ansible typically operates, and then we’ll look at how it operates when interfacing with Juniper devices.

Ansible’s standard operation is that the modules are executed on our managed nodes: they are copied over SSH from our Ansible management node to the managed nodes, the servers or other compute devices we are managing, and then executed by those managed nodes.
So, one requirement here is that Python needs to be installed on our managed nodes in order for them to execute these Python modules.
Now, this is great when you have the ability to copy your modules over to a device that can execute them locally, because that allows for a very scalable, distributed system, right? You can copy over all of the required modules, and the managed nodes can execute them and report back what happened. That allows for extremely scalable operation; however, in the case of our Juniper devices, it would place some problematic requirements on the device.
Having space to copy the modules over is not guaranteed on our network devices, and neither is having Python available on the device to execute them. So, this operation is changed just a bit for Junos.

When interfacing with Junos devices, the modules are executed on the Ansible management node. The playbook is read on the Ansible management node (the management server for Ansible), which executes the modules. Those modules interface with our managed nodes via NETCONF over SSH (or potentially serial or Telnet) and execute RPCs, remote procedure calls, on our managed devices.
The files are not copied to our managed nodes; they are executed locally on our Ansible management server, which then interfaces with our managed nodes via NETCONF sessions.
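To sketch this local-execution model (the module and role names follow the juniper.junos conventions covered below; the inventory group is a placeholder):

```yaml
---
# Hypothetical playbook: modules run on the Ansible management node
# itself ("connection: local") and reach each device over NETCONF.
- name: Gather device facts over NETCONF
  hosts: junos_devices
  connection: local
  gather_facts: no
  roles:
    - juniper.junos
  tasks:
    - name: Execute the facts RPCs on the device via NETCONF over SSH
      juniper_junos_facts:
      register: junos

    - name: Display the Junos version the device reported
      debug:
        var: junos.facts.version
```

Nothing is copied to the router; the NETCONF session carries only the RPCs and their replies.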
Let’s take a look at what the base environment is for Ansible.

The Ansible management server requirements are that you be running Linux, macOS (formerly OS X), or BSD; unfortunately, Ansible is not available for a Windows environment. You also need to have Python 2.6 or later, or Python 3.5 or later, installed.
Because Ansible is written in Python, and is itself a set of Python modules, it can be installed with pip. We can just run pip install ansible, and that will install Ansible on our server. Full installation details can be found in the official Ansible documentation.
The Juniper integration requirements are that you have PyEZ installed, because many of the modules use PyEZ to establish our NETCONF sessions and interact with our devices. PyEZ must be installed separately using pip install junos-eznc.
We also need jxmlease installed for working with the XML API output; it is installed using pip install jxmlease.
The juniper.junos Ansible Galaxy role is not strictly required for interfacing with our Juniper devices, as a fair number of modules built into Ansible can do so, but you will get a much richer experience and have much more information available to you if you use it.
The juniper.junos Ansible Galaxy role is written and supported by Juniper Networks. In order to install it, you would execute ansible-galaxy install juniper.junos.
Finally, in order to interface with our device via NETCONF over SSH, the service needs to be enabled on the device. That is done with the set system services netconf ssh configuration line.
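Once that line is committed, the service appears under the system services hierarchy; a quick sanity check might look like this (standard Junos CLI output, with a hypothetical hostname):

```
user@router> show configuration system services
netconf {
    ssh;
}
```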
Let’s talk a little more here about the difference between the built-in Ansible modules and the Ansible Galaxy Juniper modules.

Our Ansible module library is what is officially supported by, and written by, Ansible. It’s what comes with Ansible when you install it, and these modules are what is supported by Ansible Tower, the graphical interface for Ansible, a premium product generally sold to enterprises. Ansible Galaxy, by contrast, is a community-driven and community-supported library of modules. Ansible Galaxy is the official community for sharing Ansible roles; there are other communities out there, but Galaxy is the official one that Ansible created to let the community share custom-made roles. This is the home of Juniper’s own juniper.junos Ansible role.
juniper.junos is written and supported by Juniper, not Ansible.
