Deploy a production-ready Nutanix OpenStack environment in minutes

By Tom Arentsen


In this blog post we want to demonstrate Nutanix OpenStack, which delivers a complete, production-ready, and distributed OpenStack environment in a matter of minutes. For load balancing of the different OpenStack services we have included an automatic deployment of AVI Networks. AVI Networks is an amazing platform that delivers much more than just load balancing, and like Nutanix, it's built for the cloud.

OpenStack is an open-source software platform for cloud computing, mostly deployed as infrastructure-as-a-service (IaaS). Nutanix delivers a set of OpenStack drivers (the Acropolis OpenStack Drivers) that are responsible for taking the OpenStack RPCs from the OpenStack controller and translating them into native Acropolis API calls. This solution offers a simpler OpenStack deployment, as the complexity of the back-end infrastructure services (compute, storage, network) is handled by the native Nutanix services.
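To make the translation idea concrete, here is a minimal sketch in Python. The real Acropolis OpenStack Drivers are Nutanix components whose internals we don't reproduce here; the class, endpoint path, and payload fields below are hypothetical stand-ins that only illustrate mapping an OpenStack-style volume request onto a native REST call.

```python
class AcropolisVolumeDriver:
    """Hypothetical Cinder-style driver that maps OpenStack volume
    requests to Acropolis API calls instead of managing storage itself."""

    def __init__(self, cluster_ip):
        # Port and path are assumptions for illustration only.
        self.base_url = f"https://{cluster_ip}:9440/api/nutanix"

    def translate_create_volume(self, volume):
        """Turn an OpenStack volume spec into an Acropolis-style request
        description (no network call is made in this sketch)."""
        return {
            "method": "POST",
            "url": f"{self.base_url}/volumes",  # hypothetical endpoint
            "payload": {
                "name": volume["display_name"],
                "size_bytes": volume["size_gb"] * 1024 ** 3,
                "container": volume.get("storage_container", "default"),
            },
        }


driver = AcropolisVolumeDriver("10.0.0.10")
call = driver.translate_create_volume({"display_name": "db-vol", "size_gb": 2})
```

The point is that the OpenStack controller never talks to individual hypervisors or disks; the driver layer turns each RPC into one call against the cluster's native API.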

Separating the OpenStack components

The Nutanix solution comes in a pre-packaged helper VM called the OVM. However, in order to have a fully-distributed and scalable OpenStack solution, we decided to separate the different OpenStack services across multiple VMs. First, we defined a set of OVMs that sit behind an AVI Networks load balancer. Next to that, we defined a set of OpenStack controller VMs, which are also load-balanced by AVI Networks.

To gain more insight into the Nutanix OpenStack solution, head over to the Nutanix Bible.

The following table details how we separated the different OpenStack Components:

[Table: separation of the OpenStack components across OVMs and controller VMs]

The Nutanix solution

The following diagram depicts the entire Nutanix OpenStack solution:

[Diagram: the complete Nutanix OpenStack solution]

Define and Automate

Because of the service delivery framework we are utilising, we are able to build a fully-distributed OpenStack solution in minutes. First, we define the desired state of our Nutanix OpenStack service (including an AVI Networks deployment from scratch). The solution consists of a set of separate service definitions for the different services, which together form the complete stack. Next, we create an Ansible Playbook per service component, responsible for executing the service definitions and thereby deploying the OpenStack solution.
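The define-and-automate flow can be sketched as a desired-state record per service tier plus a reconciler that works out which playbook runs the automation layer should execute. The service names, replica counts, and playbook filenames below are illustrative, not our actual definitions.

```python
# Desired state per service tier (illustrative values).
desired_state = {
    "ovm": {"replicas": 3, "playbook": "deploy_ovm.yml"},
    "openstack-controller": {"replicas": 2, "playbook": "deploy_controller.yml"},
    "avi-loadbalancer": {"replicas": 1, "playbook": "deploy_avi.yml"},
}


def reconcile(desired, current):
    """Compare desired replica counts with what is currently running and
    emit the playbook runs needed to converge to the desired state."""
    actions = []
    for service, spec in desired.items():
        delta = spec["replicas"] - current.get(service, 0)
        if delta > 0:
            actions.append((spec["playbook"], f"add {delta} x {service}"))
        elif delta < 0:
            actions.append((spec["playbook"], f"remove {-delta} x {service}"))
    return actions


# Nothing deployed yet: every tier needs a playbook run.
actions = reconcile(desired_state, {})
```

Changing a replica count in the definition and re-running the reconciler yields exactly the delta actions, which is what makes the same mechanism usable for both initial deployment and later scaling.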

Scaling the OpenStack solution

We consider provisioning only part of the job; the ability to manage and maintain your service on a continuous basis completes it. The service delivery framework we use allows us to make changes in a service definition, which initiates a trigger and forwards it to the automation layer (in this case an Ansible Playbook) for execution. This feature allows us to scale our OpenStack solution up and down at any given time. As we make sure the AVI Networks load balancer only starts serving the new services once the execution has fully completed, users won't even notice we are scaling up or down.
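The zero-impact scaling behaviour can be sketched as follows: new members are provisioned first and only registered in the load balancer pool once they report healthy, so the pool never serves a half-deployed VM. The AVI API is not modelled here; a plain list stands in for the pool, and the member names are made up for illustration.

```python
def scale_up(pool, provision, health_check, count):
    """Provision `count` new members, then add each to the load balancer
    pool only after its health check passes."""
    for _ in range(count):
        member = provision()      # e.g. triggers the Ansible Playbook run
        if health_check(member):  # wait until the deployment has completed
            pool.append(member)   # the load balancer serves it only now
    return pool


lb_pool = ["controller-1", "controller-2"]
counter = iter(range(3, 10))
new_pool = scale_up(
    lb_pool,
    provision=lambda: f"controller-{next(counter)}",
    health_check=lambda m: True,  # assume the deployment completed cleanly
    count=2,
)
```

Scaling down works the same way in reverse: a member is drained from the pool before its VM is removed, so in-flight requests are never dropped.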

For the solution described above we used the RDO version of OpenStack, but in fact we could have used any other OpenStack distribution, such as Red Hat OpenStack or VMware Integrated OpenStack; this would only require minor updates to our Ansible Playbooks.

Meanwhile, we are working on many more solutions that can be deployed individually or combined with the same simplicity, such as enabling Containers as a Service (CaaS) in different flavors (Docker Datacenter and OpenShift), and many more.

Learn more about our service delivery approach here or download our ebook for an in-depth look into our way of working.



By Tom Arentsen, 24 October 2016
