How to install the Nutanix OpenStack Pike drivers?

Nutanix recently released OpenStack Pike drivers. Let's take a step-by-step look at how to use these drivers in your OpenStack Pike deployment.

Requirements:-

  1. An OpenStack Pike deployment
  2. A Nutanix PE (Prism Element) setup

Steps:-

  1. Download the Nutanix rpm/deb package from the Nutanix portal.
  2. Install the rpm (rpm -i …), or the deb equivalent, on the machines running neutron-server, nova-compute, glance, and cinder-volume.
  3. Add your Nutanix PE details to /etc/nutanix_openstack_config.json (a sketch of this file appears after the steps).
  4. neutron-server:
    1. Edit the ML2 plugin configuration (/etc/neutron/plugin.ini) to have the below config (see the example after the steps):
      1. type_drivers=vlan
      2. tenant_network_types=vlan
      3. mechanism_drivers=nutanix
    2. Edit /usr/lib/python2.7/site-packages/neutron-<x.y.z>-py2.7.egg-info/entry_points.txt to have the below config:
      1. [neutron.ml2.mechanism_drivers]
        1. nutanix=nutanix_openstack.neutron.driver:AcropolisNetworkDriver
    3. Restart the neutron service: service neutron-server restart
  5. Glance:
    1. Edit /etc/glance/glance-api.conf to have the below config (see the example after the steps):
      1. [glance_store]
        1. stores=http
        2. default_store=http
    2. Edit /usr/lib/python2.7/site-packages/glance_store-<x.y.z>-py2.7.egg-info/entry_points.txt to have the below config:
      1. [glance_store.drivers]
        1. http = nutanix_openstack.glance:Store
    3. Restart the glance api and registry services, e.g. service openstack-glance-api restart && service openstack-glance-registry restart
  6. Cinder-volume:
    1. Edit /etc/cinder/cinder.conf to have the below config (see the example after the steps):
      1. [DEFAULT]
        1. enabled_backends=nutanix_openstack
        2. glance_host=$glance_service_ip
        3. glance_api_servers=$glance_service_ip:$glance_api_port
      2. [nutanix_openstack]
        1. volume_driver=nutanix_openstack.cinder.driver.AcropolisVolumeDriver
    2. Edit /usr/lib/python2.7/site-packages/cinder-<x.y.z>-py2.7.egg-info/SOURCES.txt to include the following line:
      1. nutanix_openstack/cinder/driver.py
    3. Restart the cinder-volume service: service openstack-cinder-volume restart
  7. nova-compute:
    1. Edit /etc/nova/nova.conf to have the below config (see the example after the steps):
      1. [DEFAULT]
        1. compute_driver=nutanix_openstack.nova.AcropolisComputeDriver
        2. vnc_enabled=True
    2. Restart nova-compute: service nova-compute restart
    3. Run the nova VNC proxy in the background:
      1. /usr/bin/prism_vnc_proxy --bind_address=0.0.0.0 --bind_port=<random-port> --prism_hostname=<cluster-ip> --prism_username=<prism admin user> --prism_password=<prism admin user password> --docroot=/usr/share/nutanix_openstack/vnc/static &
      2. Note that bind_port should be the same as the cluster VNC port number defined in /etc/nutanix_openstack_config.json.
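
For step 3, /etc/nutanix_openstack_config.json is a small JSON file describing your PE cluster (management address, Prism credentials, and the VNC port referenced in step 7). The real schema ships with the Nutanix package, so treat the field names below purely as illustrative placeholders and copy the sample installed by the rpm instead:

    {
        "cluster_ip": "<prism-element-virtual-ip>",
        "username": "<prism admin user>",
        "password": "<prism admin user password>",
        "vnc_port": 8088
    }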
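
For step 4, the two neutron edits end up looking roughly like this. The [ml2] section shown is the usual home for these keys in the ML2 configuration, and the version string in the egg-info path will differ on your system:

    # /etc/neutron/plugin.ini
    [ml2]
    type_drivers = vlan
    tenant_network_types = vlan
    mechanism_drivers = nutanix

    # neutron-<x.y.z>-py2.7.egg-info/entry_points.txt
    [neutron.ml2.mechanism_drivers]
    nutanix = nutanix_openstack.neutron.driver:AcropolisNetworkDriver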
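
For step 5, the glance edits are similarly small:

    # /etc/glance/glance-api.conf
    [glance_store]
    stores = http
    default_store = http

    # glance_store-<x.y.z>-py2.7.egg-info/entry_points.txt
    [glance_store.drivers]
    http = nutanix_openstack.glance:Store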
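
For step 6, a minimal /etc/cinder/cinder.conf fragment might look like the following; the glance address and port are placeholders for your own glance endpoint, and the extra line goes at the end of the cinder egg-info SOURCES.txt:

    # /etc/cinder/cinder.conf
    [DEFAULT]
    enabled_backends = nutanix_openstack
    glance_host = 192.0.2.10                # placeholder: your glance service IP
    glance_api_servers = 192.0.2.10:9292    # placeholder: your glance endpoint

    [nutanix_openstack]
    volume_driver = nutanix_openstack.cinder.driver.AcropolisVolumeDriver

    # cinder-<x.y.z>-py2.7.egg-info/SOURCES.txt (append)
    nutanix_openstack/cinder/driver.py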
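
For step 7, the nova-compute side comes down to two settings plus the proxy command; the bind port below is a placeholder and must match the cluster VNC port you put in /etc/nutanix_openstack_config.json:

    # /etc/nova/nova.conf
    [DEFAULT]
    compute_driver = nutanix_openstack.nova.AcropolisComputeDriver
    vnc_enabled = True

    # start the VNC proxy in the background (placeholder values)
    /usr/bin/prism_vnc_proxy --bind_address=0.0.0.0 --bind_port=8088 \
        --prism_hostname=<cluster-ip> --prism_username=<prism admin user> \
        --prism_password='<prism admin user password>' \
        --docroot=/usr/share/nutanix_openstack/vnc/static &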