Technical Note
©2011 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 25
OpenStack Cloud Deployment on Cisco UCS C200 M2 Servers
This Tech Note steps through setting up an OpenStack Cloud (Cactus release),
comprising a cluster of compute and storage nodes each running Ubuntu 10.10. Each
node is a Cisco UCS C200 M2 High-Density Rack-Mount Server. This document builds
on installation instructions described in OpenStack Compute and Storage
Administration Guides, but is not meant to supersede those documents.
Table of Contents
Introduction ............................................................................................................................ 3
Cisco UCS C200 M2 High-Density Rack-Mount Server ........................................................... 3
Cluster Topology..................................................................................................................... 3
OpenStack Compute Installation .............................................................................................. 4
Installation on the Cloud Controller ...................................................................................... 4
Configuring the bridge ..................................................................................................... 4
Running the Installation Script ......................................................................................... 5
Post Script Installation ..................................................................................................... 6
Network Configuration using FlatDHCPManager ............................................................. 7
Testing the Installation by Publishing and Starting an Image ............................................. 9
Installing Compute Nodes ...................................................................................................11
Configuring the Bridge ...................................................................................................11
Running the Installation Script ........................................................................................12
Post Script Installation ....................................................................................................12
Testing the Installation on this Node ................................................................................13
OpenStack Dashboard Installation ..........................................................................................13
OpenStack Storage Installation ...............................................................................................14
Install and Configure the Packages ......................................................................................14
Install Swift-proxy Service..................................................................................................15
Create the Account, Container and Object Rings:.................................................................15
Installing and Configuring the Auth Node ...........................................................................16
Installing and Configuring the Storage Nodes ......................................................................17
Install Storage Node Packages ............................................................................................17
Create OpenStack Object Storage admin Account and Verify the Installation .......................21
Troubleshooting Tips .............................................................................................................21
Compute ............................................................................................................................21
Not Able to Pull the Latest Cactus Release ......................................................................21
Not Able to Upgrade from Bexar to Cactus ......................................................................22
How to Create a New Network (and Delete the Existing One) ..........................................22
Not Able to Publish an Image (Getting an Invalid Cert Error) ...........................................22
Running Instance Hangs in the “Scheduling” State...........................................................23
UEC Image Instance Can Be Pinged, But Cannot Ssh ......................................................23
Socket Time Out Error During Dashboard Installation .....................................................24
Storage...............................................................................................................................25
Storage Services Do Not Start on the Storage Node..........................................................25
Unable to Start the Account Server on the Storage Node ..................................................25
Table of Figures
Figure 1: OpenStack Cloud Deployment on a C200 cluster ....................................................... 4
Figure 2: OpenStack Dashboard..............................................................................................13
Figure 3: OpenStack Storage Deployment on a C200 cluster ....................................................14
Introduction
OpenStack is a collection of open source technologies that provide massively scalable open
source cloud computing software. This Tech Note documents our experience in setting up an
OpenStack Cloud (Cactus release), comprising a cluster of compute and storage nodes, each
running Ubuntu 10.10. Each node is a Cisco UCS C200 M2 High-Density Rack-Mount Server.
This document can be used as a reference for deploying a similar cluster. It builds on installation
instructions described in the OpenStack Compute and Storage Administration Guides1, but presents a more
streamlined method that is specific to our deployment. We also attempt to provide additional
details where the original documentation is short. We hope the reader finds our troubleshooting
and workaround tips useful if problems develop during and after deployment. Please note that
this document is not meant to supersede the official OpenStack installation and administration
document. We encourage the reader to first consult that documentation to understand the
OpenStack concepts and installation procedure.
Cisco UCS C200 M2 High-Density Rack-Mount Server2
The Cisco UCS C200 M2 Server is a high-density, 2-socket, 1 rack unit (RU) rack-mount server
built for production-level network infrastructure, web services, and mainstream data center,
branch, and remote-office applications. The configuration of each server used in our deployment
is as follows:
2 x Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
4 internal SAS or SATA disk drives; each 2 terabytes (TB)
24 GB of industry-standard double data rate (DDR3) main memory
4 Gigabit Ethernet ports
Cluster Topology
Our deployment consists of a cluster of four C200 servers. One server serves as the OpenStack
Cloud Controller. The other three servers are configured as compute nodes. We recommend
setting up the deployment such that the OpenStack management/control network is separate from
the data network. (By management/control network, we mean the network that is used to
access the servers and on which the OpenStack processes exchange messages. By data network,
we mean the network on which the virtual machines instantiated by OpenStack communicate
with each other.) We leverage two network ports on each of these servers, such that one port is
on the management/control network, and the other one is on the data network. Please note that
the standard OpenStack installation process uses only one network for all communications.
Figure 1 shows the topology.
1 http://docs.openstack.org/
2 http://www.cisco.com/en/US/products/ps10891/index.html
Figure 1: OpenStack Cloud Deployment on a C200 cluster3
OpenStack Compute Installation
The scripted installation works well for installing OpenStack, both on the Cloud Controller, and
also on the other compute nodes. We will follow that approach for the installation. In our
installation, we will run all the services on the Cloud Controller, and only the nova-compute
service on the compute nodes. Note that in this setup, the Cloud Controller also serves as one of
the compute nodes. We suggest this approach since you can get started running and testing
virtual machine instances after installing just the Cloud Controller, and add one or more
compute nodes later as required.
Installation on the Cloud Controller
Configuring the bridge
The virtual machine instances running on this node will communicate with the data network by
connecting to a Linux bridge. We will first need to configure this bridge. We will use the eth1
port on our server for the data network (and we will configure it as a slave of br100). Our
/etc/network/interfaces file looks like this:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet static
address 171.x.y.96
gateway 171.x.y.1
netmask 255.255.254.0
auto br100
iface br100 inet static
bridge_ports eth1
bridge_stp off
bridge_maxwait 0
bridge_fd 0
3 We have masked some of the digits on the 171. addresses used in this document with the characters 'x' and 'y'.
Restart networking:
/etc/init.d/networking restart
An IP address will get automatically assigned to the bridge when we run the nova-network
service. In this case, the first IP address in the range specified for the network will be used
(i.e., 10.0.0.1). Currently, there does not seem to be a way to configure this.
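As a quick sanity check, the address that nova-network will put on the bridge can be derived from the project CIDR: it is the network address plus one. A minimal shell sketch, assuming the 10.0.0.0/24 range used later for the project:

```shell
# Derive the first host address of the project CIDR; nova-network
# assigns this address (network address + 1) to br100.
CIDR="10.0.0.0/24"
NET_ADDR="${CIDR%/*}"        # strip the prefix length -> 10.0.0.0
PREFIX="${NET_ADDR%.*}"      # first three octets      -> 10.0.0
LAST_OCTET="${NET_ADDR##*.}" # last octet              -> 0
BRIDGE_IP="${PREFIX}.$((LAST_OCTET + 1))"
echo "${BRIDGE_IP}"          # 10.0.0.1
```

On the running system, `ip addr show br100` should show this address once the nova-network service has started.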
Running the Installation Script
Download the installation script:
wget --no-check-certificate https://github.com/dubsquared/OpenStack-NOVA-Installer-Script/raw/master/nova-CC-install-v1.1.sh
Ensure you can execute the script by modifying the permissions on the script file:
sudo chmod 755 nova-CC-install-v1.1.sh
Run the script with root permissions:
sudo ./nova-CC-install-v1.1.sh
You will be guided through the following prompts:
Step 1: Setting up the database.
mySQL User Config
#################
Desired mySQL Pass:
Verify password:
Please enter a password for the “root” user on the MySQL database. Note this password as it
might be required later during troubleshooting to access the MySQL database using MySQL
client.
Next, you will be asked to enter the IP addresses for the different services which run on the Cloud
Controller.
S3 Host IP (Default is 10.0.0.3 -- ENTER to accept):171.x.y.96
RabbitMQ Host IP (Default is 10.0.0.3 -- ENTER to accept): 171.x.y.96
Cloud Controller Host IP (Default is 10.0.0.3 -- ENTER to accept): 171.x.y.96
mySQL Host IP (Default is 10.0.0.3 -- ENTER to accept): 171.x.y.96
Note that we have entered the IP address of the Cloud Controller on the management/control
network.
Next, you will be prompted for details about the project that will serve as the isolated resource
container for your activities.
Nova project user name:
Nova project name:test01
Desired network + CIDR for project (normally x.x.x.x/24):10.0.0.0/24
How many networks for project:1
How many availible IPs per project network:256
Make a note of the username and the project name that you enter here.
Currently only one network is supported per project.
Next you will be asked to enter details for the bridge configuration:
Please enter your local server IP (Default is 10.0.0.1 -- ENTER to accept):
Please enter your broadcast IP (Default is 10.0.0.255 -- ENTER to accept):
Please enter your netmask (Default is 255.255.255.0 -- ENTER to accept):
Please enter your gateway (Default is 171.x.y.1 -- ENTER to accept):10.0.0.1
We have used the IP addresses on the data network for this configuration. In our case, the
defaults suggested by the script were correct (since we had already assigned an IP address to our
br100 earlier). However, if you do not see these defaults, enter the appropriate IP address details
as per the addressing scheme you have chosen for your data network.
Next, you will be prompted for the default name server.
Please enter your default nameserver (Default is 171.x.y.183 -- ENTER to accept):
The default is suggested based on your eth0 configuration. We accept it.
At this point, the script will start installing all the packages. Wait for it to complete successfully.
If successful, the installation will also start all the services. Check by doing the following:
#ps -eaf | grep nova
root 31750 31742 0 07:42 ? 00:00:00 /usr/bin/python /usr/bin/nova-objectstore --uid
117 --gid 65534 --pidfile /var/run/nova/nova-objectstore.pid --flagfile=/etc/nova/nova.conf --
nodaemon --logfile=/var/log/nova/nova-objectstore.log
nova 32323 1 0 Apr19 ? 00:00:00 su -c nova-network --flagfile=/etc/nova/nova.conf
nova
nova 32340 32323 1 Apr19 ? 00:28:43 /usr/bin/python /usr/bin/nova-network --
flagfile=/etc/nova/nova.conf
nova 32393 1 0 Apr19 ? 00:00:00 su -c nova-compute --flagfile=/etc/nova/nova.conf
nova
nova 32410 32393 1 Apr19 ? 00:28:59 /usr/bin/python /usr/bin/nova-compute --
flagfile=/etc/nova/nova.conf
nova 32454 1 0 Apr19 ? 00:00:00 su -c nova-api --flagfile=/etc/nova/nova.conf
nova
nova 32489 32454 0 Apr19 ? 00:00:15 /usr/bin/python /usr/bin/nova-api --
flagfile=/etc/nova/nova.conf
nova 32501 1 0 Apr19 ? 00:00:00 su -c nova-scheduler --
flagfile=/etc/nova/nova.conf nova
nova 32508 32501 1 Apr19 ? 00:21:55 /usr/bin/python /usr/bin/nova-scheduler --
flagfile=/etc/nova/nova.conf
Post Script Installation
Once the installation has completed successfully, you will see that a /root/creds/novarc file has
been created.
The novarc file will look like this:
NOVA_KEY_DIR=$(pushd $(dirname $BASH_SOURCE)>/dev/null; pwd; popd>/dev/null)
export EC2_ACCESS_KEY=":test01"
export EC2_SECRET_KEY=""
export EC2_URL="http://171.x.y.96:8773/services/Cloud"
export S3_URL="http://171.x.y.96:3333"
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set
alias ec2-bundle-image="ec2-bundle-image --cert ${EC2_CERT} --privatekey ${EC2_PRIVATE_KEY} --
user 42 --ec2cert ${NOVA_CERT}"
alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_KEY} --url
${S3_URL} --ec2cert ${NOVA_CERT}"
export NOVA_API_KEY=""
export NOVA_USERNAME=""
export NOVA_URL=http://171.x.y.96:8774/v1.0/
Append the contents of this file to your profile file (e.g., ~/.bashrc) and source it for this session.
cat /root/creds/novarc >> ~/.bashrc
source ~/.bashrc
You will also find some .pem files in the /root/creds/ directory. These .pem files have to be
copied to the $NOVA_KEY_DIR path. (You will see these .pem files being referenced in the
novarc file at that path.)
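The copy command itself is not shown above; the following is a hedged sketch. The /root/creds path and the pk.pem, cert.pem, and cacert.pem names come from the novarc file listed earlier, and we stand them in with temporary directories here so the sketch is safe to run anywhere:

```shell
# Simulate the credentials with temp directories. On the real Cloud
# Controller, CREDS_DIR would be /root/creds, and NOVA_KEY_DIR is set
# by sourcing the novarc file.
CREDS_DIR="$(mktemp -d)"
NOVA_KEY_DIR="$(mktemp -d)"
touch "${CREDS_DIR}/pk.pem" "${CREDS_DIR}/cert.pem" "${CREDS_DIR}/cacert.pem"

# The actual copy step:
cp "${CREDS_DIR}"/*.pem "${NOVA_KEY_DIR}/"
ls "${NOVA_KEY_DIR}"
```

On the Cloud Controller, this reduces to `cp /root/creds/*.pem "${NOVA_KEY_DIR}/"` after sourcing novarc.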
Create a “nova” group, so you can set permissions on the configuration file:
sudo addgroup nova
The /etc/nova/nova.conf file should have its owner set to root:nova, and its mode set to 0640, since the file
contains your MySQL server's username and password.
chown -R root:nova /etc/nova
chmod 640 /etc/nova/nova.conf
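To confirm that the mode took effect, GNU stat can print the octal permissions. A sketch on a stand-in temp file (checking the real file needs root); on the Cloud Controller, substitute /etc/nova/nova.conf for the temp file:

```shell
# Set and verify mode 0640 on a stand-in file; GNU stat's -c '%a'
# prints the octal permission bits (available on Ubuntu's coreutils).
CONF="$(mktemp)"
chmod 640 "${CONF}"
MODE="$(stat -c '%a' "${CONF}")"
echo "${MODE}"   # 640
```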
These are the commands you run to ensure the database schema is current, and then set up a user
and project:
/usr/bin/nova-manage db sync
/usr/bin/nova-manage user admin <username>
/usr/bin/nova-manage project create <projectname> <username>
Note that we had earlier used the project name “test01”, so we would have used that here.
Network Configuration using FlatDHCPManager
Edit the /etc/nova/nova.conf to change the network manager to FlatDHCPManager. For our
setup, the nova.conf looks like this:
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--verbose
--s3_host=171.x.y.96
--rabbit_host=171.x.y.96
--cc_host=171.x.y.96
--ec2_url=http://171.x.y.96:8773/services/Cloud
--FAKE_subdomain=ec2
--routing_source_ip=171.x.y.96
--verbose
--sql_connection=mysql://root:password@171.x.y.96/nova
--network_manager=nova.network.manager.FlatDHCPManager
--network_size=256
--fixed_range=10.0.0.0/24
--flat_network_dhcp_start=10.0.0.11
Note that in the above configuration, we are indicating that VM instances should be allocated
IPs starting from 10.0.0.11, since we want to reserve 10.0.0.0 (the network address) and
10.0.0.1 through 10.0.0.10 for the bridges on the Cloud Controller and the compute nodes.
(However, configuring this file does not ensure that this configuration is correctly reflected in the
DB. Instructions to ensure that are provided later in this section.)
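The arithmetic behind flat_network_dhcp_start can be sketched as follows; the counts assume the /24 above, with one network address, ten reserved bridge addresses, and one broadcast address:

```shell
# Instance IP budget for the 10.0.0.0/24 flat DHCP network.
TOTAL=256        # addresses in a /24 (matches --network_size=256)
NETWORK=1        # 10.0.0.0, the network address
BRIDGES=10       # 10.0.0.1 - 10.0.0.10, reserved for node bridges
BROADCAST=1      # 10.0.0.255
INSTANCES=$((TOTAL - NETWORK - BRIDGES - BROADCAST))
echo "${INSTANCES}"   # 244 addresses, i.e. 10.0.0.11 - 10.0.0.254
```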
Check the MySQL DB to see whether any network entries were already created during the scripted
installation process.
mysql -uroot -p nova -e 'select * from networks;'
If you see one or more entries, then do the following:
mysql -uroot -p nova -e 'delete from networks where id > 0;'
mysql -uroot -p nova -e 'delete from fixed_ips where id > 0;'
This will remove any previous network configuration from your DB.
Now create the network:
/usr/bin/nova-manage network create 10.0.0.0/24 1 255
This should populate two tables in the DB, the networks table and the fixed_ips table.
Check the networks table:
# mysql -uroot -p nova -e 'select * from networks;'
+---------------------+------------+------------+---------+----+----------+-------------+--------
-------+--------+----------+------------+------+------+--------------------+-----------------+---
------------------+------------+------------+-------------+---------+------------+-------+-------
-----+
| created_at | updated_at | deleted_at | deleted | id | injected | cidr | netmask
| bridge | gateway | broadcast | dns | vlan | vpn_public_address | vpn_public_port |
vpn_private_address | dhc