Installing OpenStack with the new configuration

In Chapter 1, Introducing OpenStack, we ran packstack with the all-in-one option, taking the defaults for a test installation. Packstack can save those default options to a configuration file, which can then be edited with a text editor. We'll use this ability in this chapter to create a reusable configuration. Packstack can also apply the generated Puppet manifests to remote machines over SSH. For this to work, you will need SSH access from the controller machine (where we'll be running Packstack) to the other machines in the deployment. It's a good idea to test this before running Packstack.
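A quick way to test this is to attempt a non-interactive SSH connection to each host. The following is a minimal sketch; the host names `compute1` and `compute2` are placeholders for the machines in your own deployment plan:

```shell
# Placeholder host names -- substitute the hosts from your deployment plan.
HOSTS="compute1 compute2"

# Return 0 only if key-based (non-interactive) SSH to the host works.
# BatchMode=yes prevents ssh from falling back to a password prompt.
check_ssh() {
    ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$1" true 2>/dev/null
}

for host in $HOSTS; do
    if check_ssh "$host"; then
        echo "$host: SSH OK"
    else
        echo "$host: SSH FAILED -- fix key-based access before running packstack"
    fi
done
```

If any host reports a failure, copy your SSH public key to it (for example, with ssh-copy-id) before proceeding.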

Begin by installing a fresh copy of the operating system on the three servers outlined in the hardware table in the deployment plan. Each system needs the network interfaces specified in the network table configured before deployment, and all the other requirements in the deployment plan must also be met (that is, NetworkManager must be disabled and the RDO repository enabled).

Execute the following command on the cloud controller (controller1) to generate an answer file. We'll use this as a template for our deployment configuration:

# packstack --gen-answer-file=packstack-answers.txt

Next, edit the packstack-answers.txt file, updating the following parameters:
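Which values you change depends on your deployment plan. As a hypothetical illustration only (parameter names vary between Packstack releases, and the addresses below are placeholders for your own hosts), a multi-node layout is expressed with host-list parameters along these lines:

```ini
# Placeholder addresses -- substitute the hosts from your deployment plan.
CONFIG_CONTROLLER_HOST=192.168.0.10
# Comma-separated list of hosts that will run nova-compute.
CONFIG_COMPUTE_HOSTS=192.168.0.11,192.168.0.12
# Hosts that will run the Neutron agents.
CONFIG_NETWORK_HOSTS=192.168.0.10
```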

Now, run packstack with the new configuration:

# packstack --answer-file=packstack-answers.txt

When the installation completes, the OpenStack deployment should be verified using the same steps as in the previous chapter. Service distribution can be checked by listing the Nova services and Neutron agents:

# nova service-list

This command will output a table of all running compute services, the hosts they're running on, and their status. The output should show the nova-cert, nova-consoleauth, nova-scheduler, and nova-conductor services running on controller1 and the nova-compute service running on compute1 and compute2:

# neutron agent-list

This command will output a similar table of all running network services. The output should show the Metadata, L3, and DHCP agents running on controller1 and the Open vSwitch agent running on controller1, compute1, and compute2.
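These placement checks can also be scripted. The following is a minimal sketch, assuming you have saved the two command outputs to files named nova-services.txt and neutron-agents.txt (hypothetical file names):

```shell
# Report whether a saved service/agent listing places a service on the
# expected host. $1: file with saved output, $2: service name, $3: host.
expect_on() {
    if grep "$2" "$1" | grep -q "$3"; then
        echo "OK: $2 on $3"
    else
        echo "MISSING: $2 on $3"
    fi
}

# Expected placement from the deployment plan.
expect_on nova-services.txt nova-scheduler controller1
expect_on nova-services.txt nova-conductor controller1
expect_on nova-services.txt nova-compute compute1
expect_on nova-services.txt nova-compute compute2
expect_on neutron-agents.txt "DHCP agent" controller1
expect_on neutron-agents.txt "Open vSwitch agent" compute1
```

Any MISSING line indicates a service that did not register on the host the deployment plan calls for and is worth investigating before moving on.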