Monday, April 21, 2014

Automated OpenStack deployment on Vagrant for development and reference test systems

When I joined Deutsche Telekom more than a year ago, I had to share a common reference test system with everyone in the office, including all operators and developers. This is quite troublesome when you have new ideas to test: you must avoid interfering with anyone, and you must make sure that your experiments will not break things and make your colleagues angry.

Figure 1: a local integration test system for experimenting with new features
As in any development process, a local integration test system is required. It must support developers editing and debugging OpenStack on the fly, as well as operators or package maintainers testing OpenStack packages. It is also nice to be able to reset the test system after dirty changes and provision it again as quickly as possible. This post introduces such a system, which is now available upstream [1].

1. Overview of the vagrant openstack project

Figure 2: deployment of OpenStack by Vagrant
Vagrant is responsible for bringing the VMs up and setting up host-only networks within VirtualBox. From there on, there are two ways to deploy OpenStack, depending on your needs: for development purposes, OpenStack is deployed by devstack; for testing packages, Puppet is used. The two deployments are configurable in a global file, and the day-to-day workflow is sketched below.
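A minimal sketch of that workflow (the node names match those used later in this post; the commands are plain Vagrant):

    # Bring up the whole environment (control, compute and neutron nodes)
    vagrant up
    # Throw away a dirty node and provision it again from scratch
    vagrant destroy -f compute
    vagrant up compute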

In my personal use case, I always need to switch between the two deployments: Puppet for testing packages and devstack for coding. Switching between the two is therefore supported, keeping the previous deployment safe, separate and reusable.

1.1 Networking

Back then I only found projects that deploy all OpenStack components in one VM. This does not satisfy our needs, because an all-in-one deployment does not reflect the behavior of the GRE data network between the different OpenStack components. Figure 2 above shows the control, compute and neutron nodes, along with the three host-only networks for management, GRE data and public traffic, all of which are brought up automatically.
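Vagrant creates these as VirtualBox host-only networks; done by hand, the equivalent would be roughly the following (the interface name and addresses are illustrative):

    # Create a host-only network and give the host an address on it
    VBoxManage hostonlyif create                     # creates e.g. vboxnet1
    VBoxManage hostonlyif ipconfig vboxnet1 --ip 192.168.100.1 --netmask 255.255.255.0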

Figure 3: SNAT for testing floating IPs
In such a testing environment you also need to test the floating IPs of the VMs over the public network. It would be rather boring if the VMs booted by nova could not connect to the Internet. For this reason, figure 3 shows how packets from inside the neutron node go out and find their way back. Packets coming from br-tun and br-int go to br-ex on the neutron node, are forwarded to the NAT interface (vboxnet0) and SNATed so that the replies can find their way back.
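A minimal sketch of the host-side rules, assuming vboxnet0 carries the public network 172.24.4.0/24 and eth0 is the host's uplink (both the subnet and the uplink name are illustrative):

    # On the host: allow forwarding and SNAT traffic coming in from vboxnet0
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -A POSTROUTING -s 172.24.4.0/24 -o eth0 -j MASQUERADE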

1.2 Storage

For a simple nova volume setup, iSCSI is chosen by default. The VBoxManage command is very useful in this case to create a VDI disk and attach it to the control node.
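A rough sketch with VBoxManage (the VM name, controller name and size depend on your setup and are illustrative here):

    # On the host: create a 10 GB VDI and attach it to the control node
    VBoxManage createhd --filename cinder-volumes.vdi --size 10240
    VBoxManage storageattach control --storagectl "SATA Controller" \
        --port 1 --device 0 --type hdd --medium cinder-volumes.vdi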

Of course, do not forget to format the storage and create a volume group cinder-volumes for cinder [2].
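Inside the control node this boils down to something like the following (the device name depends on the attach order; /dev/sdb is an assumption):

    # Turn the attached disk into the LVM volume group cinder expects
    sudo pvcreate /dev/sdb
    sudo vgcreate cinder-volumes /dev/sdb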

2. Deployment environments

2.1 Puppet

A puppetmaster VM is brought up with puppetdb installed. It pulls manifests from a configurable git repository into the directory /opt/deploy inside the VM and uses these manifests to deploy OpenStack on the other VMs. By default the manifests in [3] are provided as an example to try out the new Icehouse release with the ML2 plugin and l2population. You can also provide your own manifests by configuring a puppet repository and the site.pp to use for the node definitions:
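(The option names below are only illustrative of the global config file's structure; see the project for the real keys.)

    puppet:
      repo:    git://github.com/<your-org>/puppet-manifests.git
      site_pp: manifests/site.pp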


2.2 devstack

I like a deployment where the provisioning script runs directly inside the VM. For this reason, no puppet master is necessary for the devstack deployment. Instead, devstack is cloned and set up directly inside all VMs. Devstack is also configured to use the pip repository of OpenStack [4]. Follow this article to use the remote debugging that is already prepared in this environment.
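In essence, each VM does something like the following (a sketch; the project automates this during provisioning):

    # Inside each VM: fetch devstack and let it build the node
    git clone https://github.com/openstack-dev/devstack.git
    cd devstack && ./stack.sh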

3. Performance boost

One issue is the long deployment time, especially if you have a slow connection or the connection drops in the middle of the deployment. So I tried out every little possibility to reduce the time consumed.

3.1 Caching

When a VM is destroyed and brought up again, it must download all packages from scratch. A simple caching solution is implemented which cuts the deployment time in half. A second deployment is even faster, since all packages and the glance image are cached for further use, so Internet access is no longer necessary.

Caching is supported for both environments: all .deb packages installed by Puppet, as well as all pip packages installed by devstack, are cached and shared between the VMs. The tables below give an idea of how much time we can save by bringing up the machines with the cache enabled (Internet download speed 4 Mbit/s, each VM with 1 CPU and 1024 MB RAM).

Puppet deployment in secs
node      no cache   with cache
control   312        227
compute   110        83
neutron   109        62
total     532        ~230 (in parallel): a win of 5 minutes

Devstack deployment in secs
node      no cache   with cache
control   766        655
compute   764        341
neutron   224        208
total     1754       ~660 (in parallel): a win of 18 minutes

To test a custom package, simply replace it in the cache folder and bring up new VMs.
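For example, assuming the .deb cache is a cache/ folder shared into the VMs (the exact layout is project-specific, so treat these paths and the package name as placeholders):

    # Drop a custom build into the package cache, then rebuild a node against it
    cp nova-compute_2014.1_all.deb cache/apt/
    vagrant destroy -f compute && vagrant up compute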

3.2 Customizing your vagrant box

In addition, to reduce the vagrant up time, a custom vagrant box with pre-installed packages is used. The box is based on precise64, with packages such as the VBox Guest Additions 4.3.8, puppet, dnsmasq, r10k, vim, git, rubygems, msgpack and lvm2 pre-installed. All empty space on the box is zeroed out and all logs are wiped to keep the size as small as possible (378 MB). This cuts 70 seconds off each vm up (from 79 secs down to 8 secs).
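The zeroing trick is the standard one for shrinking a box (the VM and box names here are illustrative):

    # Inside the VM: fill free space with zeros so the image compresses well
    sudo dd if=/dev/zero of=/EMPTY bs=1M; sudo rm -f /EMPTY
    # On the host: repackage the slimmed VM as a new base box
    vagrant package --base <vm-name> --output precise64-openstack.box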

[1] vagrant openStack project
