Salting the cloud for fun and profit

It has been quite some time since I posted.  This is largely because I have been busy digging into the wonderful world of OpenStack over at HP Cloud Services.

One of the things I have been working on is the new Load Balancing as a Service (LBaaS) offering with former Drizzle teammates Andrew Hutchings and David Shrewsbury.  You can find more details and whatnot about LBaaS here, but this post isn’t so much about the details of load balancing as it is about the neat CI / CD tricks we are accomplishing via Jenkins and SaltStack.

What we needed was a way to spin up, configure, and manage test slaves without necessarily wanting to manage them forever and always.  I’m sure many Jenkins users out there have dealt with borked, messy, misconfigured slave machines that cost a lot of time and effort to get sorted.  So how to solve this? Hey, I know!  We have a cloud, so let’s use it.  Let’s create on-demand machines that we can configure as needed and then blow away.  Sounds good, right? Well, like many things in life, it is easier said than done.

One of the first things we tried was the jclouds plugin for Jenkins.  While it is capable of some interesting tricks, we never really felt it integrated nicely into Jenkins (or maybe I didn’t properly grok its paradigm).  One can create Jenkins jobs that spin up VMs, but those new VMs just kind of float there.  You have to pre-configure the VM (creating a base image) at least a little bit (to allow Jenkins or anyone else to get into it) and perform a variety of other tricks to really ‘own’ the machines.  My experiments with this were rather messy and frustrating.

Enter SaltStack + salt-cloud.  Much like Puppet or Chef (or any number of other tools), Salt aims to provide a means of creating, configuring, and controlling other machines.

What I like about it is:

  • it is written in Python
  • it is rather easy to grasp (ymmv)
  • it provides a one-stop shop for creating, configuring, and controlling machines
  • it is written in Python

What salt-cloud does is provide cloud control / integration for salt.  You can define a config file with your cloud credentials, profile files that define VMs, and map files that define sets of VMs based on those profiles.  From there, you can use the tool to create individual VMs from the command line, or a whole set of VMs from a map file.  The VMs that are created are auto-registered with the salt-master, and once they are up and running they are full and proper salt-minions that you can use to do your bidding immediately >:)
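
To give a feel for the moving pieces, here is a rough sketch of a provider config and the hp_az3_large profile used below.  The OPENSTACK.* option names follow the salt-cloud documentation of that era, but the exact keys vary between salt-cloud versions; the redacted bits mirror the redactions in the Jenkins output below, and the image / size values are placeholders rather than our real ones:

# /etc/salt/cloud -- provider credentials (sketch; values are placeholders)
OPENSTACK.identity_url: https://********.identity.hpcloudsvc.com:35357/v2.0/tokens
OPENSTACK.compute_name: Compute
OPENSTACK.compute_region: az-3.********
OPENSTACK.tenant: ********-tenant
OPENSTACK.user: ********
OPENSTACK.password: ********
OPENSTACK.ssh_key_name: lbaas-saltmaster
OPENSTACK.ssh_key_file: /etc/salt/keys/lbaas-saltmaster.pem

# /etc/salt/cloud.profiles -- VM profiles (sketch)
hp_az3_large:
  provider: openstack
  image: '<an Ubuntu 12.04 image; corresponds to the imageId in the output below>'
  size: '<a large flavor; corresponds to the flavorId in the output below>'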

We use the tool for a variety of purposes, but one of the more interesting applications is testing.  Below is sample output from one of our Jenkins jobs.  We create a VM via a salt-master slave, then configure it with the python-libraclient and the LBaaS test suite.  From there, we use salt to call the test suite and report on the results.  Once the test run is done, we can blow the VM away and repeat the process as needed.  We no longer have to manage a lot of Jenkins slaves, just a few key ones, and we have a practically unlimited array of virtual machines for whatever purposes we may have.

Here, at the start of our test, we simply create a new base VM.  The -p flag specifies the profile to use; one may have a variety of profiles using different OSes, sizes, etc.


+ sudo salt-cloud -p hp_az3_large lbaas-client-install-tester1
[INFO ] Loaded configuration file: /etc/salt/cloud
[INFO ] salt-cloud starting
[WARNING ] 'AWS.id' not found in options. Not loading module.
[WARNING ] 'EC2.id' not found in options. Not loading module.
[INFO ] Creating Cloud VM lbaas-client-install-tester1
[WARNING ] Private IPs returned, but not public... checking for misidentified IPs
[WARNING ] 10.2.154.133 is a private ip
[WARNING ] 15.185.228.34 is a public ip
Pseudo-terminal will not be allocated because stdin is not a terminal.
Warning: Permanently added '15.185.228.34' (ECDSA) to the list of known hosts.
Pseudo-terminal will not be allocated because stdin is not a terminal.
Warning: Permanently added '15.185.228.34' (ECDSA) to the list of known hosts.
Pseudo-terminal will not be allocated because stdin is not a terminal.
Warning: Permanently added '15.185.228.34' (ECDSA) to the list of known hosts.
* INFO: /tmp/deploy.sh -- Version 1.5.1
* WARN: Running the unstable version of bootstrap-salt.sh
* INFO: System Information:
* INFO: CPU: GenuineIntel
* INFO: CPU Arch: x86_64
* INFO: OS Name: Linux
* INFO: OS Version: 3.2.0-32-virtual
* INFO: Distribution: Ubuntu 12.04
* INFO: Installing minion
* INFO: Found function install_ubuntu_deps
* INFO: Found function config_salt
* INFO: Found function install_ubuntu_stable
* INFO: Found function install_ubuntu_restart_daemons
* INFO: Running install_ubuntu_deps()

[INFO ] Salt installed on lbaas-client-install-tester1
[INFO ] Created Cloud VM lbaas-client-install-tester1 with the following values:
[INFO ] private_ips: [u'10.2.154.133']
[INFO ] extra: {'updated': u'2013-04-11T16:02:30Z', 'hostId': u'', 'created': u'2013-04-11T16:02:29Z', 'key_name': u'lbaas-saltmaster', 'uri': u'https://az-3.********.compute.hpcloudsvc.com/v1.1/42064206420642/servers/891293', 'imageId': u'48335', 'metadata': {}, 'password': u'thismachinewontlivelongenoughforyoutouseit', 'flavorId': u'103', 'tenantId': u'42064206420642'}
[INFO ] image: None
[INFO ] _uuid: None
[INFO ] driver:
[INFO ] state: 4
[INFO ] public_ips: [u'15.185.228.34']
[INFO ] size: None
[INFO ] id: 891293
[INFO ] name: lbaas-client-install-tester1

Once the VM has been created and salt has been installed, we can call state.highstate to configure the machine with the LBaaS client and test suite:


+ sudo salt lbaas-client-install-tester1 state.highstate
lbaas-client-install-tester1:
----------
State: - pkg
Name: required_packages
Function: installed
Result: True
Comment: The following package(s) were installed/updated: python-pip, python-novaclient, git, python-requests, python-prettytable.
Changes: python-novaclient: {'new': '2012.1-0ubuntu1', 'old': ''}
python-setuptools: {'new': '0.6.24-1ubuntu1', 'old': ''}
git: {'new': '1:1.7.9.5-1', 'old': ''}
liberror-perl: {'new': '0.17-1', 'old': ''}
python-pip: {'new': '1.0-1build1', 'old': ''}
python-distribute: {'new': '1', 'old': ''}
python-requests: {'new': '0.8.2-1', 'old': ''}
git-man: {'new': '1:1.7.9.5-1', 'old': ''}
python-greenlet: {'new': '0.3.1-1ubuntu5.1', 'old': ''}
python-gevent: {'new': '0.13.6-1ubuntu1', 'old': ''}
git-completion: {'new': '1', 'old': ''}
python-prettytable: {'new': '0.5-1ubuntu2', 'old': ''}
----------
State: - git
Name: https://github.com/pcrews/libra-integration-tests.git
Function: latest
Result: True
Comment: Repository https://github.com/pcrews/libra-integration-tests.git cloned to /root/libra-integration-tests
Changes: new: https://github.com/pcrews/libra-integration-tests.git
revision: f6290d551188c9239248f0cd0ddcf22470c444d3
...
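
For the curious, the states behind that highstate run might look roughly like the sketch below.  It is reconstructed from the output above (the required_packages ID and the git checkout of libra-integration-tests both show up there); the SLS file name and the top-file match pattern are placeholders of mine, and our real states carry a bit more than this:

# /srv/salt/top.sls -- point the test minions at the state (sketch)
base:
  'lbaas-client-install-tester*':
    - lbaas_client_tester

# /srv/salt/lbaas_client_tester.sls -- reconstructed from the highstate output above
required_packages:
  pkg.installed:
    - names:
      - python-pip
      - python-novaclient
      - git
      - python-requests
      - python-prettytable

https://github.com/pcrews/libra-integration-tests.git:
  git.latest:
    - target: /root/libra-integration-tests
    - require:
      - pkg: required_packages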

From there, we can use salt to execute the test suite through the client on the VM and get the results back:

sudo salt \lbaas-client-install-tester1 cmd.run_all cwd='/root/libra-integration-tests' 'python loadbalancer_integration.py --os_username=******** --os_password=******** --os_tenant_name=********-tenant --os_auth_url=https://********.identity.hpcloudsvc.com:35357/v2.0/ --os_region_name=******** --driver=python-client --max_backend_nodes=50 '
0
lbaas-client-install-tester1:
----------
pid:
6567
retcode:
0
stderr:
test_createLoadBalancer (tests.create_loadbalancer.testCreateLoadBalancer)
test creation of loadbalancers for libra ... 20130411-160445 PM Setting up for testcase:
20130411-160445 PM - test_description: basic_positive_name
20130411-160445 PM - lb_name: the quick, brown fox jumps over the lazy dog.
20130411-160445 PM - nodes: [{'port': '80', 'address': '15.185.42.06'}, {'port': '80', 'address': '15.185.42.07'}]
20130411-160445 PM - expected_status: 200
20130411-160445 PM load balancer id: 132651
20130411-160445 PM load balancer ip addr: 15.185.226.182
20130411-160445 PM Validating load balancer detail...
20130411-160446 PM Validating load balancer list...
20130411-160447 PM Validating load balancer nodes url...
20130411-160447 PM testing loadbalancer function...
20130411-160447 PM gathering backend node etags...
20130411-160447 PM testing lb for function...
20130411-160447 PM Deleting loadbalancer: 132651
ok
...


After the tests are finished, we simply blow the VM away via the magic of salt-cloud again:

+ sudo salt-cloud -y -d lbaas-client-install-tester1
[INFO ] Loaded configuration file: /etc/salt/cloud
[INFO ] salt-cloud starting
[WARNING ] 'AWS.id' not found in options. Not loading module.
[WARNING ] 'EC2.id' not found in options. Not loading module.
[INFO ] Destroying VM: lbaas-client-install-tester1
[INFO ] Destroyed VM: lbaas-client-install-tester1
- True
Finished: SUCCESS

From there, it is easy to create new variations on this work: one can create multiple client minions for stress testing, clients running different sets of tests, etc.  In addition to this, we are doing some neat tricks with Jenkins + salt for monitoring and management.  If you are interested in hearing more and learning some nifty cloud / salt voodoo, let me know in the comments section 😉
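
As a taste of that, scaling out is mostly a matter of a salt-cloud map file.  Something like the sketch below (the VM names are invented for illustration) lets a single salt-cloud run build, and later destroy, a whole set of test clients from the same profile:

# /etc/salt/cloud.map -- sketch: several test clients built from the hp_az3_large profile
hp_az3_large:
  - lbaas-client-stress-tester1
  - lbaas-client-stress-tester2
  - lbaas-client-stress-tester3

Something along the lines of sudo salt-cloud -m /etc/salt/cloud.map -P then brings them all up in parallel, and the same map with -d tears them back down again.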
