In Part 1 we gave the general outline of what we are trying to do, the tools we’re using, and the architecture of the application.
In this part (Part 2) we’re going to build the development environment with Ansible. This includes Jenkins, GitLab, a private Docker registry, and a proxy server so we can point DNS at the site.
In Part 3 we configure the load balancers and the web site.
In Part 4 we configure Jenkins and put it all together.
The Ansible code can be found on GitHub in the Cisco Live 2015 repo.
Get the CoreOS Image
We’ll need an image to work with. While we could do this on the command line, it’s not something we’re going to repeat too often, so I think we’re OK doing this the lame way and using the GUI.
A simple search takes us to the OpenStack page on the CoreOS site. I just used the current stable version. It’s pretty simple; you follow their instructions:
$ wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
$ bunzip2 coreos_production_openstack_image.img.bz2
I downloaded this to a Linux server that was reachable from the Internet. From there, I went into the Cisco OpenStack Private Cloud dashboard and, under Images, created a new one.
You can also do this through the command line just to make sure you’re still connecting correctly:
glance image-create --name CoreOS647.2.0 \
  --container-format bare --disk-format qcow2 \
  --file coreos_production_openstack_image.img \
  --is-public True
Ok, now we have a base image to work with. Let’s start automating this.
Ansible Base Setup

I’ve set up a directory called ~/Code/lawngnomed/ansible where all my Ansible configuration will live. I’ve spoken about setting up Ansible before, so in this post we’ll just go over the things that are unique. The first thing we need to do is set up our development environment. Here’s the Ansible playbook for creating the development node, which I gave the hostname ‘ci’:
- name: Ensure CI environment is ready
  connection: local
  hosts: local
  vars_files:
    - vars/metacloud_vars.yml
  tasks:
    - name: Ensure CI environment is deployed
      nova_compute:
        state: present
        auth_url: "{{ lookup('env', 'OS_AUTH_URL') }}"
        login_username: "{{ lookup('env', 'OS_USERNAME') }}"
        login_password: "{{ lookup('env', 'OS_PASSWORD') }}"
        login_tenant_name: "{{ lookup('env', 'OS_TENANT_NAME') }}"
        name: ci
        # the image is defined in vars/metacloud_vars.yml
        image_name: "{{ coreos_image_name }}"
        key_name: "{{ keypair }}"
        # 3 is m1.large
        flavor_id: 3
        meta:
          group: ci
        security_groups: "{{ security_group }}"
        floating_ip_pools:
          - "{{ floating_ip_pool }}"
        user_data: "{{ lookup('file', 'files/cloud-config.sh') }}"

- name: Make sure CI stuff is ready
  hosts: ci
  roles:
    - registry
    - gitlab
    - jenkins
    - ci-proxy
This playbook does the following:
- Creates a new server called ‘ci’
- ci will use a security group I already created
- ci will use a key-pair I already created.
- ci will use the cloud-config.sh script I created as part of the boot up.
- Once the node is created, applies the following roles to it: registry, gitlab, jenkins, and ci-proxy
The metacloud_vars.yml file contains most of the environment-specific variables. Here is the file so you can see it; replace the values with your own:
---
keypair: tco-gold

# added port 22 to the security group
security_group: default,loadbalancer

# we use CoreOS for this application
#coreos_image_id: bc3c7ad4-33d5-4702-a01a-b4b65b5c14a3
#coreos_image_name: "CoreOS 633.1.0"
coreos_image_name: "coreos-jenkins-slave-6331"

#image m1large: 3

# this is the floating IP pool that we are able to get IPs from.
floating_ip_pool: nova
You can see I went through a few images as I tried this out, and eventually settled on the same CoreOS image that my Jenkins slaves run on. We’ll get to that soon.
You’ll need to create security groups so that all the services can be accessed. Mine opens the ports the CI services listen on; the other security group allows ports 80 and 22 so I can SSH in and reach things from a web browser.
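I didn’t automate the group creation, but if you want the CLI equivalent, rules along these lines could be added with the classic nova client. This is just a sketch: the group name and the exact port list are my assumptions based on the services this post stands up.

nova secgroup-create ci-services "ports for the CI services"
nova secgroup-add-rule ci-services tcp 22 22 0.0.0.0/0        # ssh
nova secgroup-add-rule ci-services tcp 80 80 0.0.0.0/0        # nginx proxy
nova secgroup-add-rule ci-services tcp 8080 8080 0.0.0.0/0    # jenkins
nova secgroup-add-rule ci-services tcp 10080 10080 0.0.0.0/0  # gitlab http
nova secgroup-add-rule ci-services tcp 10022 10022 0.0.0.0/0  # gitlab ssh
nova secgroup-add-rule ci-services tcp 5000 5000 0.0.0.0/0    # docker registry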
The next important file is the files/cloud-config.sh script. With this script I needed to accomplish three things:
- Get Java on the instance so that Jenkins could communicate with it.
- Get Python on the instance so Ansible could run on it.
- Make it so Docker can communicate with an insecure registry.
CoreOS by itself tries to be as bare as it gets, so after trolling the Internet for a few days I finally cobbled together a script that would do the job:
#!/bin/bash
## Part 1: set up python (pypy, since CoreOS ships no python)
PYPY_VERSION=2.4.0
HOME=/home/core
mkdir -p $HOME
cd $HOME
# block-until-url is a CoreOS utility that waits until the URL is reachable
block-until-url https://bitbucket.org/pypy/pypy/downloads/pypy-$PYPY_VERSION-linux64.tar.bz2
wget -O - https://bitbucket.org/pypy/pypy/downloads/pypy-$PYPY_VERSION-linux64.tar.bz2 | tar -xjf -
mv -n pypy-$PYPY_VERSION-linux64 pypy

## library fixup: pypy expects libtinfo, which CoreOS doesn't ship
mkdir -p pypy/lib
ln -snf /lib64/libncurses.so.5.9 $HOME/pypy/lib/libtinfo.so.5

## wrapper so Ansible can invoke "python" and get pypy
mkdir -p $HOME/bin
cat > $HOME/bin/python <<EOF
#!/bin/bash
LD_LIBRARY_PATH=$HOME/pypy/lib:\$LD_LIBRARY_PATH exec $HOME/pypy/bin/pypy "\$@"
EOF
chmod +x $HOME/bin/python
$HOME/bin/python --version

## Part 2: allow docker to talk to our insecure registry
## (the \$ escapes keep the systemd variables literal in the unit file)
mkdir -p /etc/systemd/system/
cat > /etc/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=docker.socket early-docker.target network.target
Requires=docker.socket early-docker.target

[Service]
Environment=TMPDIR=/var/tmp
EnvironmentFile=-/run/flannel_docker_opts.env
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
ExecStart=/usr/lib/coreos/dockerd --daemon --host=fd:// --insecure-registry ci:5000 \$DOCKER_OPTS \$DOCKER_OPT_BIP \$DOCKER_OPT_MTU \$DOCKER_OPT_IPMASQ

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl restart docker
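Before the roles run against the node, it’s worth confirming that Ansible can actually talk to it through this pypy build. A quick sanity check could look like this; the inventory host name ‘ci’ is an assumption, and the interpreter path is the wrapper the script just wrote:

ansible ci -m ping -u core \
    -e ansible_python_interpreter=/home/core/bin/python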
Roles
A few directories with files were created:
roles
|
|-ci-proxy
| |
| |- files/default.conf
| |- tasks/main.yml
|
|-gitlab
| |
| |- vars/main.yml
| |- tasks/main.yml
|
|- jenkins
|  |- tasks/main.yml
|
|- registry
   |- tasks/main.yml
Let’s go through each role’s tasks:
ci-proxy
This role creates a Docker container that acts as a reverse proxy, so that when a request like http://jenkins.lawngnomed.com comes in, the proxy forwards it to the right container.
The task file below copies the nginx configuration file onto the host, then runs the container with that file mounted into it.
---
- name: Ensure nginx files are copied.
  copy: src={{item.src}} dest={{item.dest}}
  with_items:
    - { src: ../files/default.conf, dest: /vol/nginx/ }
  sudo: true

- name: Ensure NGINX proxy is up
  docker: image="nginx" name=proxy volumes="/vol/nginx:/etc/nginx/conf.d" ports=80:80,443:443
The nginx default config file, which lands at /etc/nginx/conf.d/default.conf inside the container, contains the following:
server {
    server_name jenkins.lawngnomed.com;
    location / {
        proxy_pass http://ci:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 150;
        proxy_send_timeout 100;
        proxy_read_timeout 100;
        proxy_buffers 4 32k;
        client_max_body_size 8m;
        client_body_buffer_size 128k;
    }
}

server {
    listen 80;
    server_name gitlab.lawngnomed.com www.gitlab.lawngnomed.com;
    location / {
        proxy_pass http://ci:10080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 150;
        proxy_send_timeout 100;
        proxy_read_timeout 100;
        proxy_buffers 4 32k;
        client_max_body_size 8m;
        client_body_buffer_size 128k;
    }
}

server {
    server_name registry.lawngnomed.com;
    location / {
        proxy_pass http://ci:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 150;
        proxy_send_timeout 100;
        proxy_read_timeout 100;
        proxy_buffers 4 32k;
        client_max_body_size 8m;
        client_body_buffer_size 128k;
    }
}
There could be some issues with this file, but it seems to work. Jenkins and GitLab occasionally redirect to bad URLs, but on the whole everything works with this configuration. I’m open to ideas for improving it.
Once this role is up, you can reach the URLs from the outside.
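Before DNS points at the floating IP, you can verify the name-based routing by faking the Host header with curl; the IP below is just a placeholder for whatever floating IP your instance got:

curl -H "Host: jenkins.lawngnomed.com" http://203.0.113.10/
curl -H "Host: gitlab.lawngnomed.com" http://203.0.113.10/
curl -H "Host: registry.lawngnomed.com" http://203.0.113.10/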
GitLab
GitLab requires a Redis container for its key-value store and a PostgreSQL database. We use Docker for both of these and link them together. The Ansible task file looks as follows:
---
- name: Ensure Redis is up for Gitlab
  docker: image="sameersbn/redis:latest" volumes=/vol/redis/data:/var/lib/redis name=redis

- name: Ensure PostgreSQL is up
  docker: image="sameersbn/postgresql:latest" volumes=/vol/postgresql/data:/var/lib/postgresql env="DB_NAME=gitlabhq_production,DB_USER=gitlab,DB_PASS={{gitlab_db_password}}" name=postgresql
  register: result

- name: Wait for a few seconds if postgres just came up...
  pause: seconds=5
  when: result|changed

- name: Ensure Gitlab is up
  docker: image="sameersbn/gitlab:latest" volumes=/vol/gitlab/data:/home/git/data env="DB_TYPE=postgres,GITLAB_PORT=10080,GITLAB_SSH_PORT=10022" links="postgresql:postgresql,redis:redisio" ports=10022:22,10080:80 name=gitlab
Notice that gitlab_db_password is a variable defined in the role’s vars/main.yml file. I set it up and then encrypted the file using Ansible Vault. See my post on how that’s done; it’s a pretty cool technique I learned from our Portland Ansible Users Group.
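If you haven’t used Vault before, the basic flow is just a couple of commands. This sketch assumes the vars file path from the role layout shown earlier:

ansible-vault encrypt roles/gitlab/vars/main.yml   # encrypt the secrets in place
ansible-vault edit roles/gitlab/vars/main.yml      # edit later without decrypting to disk
ansible-playbook ci.yml --ask-vault-pass           # supply the vault password at run time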
Jenkins
The Jenkins Ansible task file is pretty straightforward. The only catch is to make sure the directory is owned by the jenkins user (UID 1000 inside the container) and that you mount the directory into the container.
---
- name: Make sure the Jenkins home directory is in place
  file: path=/vol/jenkins_home owner=1000 state=directory
  sudo: yes

- name: Ensure Jenkins is up
  docker: image="jenkins" volumes=/vol/jenkins_home:/var/jenkins_home name=jenkins ports=8080:8080,50000:50000
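If Jenkins comes up but can’t write its home directory, check the numeric owner on the host; the official jenkins image runs as UID 1000:

ls -ldn /vol/jenkins_home   # the owner column should read 1000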
Registry
No tricks here; we’re just using the latest registry image from Docker Hub. This task pulls it and runs it.
---
- name: Ensure Docker Registry is up.
  docker: image="registry:latest" volumes=/vol/docker-registry:/docker env="STORAGE=local,STORAGE_PATH=/docker" ports=5000:5000 name=registry
Loose Ends
There are a few parts that I didn’t automate but probably should have:
- The instance I created mounts a persistent storage device that I created in Metacloud. There are two pieces missing (a possible automation sketch follows this list):
- It doesn’t create the volume in OpenStack if it’s not there yet.
- It doesn’t mount the volume onto the development server.
- For speed, it’s better to pull the Docker images from the local registry, so technically we should tag all the images we’re using and push them into it. This is a chicken-and-egg problem, though, because you need the registry up before you can pull images from it, so I left it that way.
- There are still some things I needed to finish, like putting some keys and other items Jenkins needs into the /vol directory. It’s not perfect, but it’s pretty good.
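For the record, the volume piece could be scripted with the classic cinder and nova clients, roughly like this. The volume name is made up, and you’d still need to partition and mount it on the instance as shown below:

cinder create --display-name ci-data 20        # a 20GB volume
nova volume-attach ci <volume-id> /dev/vdb     # attach it to the ci instance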
Creating and mounting the volume was pretty quick once the instance was up. First I created the volume and attached it using the Horizon dashboard that Metacloud provides.
This was just a 20GB volume. Once the instance was up I ran a few commands like:
fdisk /dev/vdb   # create a new partition, accepting the defaults: n, enter, enter, then w to write it out
mkfs.ext2 /dev/vdb1
mkdir -p /vol
mount /dev/vdb1 /vol
This way, all of our information persists even if the containers or the instance terminate.
Finishing up the Development Server
Once you get to this point, you should be able to bring it all up with:
ansible-playbook ci.yml
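A couple of assumptions are baked into that command: your inventory resolves the ‘ci’ host (pointing at the floating IP, user core, and the pypy interpreter from cloud-config), and since the GitLab vars file is vault-encrypted you’ll want the vault flag. The inventory file name here is my own:

ansible-playbook -i hosts ci.yml --ask-vault-pass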
That should do it! Once you are in, you may want to tag all of your images so that they can be pulled from the local Docker registry. For example, once you log in you could run:
core@ci ~ $ docker tag registry ci:5000/registry
core@ci ~ $ docker push ci:5000/registry
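To push everything in one shot, a small loop like this would cover the images used in this post; the list is my assumption based on the roles above, and ${img##*/} just strips the sameersbn/ namespace from the local tag:

for img in nginx jenkins registry sameersbn/redis sameersbn/postgresql sameersbn/gitlab; do
  docker tag "$img" "ci:5000/${img##*/}"
  docker push "ci:5000/${img##*/}"
done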
At this point, the idea is that you should be able to go to whatever public IP address was assigned to you and access:
jenkins:  <IP>:8080
gitlab:   <IP>:10080
registry: <IP>:5000
If you’re there, then you can get rolling on the next step: Ansible scripts to deploy the rest of the environment.
In Part 3 we’ll cover the Ansible for bringing up the load balancers and web servers. We’ll also snapshot an image to turn it into a Jenkins slave.