CI/CD continued: Drone

In my previous post I showed how we set up a CI/CD server.  In this post we'll go into more detail explaining how Drone works.  I've previously written about how Drone handles secrets.  Well, that has changed a bit, so in this update with Drone 0.8.5 we'll show how it works now.  This time, we're working on a Python application.

Mac Command Line

Before we begin we need to get the Drone command line tool.  The easiest way to do this is with Homebrew.
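Something along these lines should do it (the drone/drone tap is what I recall for the 0.8-era CLI, so adjust if Homebrew can't find it):

    brew tap drone/drone
    brew install drone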

From here we need to add some environment variables to our ~/.bash_profile so that we can communicate with our Drone server.
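Essentially, we add something like this (the server URL is a placeholder; use your own Drone server's address, and the token comes from the next step):

    # tell the drone CLI where our server is and how to authenticate
    export DRONE_SERVER=https://drone.example.com   # placeholder for your Drone server URL
    export DRONE_TOKEN=<your account token>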

You can get the Drone token by logging into Drone (see the previous post) and grabbing the account token in the upper left of the web interface.

After sourcing this file, we can run a quick check against the server.
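The drone info command is a good candidate; it prints the user and email the Drone server has on record:

    # show who the Drone server thinks we are
    drone info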

If this command shows your GitHub credentials, you are good to go!

Drone vs. Jenkins: Level Set

Drone differs from Jenkins in a few ways.  First off, the configuration file for a job is not stored on the "Jenkins master".  Instead, the job's configuration file is stored in the repository with the actual code.  I love this because it makes sense that the developer owns the workflow.  It's a paradigm shift and, in my opinion, a much better one.  It also means I could take this workflow to any Drone CI/CD setup and it should work fine.

The second difference is how the workflows are created.  With Jenkins you download "plugins" and other tools and store them on the Jenkins server.  With Drone, every module is just a container, so they're all the same.  I've written a couple myself and I really like the way it's laid out.

That said, Jenkins is crusty, tried, and true, while Drone is still evolving and changing.

Drone Workflow Config File

Inside the repo of the code to be tested and released, we create a .drone.yml file.  Our final file (which will undergo several changes) is located here.

Let’s go over how this works.

First we specify the pipeline.  Each entry in this pipeline is an arbitrary name that helps me remember what is going on.
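As a rough sketch, the overall shape of ours is something like this (the notify image is a placeholder, and each step is filled in below):

    pipeline:
      notify:                      # tell the team a build has started
        image: <spark plugin>      # placeholder; the real step is shown below
      test:                        # run the test cases
        image: kubam/python-test
      docker:                      # build and publish the Docker image
        image: plugins/docker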

Notify

The first step notifies us that a build is happening.
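It looks roughly as follows; I'm assuming a Spark notification plugin here, so treat the image name as a placeholder for whichever plugin you use (its own settings, such as the room and message, would go alongside):

    notify:
      image: <your-spark-plugin-image>              # placeholder for the Spark plugin container
      secrets: [ spark_token ]                      # exposed to the plugin as SPARK_TOKEN
      environment:
        - http_proxy=http://proxy.example.com:80    # placeholder proxy, explained below
        - https_proxy=http://proxy.example.com:80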

Since we are good Cisco employees we use Spark for notifications.  You might use Slack or something else you find cooler, but we like Spark.  Notice we have a SPARK_TOKEN secret.  This secret needs to be added via the command line, as we don't want to check secrets into repositories.  That is a no-no.
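Adding it looks roughly like this with the drone CLI (the repository slug and the token value are placeholders):

    drone secret add \
      --repository <your-org>/<your-repo> \
      --name spark_token \
      --value <your Spark token>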

Test

Next up, we want to test the Python Flask image using our test cases.  To do so, we created a test Docker image called kubam/python-test that has all the necessary dependencies in it.  This way the container loads quickly and doesn't have to install dependencies on every run.  The Dockerfile is in the same repo.
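The step looks roughly as follows (the proxy host and the test command are placeholders; the real values depend on your environment and how the suite is invoked):

    test:
      image: kubam/python-test                      # pre-built image with all test dependencies
      environment:
        - http_proxy=http://proxy.example.com:80    # placeholder proxy
        - https_proxy=http://proxy.example.com:80
      commands:
        - python -m pytest                          # stand-in for however the tests are actually run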

You'll notice that in each step we add the proxy environment variables.  The reason for this is that we are behind a firewall, so for the container to reach the outside world, it has to use the proxy configuration.

Docker

Next we want to build and publish the 'artifact'.  In this case the artifact is a Docker image.  This is done with the default Drone Docker plugin, plugins/docker.

Since we are working on a branch that we want to create new images from, we use the v2.0 tag to make sure we get the image we want.
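A sketch of the publish step using the plugins/docker image; the target repo and proxy host are placeholders:

    docker:
      image: plugins/docker
      repo: <your-org>/<your-image>                 # Docker Hub repo to push to (placeholder)
      tags: [ v2.0 ]                                # tag the image for this branch
      secrets: [ docker_username, docker_password ] # registry credentials, added below
      environment:
        - http_proxy=http://proxy.example.com:80    # placeholder proxy
        - https_proxy=http://proxy.example.com:80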

Notice there are secrets here as well, so we have to add those too.
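Roughly, that looks like this (same placeholder repository slug as before; the values are your registry credentials):

    drone secret add --repository <your-org>/<your-repo> --name docker_username --value <registry username>
    drone secret add --repository <your-org>/<your-repo> --name docker_password --value <registry password>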

Let it run

Now we have a great system!  Let's let it go.  Once launched, it runs these three steps and publishes the Docker container.  Now any time someone pushes, we test the code, and if all the tests pass, we put it into production.  This is safe as long as we have sufficient test cases.  This is how we go fast, make less work for everyone, and become more productive.