ESXi Kickstart and automated vCenter registration

I haven’t worked on VMware for a while but needed to work on a project to automatically install ESXi on a few servers.  I created a tool called KUBAM that was originally for deploying Kubernetes on UCS Bare Metal (KUBAM), but I have realized there are a lot of people who can benefit from vMedia policy installations.  I’ve written a few articles on this method here and there.

When looking to deploy ESXi we had made the kickstart portion work perfectly and had even upgraded it to 6.7.  However, when we looked for information on how to automatically register the ESXi servers with vCenter after the installation concluded, the best we found was a post from the legendary William Lam at VMware written in 2011.  Here we are, almost 8 years later, still trying to accomplish the same thing.  The problem is that ESXi has since moved from Python 2 to Python 3, so parts of the script don’t work anymore.  I’ve updated it in a dirty way to make it work and checked the code into the KUBAM project.  It could use some cleaning up to make it as nice as William’s Python; I may do that as time goes on.  For now, here is a sketch of the code — the vCenter address, credentials, and MOB object ID below are placeholders you would pull from William’s original and your own environment:
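
    import ssl
    import urllib.request

    # ESXi now ships Python 3; urllib2 was split into urllib.request/urllib.error,
    # so the old "import urllib2" lines all had to change.
    # Disable cert checks since enterprise vCenters rarely have a real root authority.
    ssl._create_default_https_context = ssl._create_unverified_context

    VCENTER = "10.93.140.10"              # vCenter IP (we only have IPs, no DNS)
    USER = "administrator@vsphere.local"
    PASS = "password"

    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, "https://%s/mob/" % VCENTER, USER, PASS)
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(mgr),
        urllib.request.HTTPCookieProcessor())

    # Fetch the MOB page for the host folder; the full script POSTs the
    # AddStandaloneHost_Task form from here with this host's connect spec.
    resp = opener.open("https://%s/mob/?moid=group-h4" % VCENTER)  # moid varies per vCenter
    print(resp.read().decode("utf-8"))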

A few notes here:

  • urllib2 was split into different urllib packages so that is no longer included
  • The top line sets the default context to not do cert checks.  Usually I find in enterprise companies there are no certs so people just accept the cert even though there is no root authority.
  • I only have IP addresses for hostnames, but if you have DNS then you will probably want to add contents from William’s script.
  • I’m pretty lazy with this and not updating the logs.  I’ll probably go back and spend some time doing that in the future if I need to.

The code is found in the KUBAM project here.

To use this code, you can include it in your ESXi Kickstart file.  An example in the same directory is here.  Notice the key part is the last few lines, which look something like:
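
    %firstboot --interpreter=busybox
    # Grab the registration script from the KUBAM installer and run it.
    # (The server address and path here are placeholders for your environment.)
    wget http://kubam-server/kubam/vcenter.py -O /tmp/vcenter.py
    python /tmp/vcenter.py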

We put the script (renamed vcenter.py) in the ~/kubam directory of the installer.  Then as the machine boots up, it grabs the file and runs the script, registering itself.

The install is nice and without glamour.  It simply adds a new server to the cluster:

With my example I didn’t add another user account, but I recommend it.  I also didn’t base64-encode the passwords, but that is something you could do as well.


A brief history of on-prem serverless development

My colleague Pete Johnson at Cisco has released a blog post about a project called FONK.  I wanted to talk a little bit about why it’s important.

iOS Developers

Let’s first start back at the dawn of mobile application development.  Pretend you are an iPhone developer back in 2008.  You know Objective-C, you know how to make killer user interfaces, and you make a fun game about dwarfs hunting butterflies. You release the game, the app is good, and you are happy.

As the game gets bigger, you realize you’d like to add features so that certain parts of the game are stored in the cloud.  For example, you want to keep the all-time highest score for all players who have ever played the game, along with their avatar names.  Bragging rights are cool, and you want your game to be cool.

The problem is, you don’t have any expertise in this.  You know how to write Objective-C but you don’t know how to manage servers.  In fact, you have probably never even installed an operating system in your life.  Someone who is really good at spinning up VMs, putting NGINX on them, and running some backend Ruby on Rails code could do it, but that is more hassle than you want.  All you want is a backend that you don’t have to manage, where the game can upload and retrieve all-time highest player scores and display them.

To do this, you know you want a place in the cloud that accepts a JSON POST request to record the highest score, and you want to be able to issue a GET request to retrieve the highest scores.  If you don’t know what POST or GET are, they are basic HTTP request methods.  Read more here.  You don’t want to manage servers, virtual machines, or even containers.  (Also, containers don’t really get popular until 2014, so we are still 6 years away from that.)  Something like this is all you want to deal with (hypothetical endpoints):
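
    # Send the new high score (endpoint names are made up for illustration):
    curl -X POST https://api.dwarfgame.example.com/scores \
         -H "Content-Type: application/json" \
         -d '{"avatar": "GrumpyAxe", "score": 98200}'

    # Retrieve the all-time highest scores:
    curl https://api.dwarfgame.example.com/scores/top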

Backend-as-a-Service

What you want is a service.  And from 2008 we only have to wait until about 2011 before two cool solutions emerge for us: Firebase and Parse.  Firebase was bought by Google and still lives on, but Parse was bought by Facebook and closed down.  Both of these companies offered a Backend-as-a-Service for your applications.  Really cool.  Now you could just use their GUI, put in database calls, and then call them from your mobile app.  You didn’t have to learn about managing VMs and all that.  It was great!

The other thing that people started to realize is that it is good to accompany a mobile app with a web page.  To keep things consistent, it would be great if the web page called the same APIs as the mobile app.  That way you only need to maintain one backend that both the web and mobile app can call.  Great!  How do we do that?  Remember, we still don’t want to manage VMs, OSes, or even the stacks that run on them.  Well, it turns out that AWS offers us static web page hosting on S3.

Static web page hosting on S3 doesn’t sound like something that would call APIs.  But let’s understand what it is.  Our dwarf game now hires a web developer who knows JavaScript.  When you visit a website that has JavaScript, the JavaScript doesn’t run on the server.  JavaScript runs in your browser.  What happens is your browser goes to the URL and gets back static files: JavaScript, CSS, and HTML that execute in your browser.  So we can actually host these in a public S3 bucket, point people there, and they download the code.  You get replication, durability, high availability, and uptime for free!  Still no VMs, no containers, no operating systems.

Enter AWS

It is now 2014 and you are a smart person at AWS.  You see this Backend-as-a-Service trend starting and realize you could make some more money.  You also notice that some of these Backend-as-a-Service companies call your own products, like your databases.  You’re also worried about the threat of Google and Facebook entering the market and taking more of your customers.  What do you do?  You build a Backend-as-a-Service by stitching together your own services.  The advantage you have at AWS: you already have some well-known database services (RDS, DynamoDB, etc.).  You also own S3, and your customers are already using it for static pages.  You also own API Gateway, which can front services on the backend, but it needs something a little more powerful behind it.  So how can you complete the picture?  You introduce AWS Lambda.

In December 2014 AWS introduced AWS Lambda, and many people were puzzled by the idea of Functions-as-a-Service.  What good was it?  Why would you want to call a function based on an event? If you keep the entire architecture in mind, you realize that AWS Lambda just completes the picture.  No VMs, no containers, no operating systems.  No expertise needed to run it.  Just put code in their GUI and off you go.  Since you already use S3, you can easily put your functions in AWS and you are golden.

Let’s realize that this way of running applications is not the right way for every application.  Remember, our dwarf game is pretty simple.  We just want to store the high score and retrieve it.  We may choose to add user identities and logins later as well, and our architecture can handle that.  But at some point, if the complexity on the backend gets to be too much, we’ll have to hire a backend engineer.  For now, though, we can still write it all in JavaScript and put our backend on AWS Lambda, API Gateway, and DynamoDB.

Kubernetes

In 2014 something happened that probably wasn’t meant to happen.  Google introduced Kubernetes to the world, and that part was intentional.  After all, nobody was using their cloud, and they wanted to make people realize that they could run containers on their cloud better than anyone else.  By open sourcing Kubernetes, Google could make a splash and assert some dominance.  That was intentional.  What wasn’t intentional was that when 2015 hit and Kubernetes 1.0 was released, it actually started adding some parity between public clouds and private clouds.

Let me explain.  To have an AWS experience on-prem, the best thing you could do was OpenStack or some other do-it-yourself project.  OpenStack was pretty complicated, to put it nicely.  Very few orgs successfully implemented OpenStack on-prem, and so many people kept migrating to the public cloud.  Kubernetes has been so impactful that companies that tried to resist or compete against it have thrown in the towel and embraced it.  Check out the list:

  • Google: Offered GKE, the initial cloud-based Kubernetes platform
  • Red Hat (now IBM): Completely threw away the OpenShift PaaS code and rebuilt it entirely on Kubernetes
  • Pivotal: Had their own solution but now pushes PKS
  • Azure: Offers AKS, its Kubernetes service
  • Amazon: AWS resisted and offered several different ways to manage containers (ECS, Fargate, etc.) but finally offered native Kubernetes with EKS

Many of these organizations probably would rather have owned the solution, but everyone is now a Kubernetes provider!  The way of the PaaS is dying.  They are morphing into opinionated Kubernetes services.

But the biggest impact of all, to me, was that it started giving parity between public clouds and private clouds.  The same Kubernetes platform that runs on AWS, GCP, Azure, or PKS can run in your own data center on your own bare metal servers.  That just leveled the playing field and made it easier for me to bring my apps back on-prem.  (If I wanted to, but of course why would I?)  As the dwarf game developer, I have no need to go back on-prem.  But if I’m a big enterprise with lots of space in my datacenter, this might look interesting.

Kubernetes is simple to install and much easier to manage than OpenStack, but it still has its challenges.  However, as enterprises mature, perhaps it won’t be so bad?  After all, there are solutions now from vendors, including Cisco, that offer Kubernetes with enterprise support.

FaaS, Object Storage, NoSQL, Kubernetes

Back to the mobile app world.  When AWS Lambda came out, some people made a framework called Serverless (check out serverless.com).  People thought this whole Functions-as-a-Service thing was pretty rad.  But what makes it even radder now is that we have a solution for running it in our own data centers using just Kubernetes.

I worked with Pete a little on his FONK project by submitting some code samples for the Guestbook app running on Kubeless.  For a developer, not having to worry about creating Dockerfiles and Kubernetes YAML files, and being able to just write code and get it working, is very appealing.  This serverless business is still pretty cutting edge, and even though we are 3 years from when AWS came out with Lambda, there is still a lot of buzz about it and people jumping into the space.  What I like about FONK is that it levels the playing field between what I can get on public clouds and private clouds.  Certainly many people would argue that using a public cloud is the only way to go, but I see great dangers in this:

  1. I am not comfortable with a company as big as Amazon making even more money off of me and holding me hostage.  I know they are good. I’ve used their stuff. I just don’t want to be locked in.
  2. At a certain scale it is more cost effective to run Kubernetes on-prem than in any public cloud.  Granted, you must have the expertise.
  3. I can get better performance in my own datacenter.  Sure, if I have to burst, public cloud is great.  But for apps that are always chugging along, I like my on-prem stuff.

You can entirely disagree with me and you could be right.  But just remember: in engineering it is always about tradeoffs.  It’s not about wrong or right.  It’s about which tradeoffs you want.

 

Technical Details on Gas on Ethereum

I’ve been working off and on over the last year on different smart contracts.  What follows are some technical details I sent to some of my colleagues here at Cisco regarding gas price and gas limits — two important parts of Ethereum that are not understood very well: units and gas.

Units

1 ether in Ethereum is divisible to 18 decimal places.  The smallest unit is called a wei.  Thus, a 1 followed by 18 zeros is the number of wei in 1 ether.  The other important unit to know is the gwei, which sits halfway between 1 ether and 1 wei in terms of zeros.  So:
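
    1 ether = 10^18 wei = 1,000,000,000,000,000,000 wei
    1 gwei  = 10^9 wei  = 1,000,000,000 wei
    1 ether = 10^9 gwei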

In Solidity everything is stored in wei values.  There are no fractions or decimals; we just work with integers.  These long runs of zeros are what provide a way to have decimals without having decimals.

There are several functions I’ve used in some recent test cases that do this conversion, as it is much easier.  Here are some examples (using the web3.js 0.2x API against a connected web3 instance):
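
    // Format a wei value as ether, and convert ether/gwei amounts to wei:
    web3.fromWei('1000000000000000000', 'ether')   // => '1'
    web3.toWei('1', 'ether')                       // => '1000000000000000000'
    web3.toWei('20', 'gwei')                       // => '20000000000'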

Essentially, the fromWei function lets you format a value from wei as something else, like ether.  The toWei function lets you convert from ether to wei without having to write out all the zeros.  In Ethereum we also have this whole BigNumber class to deal with.  BigNumber is the class that allows us to store and manipulate these giant numbers, which are too big for native JavaScript integers.  It’s important to understand the units before understanding how gas works.

Gas

Gas is a unit, or number of steps, to be executed for your contract. Each specific operation/computation consumes a fixed number of gas units, set by the Ethereum community. For example, to add two numbers the EVM consumes 3 gas units. You set a gas limit in your transaction to say how many steps you are willing to pay for.  The unit of gas is not ether, gwei, or wei.  It’s just an integer representing a number of steps.

To get basic fees to get some idea of how much gas, note the following:

  • 32k gas to create a contract
  • 21k gas for a normal transaction
  • 200 gas per byte of contract bytecode stored
  • Transaction data costs 68 gas per non-zero byte and 4 gas per zero byte
  • The cost of running the constructor

See this response on Ethereum Stack Exchange.

The limit of computation per block (the block gas limit) is not constant; miners can set this.

VALUE field – the amount of wei to transfer from the sender to the recipient.  This is the value we are going to send either to another user or to a contract.

GASPRICE field – the fee the sender is willing to pay per unit of gas.  One unit of gas corresponds to the execution of one atomic instruction.  Gas price is denominated in wei, and since 1 wei = 10^-18 ether (10^18 wei = 1 ether), the numbers get quite large; that’s why we usually quote gas prices in gwei (1 gwei = 1,000,000,000 wei).

The total cost of a transaction is at most gas limit × gas price (you pay for the gas actually used; the limit caps it).  For example, a plain transfer at a 20 gwei gas price costs:
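
    21,000 gas × 20 gwei/gas = 420,000 gwei = 0.00042 ether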

Gas price fluctuates. During normal times:

  • 40 GWEI will always get you in next block
  • 20 GWEI will get you in the next few blocks
  • 2 GWEI will get you in the next few minutes.

It depends on what miners are willing to take.

To see the current price for a unit of gas you can run (in web3.js 0.2x, where this is a synchronous property):
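
    > web3.eth.gasPrice
    BigNumber { s: 1, e: 10, c: [ 20000000000 ] }   // a BigNumber, in wei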

The new function in web3.js 1.0.0 (still beta as of this writing) requires a callback:
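
    // web3.js 1.0.0-beta: getGasPrice is async and takes an (error, result) callback
    web3.eth.getGasPrice(function (error, result) {
        console.log(error, result);   // result is the price in wei, e.g. '20000000000'
    });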

(The callbacks are always (error, response) in 1.0.0, so with the new API you’ll get two values printed in the console log.)

This will be about 20 gwei. Or you can look at the current gas price at the ETH Gas Station. The current price shown at the ETH Gas Station is 11 gwei (standard) and 9 gwei (safe low).  Creating contracts seems to cost a little more.

In practice, when we’ve created contracts we’ve seen that we can use the estimateGas function to tell how much gas storing the bytes will take.  A sketch of what we ran (web3.js 1.0.0-beta; abi and bytecode come from your compiler output):
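
    var contract = new web3.eth.Contract(abi);
    contract.deploy({ data: bytecode }).estimateGas(function (error, gas) {
        console.log(error, gas);   // e.g. 645234 for one of our contracts
    });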

The callback for the contracts we’ve been working with shows the gas limit (or amount of gas) is about 645234.  But this is only the cost of storing the contract code on the blockchain.  You need to add more gas limit for creating the contract, for passing parameters, etc.  We’ve found that by adding 80000 more to our gas limit, the contract goes through.

Bluetooth Speaker and Microphone on Raspberry Pi

If you want your own Jarvis from Iron Man, the first step is to have a computer that can listen to what you say and then talk back to you.  The software for that is the hard part, but with some AI and RNNs we can get some pretty good functionality.  That comes later, so let’s first set up the microphone and audio.  I am using a Raspberry Pi 3 and will now set these peripherals up.

Microphone

I bought a basic USB microphone that cost me an outrageous $4.50.  After plugging it into the Raspberry Pi, we can test and use it.  First we see that it is there with:
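
    # List the capture devices; the USB mic shows up as card 1
    arecord -l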

To record something we will use this command to record 3 seconds of audio and take the input from the -D device plughw:1,0
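
    # Record 3 seconds of CD-quality audio from the USB mic
    arecord -D plughw:1,0 -f cd -d 3 test.wav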

This will output the test.wav file.  This file can now be played back once we set up our bluetooth speaker.

Bluetooth Speaker

My bluetooth speaker is a pretty cool model I bought for tunes while working out.  Turning it on and making sure it’s not connected to anything else, I logged into the Raspberry Pi and entered the command:
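
    bluetoothctl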

I then entered the ‘scan on’ command (but didn’t have to) and a bunch of devices showed up.  I picked the one I wanted and got its MAC address (the address shown here is a placeholder; yours will differ):
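
    [bluetooth]# scan on
    Discovery started
    [NEW] Device A0:E9:DB:05:12:34 OontZ Angle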

I then ran the commands:
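
    [bluetooth]# pair A0:E9:DB:05:12:34
    [bluetooth]# trust A0:E9:DB:05:12:34
    [bluetooth]# connect A0:E9:DB:05:12:34
    [bluetooth]# exit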

The speaker then showed connected.  Now that we know it’s there, let’s try to set the system to use it.  Running the info command back in bluetoothctl:
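
    [bluetooth]# info A0:E9:DB:05:12:34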

We get:
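
    Device A0:E9:DB:05:12:34
            Name: OontZ Angle
            Paired: yes
            Trusted: yes
            Connected: yes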

This is great because it shows the speaker is connected and ready to work.  Now we can play the wav file through this bluetooth speaker.  The default audio of the Raspberry Pi is set to play out of the analog jack.  So if we run:
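
    aplay test.wav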

It doesn’t play anything.  Changing the command, we can specify the device to play through:
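
    aplay -D bluealsa:DEV=A0:E9:DB:05:12:34,PROFILE=a2dp test.wav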

Here the MAC address in the command is the MAC address of the bluetooth device.  It plays, but it is really staticky, so the recording isn’t my favorite.  To determine if it’s the microphone or the audio, we simply play another file:
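
    # A known-good sample that ships with ALSA, to rule out the microphone
    aplay -D bluealsa:DEV=A0:E9:DB:05:12:34,PROFILE=a2dp /usr/share/sounds/alsa/Front_Center.wav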

We can simplify this device configuration by adding it to /usr/share/alsa/alsa.conf.d/20-bluealsa.conf.  We add something like the following at the end:
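
    pcm.oontz {
        type plug
        slave.pcm {
            type bluealsa
            device "A0:E9:DB:05:12:34"    # your speaker's MAC address
            profile "a2dp"
        }
        hint {
            show on
            description "OontZ bluetooth speaker"
        }
    }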

In the above I’ve just named my device “oontz” and put in my own MAC address of the device as well as a description.  Everything else should be the same for your setup.  Then we can run:
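
    aplay -D oontz test.wav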

Now, to make this the default we can add a similar entry to the ~/.asoundrc file.  Mine looks like this:
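
    pcm.!default {
        type plug
        slave.pcm {
            type bluealsa
            device "A0:E9:DB:05:12:34"    # your speaker's MAC address
            profile "a2dp"
        }
    }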

It is similar to the previous entry, but we have named it pcm.!default.  Now when we play the wav file it will use this as the default:
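
    aplay test.wav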

Cool.  Default sound setup for bluetooth.  Persistent reconnect will be the next topic!


Kubernetes OnPrem Storage

Lately I came across a few bare metal servers in our lab and decided I would test some of my ideas about setting up a bare metal Kubernetes cluster on these nodes.  I started thinking that if I were to do so, I would need some persistent storage.  There were a few options I could choose from:

  • I could NFS-mount a server that has a ton of disks in it to all the nodes.  I could also find an old NetApp array or something else in here that might do the trick.
  • I could use Cisco HyperFlex.  The only problem is that HX requires VMware or Windows, neither of which is an option for me, as I refuse to pay their licensing costs.
  • One project, Minio, caught my eye as a great solution, as I could get a great object store on-prem.  It is already managed by Kubernetes, so it becomes native storage.  I loved this idea.

As I started thinking more about this, I realized how great the Kubernetes storage interface idea is.  Looking at HyperFlex, a lot of the heavy lifting done in the code is managing the nodes and installing the controllers.  Much of this Kubernetes gives you for free.  The future of the distributed storage we’ve been getting from VMware with vSAN, or HX, or Nutanix looks more and more to me like a place that Kubernetes can dominate.

Another project that holds great promise for me is Rook.  Rook uses the idea of Kubernetes Operators to set up distributed storage, including Minio and Ceph.  The Operator becomes the expert in deploying and running these systems.  Getting started looks something like this (paths are from a Rook checkout and vary by release):
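
    # Stand up the Rook operator, then declare a Ceph cluster for it to manage
    kubectl create -f cluster/examples/kubernetes/ceph/operator.yaml
    kubectl create -f cluster/examples/kubernetes/ceph/cluster.yaml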

Examine what containers have done for software development: a new way to easily distribute and run applications in consistent environments.  Now consider that, by extension, Kubernetes is using some of that goodness to deploy infrastructure.  It used to be pretty complicated to set up and manage the lifecycle of databases, Kafka streams, or other services required by applications.  Now, though, Kubernetes is making this easier for us.

I’ll be examining some of these ideas in upcoming posts, but I’m pretty excited about what these new features enable us to do.

Finally, I’d say: if you are a storage vendor or selling storage software and you don’t have a Kubernetes strategy, it’s time to start looking at one.  Not only will it speed up your time to market, it will give you the upgrading, lifecycle, and maintenance of your storage system for free.  And I’m not talking about how Kubernetes can run on top of your storage; I’m talking about how your storage should be running inside Kubernetes.

Update:  Check out Rook / Ceph in action:

Docker based Jupyter Notebook on B200 M5 with Tesla P6

We have a fresh system, let’s install Docker.  Here we are using CentOS 7.5 and logged in as root.

First make sure your date/clock is correct; if not, fix it:
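
    date                        # check the clock
    timedatectl set-ntp true    # one way to fix drift: enable NTP syncing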

Install Docker
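
These steps follow Docker’s CE install docs for CentOS (versions may differ):

    yum install -y yum-utils
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    yum install -y docker-ce
    systemctl start docker
    systemctl enable docker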

Install NVIDIA Docker

Following the README we do:
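
    # Add the nvidia-docker repo and install the runtime (per the README)
    distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
    curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | \
      tee /etc/yum.repos.d/nvidia-docker.repo
    yum install -y nvidia-docker2
    pkill -SIGHUP dockerd

    # Verify the GPU is visible from a container
    docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi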

The output of the last command should show your GPU in the familiar nvidia-smi table.

We can now run the Python notebook to get started with TensorFlow.  A sketch of the command (the image tag may differ):
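
    docker run --runtime=nvidia --rm -it -p 8888:8888 tensorflow/tensorflow:latest-gpu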

This will give us a session.  The session will look something like this:
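
    # (abbreviated) Jupyter prints a login URL containing a token:
    http://localhost:8888/?token=<long-hex-token>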

Take note of the token printed out above.  Yours will look different.  Since this is running on a server, we will open an SSH tunnel to be able to talk to the Jupyter notebook
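
    # Forward local port 9999 to the server's port 8888 (hostname is a placeholder)
    ssh -L 9999:localhost:8888 root@my-gpu-server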

We can now open our web browser to http://localhost:9999.  The tunnel forwards this to port 8888 on the server.  This way we can access our machine.  It will ask us for a token.  Now we can put in the token that was output when we started the docker container.

After entering the token we can go through the Jupyter Notebook Tensorflow tutorials.

Using Docker is a ton easier than installing everything on this server by hand.  We did, however, omit some of the NVIDIA driver setup requirements found in my previous post.

References

https://www.garron.me/en/linux/set-time-date-timezone-ntp-linux-shell-gnome-command-line.html

https://medium.com/@lucasrg/using-jupyter-notebook-running-on-a-remote-docker-container-via-ssh-ea2c3ebb9055

https://blog.sicara.com/tensorflow-gpu-opencv-jupyter-docker-10705b6cd1d

Hands on with NVIDIA P6 and UCS B200 M5

Just got access to a new UCS B200 M5 blade!  My goal is to create a tensorflow lab on it.  Let’s get cracking!

It was installed with CentOS 7.5:

    cat /etc/redhat-release

Let’s make sure there is indeed a GPU:
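
    # Look for the NVIDIA device on the PCI bus
    lspci | grep -i nvidia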

Ok, now we need to get some drivers. We go to NVIDIA’s page, fill out the form and get some drivers for RHEL7.

While waiting for downloads, we make it so we can sudo without a password. Run sudo visudo and edit these lines:
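
    ## Allows people in group wheel to run all commands
    # %wheel        ALL=(ALL)       ALL

    ## Same thing without a password
    %wheel  ALL=(ALL)       NOPASSWD: ALL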

Now let’s install the VNC server and some other packages we’ll need.  This will give us development tools and a remote desktop.  One set of packages that does the trick:
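
    sudo yum groupinstall -y "Development Tools" "GNOME Desktop"
    sudo yum install -y tigervnc-server kernel-devel kernel-headers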

Now let’s attach to it… hmm, we can’t.  Is SELinux running?
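
    getenforce
    Enforcing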

Yep.  Let’s turn that off for now.  We don’t need it.  Use sudo to modify /etc/sysconfig/selinux:
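
    SELINUX=disabled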

Have to reboot.  But first let’s install the driver:
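
    # The .run file name matches whatever driver version you downloaded
    sudo sh NVIDIA-Linux-x86_64-*.run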

Edit /etc/default/grub to disable the nouveau driver:
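
    # Append these to the GRUB_CMDLINE_LINUX line (keep the existing arguments)
    GRUB_CMDLINE_LINUX="... rd.driver.blacklist=nouveau nouveau.modeset=0"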

Load the new grub file:
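
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg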

Add the nouveau driver to the blacklist by appending to (or creating, in my case) /etc/modprobe.d/blacklist.conf:
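
    blacklist nouveau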

Back up the old stuff and make the new initrd
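
    sudo mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r)-nouveau.img
    sudo dracut /boot/initramfs-$(uname -r).img $(uname -r)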

Now we reboot.

I installed the 32-bit compatible libraries because disk space is cheap and time is short.

CUDA Libraries

We want TensorFlow with the CUDA libraries.  It makes TensorFlow fast!  We get it by navigating to their page.  I downloaded the runfile one:
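
    # The file name matches the CUDA 9.2 runfile you downloaded
    sudo sh cuda_9.2*_linux.run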

I answer the questions as follows:
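
    Do you accept the previously read EULA? accept
    Install NVIDIA Accelerated Graphics Driver for Linux-x86_64? n    # already installed above
    Install the CUDA 9.2 Toolkit? y
    Enter Toolkit Location [ default is /usr/local/cuda-9.2 ]: <enter>
    Do you want to install a symbolic link at /usr/local/cuda? y
    Install the CUDA 9.2 Samples? y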

Since all went well you will see the following output
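
    ===========
    = Summary =
    ===========
    Driver:   Not Selected
    Toolkit:  Installed in /usr/local/cuda-9.2
    Samples:  Installed in /root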

(or at least something similar)

Now let’s get the environment set up.  Append to ~/.bash_profile:
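
    export PATH=/usr/local/cuda-9.2/bin:$PATH
    export LD_LIBRARY_PATH=/usr/local/cuda-9.2/lib64:$LD_LIBRARY_PATH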

(I’m using 9.2 as this is the version of cuda I’ve installed, it may be different when you install so change as updates come available.)

cuDNN Library

To download the cuDNN libraries you need to have an NVIDIA developer account.  You’ll have to log in to the download site and download the Linux version.

Here we use cuDNN v7.2.1 with CUDA 9.2, as that matches the CUDA we installed.  The install is just unpacking the tarball and copying the files into place:
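
    # Your exact file name may differ
    tar -xzvf cudnn-9.2-linux-x64-v7.2.1.38.tgz
    sudo cp cuda/include/cudnn.h /usr/local/cuda/include
    sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
    sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*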

 

Installing Tensorflow

Install pip
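
    # pip comes from EPEL on CentOS
    sudo yum install -y epel-release
    sudo yum install -y python-pip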

Now we can install tensorflow:
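
    sudo pip install tensorflow-gpu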

 

References

https://www.nvidia.com/en-us/data-center/gpu-accelerated-applications/tensorflow/

http://developer.download.nvidia.com/compute/cuda/6_5/rel/docs/CUDA_Getting_Started_Linux.pdf

https://blog.sicara.com/tensorflow-gpu-opencv-jupyter-docker-10705b6cd1d

https://gist.github.com/lyastro/26e0cd8245bcf64914857dd5e8445724

 


Ubuntu 18.04 Jump server setup

In my environment I have limited IP addresses, so we’re creating a new network and then allowing one server, the jump server, to sit between these networks.  To do this, my jump server, aka jump104, is “dual-homed”.  This means it has two network adapters: one on the public network, and one on the internal private network.  We are going to make this server a:

  • DHCP server to the 104 net
  • Router to the 104 net
  • DNS server to the 104 net
  • VNC Server to view things on the private network

Network Adapters

First, we install a brand new Ubuntu 18.04 operating system.  When I first set it up, I only had one interface which was configured correctly for the public network.  Now I need to modify the network to add the second network configuration.  This is done by editing /etc/netplan/01-netcfg.yaml

We add another stanza below what is already there:
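
Something like this (my interface names and addresses; yours will differ):

    network:
      version: 2
      ethernets:
        ens160:                        # public-facing interface (already there)
          dhcp4: yes
        ens192:                        # new internal interface on the 104 net
          addresses: [10.99.104.1/24]

Then apply it with:

    sudo netplan apply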

To make sure that we can route traffic from the 104 net, we have to add some rules.  This is called IP masquerading or setting up a NAT service.

First, edit /etc/sysctl.d/99-sysctl.conf and uncomment
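
    net.ipv4.ip_forward=1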

Then run
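
    sudo sysctl -p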

Next add the masquerading rules
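
    sudo iptables -t nat -A POSTROUTING -s 10.99.104.0/24 -o ens160 -j MASQUERADE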

Here, the -s is the source, our internal network: 10.99.104.0/24.  The -o is the Internet-facing interface, which for my setup is ens160.

DNS

Next up, we need to make it a DNS server for the private network.

In my setup we are creating a zone called “ccp.cisco.com”.  We have to modify and add a few files.

/etc/bind/named.conf.local
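
A sketch of the two stanzas we add (the file names match the zone files created below):

    zone "ccp.cisco.com" {
        type master;
        file "/etc/bind/zones/db.sjc.kubam.cisco.com";
    };

    zone "104.99.10.in-addr.arpa" {
        type master;
        file "/etc/bind/zones/db.10.99.104";
    };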

Above we added two stanzas.  First our new domain and where to do lookups and changes and second the reverse zone:  Where to get names from IP addresses.

/etc/bind/named.conf.options
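
Roughly as follows (the forwarder address is a placeholder for your upstream DNS):

    options {
        directory "/var/cache/bind";
        recursion yes;
        listen-on { 10.99.104.1; };          # only the private interface
        allow-query { 10.99.104.0/24; };     # only the private network
        forwarders { 171.70.168.183; };      # upstream/master DNS
    };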

Here we make sure to only listen on our private interface for queries and only allow queries from addresses in our private network.  We also specify that any DNS query that we don’t know about (most of them) be forwarded to the master DNS service which can be directed through this server as well.

/etc/bind/zones

Now we add the addresses to the zone files we listed above.

db.10.99.104
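
    ; reverse zone for the 104 net (sketch; host names are from my setup)
    $TTL    604800
    @       IN      SOA     jump104.ccp.cisco.com. admin.ccp.cisco.com. (
                            3 604800 86400 2419200 604800 )
    @       IN      NS      jump104.ccp.cisco.com.
    1       IN      PTR     jump104.ccp.cisco.com.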

db.sjc.kubam.cisco.com
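
    ; forward zone (sketch)
    $TTL    604800
    @       IN      SOA     jump104.ccp.cisco.com. admin.ccp.cisco.com. (
                            3 604800 86400 2419200 604800 )
    @       IN      NS      jump104.ccp.cisco.com.
    jump104 IN      A       10.99.104.1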

Let’s check that we did it right with:
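
    sudo named-checkconf
    sudo named-checkzone ccp.cisco.com /etc/bind/zones/db.sjc.kubam.cisco.com
    sudo named-checkzone 104.99.10.in-addr.arpa /etc/bind/zones/db.10.99.104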

/etc/default/bind9

We are only serving on IPv4, so add the -4 flag to the options
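
    OPTIONS="-u bind -4"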

Once done we can now restart the DNS server and apply changes:
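
    sudo systemctl restart bind9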

 

DHCP

DHCP is used for doling out IP addresses to unsuspecting servers that come on the network.  This makes setting up IP addressing easy for VMs that pop up in our data center.

Now we add the DHCP range.  Here we want to create a dynamic range from 10.99.104.100-10.99.104.254.  By editing the /etc/dhcp/dhcpd.conf file we can make this happen:
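
    subnet 10.99.104.0 netmask 255.255.255.0 {
        range 10.99.104.100 10.99.104.254;
        option routers 10.99.104.1;              # the jump server routes for the 104 net
        option domain-name-servers 10.99.104.1;  # and serves DNS as configured above
        option domain-name "ccp.cisco.com";
    }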

But we want to be sure we only listen and respond to DHCP requests on the internal facing network interface.  This is done by editing the /etc/default/isc-dhcp-server

Since after running ifconfig I see that my internal interface is ens192, I update this file to look as follows:
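
    INTERFACESv4="ens192"
    INTERFACESv6=""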

Since I’m not serving up DHCP for IPv6, I just leave that blank.  To make all these changes take effect I now run:
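
    sudo systemctl restart isc-dhcp-server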

It’s funny that I haven’t done this type of configuration since 2005 but some things haven’t changed all that much.

 

VNC Server

Being that all the stuff we want is behind a network we can’t reach, we need GUI tools to access the services.  In my case I’m installing Cisco Container Platform, which requires that I can open a browser to the IP address of the virtual machine behind this network.  I can accomplish this by installing VNC and Firefox.  I remember doing this once while installing vSphere many years ago, getting to the very end, only to discover that I needed Flash, and that Flash was not supported on Linux at the time.  Those days are gone and you can do everything now without Windows.  This makes me very happy.

From here I can start it up by simply running vncserver.  This opens up port 5901 and writes some configuration information to ~/.vnc/xstartup.  We can customize it to look as follows (this sketch assumes the xfce4 desktop is installed):
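
    #!/bin/bash
    xrdb $HOME/.Xresources
    startxfce4 &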

Start it with:
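
    vncserver :1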

We are now ready to roll and get this private network all the goodness that it needs.

Sources:

  • https://www.digitalocean.com/community/tutorials/how-to-configure-bind-as-a-private-network-dns-server-on-ubuntu-18-04
  • https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-vnc-on-ubuntu-18-04

CI/CD continued: Drone

In my previous post I showed how we set up a CI/CD server.  In this post we’ll go into more detail explaining how drone works.  I’ve previously written about how drone does secrets.  Well, that has changed a bit, so in this update with drone 0.8.5 we’ll show how it works.  This time, we’re working on a Python application.

Mac Command Line

Before we begin we need to get the drone command line tool. The easiest way to do this is to get it with Homebrew:
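
    brew tap drone/drone
    brew install drone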

From here we need to add some environment variables to our ~/.bash_profile so that we can communicate with our drone server.  Essentially, we add:
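
    # The server address is our ngrok hostname from the previous post; the token is yours
    export DRONE_SERVER=https://ldb27123.ngrok.io
    export DRONE_TOKEN=<your-account-token>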

The drone token you can get by logging into drone (see previous post) and grabbing your account token in the upper left of the web interface.

Sourcing this file, we can now run something like
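
    drone info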

If this command shows your GitHub credentials, you are good to go!

Drone vs. Jenkins Level set

Drone differs from Jenkins in a few ways.  First off, the configuration file for the job is not stored on the “Jenkins master”.  Instead, the job’s configuration file is stored in the repository with the actual code.  I love this because it makes sense that the developer owns the workflow.  It’s a paradigm shift, and in my opinion a much better one.  It also means I could take this workflow to any drone CI/CD process and it should work fine.

The second difference is how the workflows are created.  With Jenkins you download “plugins” and different tools and store them on the Jenkins server.  With Drone, every module is just a container.  That way they’re all the same.  I’ve written a couple as well and I really like the way it’s laid out.

That said, Jenkins is crusty, tried, and true and Drone is still evolving and changing.

Drone Workflow Config File

Inside the repo of the code to be tested and released we create a .drone.yml file.  Our final file (which will undergo several changes) is located here.

Let’s go over how this works.

First we specify the pipeline.  Each entry in this pipeline is an arbitrary name that helps me remember what is going on.

Notify

The first one notifies that a build is happening.  This first one looks something like the following (the image shown is my own Spark plugin; any notification plugin is configured the same way):
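
    pipeline:
      notify:
        image: vallard/drone-spark
        secrets: [ spark_token ]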

Since we are good Cisco employees we use Spark for notifications.  You might use Slack or something you find cooler, but we like Spark.  Notice we have a SPARK_TOKEN secret.  This secret needs to be added via the command line, as we don’t want to check secrets into repositories.  That is a no-no.

Test

Next up, we want to test the python flask image using our test cases.  To do so, we created a test docker image that has all the necessary dependencies in it, called kubam/python-test.  This way the container loads quickly and doesn’t have to install dependencies.  The Dockerfile is in the same repo.  This step looks something like the following (the proxy host and test command here are illustrative):
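
      # This step sits under the same pipeline: block as the notify step
      test:
        image: kubam/python-test
        environment:
          - http_proxy=http://proxy.example.com:80     # placeholder for our corporate proxy
          - https_proxy=http://proxy.example.com:80
        commands:
          - python -m pytest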

You’ll notice that in each step we add the proxy environment variables.  The reason for this is that you have to remember we are behind a firewall!  So for the container to get out, it has to use the proxy configuration.

Docker

Next we want to build and publish the ‘artifact’.  In this case the artifact is a docker image.  This is done with the default drone docker plugin.

Since we are working on a branch that we want to create new images for, we use the v2.0 tag to make sure we get the one we want.  A sketch of the step (the repo name is illustrative):
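
      docker:
        image: plugins/docker
        repo: kubam/kubam          # illustrative repo name
        tags: [ "v2.0" ]
        secrets: [ docker_username, docker_password ]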

Notice there are secrets here so we have to add these.  So we do:
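
    drone secret add --repository kubam/kubam --name docker_username --value <user>
    drone secret add --repository kubam/kubam --name docker_password --value <pass>
    drone secret add --repository kubam/kubam --name spark_token --value <token>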

Let it run

Now we have a great system!  Let’s let it go.  Launching it, we have it doing these three steps and publishing the docker container.  Now anytime someone pushes, we’ll test the changes, and if the tests all pass, put the image into production.  This is safe as long as we have sufficient test cases.  This is how we can go fast, make less work for everyone, and be more productive.

 

CI/CD server behind a firewall

The goal of this post is to show how to create a CI/CD server that sits behind a firewall.  When we do a git push, we would like a webhook to be fired off to our CI/CD server.  The CI/CD server would then run our jobs and create our artifacts.  Normally this is pretty simple when you have a server that sits at a public IP address on the internet.  But then I’d have to pay for that.  Why pay for it when I have lots of computing resources behind my corporate firewall?

DRONE

First we set up our CI/CD server.  I’ll use drone just because I think it’s cool the way it works, using containers to do each step of the CI/CD pipeline.

NGROK

ngrok is recommended by GitHub as a way to give our private server an internet address.  Create an account by going to their homepage.

We download the app and put it on our server
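
    wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
    unzip ngrok-stable-linux-amd64.zip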

After signing up we get our own authtoken with ngrok.  Run the command to get it set up
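
    ./ngrok authtoken <your-token-from-the-dashboard>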

Now let’s start this server.  On the free plan you get a new domain name every time you start it, so ideally we’d like to keep this session up for good.  One way we can do that is to create a service that does this.  The other way is to create a screen session.  To be quick about it, we will use a screen session to make this work:
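
    screen -S ngrok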

Now in this screen session let’s start our service
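
    ./ngrok http 9001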

This now shows that our public address is ldb27123.ngrok.io.  Cool. We’ll use this now to set up our CI/CD server as well as GitHub.  You can detach from the screen session using Ctrl-a d, and the connection will stay up as long as the server is up. Notice that we told ngrok to point to port 9001 on our server.  This is the port where we will have Drone run.

Github

We now go to our project and register a new application (drone in this case).  To do this on GitHub, go to Settings, Developer settings, and register a new OAuth application.  The fields use our ngrok hostname:
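
    Homepage URL:                 https://ldb27123.ngrok.io
    Authorization callback URL:   https://ldb27123.ngrok.io/authorize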

On the next screen you’ll be shown the variables for client ID and client Secret.  Make note of those!

Environment Variables for CI/CD server

We’re almost ready to bring up our CI/CD server.  First we will put environment variables into the ~/.bash_profile file of the server. We define them as follows:
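
    # Values come from the ngrok session and the GitHub OAuth app above
    export DRONE_HOST=https://ldb27123.ngrok.io
    export DRONE_GITHUB_CLIENT=<your-client-id>
    export DRONE_GITHUB_SECRET=<your-client-secret>
    export DRONE_SECRET=<any-random-string-shared-with-the-agent>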


From there you need to ‘source’ the bash_profile by running
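
    source ~/.bash_profile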

Test that it’s set by running:
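
    env | grep DRONE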

You should see the environment variables print out.

Install Drone

To start up our Drone service on this server you’ll need to have docker-compose installed.   We follow the instructions and create a docker-compose file that looks as follows (adapted from the drone 0.8 docs, with the server mapped to port 9001):
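
    version: '2'

    services:
      drone-server:
        image: drone/drone:0.8
        ports:
          - 9001:8000          # ngrok points at 9001 on this host
          - 9000:9000
        volumes:
          - /var/lib/drone:/var/lib/drone/
        restart: always
        environment:
          - DRONE_OPEN=true
          - DRONE_HOST=${DRONE_HOST}
          - DRONE_GITHUB=true
          - DRONE_GITHUB_CLIENT=${DRONE_GITHUB_CLIENT}
          - DRONE_GITHUB_SECRET=${DRONE_GITHUB_SECRET}
          - DRONE_SECRET=${DRONE_SECRET}

      drone-agent:
        image: drone/agent:0.8
        restart: always
        depends_on:
          - drone-server
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        environment:
          - DRONE_SERVER=drone-server:9000
          - DRONE_SECRET=${DRONE_SECRET}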

Now we can start with
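
    docker-compose up -d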

If you have problems, check the logs of the docker containers.  It may be that the environment variables are not set correctly.

We made it so anyone can register (DRONE_OPEN=true); we will need to change this so only authorized users of our org can use our CI/CD server.  You can see how to do this with more settings.  Check out the documentation on Drone’s web page.

Configure Drone

Now that it’s installed, you should be able to navigate to the drone web interface by going to the ngrok address.  You’ll be redirected to your CI/CD server and be ready to accept push events.  Next up, you’ll need to configure drone for your application to do all the wonders you’d like it to do!  Drone, unlike Jenkins, has a paradigm where you put the configuration of the CI/CD job into the GitHub repo.  This works great as it allows jobs to be configured individually by the different applications. I’ll probably write about that in a different post as we explore it with our team!

Notes

  • I did find I had a few errors when putting it in production.  To start out with, I was using an older docker version, so I got an error that was resolved with this post.
  • Because my docker images were behind a proxy, I had to configure proxy settings.  This was done in the environment variables inside the .drone.yml file. See the example here.