CoreOS, Ansible, OpenStack, and a Private Registry

This took me longer than I want to admit to figure out, so I thought I’d post the solution here.  I’m doing this on Cisco’s OpenStack Private Cloud (COPC), formerly known as Metacloud.

Problem:  I want to deploy a CoreOS instance that can pull Docker images from a private registry, and I want to do it with Ansible.

Why it’s hard:  There’s not a lot of good documentation on this in one place.  I kept getting the same error over and over.

It really started to aggravate me.

Ansible Playbook

Here’s the playbook in its final glory:
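(What follows is a minimal sketch of it using the old nova_compute module; the instance name, flavor, and key name are placeholders for your own values, and the credentials come from the sourced OpenStack RC file rather than the playbook.)

```yaml
- name: launch a CoreOS instance that can reach our private registry
  hosts: localhost
  gather_facts: false
  tasks:
    - name: boot the instance with our cloud-config as user data
      nova_compute:
        state: present
        # credentials come from the environment, not the playbook
        auth_url: "{{ lookup('env', 'OS_AUTH_URL') }}"
        login_username: "{{ lookup('env', 'OS_USERNAME') }}"
        login_password: "{{ lookup('env', 'OS_PASSWORD') }}"
        login_tenant_name: "{{ lookup('env', 'OS_TENANT_NAME') }}"
        name: coreos-demo                   # placeholder name
        image_id: "{{ coreos_image_id }}"   # the CoreOS image ID from the dashboard
        flavor_id: 2
        key_name: mykey
        security_groups: default
        floating_ip_pools:
          - nova
        user_data: "{{ lookup('file', 'coreos-cloud-config.yaml') }}"
        wait: "yes"
```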

The coreos-cloud-config.yaml file looks like this:
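It’s essentially the insecure-registry drop-in from the CoreOS docs, with the real registry address in place of their IP range (registry.example.com:5000 below is a placeholder for yours):

```yaml
#cloud-config

coreos:
  units:
    - name: docker.service
      drop-ins:
        - name: 50-insecure-registry.conf
          content: |
            [Service]
            Environment='DOCKER_OPTS=--insecure-registry="registry.example.com:5000"'
      command: restart
```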

There were a few things to note:

  1. If I used config_drive: yes, like some documentation somewhere said to, I ran into problems.
  2. I was originally using a different cloud-config format that configured things through files instead of a systemd drop-in.  I’m not sure why I did that, but I figured it out by switching to the other approach.  As you can see, I even opened an issue on the CoreOS GitHub repo.  I think writing the problem down is often what it takes to solve it, and it’s the reason we all need a rubber duck.
  3. The CoreOS documentation shows an IP address range, but I just put in the actual registry address and it works great.

Hoping this helps someone else not struggle like I did for hours…

Can AWS put out the private cloud?

TL;DR: No.

Today I watched a recording of Andy Jassy at the AWS Summit in San Francisco from last month.  (Yes, I have a backlog of YouTube videos I’m trying to get through, and John Oliver takes precedence over AWS.)

The statistics he laid out were incredible:

  • 102% YoY revenue growth for the S3 service comparing Q4 2014 to Q4 2013
  • 93% YoY revenue growth for the EC2 service comparing Q4 2014 to Q4 2013
  • 1,000,000 active customers.  (Active customers do not include internal Amazon customers)
  • 40% YoY Revenue growth in total

Andy went on to say that AWS is the fastest-growing multi-billion-dollar enterprise IT company.  He backed this up with what I’d guess they internally call the “dinosaur slide”: a slide showing the lagging growth of other companies like Cisco (7%), IBM, HP, etc.  (Never mind that he was comparing against Cisco’s total business, not just the data center business that AWS competes with.)

This presentation with its great guest speakers and the announcement of the new EFS service really set the Internet on fire.  There were posts such as this one: “How in the hell will any cloud ever catch AWS” and many more.

I love AWS.  After being stuck using VMware for a few years, AWS just feels right.  I love how easy it is to use and to develop applications on, and I especially love the APIs.  But I have to take exception to the simple Sith Lord logic that there is “the right way = AWS” and “the wrong way = whatever else you’re doing”.  This is what I call the Sith Lord slide that he showed:

The Sith Lord slide from AWS Summit San Francisco, April 2015

The slide and the pitch suggest, rather explicitly, that if you build your own frozen data center and don’t use AWS, you will get left behind.  Really: AWS says there is no need for your own data centers.  AWS releases new services nearly every day, 512 in 2014 (I’m not sure what all 512 are, nor what counts as a service; this smells like marketing hyperbole), and there is no way you or any other data center can catch up.

Also this last weekend there was an article whose title looked like it was written by Bilbo Baggins: “There and Back Again: Zynga’s Tale with Amazon’s cloud“.  The article talks about how Zynga, once so hot, tried to wean itself from AWS and then, after several years, decided to ditch its own data centers and go back to AWS.  All in, according to their CEO on last week’s earnings call.

But that’s just like your opinion, man

Based on all that, it seems that building your own data center is a fool’s errand.  In light of all this, I have to quote The Big Lebowski and say: Yeah? Well, you know, that’s just, like, your opinion, man.

A differing opinion, man

Let’s just talk in general now and then talk specifically about what we know from the Zynga article.

  1. The Enterprise: The final frontier
  2. Taxi cabs or rent a car?
  3. Feature complete or good enough?
  4. What can we build together on open source?

The Enterprise: The final frontier

AWS’s reach has largely been into startups.  My two instances in AWS count toward that 1,000,000-customer figure.  Now the focus is on the enterprise, where they are starting to see great success.  But the enterprise may not be as good a fit for going ‘all in’, and there’s one good reason:

Large static workloads.  Startups have workload that comes and goes with customers and seasonality.  So does the enterprise, but it also has a large amount of static workload that doesn’t change size.  Take the off-the-shelf HR programs these companies run that just live in their own data centers.  It makes good sense for a line-of-business organization to move a lot of its customer-facing, variable-demand applications to the cloud.  But for that static workload?

Look, static workloads are as unsexy as a corner drugstore.  They are the old applications, about as sexy as the ones that run on a mainframe (a business, by the way, that continues to grow for IBM).  But the enterprise still needs them.  Perhaps these old workloads are the new mainframe workloads.

In the future these workloads will probably be offered as SaaS-based applications, and then the enterprise can abandon them, but for now most of them aren’t.  In addition, there are home-grown applications, like the facilities application at a certain university that hasn’t been rewritten to be cloud-aware (there’s probably a lot of money in this, by the way).

But it’s not just off-the-shelf software packages from Oracle.  It could be the company’s own product: a SaaS offering delivered to its customers.  If that workload is largely static, and we already have the data centers, why not just use them?  It requires no extra capital outlay, since we’ve already got them.  The only thing enterprises may be lacking is cloud operations experience, but that is something they can buy from an offering like Metacloud, now Cisco OpenStack Private Cloud.

Taxi cabs or rent a car?

Scott Sanchez talks about how workload infrastructure is similar to transportation options.  The cloud is like hiring a taxi cab: when you only need it now and then, it makes a lot of sense.  When I fly into Seattle, I grab a cab because it’s so much easier.  I don’t have to take the bus all the way to the rental car facility; I hop into the cab and the driver takes the carpool lanes all the way to the city.  I pay and I’m done.  This is like the cloud: effective and fast.

But does hiring a cab make sense if I need a car all week, with multiple meetings in different cities?  A cab gets really expensive and may not be the best fit; it costs a lot to keep the driver sitting there with the meter running while I’m in my meetings.  In that case, I may be better off just renting a car, especially over multiple days in different cities, and in places where parking is free.

The best option for large enterprises would be a place where they can run static workloads cost-effectively and, when things get dynamic, burst those workloads to the cloud.  Some apps go here, some go there.

Feature complete or good enough?

Can anybody catch AWS?  Oh yeah.  Let’s look at VMware as a case study.  Nothing came close to its complete, feature-rich, easy-to-use vSphere products.  But what happened?  A new technology like AWS started to erode that space, and then Microsoft got ‘good enough’.  This will keep happening, and even though VMware is currently meeting earnings expectations, I’m already predicting its demise.  Let’s check back in five years and see how the 2020 earnings look.  Maybe I’m wrong?

Microsoft continues to get good enough, and its new offerings are very compelling to the enterprise because they include a strategy for leveraging the existing data center.  Their Azure stack offers the hybrid-cloud link that AWS doesn’t have.  It’s not perfect, but it’s on track to do what enterprises need: connect the static with the variable.

What about feature complete?  No service is really feature complete, but for the average programmer or IT organization, how many of those AWS services are people really using?  For an organization just getting started, give me EC2, ELB, S3, CloudFormation, and CloudWatch for autoscaling and I’m pretty well set.

Guess what?  OpenStack already has second- and third-generation projects that do all of that (roughly: Nova for EC2, Swift for S3, Heat for CloudFormation, Ceilometer for CloudWatch, and load balancing in Neutron).  And consider the rise of DigitalOcean, with its cheaper services, limited features, and hypergrowth.

The other issue is that newer clouds don’t need the baggage of the old guard.  If we had a cloud service that was container-based (Docker, rkt), we could use that.  AWS has ECS, but the current version leaves a lot to be desired.  I’m convinced a container-based cloud is the future and the only real cloud we’ll need (I’ve been wrong before… a lot), but as a full-stack guy, I’m going all in on containers.  I don’t know if Docker will survive, but containers will.

What can we build together

This brings me to my last point, and it’s how I think everybody else can win: if we standardize our private clouds on an open architecture and then, at some point, connect them together, we can really do something incredible.  If we did that we could:

1.  Offer cloud services to our customers cheaper than even AWS can.  Private cloud wins every time on cost, but cost isn’t why people use AWS: it’s speed and features.  If we can’t get those from a private cloud, there’s no reason to have one.  But if we can deliver both, we win.

2.  Offer capacity up.  Think of it like a house with solar panels: you generate extra electricity, and the power company pays you for what you feed back into the grid.  If we can create secure connections and secure data at rest, then data centers can attach to this cloud-grid and consume and offer services all over the place.  You would have more data centers to choose from than even AWS has.

Positioning your private cloud around proprietary offerings keeps you from engaging with the larger community.  No doubt these proprietary offerings have their place and will have their day in the sun, but I fail to see how they build a community large enough to be sustainable; that is, unless they achieve critical mass, but then we’ve got the same problem of one player setting the rules.

An open platform offers a way for everybody to win.

Zynga

Let’s turn back to Zynga.  I used to tell everybody how Zynga got so big it had to build its own cloud because its AWS bill was too large.  Zynga gave great reasons, including over-provisioning, for building its own cloud.  If they’ve gone back to AWS ‘all in’, what does that say about them, and what can we infer?  Does it mean AWS killed the private cloud?  Here’s what I infer:

1.  Revenue isn’t coming in.  Trimming its workforce by 18% and jettisoning data centers are cost-cutting measures to free capital to invest in new games.  Gaming is a tough business, and it seems there is no fixed workload.

2.  Variability.  Zynga’s games may be more variable than previously thought, which probably relates to point 1.  Did the variability in Zynga’s games leave them with overbuilt capacity?

3.  Is private cloud best served as a product or a service?  We know a little about the Zcloud.  It was at one point based on CloudStack, with RightScale to provision workloads.  Citrix sells products; it’s not an operations company, and Zynga adopted CloudStack before it was open source.  While reports generally show CloudStack getting better, perhaps its features weren’t enough for Zynga, and maintenance and upgrades weren’t so easy.  CloudStack is still, at best, a distant second to OpenStack in market adoption.  But open source alone isn’t going to save people.  A service like Metacloud, now Cisco OpenStack Private Cloud, might have saved this.

4.  Were developers happy using the internal cloud?  If they weren’t, and couldn’t move as fast, then perhaps they didn’t want the Zcloud.  Perhaps the Zcloud was a source of contention the whole time it was around.

Lastly, this interesting tweet from Adrian Cockcroft:


I can’t argue with that, but I can question it: did they save $100M with their own data centers and just not redeploy the capital well enough?  Obviously a TCO analysis was done; perhaps it just didn’t work out because the bets didn’t pay off.  What if House of Cards had been a flop?  Netflix still pays a lot of money to AWS, and I would argue it has a more sustainable advantage in the marketplace than Zynga does.  Does the result of this one experiment apply to everybody?

Conclusion

  • I still believe in a future of loosely federated clouds that can offer capacity to each other.  I’m not ‘all in’ on the public cloud, just like I’m not ‘all in’ on getting rid of mainframes.
  • I believe a large enterprise would save money and benefit from a private-cloud-as-a-service offering like Metacloud rather than pure open source products alone.  Metacloud mitigates risk and delivers the core capability (IaaS) that AWS provides.
  • Organizations within enterprises should use public clouds like AWS.  It makes a lot of sense.  Even if they have private clouds, I still advocate using public clouds the way I use Uber or taxis… just not all the time.
  • Zynga’s outcome doesn’t apply to everyone.
  • We need better hybrid cloud solutions.  We need better ways to connect the clouds.

Deploying Instances on COPC (metacloud) with Ansible

I wanted to show a quick example of how to deploy an instance on Cisco OpenStack Private Cloud (COPC, or Cisco OPC, or Metacloud) with Ansible.  Since COPC is just a fully engineered and operated distribution of OpenStack from Cisco, this post also applies to vanilla OpenStack environments.

I’m a big fan of Ansible because everything is agentless.  I also think the team has done a phenomenal job on the docs; we’ll be using the nova compute docs here.  I don’t have to install anything on the instances, and I can run everything from my laptop with minimal dependencies.  Here’s how I do it with CoreOS.

1.  Get Credentials

On COPC, you can navigate to your project and download the OpenStack RC File.  This is done from the ACCESS & SECURITY tab and then clicking on the API Access tab on the right.

Once you download this file, put it in your ~/ directory.  I use a Mac, so I just added the contents to my ~/.bash_profile.  It looks like this:
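(A representative RC file; the URL and IDs below are placeholders, since yours comes straight from the dashboard download.)

```bash
export OS_AUTH_URL=https://api-your-site.client.metacloud.net:5000/v2.0
export OS_TENANT_ID=0123456789abcdef0123456789abcdef
export OS_TENANT_NAME="myproject"
export OS_USERNAME="myuser"
# prompt for the password so it never lands in a dotfile
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
```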

Now we’re ready to roll.

2. Ansible Setup

I covered Ansible in previous posts, so I’m going to assume you already have it.  Let’s create a few directories and files.  I put all my stuff in the ~/Code directory, under a projects directory, and I make sure everything in it belongs to some sort of git repo.  Some of those are on GitHub (like this one); others are in a private GitLab or a private GitHub repository.

./ansible.cfg

This file tells Ansible where our inventory is, among other global settings:
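(A sketch matching the settings described below; note that older Ansible spells the inventory setting "hostfile".)

```ini
[defaults]
nocows = 1
inventory = ./inventory
host_key_checking = False
remote_user = core
```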

These are global settings for our environment.  We tell Ansible not to use cowsay, though you can if you want; it’s kind of cute, and you may not have it installed anyway.  We also tell it to use the contents of the inventory directory (which we’re about to create) for our hosts.

host_key_checking tells Ansible not to worry when it accesses a server it has never seen before, and to attach to it anyway.  Finally, our remote user is core, the default user on the CoreOS instance I’m using.

./inventory/hosts

We create a directory called inventory and add the file hosts, then add our one machine (our localhost!).  The contents look like this:
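(A sketch; the interpreter path is whichever Python you want Ansible to use.)

```ini
[local]
localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python
```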

You’ll notice I also specified which Python to use, in case there were other versions on the system.  This would also be useful if you were using virtual environments.

./vars/copc_vars.yml

This is where we put the specifics of what we want deployed.  In our case we need to define the following:
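(A sketch; the image ID, flavor, and key name below are placeholders you’d replace with your own.)

```yaml
security_groups: default
image_id: 8a1dcfae-0000-0000-0000-000000000000   # the CoreOS image ID from the dashboard
flavor_id: 2
floating_ip_pool: nova
key_name: mykey
```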

The security group ‘default’ in my project, as seen from the dashboard, already includes port 22.  This is important so that I can ssh in after the instance is provisioned and do more things.

I imported my CoreOS image from the CoreOS OpenStack image website.  After importing it from the dashboard, I clicked on the image to see the image ID.

The floating IP pool is nova; I got that from looking at the dashboard as well.

Finally, the keypair is one I generated beforehand and downloaded to my machine so I can log into the instance afterwards.

copc-one.yml

This file is our playbook.  It will provision a server.  Let’s look at the contents:
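(A sketch of the playbook; it assumes the variables file above and an OpenStack RC file already sourced into the environment.)

```yaml
- name: provision a demo server on COPC
  hosts: localhost
  gather_facts: false
  vars_files:
    - vars/copc_vars.yml
  tasks:
    - name: launch the instance
      nova_compute:
        state: present
        # credentials come from the sourced RC file, not the playbook
        auth_url: "{{ lookup('env', 'OS_AUTH_URL') }}"
        login_username: "{{ lookup('env', 'OS_USERNAME') }}"
        login_password: "{{ lookup('env', 'OS_PASSWORD') }}"
        login_tenant_name: "{{ lookup('env', 'OS_TENANT_NAME') }}"
        name: demo-server
        image_id: "{{ image_id }}"
        flavor_id: "{{ flavor_id }}"
        key_name: "{{ key_name }}"
        security_groups: "{{ security_groups }}"
        floating_ip_pools:
          - "{{ floating_ip_pool }}"
        wait: "yes"
```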

The great thing about this playbook is that none of the secrets are embedded in it.  Because we sourced the ~/.project-openrc.sh file, the environment variables let us run the code perfectly.

Everything here is pretty self-explanatory: we are just passing variables to the nova_compute task to bring up a new instance.  The name will be demo-server, with everything else as we defined it.  If the instance is already up, Ansible won’t try to provision a new one; it looks for demo-server, and if it’s there, it leaves it alone.

3. Run the Playbook
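From the project directory (assuming the file names above):

```bash
source ~/.project-openrc.sh   # load the OpenStack credentials
ansible-playbook copc-one.yml
```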

Now watch the dashboard and you can see the instance spawn up.


The next step is to make it so we can run Ansible playbooks on this host.  The problem is that CoreOS is a stripped-down, barebones OS, so there is no Python!  We’ll have to add a cloud-init script or do something else to make this work; I’ll save that for another post.  But if you were using Ubuntu or Red Hat, you’d be good to go at this point.

Code

All the code from this post is available on GitHub here.

Cisco OpenStack Private Cloud Developer Demo

In my new role here at Cisco I show people how powerful the Cisco OpenStack Private Cloud solution can be for developers.  I made the video below last night to demonstrate continuous integration.

The video is a scenario for a new company called lawngnomed.com, which provides lawn gnomes as a service (LGaaS).  Under cover of darkness, LG technicians place 100 gnomes on the front yard of an address you specify, so the person living there wakes up to find 100 gnomes on their front yard with a special greeting.  LG wants to add a new feature to its ordering site to allow a personalized greeting.

The flow goes as follows:

  • LG decides they want to try out a new feature.
  • Developers implement the feature and run unit tests on it.
  • Developers check the feature in to a private repository.  In this case they are using GitLab, an open-source alternative to the service GitHub offers.
  • GitLab has a trigger connected to Jenkins, the continuous integration server.  When Jenkins sees a code check-in on the project, it runs integration tests against the merged code.
  • When the integration tests pass, Jenkins builds a new Docker container and uploads it to a locally hosted Docker registry.
  • Jenkins also has a production job that monitors the Docker registry.  When a new container is pushed up, the job cycles through the running containers, taking each offline and putting up the new one.  The load balancer (NGINX) handles the traffic.

This demo will be posted on GitHub with all the DevOps code as I continue to refine it.  Any suggestions are more than welcome!  Perhaps I’m doing it wrong?  I will post my solutions and then let Cunningham’s Law dictate how accurate I am.

Subview Rotation on iOS 8 with Swift while keeping static main view

That title is a mouthful.  Basically, I just want to imitate the native iPhone camera app: the camera button stays locked in place while the icons rotate with the device, as does the capture area.  Should be pretty simple, right?

tl;dr:  See the code here on github that I did.


Well, with iOS 8 Apple introduced the concept of adaptive layouts (see WWDC session 216).  This brought size classes and traits, which I think are fantastic.  Except when you want to imitate the iOS camera application.  Then you don’t know what to think.

There were two main ideas I came across that I could have used:

1.  Using two windows.  This was brilliant and I think it would work; it even came with sample code showing how to keep the rotations separate.  But I had read other people saying it was a bad idea to have two UIWindows.  I played with this a little, but it seemed like too much for what I needed.  Plus, I had the UITabBarController as the root view controller, so it was somewhat complicated.

2.  UIInterfaceOrientation.  These methods all seem to be deprecated in iOS 8 and may or may not work.  The problem is that the root view controller gets the notification and then has to signal everybody else.  I might have made this work, but I didn’t want to walk all the way down the hierarchy implementing these methods, separating the views that should stay static from the ones that shouldn’t.

I went with UIDevice orientation notifications.

Here are the steps:

1. Subclass UITabBarController

Since my main project has a tab bar as the root interface, I started here.  I want all the views to rotate and use Auto Layout to do so; there’s just one subview that needs to stay the same.  This was accomplished by adding the following method to the tab bar controller:
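(A sketch of the subclass in iOS 8-era Swift; the class name is mine.)

```swift
class StaticFirstTabBarController: UITabBarController {

    // Let every tab rotate except the first one, which hosts the
    // camera-style view that has to stay put.
    override func shouldAutorotate() -> Bool {
        return selectedIndex != 0
    }
}
```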

This makes it so that view 1 in the tab bar won’t rotate.

2.  Subscribe to notifications in the App Delegate

I may have been able to do this in the main class, but I did it in the app delegate in case the view controller didn’t get the alerts.  I then had it propagate another notification.  This may be a redundant step, but I figured I’d try it and was too lazy to change it back.
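(A sketch; “SubviewRotationNotification” is a name I made up for the rebroadcast.)

```swift
func application(application: UIApplication,
    didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {
    // ask the hardware to start reporting orientation changes
    UIDevice.currentDevice().beginGeneratingDeviceOrientationNotifications()
    NSNotificationCenter.defaultCenter().addObserver(self,
        selector: "orientationChanged:",
        name: UIDeviceOrientationDidChangeNotification,
        object: nil)
    return true
}

func orientationChanged(notification: NSNotification) {
    // rebroadcast so interested view controllers can react
    NSNotificationCenter.defaultCenter().postNotificationName(
        "SubviewRotationNotification", object: nil)
}
```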

3.  Subscribe to notifications in the non-rotating view controller

Now, to react to these notifications, we rotate the subviews that need rotating.  This is done in three methods:
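(A sketch; cameraButton stands in for whichever subviews you need to spin.)

```swift
override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)
    NSNotificationCenter.defaultCenter().addObserver(self,
        selector: "rotateSubviews",
        name: "SubviewRotationNotification", object: nil)
}

override func viewWillDisappear(animated: Bool) {
    super.viewWillDisappear(animated)
    NSNotificationCenter.defaultCenter().removeObserver(self)
}

func rotateSubviews() {
    var angle: CGFloat = 0.0
    switch UIDevice.currentDevice().orientation {
    case .LandscapeLeft:      angle = CGFloat(M_PI_2)
    case .LandscapeRight:     angle = CGFloat(-M_PI_2)
    case .PortraitUpsideDown: angle = CGFloat(M_PI)
    default:                  angle = 0.0
    }
    UIView.animateWithDuration(0.3) {
        // the view itself stays put; only the chosen subviews turn
        self.cameraButton.transform = CGAffineTransformMakeRotation(angle)
    }
}
```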

Maybe you have a better way?  I’d love to know!

There is one problem with this approach: if the application launches in landscape, you’ll have to rotate the device a few times before everything ends up in the right orientation.

See the full code here.

Corkscrew: SSH over HTTP Proxy

I found myself behind a firewall that didn’t allow SSH to the outside world.  It did allow an HTTP proxy, though.  So I thought: let’s make sure we can get out!  After all, I needed to download some of my code from GitHub, make changes, and upload it again.

Here’s how I did it.

1.  Install Corkscrew
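On a Mac with Homebrew (corkscrew is also packaged for most Linux distributions):

```bash
brew install corkscrew
```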

 

2.  SSH Config

Now let’s make GitHub work.  We edit ~/.ssh/config:
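Something like this (proxy.example.com:80 is a stand-in for your proxy; I point at GitHub’s ssh-over-443 endpoint since HTTP proxies usually only allow CONNECT to port 443):

```
Host github
  HostName ssh.github.com
  Port 443
  User git
  ProxyCommand corkscrew proxy.example.com 80 %h %p
```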

Now we can test by running: ssh github

This gives us the familiar response:
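With your own username, of course:

```
PTY allocation request failed on channel 0
Hi username! You've successfully authenticated, but GitHub does not provide shell access.
```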

I won’t worry about the channel 0 failure.

3. Git Config

Inside our git repository, we can update .git/config to point to the alias of github.
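(user/repo.git below is a placeholder for your own repository:)

```ini
[remote "origin"]
    url = github:user/repo.git
    fetch = +refs/heads/*:refs/remotes/origin/*
```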

Now we can do git push without any issues!

Secrets with Ansible: Ansible Vault and GPG

I was blown away last night at our Ansible PDX meetup by a great presentation from Andrew Lorente about how to track secrets alongside your applications.  Andrew showed a method for doing this that I wanted to write down so it sticks in my head!  He has his own blog here where he wrote about the solution; I wanted to go over it in a little more detail.  (Bonus: I also learned about the pbcopy command on the Mac last night!)  And since I had none of this on my machine, this walkthrough should also help anyone who hasn’t done anything with GPG yet get started.

His technique involves some pretty simple tools: gpg, gpg-agent, and Ansible Vault.

1. Generate an Ansible Vault Password

On my Mac I run:
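(The package names are from memory and may have changed; on current Homebrew the agent ships inside gnupg.)

```bash
brew install gpg gpg-agent
```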

This gets me my tools!  OK, so now I need to generate my GPG key.
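Nothing fancy here; the stock keygen does it:

```bash
gpg --gen-key
```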

Doing this I just accepted all the defaults.

Now I generate a password for the vault.
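One way to do it (my version; any random generator works, and the e-mail is the uid on the key you just created):

```bash
openssl rand -base64 48 | gpg --encrypt --recipient you@example.com \
    --output vault_passphrase.gpg
```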

Now that I have that, I follow Andrew’s instructions and create a file called open_the_vault.sh with these contents:
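(Per Andrew’s post, it’s just a gpg decrypt that leans on the agent:)

```bash
#!/usr/bin/env bash
gpg --batch --use-agent --decrypt vault_passphrase.gpg
```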

Then I make sure the file is executable:
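```bash
chmod +x open_the_vault.sh
```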

And add this to my ansible.cfg file:
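(vault_password_file is the real Ansible setting; the path assumes the script sits next to ansible.cfg:)

```ini
[defaults]
vault_password_file = ./open_the_vault.sh
```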

2.  Set Up the GPG Agent

If you now run ./open_the_vault.sh, you’ll find that it complains there’s no agent running.  There are a few ways to start one: you can create a LaunchAgent as shown here, or you can just configure something in your own shell.  I went with the shell method, basically following this post.

Create ~/.bash_gpg
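(The era-appropriate gpg 2.0 incantation: reuse a running agent if its env file is still valid, otherwise start a fresh one.)

```bash
envfile="$HOME/.gnupg/gpg-agent.env"
if [[ -e "$envfile" ]] && kill -0 $(grep GPG_AGENT_INFO "$envfile" | cut -d: -f 2) 2>/dev/null; then
    # an agent is already running; load its environment
    eval "$(cat "$envfile")"
else
    # start a new agent and record its environment for later shells
    eval "$(gpg-agent --daemon --write-env-file "$envfile")"
fi
export GPG_AGENT_INFO
```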

Append to ~/.bashrc
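Just one line, so the agent setup runs for every new shell:

```bash
source ~/.bash_gpg
```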

Create ~/.gnupg/gpg-agent.conf
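The agent config (600 seconds matches the 10-minute cache mentioned below):

```
# cache the passphrase for 10 minutes
default-cache-ttl 600
```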

Opening a new shell, I should now be able to run ./open_the_vault.sh.  It will ask for my passphrase the first time, but if I run it again, it won’t ask.  Right now default-cache-ttl is set to 600 seconds, or 10 minutes; this can be increased if I want the vault open longer.

3. Encrypt the file

The file I will be encrypting is a main.yml file that contains all my variables.  Since it already exists, I run:
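```bash
ansible-vault encrypt roles/openstack-controller/vars/main.yml
```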

Now if I look at roles/openstack-controller/vars/main.yml, it’s just a bunch of random encrypted characters!  Awesome!  Now my variables file and my passphrase file can both be committed with git.

Now obviously, if you look at the history of the project I’m working on, you’ll see the old unencrypted file, but that’s OK: I’ve changed the passwords, so it’s super secure now!  From now on, though, no more simple passwords, and I’ll be using these methods to encrypt.

4.  Sharing Keys

Obviously this solution works great for one developer, but what if we have more developers?  They will also need to be able to decrypt the vault passphrase.  To do this, we just need to encrypt the file to all of our users.  First we decrypt vault_passphrase.gpg with our open_the_vault.sh command to get the passphrase back out.

Now we encrypt it again for all of our users.  Each new user needs to share their public key with you (gpg --import) so you can encrypt to it:
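(A sketch; the addresses are placeholders, and --yes lets gpg overwrite the old file:)

```bash
./open_the_vault.sh | gpg --encrypt --yes \
    --recipient you@example.com --recipient teammate@example.com \
    --output vault_passphrase.gpg
```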

Test that it works by trying to edit the file:
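```bash
ansible-vault edit roles/openstack-controller/vars/main.yml
```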

 

VMware is the AOL of the private cloud

This week at VMware Partner Exchange there was a nice announcement of VMware Integrated OpenStack, or VIO.  VIO attempts to do what most of the developers of OpenStack have failed to do: make OpenStack easy to deploy and manage.  (It isn’t; anybody who tells you otherwise is trying to sell you something.)  The sound bite is this: you can leverage your investment in skills and knowledge of existing VMware products and transition to OpenStack.  Or this: VMware is the easy button for OpenStack.

The first statement (paraphrased by me) is worthless, and the second statement (also paraphrased by me) is just a lie.  Putting OpenStack on top of ESX is actually harder to manage and troubleshoot than running it on native KVM.  With VIO you are dealing with an appliance that has its own layers of complexity, plus dependencies that don’t really need to be there.  I’m convinced that if you want to learn OpenStack, you can’t use these magical tools to install it; you first have to go through a manual install.  It took me a week the first time, and every time I’ve done it since I’ve run into other issues.  Each time, however, I come away with a better understanding of the core components, and I’ve even made my own automated installer for learning purposes only.  So in that sense, you are not leveraging your VMware investment at all; it is instead just holding you back.

I’ll caveat the above paragraph by saying I’m not new to Linux administration or to managing large-scale systems.  So I shudder to think how the person with all the Microsoft and VMware certs and little Linux experience will do with this.

The analogy is AOL.  Remember in their sunsetting days when they started offering AOL broadband?  By that time we were all too advanced and realized we didn’t need the bloatware or the portal in our environment.  So we cast off the shackles of AOL and started making art with their DVDs.

This is how I view VIO.  However, for those of you who have not worked with OpenStack and want to just try it out, I say go for it!  Just as I would have said to those who had never been on the Internet before.  But there will come a time, if you want to move forward with it, when you’ll have to learn some new skills.  This is IT, after all, and if you’re not learning something new every year, you quickly become irrelevant.

There are still workloads that don’t fit an OpenStack or AWS model.  Exchange and SharePoint, in my opinion, still do great on VMware or Hyper-V.  Any time you are treating applications like pets instead of cattle, VMware vSphere is a great solution for you, and there really isn’t a need for a self-service portal.  Certainly VMware’s software will continue to evolve, but there are few products from them (or from other traditional enterprise vendors, my own company Cisco included) that will help you here.

The State of Things

Everybody is failing at making a successful private cloud.  I used to think it was 80%, and that the other 19% were just in denial, but it turns out it’s 95% if you believe Gartner (which I usually don’t).  Why all the failures?  Is it management that doesn’t get behind it and has no vision?  Are the teams too siloed?  Do the engineers lack the skill set, or are they too attached to their own pet technologies?  Probably all of the above.

And it seems to be getting worse for central IT.  We asked one customer (central IT at a famous university) what projects they were working on this year: “What’s in your budget?  What do you guys want to do this year?”  He responded that his budget had been cut to basically zero and they were in maintenance mode.  The lines of business have all gone to public clouds instead, and our customer in central IT is now just supporting file shares and legacy systems.  This is the future for most of central IT unless it evolves.

You’ll notice that there isn’t “The Private Cloud Solution” for those who wish to serve their constituents the way AWS serves its customers.  Every legacy IT shop offers one, but it’s not selling off the shelves.  Cisco has UCS Director that tries to do it, HP has its tool, VMware has vCAC or whatever it’s called now…  But OpenStack is the only one that is universally both hailed and derided.  OpenStack is the solution central IT would love to love, but can’t, because it’s too X, where X = (immature, difficult, geeky, esoteric, complicated, …).

The economics still show (I’m too lazy to find the link; exercise left to the reader) that hosting your own private cloud is cheaper if you can pull it off.  One person I met at the last Ansible meetup told us his startup, with no shipping product and no users, was running up a $20,000 AWS bill every month!  The case for moving to a public cloud isn’t to save cost, as a good friend tells me; it’s to become more competitive.  It’s to move faster, get products out, and make more money.  Plus, nobody trusts that their central IT can even deliver it and keep it running.

Well, I hate to be all negative and doom-and-gloom, so I’d like to propose a possible solution.  It’s called Metacloud, recently rebranded Cisco OpenStack Private Cloud.  I think we’ll be hearing a lot more about this offering this year as more enterprises embrace it.  Check it out and see the advantages; it’s the product at Cisco I am most excited about.  Metacloud lets central IT focus on services to its customers: database as a service, load balancing as a service, common apps as a service, and so on.  That is the new role of central IT: providing higher-level services instead of just infrastructure.

To conclude, I’d like to encourage central IT people to try out VIO as a way to test OpenStack, much as I would have encouraged people getting on the Internet for the first time to do a 60-day trial with the AOL disk they got in the mail.  But keep in mind that just having OpenStack deployed is not going to keep your customers from fleeing your firewall for public cloud services.  If you want to keep users, the real value IT needs to deliver is higher-level services.  Which services?  Take a look at what you can get from AWS for examples, because that’s who you’re competing against.

iOS: ManagedObject doesn’t trigger NSFetchedResultsController Reload

One of the problems I have been working on is updating a UITableViewCell after the parent view controller reappears.  In this case the user taps a ‘comments’ button, which segues to a new view controller where the user enters comments for the post.  Once the user enters the comment, they tap the back button to return to the previous view controller, which should show the new comment in one of its table cells.

But it doesn’t work!

I tried several solutions and spent a few hours on this.  There were some helpful posts explaining what was happening, such as this one and this one, but none of them seemed to work for me.

I started out creating an unwind segue on the main view controller:
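(A sketch of the unwind action; the reload attempts are elided to a comment, since the originals were commented out anyway.)

```swift
@IBAction func unwindComment(segue: UIStoryboardSegue) {
    // ...graveyard of attempts: reloadData, reloadRowsAtIndexPaths,
    // re-running the fetch... none of them updated the cell contents
}
```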

As you can see from all the commented-out garbage above, I tried just about everything to get that table cell to reload.  The curious thing was that the cell’s size did change, but its contents did not.

For completeness, on exit the comment view controller calls this segue:
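In iOS 8-era Swift:

```swift
self.performSegueWithIdentifier("unwindComment", sender: self)
```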

(Note: in the storyboard you have to control-drag from this view controller to its Exit icon and select the “unwindComment” segue for this to be hooked up.)

Well, none of that worked.  What I found was that the new comment wasn’t being saved by the time the unwind segue was called.  If I triggered the segue in viewDidDisappear it might work, since that’s called later, but I’m happy with my solution so I won’t change it.

Instead, I solved it by posting a notification from my data model when the comment is saved.
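(A sketch; “CommentSaved” is a name I’m using for illustration, and context is the NSManagedObjectContext.)

```swift
var error: NSError?
if context.save(&error) {
    // tell anyone listening that a comment just hit the store
    NSNotificationCenter.defaultCenter().postNotificationName(
        "CommentSaved", object: nil)
}
```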

Here, once the save succeeds, we post a notification saying we saved something.

Now, back on the main view controller (where there is no editing) we listen for this notification:
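```swift
// in viewDidLoad of the main view controller
NSNotificationCenter.defaultCenter().addObserver(self,
    selector: "commentSaved:",
    name: "CommentSaved", object: nil)
```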

Then we update the table view when it fires:
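```swift
func commentSaved(notification: NSNotification) {
    // hop to the main queue before touching UIKit
    dispatch_async(dispatch_get_main_queue()) {
        self.tableView.reloadData()
    }
}
```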

What a relief it was to see this work!

Now when the user clicks the back button after posting a comment, the main tableview is updated with that comment.

Git proxy

To install Ansible on my Red Hat machine, I had to get out through a proxy.

First, I modify .gitconfig

Mine looks like this behind the corporate firewall:
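(proxy.example.com:80 stands in for the real proxy:)

```ini
[http]
    proxy = http://proxy.example.com:80
```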

Or:
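```bash
git config --global http.proxy http://proxy.example.com:80
```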

It does the same thing.

I also had to add this to my .bashrc file:
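(The standard environment variables, with the same placeholder proxy:)

```bash
export http_proxy=http://proxy.example.com:80
export https_proxy=http://proxy.example.com:80
```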

Strange that I had to add it to two places.

I also found I couldn’t install it the way the Ansible documentation said.  I had to do:
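My best reconstruction (an assumption: sudo normally strips the proxy variables from the environment, so -E preserves them):

```bash
sudo -E pip install ansible
```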

That got me a good environment.  Next I created /etc/profile.d/ansible:
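(Again a reconstruction; the idea is to make the proxy settings available system-wide, not just in my own shell:)

```bash
export http_proxy=http://proxy.example.com:80
export https_proxy=http://proxy.example.com:80
```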

That seemed to work for me!