
Chris Ciborowski

I'm a technologist & entrepreneur helping companies see the endless possibilities provided by DevOps and cloud application delivery. I'm a Docker Captain and CEO & CO-founder at Nebulaworks.


It’s now Saturday morning. I’m home and I’m exhausted, but what a week I had in Austin at DockerCon 2017.

This week I attended my third official DockerCon. While I technically wasn't a "paid attendee" of the first DockerCon in 2014, I did get in for a bit; it just so happened that I was right down the street at CloudFoundry Summit. So you could say I've done four US DockerCons. Veteran status? Maybe that's what helped me get the title of Docker Captain. I digress.

Well, how things have changed over the years. I think the attendance numbers for the US DockerCons have gone something like this: 500, 1200, 2500, and 5500. Quite the growth. But there has been a constant vibe at each, something that other conferences distinctly lack: energy. People at this year's DockerCon were of the same DNA, excited to be learning something that is new and has the potential to transform their {business, development, IT operations}. There was a palpable excitement in the air, not only in the keynote sessions but also in the breakouts. As a speaker, it helped me get fired up for my presentation, and I hope it sparked that same energy for those who attended.

Energy, check. That's a key component of a good conference. Let's discuss the big announcements and sessions, and sum up the week.

The Moby Project and LinuxKit

Right out of the gate, Solomon went big. He announced two projects that are quite interesting on the surface, but only when you dive into the details does the real impact become apparent.

The Moby Project: Consolidating all the open source docker projects under one roof

Starting with the Moby Project: I think there is some confusion around what this actually is. Think of it like this. Docker (the company, capital "D") created a tool called docker (lowercase "d"), which is open source software. The docker tool actually comprised a number of open source components, like libnetwork, containerd, and runc. In addition, Docker created a number of other tools (InfraKit, HyperKit, LinuxKit) that were used internally to build products like Docker Datacenter and the cloud editions. What Docker did was take the whole of these tools and put them all under the Moby Project.

By doing so, Docker has given the community a holistic way to build everything from a fully functional container orchestrator deployed on public cloud infrastructure down to the container runtime.

So, what does this mean for the community and ecosystem at large? Let’s dive into the major implications:

  • All non-Docker (the company) projects currently using docker (the project) will be using moby moving forward. Kubernetes and Mesos (with the Docker containerizer) could be using moby in the near future; that is, moby as the container engine built with just the containerd runtime.
  • If you are a contributor to the docker project, you are now going to be contributing to the Moby Project.
  • Trademark protection. Docker (the company and product, i.e., Docker Community Edition) is often confused with open source docker (the project). When someone uses the docker engine in their project and says "we use docker," it implies that Docker (the company) supports their project/product. Not anymore. These companies will either use moby or Docker Community Edition.
  • Moby is meant for teams building platforms and products, not developers trying to build docker images. Too much assembly required for the higher-level functionality of building and shipping images.

This is a move that people have been asking about for quite some time now. Docker can now focus on building commercial software under their trademark, offering it to companies that want a consistent look and feel, well-defined release cadences, and less confusion; that matters for all the folks who, to date, haven't spent time in the ecosystem (there are LOTS of people in this category). In addition, companies interested in building something new, derived from the open source tools that once were docker, now have the keys to do so. If you are a fan of competition (I am), this is a good move. Let's see what happens to projects currently leveraging docker! Options help the ecosystem become stronger and adapt more quickly to customer needs and challenges. And I am all for that!

LinuxKit: A new project released to automate the creation of purpose-built Linux OS

The second big announcement at DockerCon 2017 was LinuxKit, a tool that helps automate the building of minimized Linux operating systems. This isn't your standard-issue packaging system; rather, it takes a programmatic approach to building a Linux OS.

The concept is simple: add the items your OS needs, package it up, and run it. The best analogy is building an embedded systems OS. The resulting OS isn't really meant to be administered like your current RHEL or CentOS based VMs. No sir. Think custom to the core, running ONLY what you need. The goal is to end up with something small, immutable, and more secure with a smaller attack surface. That's right: no SSH, no patching, admin-less. Hmm…haven't I heard of this before? Yes.
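To make "add in the items that your OS needs" concrete, here is a minimal sketch of a LinuxKit build file. The overall shape (kernel, init, onboot, services sections) is how the tool works; the specific image tags and the nginx service are illustrative placeholders, not pinned versions.

```yaml
# linuxkit.yml - a minimal, illustrative LinuxKit build definition
kernel:
  image: linuxkit/kernel:4.9.x        # illustrative tag
  cmdline: "console=ttyS0"
init:
  # The base userspace: init, runc, and containerd images
  - linuxkit/init:latest              # illustrative tag
  - linuxkit/runc:latest
  - linuxkit/containerd:latest
onboot:
  # One-shot containers run in order at boot, e.g. bring up networking
  - name: dhcpcd
    image: linuxkit/dhcpcd:latest
    command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
services:
  # Long-running containers; everything else is simply absent from the OS
  - name: nginx
    image: nginx:alpine
    capabilities:
      - CAP_NET_BIND_SERVICE
```

Everything the OS runs is declared here as a container image; if it isn't in the file, it isn't on the box, which is where the small, immutable, admin-less properties come from.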

Many people asked me why Docker went this direction. In my opinion, two reasons:

  1. To have an answer to CoreOS and Rancher, which already have Linux distros specific to running containers
  2. To compete commercially against Red Hat and Canonical in shops already using their distros

Compared to Container Linux and RancherOS, there are some similarities. All three are meant to run a container image containing the dependencies and software destined to run on the host; this can be anything from a service (like HAProxy, Redis, etc.) to your own application. All three have some level of OS service abstraction from the container engine (notice I said this, rather than docker; uhm…moby) that is running your code. And all three can be modified and optimized for your given environment.

It appears that there is one area where LinuxKit diverges from the other two: it seems like Docker is preparing folks to move towards unikernels. Docker has a vested interest in this department, considering its acquisition of Unikernel Systems early last year. The Unikernel Systems team was working on the MirageOS project (and others), and the approach that LinuxKit takes is, I believe, moving us in the direction of unikernel adoption. Clearly, there is a big difference between a unikernel app deployment and a reduced-footprint Linux OS running a few containers for services. But in the way that LinuxKit approaches building an OS from a library, I see some unikernel engineering manifesting itself in the implementation. Who knows, maybe DockerCon 2019 will see more unikernel discussion.

On the topic of containerized Linux operating systems, I was recently quoted in a CRN article about RancherOS. I still stand by my comments. The use of RancherOS is more aligned with a traditional Linux OS, providing mechanisms to admin (SSH) and update (ros). Their getting started docs include this information, speaking more to the traditional admin approach. In the context of next-generation distributed systems, we must question how these operating systems are built and administered, and I feel that the embedded systems approach is more conducive to running discrete tasks like microservices.

I've written a quick how-to demonstrating how to get started with LinuxKit on Mac OS X with xhyve. If you're interested, check it out.

Modernizing Traditional Applications

This definitely was not an announcement for the technophiles in the audience. It was, in my opinion, Docker taking the opportunity to speak directly to the many attendees about how to start adopting docker in their shops. Based on the discussions I had with current and prospective clients who were in Austin, they hit the mark. It got the traditional shops asking questions, and that, my friends, is a good thing.

The concept of taking an existing application and moving it to a new technology is nothing new. Some of us old-timers have done this a few times in the past. In fact, there were companies that had programs, similar to MTA, to bootstrap the process. Remember IBM trying to convert Sun customers to AIX right around Oracle's acquisition of Sun? Yep, big technology companies trying to get a piece of the pie. And this is similar.

Docker has partnered with Cisco, HPE, and Microsoft to deliver a tightly scoped engagement that takes a "single application," does the P2C or V2C (physical- or virtual-to-container) translation, and runs that application on Docker Enterprise Edition. In 5 days. Yes, 5 days. Being in the consulting business, I can tell you that doing this requires quite a bit of automation, which means quite a bit of tooling. If you're asking whether there is tooling involved, you bet there is: image2docker. This tool (actually two tools, one for Windows and another for Linux) is what the teams will use to lift and shift the application. And because tools are involved, there's not much room for customization, so the candidate apps will be pretty generic.

This is certainly a good step forward, especially for a Docker Premier Advanced Consulting Partner like Nebulaworks. We're very excited for more of the large enterprises to get started with containers, and there is no better way than having the backing of not only large technology companies but also the large global system integrators. It is a sign that there is significant momentum and that we're pushing into the mainstream early adopters.

Breakout sessions

Another sign of momentum is the topics of the large breakout sessions and the number of folks who attend them. More specifically: what questions are the early adopters asking, and how are they being answered by the innovators (like Nebulaworks and our customers of 2+ years) and by those who created some of the tooling (like Netflix and Facebook)? To that end, there were quite a few excellent black belt sessions. Two in particular:

  • Cilium: Network and Application Security with BPF and XDP, by Thomas Graf of the Cilium Project
  • Container Performance Analysis, by Brendan Gregg of Netflix

I won't go into great detail on these because I cannot do them the justice they deserve. Suffice it to say, they were in the largest of the non-group session rooms, and they were packed, with folks standing and sitting on the floor. The reason? People are trying to figure out the harder questions of how to operationalize containers. See, changing the way that applications are deployed, to find new levels of speed and agility, creates a new set of challenges. These challenges, like how to securely distribute traffic to applications at scale (Thomas' session) and how to pinpoint an offending, noisy-neighbor service in a container (Brendan's session), are what happen when containers are put into service. Judging by the sheer number of people and the questions, I'd reckon we're seeing adoption.

Vendor Expo

I won't spend too much time on this. I'll just say: still a carnival atmosphere. I think most of the people cruising the expo hall were more interested in getting swag and entering to win a drone, VR headset, etc. What was there? Big vendor sponsors, like IBM, Cisco, and HPE, with booths talking about their approach. There were also the smaller vendor sponsors, startups and the like, vying for a piece of the container market.

Personally, I didn't see many companies that were groundbreaking or really interesting. Lots of logging done differently, container orchestration platforms, and CI/CD tools. Interesting to note: neither CloudBees nor GitHub was present.

Moby’s Cool Hacks

I'm always excited to attend the closing session of DockerCon. Why, you ask? Because it is all about the cool hacks that the community comes up with. This year was no exception, especially since the two winning cool hacks were not only developed by fellow Docker Captains but are also in use by teams today or have the potential to help drive container adoption.

Cool hack number one was Play with Docker (PWD), created by Marcos Nils and Jonathan Leibiusky. What makes PWD really cool is that it brings the ability to test and learn docker to the masses. This rings true for us because we do A LOT of training on docker, and that requires infrastructure, which gets expensive and does nothing to serve the purpose of helping folks learn docker and container concepts. PWD provides all the capabilities of docker running on infrastructure, but does so by leveraging Docker in Docker (DinD). Since DinD provides many of the benefits of a dockerized application (like booting nearly instantaneously) as well as separating containers with virtual overlay networks, it removes the need for dedicated training infrastructure. Very, very cool.
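If you want to see the DinD building block PWD relies on, it's available to anyone via the official docker:dind image. A minimal sketch (the container name and the sleep duration are just illustrative):

```shell
# Launch a complete Docker daemon inside a container.
# --privileged is required so the nested daemon can manage
# its own namespaces and cgroups.
docker run --privileged -d --name inner docker:dind

# Give the inner daemon a moment to come up, then drive it
# from the outside; this container runs inside a container.
sleep 5
docker exec inner docker run --rm alpine echo "hello from the inner daemon"

# Clean up
docker rm -f inner
```

Each PWD "instance" is essentially one of these inner daemons, which is why sessions boot in seconds instead of the minutes a VM would take.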

Cool hack number two was FaaS (Functions as a Service), created by Alex Ellis. What makes FaaS really cool is that Alex was able to create a framework that provides AWS Lambda-like functionality using docker. That's right: you can run your own version of Lambda on premises and leverage whatever application frameworks you'd like. On top of this, Alex integrated Prometheus, which enables his FaaS implementation to automatically scale your services as necessary. I'm already discussing using FaaS to provide an API for a service that doesn't currently have one, which would allow us to do some really interesting things with the CI/CD pipelines we build.
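To give a feel for the developer experience, here is a rough sketch of trying FaaS out on a single node. The deploy script, the sample echo function, and the gateway port are my recollection of the project's README defaults at the time, so treat them as assumptions and check the repo before running:

```shell
# FaaS deploys as a Docker swarm stack, so initialize a
# single-node swarm first
docker swarm init

# Grab the project and deploy the sample stack
# (gateway, Prometheus, and a handful of demo functions)
git clone https://github.com/alexellis/faas
cd faas
./deploy_stack.sh

# Invoke a sample function through the API gateway;
# any container that reads stdin and writes stdout can be a function
curl -s -d "hello, faas" http://localhost:8080/function/func_echoit
```

The curl-to-gateway model is the Lambda-like part: callers hit an HTTP endpoint and never know or care which container serviced the request.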

In Summary…

Overall, DockerCon 2017 was fantastic. It was great to see all of our friends, customers, and partners in Austin. It's a great city, easy to get around, with good coffee to be found offsite. Combine that with the Rainey Street party (which I unfortunately couldn't fully partake in, as my session was Wednesday morning), the announcements, and the sessions, and it made for a great conference. I'm certainly looking forward to DockerCon going, going, back, back to Cali next year (it will be in San Francisco at Moscone). But we will all have to see what another year of time and maturity brings. It's anyone's guess at this point.

This post originally appeared on the Nebulaworks website.