
Chris Ciborowski

I'm a technologist & entrepreneur helping companies see the endless possibilities provided by DevOps and cloud application delivery. I'm a Docker Captain and CEO & co-founder at Nebulaworks.


Some know, but for those who don't: I am kind of a Solaris guy. I came up in this business working with Solaris. A lot of Solaris. I started very early on, with Solaris 2.5. Back in the day, it was all about working on big iron running UNIX for me — a badge of honor, if you will. E10K, check. E25K, check. And using the core functionality of Solaris to manage efficient operations.

Not to mention, there is something very gratifying about working on the command line for me - especially with a customer shoulder surfing while I was working. :)

Over the years, I came to know Sun (and later Oracle) as a great engineering company. And they churned out some really, really cool tech - DTrace and ZFS, to name two. Solaris Zones happened to be another. Very cool tech indeed. Zones re-introduced us to OS virtualization, something some of us had worked with in BSD before. Solaris Zones could be used for a number of things, but most of them revolved around ops, not dev. For example, we could use them to upgrade infrastructure while still running down-rev versions of Solaris. Solaris 8 on Solaris 10? You bet!
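For the curious, here is a rough sketch of what running Solaris 8 on a Solaris 10 host looked like with a branded zone. The zone name and paths are hypothetical, and this assumes the Solaris 8 Containers brand packages are installed:

```shell
# Configure a Solaris 8 branded zone (zone name and zonepath are illustrative)
zonecfg -z s8-legacy 'create -t SUNWsolaris8; set zonepath=/zones/s8-legacy'

# Install the zone from a flash archive taken of the old Solaris 8 system,
# sys-unconfiguring it on the way in (-u)
zoneadm -z s8-legacy install -u -a /export/s8-system.flar

# Boot it and log in - a Solaris 8 userland on a Solaris 10 kernel
zoneadm -z s8-legacy boot
zlogin s8-legacy
```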

In addition, there were some folks working on a translation layer to allow Linux binaries to run on a Solaris kernel, unmodified. But, alas, LX-branded zones - as they were known - went the way of many a Sun project. One reason being the x86 platform going 64-bit, and LX zones only supporting 32-bit code.

Fast forward 11 years. Docker has re-energized the OS virtualization movement, and rightly so. But now we are looking at it through the eyes of the developer, and containers are ideally suited to distributed, microservices-based applications. Running these containers on a laptop, or on an AWS instance, is trivial. However, taking a number of containers and running them on a scalable platform in production is a different story altogether.

By now you are probably asking how this relates to Joyent. And that is a good question. Short answer: Joyent has been running containers in production for a long, long time - not Docker containers, but Solaris containers. This makes them uniquely qualified, with real-world understanding of production concerns and insight into the best way to build a foundation for running Docker containers. And now they have. They are calling their container runtime Triton. Triton, the Greek god, was half man, half fish - the best of both worlds. And this is what Joyent has developed:

An infrastructure that is robust and well suited to container deployment, providing not only a Docker API but also ZFS and DTrace support, enabling the launch and operation of the best OS virtualization technology: Docker.

In this post I am not going to dive into the details of how Triton works…there are plenty of posts and details over at their blog (be sure to check those out). However, I have put together a small demo of setting up the environment to connect to the Triton service, along with some commands illustrating how it responds to Docker commands just like a native Linux-based Docker engine.
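The gist of the setup is that the standard Docker client talks to Triton as a remote engine over TLS, using the client's ordinary environment variables. A minimal sketch - the endpoint hostname and cert path here are illustrative, not Joyent's actual values:

```shell
# Point the stock Docker client at a remote Triton endpoint over TLS
# (hostname and cert path are placeholders for your own values)
export DOCKER_HOST=tcp://us-east-1.docker.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker/triton

# From here, ordinary Docker commands run against the remote engine
docker info
docker run -it --rm busybox echo "hello from Triton"
```

The key point is that nothing Docker-specific changes on the client side: once the environment is set, `docker` behaves as if it were talking to a local Linux engine.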

So, sit back, grab a coffee, and watch this 7:45 asciinema of Joyent Triton. It will look like there is more content past that point, but the asciinema really does end at 7:45. :)