Habitat and Docker

This is a repost of a blog published on Medium by our friend Ben Dang.

Everyone knows how to Docker, right? Here is a typical scenario.

# Build an Image
(docker)  $ docker build -t bdangit/helloworld .

# Run an Image
(docker)  $ docker run bdangit/helloworld

Here is how we can do it with Habitat.

# Build an Image
(habitat) $ hab build helloworld

# Run an Image
(habitat) $ hab start bdangit/helloworld

Now… you might be wondering: why should I use Habitat when it appears to do a few things similar to Docker?

The reason is that running a binary is pretty easy, but when you are tasked to start managing that thing you just launched, especially at scale, life becomes more interesting.

What does it take to productionize your application?

If you tell me “Easy, I’ll just dockerize my app and deploy that image onto a host!”, then you really need to pick a new line of work, because productionizing is a journey. To answer this question deeply, you should ask yourself these questions:

  • How do you know if your application is running?
  • How are you going to monitor it?
  • How do you deploy your application?
  • How are you going to update your application while causing minimal-to-zero downtime (if your business requires it)?
  • How are you going to configure your application?
  • How are you going to wire up your applications to its dependencies?
  • How do you know the application is live? or ready to receive traffic?
  • Where are you going to run your workload? on what physical infrastructure? (Yes, you can answer in the Cloud or give it to some container platform provider, but you still need to know what infrastructure you will deploy on.)
  • How about all the questions above running at scale (more than 1 container)?
  • and more…

For those who have been in the container world longer than I have, you might balk at this list and just say this is a piece of cake to answer. We have container frameworks, like Mesos, Docker Swarm, and Kubernetes, that can help us manage our container workloads with finesse and ease. We also have awesome service discovery systems like Etcd, Consul, and Zookeeper that can help us manage runtime configuration. We also have configuration management tools like Chef, Puppet, and Ansible, that allow us to push or pull changes through code.

Very true, but as you start doing the work of containerizing your app, deploying it into these frameworks, and managing the runtime configuration through service discovery and/or classic configuration management tools, you’re still left with a cobbled-together set of glueware that solves only parts of your production story. Further, you may come to the realization that your application is just not container ready, and getting it there might take a complete overhaul.

So what about Habitat?

While Habitat might do a few things similar to Docker, it also does a few things that can help get you on your way to production.

Built-in Health Checks and Health Endpoints

(habitat) $ curl http://habitat_machine:9361/services/helloworld/default/health

helloworld.default: up for PT118.215512719S
...

(docker)  $ huh!? I have no health endpoint :(

You can use this endpoint to query the current state of your service and let your orchestration and monitoring tell you whether it’s ready for traffic. For example, this is very useful if you are integrating with Kubernetes, which uses health checks to help you perform rolling deployments, direct traffic to healthy nodes, and more.

Yes, you could easily add a RESTful service and package it into your container, but that is yet another piece of glueware you have to manage. Instead, Habitat gives you a hook where you write a simple bash script that can parse through metrics, statistics, and just about anything you can imagine.
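For instance, a health-check hook is just a short shell script whose exit code tells the Supervisor the service’s state (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN). Here is a minimal sketch; the pid-file path is hypothetical, and purely so the sketch runs standalone it records its own PID as the “app” PID (in a real package, your run hook would write that file):

```shell
#!/bin/sh
# hooks/health_check -- a minimal sketch of a Habitat health-check hook.
# The hook's exit code maps to a health state:
#   0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN

PIDFILE="${PIDFILE:-/tmp/helloworld.pid}"

# Demonstration only: record this shell's own PID as the "app" PID so the
# sketch runs on its own; in a real package the run hook would write it.
echo $$ > "$PIDFILE"

check_health() {
  if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "helloworld.default is healthy"
    return 0   # OK
  else
    echo "helloworld.default is down"
    return 2   # CRITICAL
  fi
}

check_health
status=$?
echo "health exit code: ${status}"
# The actual hook would simply end with: exit "${status}"
```

The check itself can be anything: an HTTP ping, a log grep, a query against your app’s own stats endpoint.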

Configuration Templates

(habitat) $ hab config apply helloworld.default 1 somechange.toml
(habitat) $ HAB_HELLOWORLD='message="woof woof"' hab start bdangit/helloworld

(docker)  $ hrmm. vim this file and change that param and then do another build of the image then start it. or I could externalize the environment parameter and run my image with a new parameter. or whatever man, I'll just consul that in and have some way to parse through the key-values to affect change....

If you ever worked with Docker and had to make an image configurable, you probably scratched your head a lot. Docker does provide ENV, but you may be one of those folks with tons of tunable parameters or with multiple configuration files embedded in the application. At least in the Chef/Puppet world, you can template out the configuration files the application needs. In Docker, you’re left to rig up your own system. And yes, everyone can solve this in really exotic ways.

With Habitat, you get a very similar experience to Chef/Puppet: a config folder in which you can place all your configuration file templates.

# config/server.xml

...

    <Connector port="{{cfg.port}}" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
...
 
    <Connector port="{{cfg.ajp_port}}" protocol="AJP/1.3" redirectPort="8443" />

...
 
    <Engine name="Catalina" defaultHost="{{sys.ip}}">

...

You also get to provide sane defaults, in a very similar way to Chef/Puppet.

# default.toml

port = 8080
ajp_port = 8009
message = "meow meow"

Every parameter you expose in your configuration files is accessible through environment variables, or you can upload a key/value file to one of the Habitat Supervisors and Habitat will take care of pushing the updates out to the application for you.
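The somechange.toml applied earlier, for example, only needs to contain the keys you want to override; the Supervisor merges it over default.toml and re-renders any templates that reference them (the message key comes from the defaults shown above):

```toml
# somechange.toml -- only the overridden keys, not the whole config
message = "woof woof"
```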

A Supervisor That Keeps Your App Up!

The Habitat Supervisor will keep your app up. I’m sure you’ve woken up at 3 am and asked, “Why is the SERVICE DOWN!? Just restart!!” and wished you had built an automatic service restarter. Well, the Supervisor will try its best to keep your app running. If the app ever gets killed, the Supervisor will attempt to initialize and run it again.
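That restart behavior comes from the run hook: the Supervisor executes it to start your service and runs it again whenever the supervised process dies. A minimal sketch, where the helloworld binary path is a placeholder and a plain variable stands in for the rendered {{cfg.port}} template value:

```shell
#!/bin/sh
# hooks/run -- a minimal sketch of a Habitat run hook. The Supervisor
# executes this hook to start the service and runs it again whenever the
# supervised process dies; that is the automatic restarter at work.

# Placeholder for the rendered {{cfg.port}} template value.
PORT=8080

msg="starting helloworld on port ${PORT}"
echo "${msg}"

# A real hook would hand control to the app with exec, so signals from
# the Supervisor (stop, restart) reach the app process directly, e.g.:
#   exec ./bin/helloworld --port "${PORT}"
```

Using exec at the end matters: it replaces the hook’s shell with your app, so there is no intermediate process between the Supervisor and the thing it is supervising.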

Deferred Infrastructure Decisions

This is by far one of the best features of Habitat, because it allows any startup, small business, or enterprise to defer the decision about which orchestration framework to deploy their workload on. Maybe you are a company waiting for one of your platform teams to come up with a container PaaS, or maybe you are not sure whether you should invest your money in the XYZ cloud vs. the ABC cloud vendor. If you were to go full-blown Docker, you would also have to think about where to store the Docker images (Artifactory? the public Docker Hub? your own private registry?). This is a similar problem that Habitat’s ecosystem will need to answer, too.

Really, as an application developer, these infrastructure decisions don’t matter as much; it’s more important to get your application container ready.

With Habitat, you have the option to run your application on bare metal or virtual machines (as you always have, without containers). You also have the option to export the app as a Docker image, as a Mesos image, or (my favorite) as a tarball.

(habitat) $ hab pkg export docker bdangit/couchbase
(habitat) $ hab pkg export mesos bdangit/couchbase
(habitat) $ hab pkg export tar bdangit/couchbase

Whichever poison you choose, you’ll notice that your application is still running behind the Habitat Supervisor, which provides all the nice health-check endpoints, config templates, and a bunch of other must-haves. These are really the features that matter most to an application developer.

In Conclusion

Whether you are running containers in production through trial by fire, or just can’t fathom the amount of glue work you need to get containers running, Habitat has a few things up its sleeve that will help you on your journey to container land in a production environment.

Ian Henry

As the Technical Community Advocate for Habitat, Ian is actively helping the community and ecosystem grow. He spends much of his time helping people learn about containerization, distributed systems, and the ways that Habitat makes those things easy. Prior to joining Chef, Ian spent a number of years as an operations and tooling engineer.