In the past 15 years, software development has changed a lot, and most of the time it has changed for the better. This is especially true of the development environment. Remember having three different Ruby on Rails projects, each using a different Ruby version, and having to juggle them with rvm? No? Good! And that was the better option: rvm at least switched versions automatically, depending on your current directory. Awesome! But when it got to the point where you needed to change your PATH to pick a different Java version, it was always ugly.
This was the dependency hell we were living in. But there was also another annoyance, which everyone has used as an excuse at least once: “It works on my machine”. The customer says it doesn’t work, the developer says it does. Managers weren’t really happy about this. So everyone was forced to use the same hardware, the same operating system, even the same editor. This is also far from ideal, since you lock yourself in with a certain group of suppliers: hardware manufacturers, software companies, etc. If someone messes up, you’re done. And the developers weren’t happy that they couldn’t use their favorite old-school, unusable-by-anyone-else editors. Right, Emacs users? (Vim users, don’t you laugh over there in the corner!)
Thankfully, a lot of people worked hard so that we never have to deal with these problems, and many more, ever again.
Enter Docker
Docker is only the latest step in the software industry’s journey towards simpler operations. It is a tool that makes a Linux feature easy to use, rather than a library or framework that becomes an integral part of your application. Its main benefit is making that feature approachable for developers.
What Docker actually does is create a Linux container in which your application is meant to run. This container holds all the dependencies the application needs and provides strong isolation from the rest of the operating system, which makes those dependencies simple to manage. The container itself, however, is not provided by Docker but by the Linux kernel; Docker merely configures it.
Does that make sense? No? Then let’s build a metaphor around the name, shall we?
The Harbour
Since the authors called this program Docker, I’ll follow the natural metaphor they must have had in mind when choosing the name.
I already explained that Docker sets up a Linux container and is, in essence, a set of tools around it. Well, one can easily think of another place where containers are used, and that would be on ships. There, containers are used to pack up goods from storage for easier transport. The container then needs to be loaded onto a ship and delivered. The person who loads the goods onto the ship is a docker.
Now, how does this metaphor map to what we do? Well, the goods that need to be shipped are actually the software. You develop an application on your local machine and then you need to let your clients use it. For them to use it, you have to deploy it on some hardware. Let’s consider the hardware to be the ship. In the past, the work of loading the software onto the hardware was done manually, by the operations guys.
The Stevedores of the IT industry
Before the invention of the intermodal container in the sixties, the docker’s job was to load the cargo directly onto the ship. Since the cargo consisted of numerous products of different shapes, this took a lot of time and effort. Moreover, moving the goods from one ship to another, or from a ship onto a truck or a train, was just as complex as loading them the first time.
In the IT sphere, the Ops needed to get the code from the developers, package it with all its dependencies, load it onto specific hardware with a specific operating system and run it. This, as it turns out, is even more complex than loading a ship in the fifties. Especially when the same hardware was used for more than one application.
With time, someone automated a part of it, someone else engineered better dependency management, and finally we got the Linux Container. This allowed the Ops to do a lot more in much less time. Update the dependencies of a single application? Easy! We need to add more servers? In a minute. Deploy a new version of the software? Not a problem. Well, not so fast…
Real-life containers are heavy
There is one problem with intermodal containers. You can’t move one alone. Not without machinery. It is a massive metal box! The smallest one is 6m x 2.6m x 2.4m. Empty, it weighs 2200kg!
Well, the Linux Container was kind of like this too. In the beginning, you needed to construct it yourself: all the namespaces, control groups and everything else. This is no job a developer wants to do. Mostly because we’re lazy and don’t care about anything but coding (if you are a manager: read ‘lazy’ as ‘efficient’). Even an operations person would get bored after a while. Or even annoyed.
So what does a docker do, when a container needs to be loaded? She uses a crane, of course. Tools! Technology!
The Software Wharf Machinery
While on a dock one would use a crane, a forklift and so on, in the software industry the equivalents are defining a container, assembling it, running it, and so on.
Now you might notice that the analogy breaks a bit over here. That’s because I listed the actions in the software industry, while for the physical world I mentioned the machines. Let’s fix this (a minimal example of the whole cycle follows right after the list):
- To define a container, you’d write a Dockerfile.
- To assemble it, use `docker build`.
- To run it, `docker run`.
- And finally, to deliver it to the machine where it will run, `docker pull`.
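To make this concrete, here is a minimal sketch of that cycle. The application, file names and image tag (`myuser/myapp`, `server.js`, the Node.js base image) are made up for illustration, but the commands are the standard ones:

```dockerfile
# Dockerfile – the definition of our container image
# Start from an existing base image with Node.js preinstalled
FROM node:20-alpine
WORKDIR /app
# Bake the dependencies into the image
COPY package*.json ./
RUN npm install
# Copy the application code and declare what runs when the container starts
COPY . .
CMD ["node", "server.js"]
```

```sh
# assemble the image from the Dockerfile in the current directory
docker build -t myuser/myapp:1.0 .

# run it locally
docker run -p 3000:3000 myuser/myapp:1.0

# publish it to a registry, then pull it on the machine that will run it
docker push myuser/myapp:1.0
docker pull myuser/myapp:1.0
```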
Now we have a working metaphor, right?
Linux Containers don’t need to be heavy
While running, containers aren’t heavy at all. That’s actually one of their major benefits compared to virtual machines. This, however, does not mean they can’t be labour-heavy to handle.
Defining, building and running a container might become a tiresome burden if we don’t have the right tools for it. And this is where Docker’s great benefit lies.
Instead of manually creating a closed environment for the process, installing the needed infrastructure and libraries, and then installing the application itself, we can task Docker with all of it using just a few commands.
There is no need to tinker with the kernel namespace or control group features of your OS. You just let Docker do the heavy lifting.
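As a rough illustration, the single command below asks Docker to set up the isolation and limits we would otherwise have to wire up by hand. The flags are real `docker run` options; the image and the numbers are just example values:

```sh
# --memory and --cpus become cgroup limits,
# --network picks how the container's network namespace is wired,
# and the image's filesystem lives in its own mount namespace.
docker run --memory=256m --cpus=0.5 --network=bridge --name demo nginx:alpine
```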
The Factory workers of the IT industry
In our metaphor, there is something missing. The containers that are loaded onto a ship are full of goods. These goods are created in a factory, somewhere onshore. They can be a bunch of different stuff, packed together and loaded into the container for transport.
Once upon a time, on the dockyard, loading the goods onto the ship was what the stevedores did. Containers moved this responsibility to the factories. This meant the factory workers needed to learn how to load a container.
When we are talking about applications, these factory workers would be the developers of the application. Their task is to create the application and load it into the container. This last part is tricky. What’s more, developers are not happy when they are distracted by configuration work.
This means it has to be really easy to work with these containers. And indeed, Docker helps there too. Once the image definition is created, the developer only needs to change the code, rebuild the image and run it. This takes only a few standard commands (or even a single one).
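For example, once a Dockerfile exists, the day-to-day loop can be as small as this one-liner (the image name and port are placeholders):

```sh
# edit the code, then rebuild the image and run a fresh container in one go
docker build -t myapp . && docker run --rm -p 8080:8080 myapp
```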
The Factory on the Shore
Everyone has heard of Netflix and, I imagine, everyone in the IT industry has heard about the way they work: no pure operations teams, no pure development teams, not even a single production server of their own, and still they manage to deliver their content to the whole world, every time, all the time.
Imagine all we talked about up to here did not exist. The process of software delivery would work the same way it did in the 1990s, and the only thing a developer would do after writing the code is throw it over the wall to the operations team and go home.
I bet this mode of operation wouldn’t fit such a business well. No one would look at Netflix with awe and respect for what they have achieved. No one would praise their flawless service online either.
But even if they did manage to build such an organization, it would have meant they needed people who are outstanding developers as well as operations experts. They would need to know in depth how the operating system works and exactly what needs to be installed on it.
If it were a factory on the shore, they would need specialized professionals who know how to produce the goods and who are, at the same time, strong and fast stevedores able to load a ship without accidents.
Now, even if that were not too much of a problem, loading the ship takes time, during which they don’t really produce more goods. This means they would need to either work double shifts (which is barbaric) or be really fast at loading. Like supernaturally fast.
Ok, Docker, but how?
Well, that’s the important stuff, isn’t it? I was all out praising Docker in this article, and there’s barely any info on how to do all that. Well, the thing is, it’s still not child’s play. There are plenty of concepts and tools to learn even if you only aim at using Docker.
There are things like images, the build process, Dockerfiles and multi-stage image builds, which come in quite handy once you start developing a complex application (a small taste of the last one is sketched below).
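The sketch below shows a multi-stage build for a hypothetical Go service: the first stage carries the whole toolchain and only compiles the binary, while the final image ships nothing but the result. The image names and paths are assumptions for illustration:

```dockerfile
# Build stage: full Go toolchain, used only to compile
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Final stage: a tiny image containing just the compiled binary
FROM alpine:3.19
COPY --from=build /out/app /usr/local/bin/app
CMD ["app"]
```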
I’d like to take a closer look at all of these, and then continue with using containers in orchestration systems like Kubernetes or Docker Swarm.
I’ll make sure to keep you posted on what I learn.
Until then,
Happy coding!
Meanwhile, you can have a look at some articles of mine:
- A React series that starts here: Entering the 21-st century: Learning React
- I built myself a table
- Some philosophy: Always Test Your Code