Anyone writing code has probably seen a bunch of references to Docker by now. It’s like this new toy that’s all the rage, but for people like me — where picking up new things takes a crap-load more work than it used to — the general reaction is “I’ll learn that when it’s unavoidable.” Alas, a few weeks back it became unavoidable, and I’m here to report back.
If you’re even mildly inquisitive, a quick scan of the Docker docs says a lot about containers, and it’s pretty obvious that a container is some kind of virtual environment, but there’s not much that tells you why you should give a damn. I’m going to try to fix that. I’m not going to describe how to use Docker here, just what it’s for. There are lots of great resources elsewhere on making use of it.
If you got into development in a time when your workplace had a nice air-conditioned room with raised floors and racks of servers and keyboard/console switches, then this post is for you.
The TL;DR on this is that the learning curve is a lot less steep than it seems at first, and the flexibility it gives you to set up and change configurations is truly powerful. Invest a few days in learning the basics and you can build, rebuild, and reconfigure virtual server rooms that significantly reduce the amount of time needed to maintain your own local environment. As a common example, if you’ve ever sworn at LAMP/XAMPP/MAMP configurations only to start over from scratch, or if you’ve tried to get two versions of just about anything running on the same system, then Docker definitely is for you.
Let’s start with what we’d find in a typical old server room: a file server, a database server, some kind of web server. Probably the business was running in a Microsoft environment, so the file server would be running Windows NT. Unless you were very unlucky, the web server was running Linux/Apache. The database server could be just about anything: MySQL running on a Linux box, Microsoft SQL Server, maybe even Oracle running on an IBM mid-range under AIX. It’s worth noting here that the sum total computing and storage capacity of these servers was a fraction of what you can get now in a gaming laptop.
Now let’s say your development team is charged with building the Next New Thing. The New Thing is expected to be in production for six to ten years, so you’re not working with the ancient versions of all those tools that are in production. You’ve got new operating system versions, new DB engines, and so on. This means the development side of that server room has duplicates of all the production servers, running in a newer environment.
If you’re really lucky, someone in IT is responsible for keeping all these machines up to date, but even then you’ve got to coordinate updates with the whole team to make sure nothing breaks. IT also spends a lot of time running cables, hooking servers to switches and configuring routers to make sure the data goes where it needs to.
If you’re in a really big company with good resource allocation, you’ve actually got four sets of all those servers: one for the production side, one for the development side, and a test environment for each where updates can be applied and verified before moving to live.
Then there are users. Ideally you want a room full of machines, each with a distinct configuration of OS, browser, and applications. In practice you simulate most of this, or you have a bunch of people in the company who will “try it out” for you and report back.
Fast forward a decade or three. Your team is distributed, your source code control is distributed, you have automated tests, and the production server room has been replaced by a bunch of instances in the cloud. IT folk aren’t running cables, they’re distributing workloads across virtual clusters. Your development machine is running everything that used to be in that machine room, and the maintenance that used to be done by IT is on you. Getting a working environment running is a time sink, and something like a major upgrade is typically not reversible: downgrading your database server means uninstalling the new one and restoring data from backups. Your development environment is far more productive when it’s working, but when it’s not, tool chain and environment maintenance is a frustrating time sink. Oh for the days when all these things were just there on your network to hook into and work with!
Enter Docker. Docker is a lightweight virtualization environment. This means it can run things that behave like distinct machines, but without the overhead (and without some of the isolation) of a full virtual machine. Docker gives you a large library of base images to start from. Want to try out PHP 8 on Apache? “FROM php:8.0-apache” gets you the base configuration. Configuration is reasonable: I’ve got a 27-line Dockerfile that sets up Apache for a specific application and configures it for remote debugging. Each container can roughly correspond to one of those machines in our old-school server room. You can get a web server running Apache, or one running Nginx; that server can be running an ancient PHP or the latest version; your database server can be PostgreSQL, MySQL, MariaDB… just choose the base image. Better yet, you can have all of them. At the same time.
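Here’s a rough sketch of what such a Dockerfile can look like (this isn’t my actual 27-line file; the extension choices, the Xdebug version, and the source path are just for illustration):

    # Start from the official PHP 8 + Apache base image
    FROM php:8.0-apache

    # Add the PDO MySQL extension (assuming the app talks to MySQL)
    RUN docker-php-ext-install pdo_mysql

    # Install and enable Xdebug for remote debugging (3.1.6 supports PHP 8.0)
    RUN pecl install xdebug-3.1.6 && docker-php-ext-enable xdebug

    # Enable Apache URL rewriting, which most PHP frameworks expect
    RUN a2enmod rewrite

    # Copy the application into Apache's document root
    COPY ./src/ /var/www/html/

Build it once with “docker build -t my-app .” (pick your own tag) and the image is ready to run anywhere Docker is.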
But what about all those cables and router configurations? Docker Compose handles that. With Docker Compose you can set up a virtual network of systems. A relatively straightforward configuration file lets you define all the machines in your virtual data centre, how they talk to each other, and what data they share with your physical machine (via volume mapping or just making a copy). Want a data centre that runs your old web server but the latest database? Create a Compose file for it and fire it up!
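Here’s a bare-bones sketch of what a Compose file for one of those virtual data centres might look like (the service names, image tags and paths are just placeholders):

    services:
      web:
        build: .                  # build the web container from the Dockerfile above
        ports:
          - "80:80"               # expose Apache on the host's ports 80 and 443
          - "443:443"
        volumes:
          - ./src:/var/www/html   # share the application source with the host
        depends_on:
          - db
      db:
        image: mariadb:10.6       # swap this tag to try a different database version
        environment:
          MARIADB_ROOT_PASSWORD: example
        volumes:
          - dbdata:/var/lib/mysql # keep the database files in a named volume
    volumes:
      dbdata:

One “docker compose up” and both machines are running and talking to each other on their own private network.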
The great joy here is that all these containers are relatively low maintenance. Rebuild the container, wait a minute or three and there it is, with the latest updates, ready to go. You don’t have to worry about how your new database engine will interact with your web server configuration because they’re running in their own containers and can’t even see each other except through the virtual network.
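With a Compose setup like the one above, that refresh is a couple of commands along these lines:

    # Pull newer base images and rebuild anything that changed
    docker compose build --pull
    # Recreate the containers from the fresh images
    docker compose up -d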
What if you’re working on multiple projects? Odds are you’re down in Apache, messing with virtual host setups, maybe allocating alternate ports to keep things separate, which works until your connections need to be SSL. Maybe you’ve got some tweaks in there so one host runs a different version of PHP. This is possible, but it’s not fun: it’s tricky to get right, brittle, and it usually impacts the code in some unnecessary way. With Docker, switching between projects means shutting down the old project and firing up the new one (which takes just seconds), and there’s your application, running on ports 80 and 443 like it’s on its own server, because effectively it is. It’s this, beyond all the other things, that really sold me on getting off the dock and onto Docker.
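Switching looks something like this (the project directories are made up):

    # Shut down the containers for the project you were working on
    cd ~/projects/old-thing && docker compose down
    # Bring up the other project; it gets ports 80 and 443 to itself
    cd ~/projects/new-thing && docker compose up -d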
The last cool thing (from a development perspective) is being able to share containers. Got your application working great and want to let the front-end folks validate their interactions with your API? Send them the image, let them fire it up and test it “just like live” on their own systems. Want to stress test it? Stick it on a cloud instance and fire up a bank of cloud test instances to generate load. Docker offers similar advantages for deployment, but other sources cover that aspect well.
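One way to do that hand-off, if you’re not using a shared registry (the image and file names are made up):

    # Package the image as a single file you can send to a colleague
    docker save new-thing:latest -o new-thing.tar
    # On their machine: load the image and run it
    docker load -i new-thing.tar
    docker run -d -p 80:80 new-thing:latest

Pushing to a registry with “docker push” does the same job with less ceremony.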
Docker is definitely worth picking up. In just a few days you’ll be ripping outdated applications out of your main development environment, simplifying configurations and reconfiguring your virtual data centre like a pro.
Photo by Taylor Sondgeroth on Unsplash