With the market demanding faster, higher quality release cycles at the lowest possible cost, the very nature of software development is changing — shifting from complex, monolithic code bases to easily consumable and rapidly deployable microservices.
“Microservices” refers to an architectural approach to software development where a single application is broken down into a set of smaller services. Each microservice is a virtual, autonomous entity that delivers a very specific API, running its own processes and using simple mechanisms to communicate.
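As a minimal sketch of that idea, here is a hypothetical "inventory" microservice in Python: one small, autonomous process that delivers a single, very specific API and communicates over plain HTTP. The service name, SKUs, and response shape are illustrative assumptions, not part of any real system.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical stock data owned entirely by this one service.
STOCK = {"sku-123": 7, "sku-456": 0}

class InventoryHandler(BaseHTTPRequestHandler):
    """Answers exactly one question: how many units of a SKU are in stock?"""

    def do_GET(self):
        sku = self.path.strip("/")  # e.g. GET /sku-123
        body = json.dumps({"sku": sku, "in_stock": STOCK.get(sku, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Run the service in its own thread; port 0 asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service (or a test) consumes the API with a plain HTTP call --
# the "simple mechanism to communicate" the architecture relies on.
url = f"http://127.0.0.1:{server.server_port}/sku-123"
reply = json.loads(urlopen(url).read())
print(reply["in_stock"])  # → 7
server.shutdown()
```

Because the service owns its data and exposes only this narrow contract, it can be developed, tested, and redeployed without touching any other part of the application.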
The result? QA testing is made much simpler, allowing dedicated development teams to turn out small sets of independent features faster. Rather than releasing application updates quarterly or twice a year, teams can release features within days or weeks — and sometimes hours.
Replacing a large-scale application with multiple microservices can be a great way to squeeze more performance out of the same hardware footprint. But if you really want to get the most value from your microservices, you’ll also need to incorporate Docker containers into your development architecture.
The benefits of using containers
Docker containers provide a vehicle for quickly and reliably deploying microservice features. Each container ships with a complete filesystem that includes everything the service needs to run, so the packages a service requires and the resources it can consume can be configured independently. Containers rely on Linux kernel features (namespaces for isolation, and control groups to limit resources such as memory, CPU, and network bandwidth) without requiring the full overhead of a virtual machine. They also allow small independent environments — complete with an operating system and application code — to run on the same dedicated hardware or virtual machine.
Docker containers are:

Lightweight: Because they share the operating system’s kernel, container-based architectures launch quickly, use less memory, and consume less power.

Isolated: While they share the operating system’s kernel, containers are otherwise isolated from each other. One container’s dependencies are invisible to other containers on the same system, which allows developers to use the same toolsets and workflows regardless of the host platform’s operating system.

Stateless: Most containers are stateless, allowing them to be scaled up or down independently of the physical hardware. Each container handles only a small piece of the overall functionality, which means it can be transparently replaced by a different container that offers the same functionality.
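The interchangeability of stateless containers can be sketched in a few lines of Python. Each "replica" below stands in for one container, and the round-robin dispatcher stands in for a load balancer; all names and the pricing logic are hypothetical, purely for illustration.

```python
import itertools

# Because a stateless handler's result depends only on the request --
# never on anything stored inside the replica -- a load balancer can
# send any request to any replica, and replace a replica at will.
def make_replica(replica_id):
    def handle(request):
        return {"total": request["price"] * request["qty"],
                "served_by": replica_id}
    return handle

replicas = [make_replica(i) for i in range(3)]  # "scale up" to 3 containers
balancer = itertools.cycle(replicas)            # trivial round-robin dispatch

requests = [{"price": 5, "qty": n} for n in range(6)]
totals = [next(balancer)(req)["total"] for req in requests]
print(totals)  # → [0, 5, 10, 15, 20, 25], regardless of which replica served each request

# "Replace" replica 1 with a fresh container offering the same API;
# no client or sibling replica needs to know it happened.
replicas[1] = make_replica(99)
```

If the handlers kept per-client state instead, swapping or scaling replicas would silently change results, which is why statelessness is the property that makes containers freely interchangeable.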
Fast updates of self-contained, stateless microservices in containers make teams more flexible: bugs can be fixed and updates deployed quickly, and subsystems can be completely reworked without derailing other development work, while the path from code to deployment remains automated.
Containers allow smaller teams to manage smaller “chunks” of the code base and resolve issues quickly. Errors in containerized code can be reverted and fixed without affecting other features or containers, so other teams can push ahead and continue deploying.
The portable nature of containers means you can run them on any Linux system and be certain they will operate consistently, whether they’re running on a developer laptop or in a production environment.
While introducing Docker containers into a microservice architecture can promote flexibility and efficiency, using them is not without its challenges. For instance, how do you effectively orchestrate, schedule, and manage the containers? And how do you secure them against attackers?
In my next post, I’ll take a closer look at the ideal approach to developing a container-based architecture to mitigate these (and other) potential risks.
You can learn more about microservices and containers in Pythian’s white paper Getting Started With Microservices and Containers, including a more detailed look at each of the steps that make up the approach to adopting a container-based architecture.