Thanks to progress in fields such as data storage, networking, and computational capability, computing has advanced enormously in recent decades. As Moore observed 50 years ago, the number of transistors in a microprocessor doubles roughly every two years, and this still holds reasonably well today. However, this progress comes at a growing economic cost, putting high-end hardware within reach of only a few. For this reason, one of the great challenges of modern computing lies in optimizing the use of hardware resources. In the field of virtualization, containerization is a good example: it not only improves cost efficiency by requiring less hardware and fewer human resources for deployment, but also consumes less energy thanks to its lower complexity.


First, what is a container?

Software containers are packages of elements that allow a specific application to run on any operating system. To achieve this, the container bundles the image of the OS it was designed for, all the dependencies the application needs in order to run, and, of course, the application's source code. In other words, the OS "running" inside the container along with the application may not be the same as the host OS. This is what makes the application portable to any operating system, and in this respect containers resemble virtualization. Volumes, shared files, ports, hardware specifications, and other aspects are defined when the container is created. Static documents, such as the application source code, are packaged together with the image. Dynamic files, such as databases or logs, must be saved in a volume mapped to the host machine, which makes the data persistent. This is necessary because all containers share the OS files they have in common; when one of them tries to modify such a file, it makes a local copy of it inside the container, and when the container is stopped those modifications are lost.
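A minimal sketch of the volume idea described above, assuming Docker is installed; the image name (alpine), the container name, and the paths are illustrative choices, not taken from the text:

```shell
# Start a container with a host directory mounted as a volume, so
# files written under /data inside the container persist on the host
# after the container exits. "alpine" and both paths are illustrative.
docker run --rm \
  --name volume-demo \
  -v /tmp/demo-data:/data \
  alpine sh -c 'echo hello > /data/greeting.txt'

# The container has already exited (--rm removed it),
# but the file written to the volume survives on the host:
cat /tmp/demo-data/greeting.txt   # prints: hello
```

Without the `-v` mapping, the file would have been written to the container's private copy-on-write layer and lost when the container was removed.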

Virtual Machines vs Containers

Let’s compare Virtual Machines and Containers in order to understand the difference.

Virtual Machines

  • Hardware abstractions that divide the server’s resources between the machines.
  • Each VM includes a full copy of the OS, containing applications, dependencies, libraries… This makes them really heavyweight.
  • Low compatibility.
  • Slow deployment.

Docker Containers

  • Abstractions at the application layer that group code and dependencies together.
  • Each container shares the OS kernel with the other containers; that is, they share common libraries and dependencies. This makes them really lightweight.
  • Full compatibility.
  • Fast deployment.
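The "fast deployment" point is easy to check on any machine with Docker installed; the image name here is an illustrative choice:

```shell
# A container typically starts in well under a second once the image
# is cached locally, whereas a VM has to boot a full OS.
# "alpine" is an illustrative image choice.
time docker run --rm alpine echo "container is up"
```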


Docker is an open source project that allows the deployment and management of applications inside software containers. It provides an additional layer of abstraction and automation over virtualization. In simple words, Docker is the software that manages the intermediate layer between the container and the host: it is responsible for allocating resources between the host and the containers, isolating memory spaces, managing network interfaces, and so on. And, of course, Docker also provides tools for container management, such as creating, starting, and stopping containers.
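Part of that resource allocation is exposed directly as `docker run` flags. A small sketch, where the limit values, the image, and the container name are illustrative assumptions:

```shell
# Create a container with explicit resource limits.
# 512 MB of RAM and one CPU are illustrative values, as is the
# container name "limited-demo" and the "alpine" image.
docker run -d \
  --name limited-demo \
  --memory 512m \
  --cpus 1 \
  alpine sleep 60
```

The host kernel then enforces these limits on the container's processes, which is how Docker keeps containers from starving each other of resources.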

It is widely considered the most popular and complete containerization tool.

Get Started

There are many tools for working with Docker containers, but a handful of commands is enough to get started with the environment:

  • Run a container from an image: docker run -itd image
  • Start a stopped container: docker start container
  • Stop a running container: docker stop container
  • List running containers: docker ps
  • List local images: docker images
  • Delete a container: docker rm container
  • Delete an image: docker rmi image
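Putting those commands together, a minimal first session might look like this; the image (alpine) and the container name (demo) are illustrative choices:

```shell
# Run an interactive, detached container from the alpine image.
# "demo" is an illustrative container name.
docker run -itd --name demo alpine

# Confirm it is running, then stop and remove it.
docker ps
docker stop demo
docker rm demo

# Finally, the image itself can be removed.
docker rmi alpine
```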

Writer: Alberto Moragrega
Reviewer: César Hernández