Docker Crash Course: Master the Docker Architecture


To meet the needs of a business, IT delivers thousands of applications, and every business has different requirements, languages, databases, and tools. Deploying, configuring, managing, and maintaining that complexity takes people, expertise, and the right systems, infrastructure, and architecture, which translates to time and money. Read our Docker crash course to master the Docker architecture and advance your knowledge as a DevOps engineer.

What is Containerization?

Containerization is the technique of bringing virtualization to the OS level. While virtualization abstracts the hardware, containerization abstracts the operating system. Containerization is a form of virtualization, but it is more efficient because there is no guest OS: containers use the host operating system directly and share its libraries and resources as needed, unlike virtual machines. Only the application-specific binaries and libraries of a container run on the host kernel, which makes processing and execution very fast. Even booting a container takes only a fraction of a second, because all containers share the host operating system and hold only the application-related binaries and libraries. That is why they are lightweight and faster than virtual machines.

What is Docker?

Docker is a software containerization platform: you can build your application, package it along with its dependencies into a single unit (a container), and ship it to run on other machines. For example, consider a Windows-based application written in both Go and Python. This application requires specific versions of Windows, Go, and Python in order to avoid version conflicts on the user's end. A Windows Docker container can be created with the required versions of Go and Python installed along with the application, so end users can run the application in the container without thinking about dependencies or version conflicts.
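Packaging an application this way is driven by a Dockerfile, the recipe Docker uses to build the image. A minimal sketch, assuming a hypothetical Python script app.py with a requirements.txt next to it:

```dockerfile
# Start from an official Python base image (version chosen for illustration)
FROM python:3.11-slim

# Copy the dependency list and install it inside the image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY app.py .

# The command the container runs on start
CMD ["python", "app.py"]
```

Building this with docker build -t myapp . produces a single image that carries the interpreter, the libraries, and the code, which is exactly what makes the container portable: anyone who runs it gets the same versions.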

Docker lets you ship code faster, standardize application operations, move code seamlessly, and save money by improving resource utilization. With Docker, you get a single object that runs reliably anywhere. Docker's syntax is simple and straightforward, and it gives you full control. Its wide adoption means there is a robust ecosystem of tools and off-the-shelf applications ready to use with Docker. On average, Docker users ship software seven times more frequently, because isolated services can be shipped as often as needed. Small containerized applications are easy to deploy, easy to diagnose, and easy to roll back for remediation. Docker-based applications can be moved seamlessly from local development machines to production deployments in any environment. And because you can run more code on each server, utilization improves and you save a considerable amount of money.

Docker vs VM

  • The basic architecture of Docker containers and virtual machines differs in operating system support. Docker containers are hosted on a single physical server and share the host operating system, while virtual machines each run their own guest operating system on top of the host operating system. Inside a virtual machine, irrespective of the host operating system, the guest can be anything: Linux, Windows, or any other operating system.
  • Docker containers are suited to running multiple applications on a single operating system kernel; if you have applications that need to run on multiple operating system flavours, virtual machines are required.
  • Sharing the host operating system between containers makes them very lightweight and lets them start in just a few seconds, so the overhead of managing a container system is very low compared to that of virtual machines.
  • In Docker, since the host kernel is shared among the containers, the container technology has access to the kernel subsystems. As a result, a single vulnerable application can compromise the entire host server, particularly if it has root access, so running Docker containers with superuser privileges is not recommended because of these security issues. Virtual machines, on the other hand, are unique instances with their own kernel and security features, and can therefore run applications that need more privilege and security.
  • Docker containers do not have a separate guest operating system, so they can easily be ported across different platforms. Virtual machines are isolated server instances with their own operating system; they cannot be ported across platforms without compatibility issues, which matters when applications have to be developed and tested on different platforms. When it comes to portability, Docker containers are the ideal choice.
  • The lightweight architecture makes Docker containers less resource-intensive than virtual machines, so containers boot up much faster, and resource usage is flexible: unlike virtual machines, there is no need to permanently allocate heavy resources to containers. Scaling up and duplicating containers is also easier than with virtual machines, as there is no operating system to install in them.

Docker Architecture

Docker architecture includes a Docker client, which is used to issue Docker commands; a Docker host, which runs the Docker daemon; and a Docker registry, which stores Docker images. The Docker daemon running on the Docker host is responsible for the Docker images and containers. To build a Docker image, we use the command-line interface (the client) to issue a build command to the daemon, which builds an image from our inputs and saves it in a registry, either DockerHub or a local repository. If we do not want to build an image, we can simply pull one from DockerHub that another user has already built. Finally, to create a running instance of any Docker image, we issue a run command from the CLI or the client, which creates a Docker container. This is the overall flow of the Docker architecture.
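The build, push, pull, and run flow described above can be sketched with the CLI (the image name myname/myapp is hypothetical; these commands assume a running Docker daemon):

```shell
# Client asks the daemon to build an image from a local Dockerfile
docker build -t myname/myapp:1.0 .

# Push the image to a registry (DockerHub here)
docker push myname/myapp:1.0

# On any other machine: pull the image back from the registry
docker pull myname/myapp:1.0

# Ask the daemon to create a running container from the image
docker run -d myname/myapp:1.0
```

Each command goes from the client to the daemon; only push and pull talk to the registry.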

Docker Components

  • Docker engine is the heart of the Docker architecture. It is the Docker application installed on the host operating system of the host machine: a long-running daemon process that works as a client-server application.
  • The command-line interface, or Docker client, enables users to interact with Docker.
  • The REST APIs are used for communication between the CLI client and the Docker daemon.
  • Docker image is a read-only template used to create Docker containers. Images are built with the docker build command, and containers are created from them with the docker run command. You can use a ready-made image from DockerHub or build a new image to your own requirements.
  • Docker containers are the ready applications created from Docker images. A container is the running instance of an image and holds the entire package needed to run the application; it is the ultimate utility of Docker, and the application runs inside it.
  • Docker registry is where Docker images are stored. The registry can be a user's local repository or a public repository like DockerHub, which allows multiple users to collaborate on building an application. Multiple teams within the same organization can also exchange or share images by uploading them to DockerHub. DockerHub is Docker's own cloud repository, similar to GitHub.

Docker Installation

You can install Docker on all major operating systems: Linux, Windows, and macOS. It is most often used on Linux, and in this article I will show how to install Docker on an Ubuntu machine.

Step 1:  Update all the existing packages on Ubuntu:

$ sudo apt update

Step 2: Install the prerequisite packages that let apt fetch packages over HTTPS:

$ sudo apt install apt-transport-https ca-certificates curl software-properties-common

Step 3: Add the official Docker repository's GPG key to your system:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Step 4: Add the Docker repository to APT sources:

$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"

Step 5: Update the packages on the Ubuntu system, which will add docker packages from the newly added repository:

$ sudo apt update

Step 6: Run the command below to install docker:

$ sudo apt install docker-ce

Step 7: The above step should install docker on your machine, and the docker daemon must be in a running state. To check if docker is running or not, run the status command:

$ sudo systemctl status docker

The output should show that the Docker service is active and currently running.


Now that Docker is present on the system, let's go ahead and run some important Docker commands.

Docker Commands

If you just run the docker command, you will get a list of all the Docker commands and what each of them is for.

$ docker


Check docker version

This command will tell you the current docker version running on your machine.

$ docker --version




Pull a docker image

This docker command pulls the specified Docker image onto your machine from DockerHub (a repository of Docker images). Here, I am pulling the Nginx image with the latest tag.

$ docker pull nginx:latest




List docker images

This docker command lists all the Docker images on your machine.

$ docker images



Run a docker container

This docker command runs a container from the Docker image you specify. Here, I am running a container from the nginx image.

$ docker run -it -d nginx

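It is worth spelling out what the common run flags do. A slightly fuller sketch (the container name my-nginx and the port mapping are hypothetical choices, not part of the original command):

```shell
# -d      run detached, in the background
# -it     keep STDIN open and allocate a pseudo-terminal
# -p      map host port 8080 to container port 80
# --name  give the container a readable name instead of a generated one
docker run -it -d -p 8080:80 --name my-nginx nginx
```

With a port mapping like this in place, the Nginx welcome page would be reachable at http://localhost:8080 on the host.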


List running containers

This docker command lists all running containers with their details. Currently, 3 containers are running on my machine.

$ docker ps


When you run the ps command with -a, it lists all containers: those that ran in the past and exited as well as those currently running.

$ docker ps -a



Execute commands inside a container

This docker command starts a session with the container whose ID is specified; the session starts in the container's default directory.

$ docker exec -it e89d6aa683f6 bash


If I run the ls command, it lists all the files and directories in the container's default path:

root@e89d6aa683f6:/# ls


You can use the exit command to stop the session with the container.

root@e89d6aa683f6:/# exit




Stop a docker container

This docker command is used to stop a running container.

$ docker stop e89d6aa683f6



Remove a docker container

This docker command is used to remove a Docker container.

$ docker rm e89d6aa683f6


Note: If you try to remove a running container without stopping it first, you will receive an error.


Now, if I list the running docker containers, I only have 2 containers running.

$ docker ps




Login to DockerHub

This docker command is used to log in to your DockerHub account.

$ docker login



Push a docker image

This docker command uploads the specified Docker image to DockerHub. Here, xyz is the DockerHub account name, and demo is the image.

$ docker push xyz/demo



Logout from DockerHub

This docker command is used to log out of your DockerHub account.

$ docker logout




List docker networks

This docker command lists all the networks available to Docker on the machine.

$ docker network ls



Docker info

This is a generic docker command that lists information about the Docker installation, such as the total number of containers, how many are running, the number of images, etc.

$ docker info



Check container logs

This docker command shows the logs of the specified container.

$ docker logs 9836ac8c1a95



Search for a docker image

This docker command searches DockerHub for images matching the name specified in the command.

$ docker search prometheus

Final Thoughts

Today, there is a better way to package applications and their necessary components: containers. Containers help organizations become more consistent and agile. They abstract the underlying host operating system, so applications can be packaged with all of their dependencies, developers can choose the tools and environments their project needs, and operations teams can deliver applications consistently across all environments. It is easy to see the benefits once you deploy your first container-based app.

Before Docker came into the picture, there was virtualization and virtual machines. Virtualization is the technique of running a guest OS on top of a host operating system. It was a revolution in the IT world because, with VMs, developers could run multiple operating systems in different virtual machines, all on the same host. Virtual machines largely eliminated the need for extra hardware, made maintenance and recovery easier in failure conditions, and lowered the total cost of ownership through reduced infrastructure needs. But with a microservices architecture, virtual machines become costly and resource-consuming if each small service runs on a separate VM. This is where containerization comes into the picture.

