Demystifying DOCKER – I : The What and How

Docker is a set of coupled software-as-a-service and platform-as-a-service products that use operating-system-level virtualization to develop and deliver software in packages called containers. The software that hosts the containers is called Docker Engine.

If you are like me and didn’t understand any word of the above sentence, WELCOME, you are in the right place !!!


I stumbled upon the name ‘DOCKER’ when I started my research in Computational Mechanics at IIT-Roorkee. I was told that I needed to install this thing on my Windows machine so as to comfortably run FEniCS, a PDE solver which was essential for our research. I tried to read about it on the web, but all the terminology and concepts associated with it were too Computer-Science for my Civil-Engineering mind. In between I had to shift to Ubuntu, and thankfully there was no need for DOCKER there.

Fast forward one semester, a weak computer and a lot of COVID breaks; I had to uninstall my Ubuntu environment and shift completely to Windows. So now I had no other option but to understand and install this Docker thing. With the help of the official Docker documentation, many web resources and my seniors, I started to crack its complexity to the limit of my need. On my journey, I realized there are so many others like me, who want to utilize the applicability of Docker in their work, but are too non-computer minded to understand its installation and usage. This blogpost is written for such people. I have tried my best to keep it in layman’s terms as much as possible. Still, this is written by an absolute beginner in Docker and containerization. If you find some technical mistakes in this post, kindly let me know in the comments.


Suppose you are an application developer or someone who writes code in collaboration. Obviously, you are not the only one who ever sees and uses the application/code in development. You might have to send your application for testing or send your code to get input from your collaborator. There will be many other people/teams involved, including the end-user of your work. Now you might be using any language/framework for your work, such as Python or C++ or Javascript. All of these come with many libraries and dependencies. For example, numerical coding in Python almost always requires libraries such as Numpy, Scipy etc. Some of these dependencies may even be Operating System (OS) dependent. It is essential that everybody who runs your code has the exact same libraries and dependencies that you used when writing the code, for smooth running. Even a slight variation in the version of a library can cause the code not to work on someone else’s system and can lead to confusion and blame-games. Now the obvious solution is to send the dependencies and libraries along with the code, but this might not be smooth if there are unknown dependencies or OS-dependent libraries.

Docker gives a solution to this dilemma. In the simplest of terms, Docker creates a lightweight platform where you can develop your application, after which you can send this entire platform to everyone who has to run it. This platform will have an OS (usually a Linux distribution such as Ubuntu), the application in development, its libraries, and dependencies. Anybody else can download and run this platform as a separate entity in their system and use your application/code exactly the way it is. Let us find out how.

Your computer system is basically physical hardware on which your OS is installed. This OS could be Windows, macOS, or any of the Linux distributions. This is the Host OS. In this Host OS, or simply in your Windows/macOS/Linux environment, you can install Docker. Docker can then set up another OS, known as the Guest OS. This Guest OS will be just like a regular OS, and you can do anything you want in it. You can install software, build code, run simulations, etc. Now this Guest OS can be shipped; in other words, it can be given to someone else, and they can load the same Guest OS in their Host OS. In short, you don’t have to worry about libraries or dependencies or versions on the other person’s computer (Host OS), because you have done everything in your Guest OS and have given them all of it by shipping this Guest OS.

The idea of creating a Guest OS is not limited to Docker. You can have other platforms, like a hypervisor, over which you can create a Guest OS and then ship it. But if you are working on multiple projects, a Guest OS needs to be created for each project, which ultimately burdens the hardware and makes it more expensive, because the Guest OS/Virtual Machine concept is actually bulky in size. This is where Docker comes in. The major advantage specific to Docker is the concept of containers. A container is, in simple terms, the environment in which you work, consisting of your dependencies. Multiple projects require multiple containers, just like Virtual Machines. But the advantage with containers is that they all share the same OS backend. That is, by using Docker, you can work on multiple applications using a single Guest OS. Also, they are very lightweight; they have a size in the order of Megabytes, compared to the Gigabytes of a full Guest OS. The difference between a typical Virtual Machine setup and a Docker setup is illustrated below.


Dockerfile

A Dockerfile is a text file that you, the developer, write. It is like a cooking recipe, which contains all the ingredients and instructions to make a dish. Similarly, a Dockerfile specifies the OS for the container(s), along with the required languages, databases, libraries, dependencies and all other components of the application/code in development. An example Dockerfile can be found in this Github repository. This is the Dockerfile developed by Abhinav Gupta, which our research-lab adopts for working in FEniCS. Docker Hub is the official registry where many Docker images, built from such Dockerfiles, are publicly available. You can also create your own Dockerfile to use locally, or push your image to Docker Hub for public use.
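To give a flavour of what such a recipe looks like, here is a minimal, hypothetical Dockerfile (this is NOT the actual file from the repository mentioned above; the base image, package names and versions are just illustrative examples):

```dockerfile
# Start from an official Ubuntu base image — this becomes the container's OS
FROM ubuntu:20.04

# Install Python and the scientific libraries the project depends on,
# pinned to exact versions so everyone gets the same environment
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip3 install numpy==1.21.0 scipy==1.7.0

# Set the default working directory inside the container
WORKDIR /root/
```

Every person who builds this file ends up with the same Ubuntu, the same Python and the same library versions, no matter what their Host OS is.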

A Docker image is created from the Dockerfile using the docker build command.

Docker Image

A Dockerfile is basically just a text file containing instructions. Building it creates a Docker image. A Docker image is a portable file containing the specifications of which software components the container will run, and how. When you run the Docker image, it becomes one (or multiple) running instances, called containers.
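This Dockerfile → image → container lifecycle can be sketched with the following commands (they require Docker to be installed; the image name my-env is just a placeholder):

```shell
# Build an image from the Dockerfile in the current directory,
# and tag (name) it "my-env"
docker build -t my-env .

# List the images now available on this machine
docker images

# Launch an interactive container from that image
docker run -it my-env
```

The same Dockerfile, built on any machine, produces the same image, and hence the same container environment.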

In other words, you create and send the Dockerfile (the instructions) to everyone who uses your code. When they build it, a Docker image of the environment needed to run the code is created on their system, from which they can launch container instances and efficiently run, analyze and make changes to your code/application.

Docker Run

docker run is the command that actually launches a container. It is given in a command line interface such as Command Prompt/cmder/terminal. An example command looks like this:

docker run -v D:\Codes\:/root/ -w /root/ -it iitrabhi/fenics

which mounts the local directory D:\Codes\ into the container and launches a container from the Docker image iitrabhi/fenics . More on the above command will be detailed in the next part.

Installing Docker

Docker is available as a desktop application (Docker Desktop) for Windows and macOS. Download and install Docker on your OS by following the instructions in the official Docker documentation.

Once you have installed Docker, open Docker Desktop. Once it is up and running, go to your favourite command line interface and start running Docker images.


Well, that’s it. That is all a layman needs to know in order to install and run Docker. Creating your own Dockerfile, or cloning one from a public repository, is a bit of a complex task, and I have purposefully skipped it to keep this post simple. But you can find some excellent documentation here and here.

The next part of this blog is more specific, in the sense that it is aimed at explaining the usage of Docker to run the FEniCS PDE solver in Windows. Nevertheless, you may find some general statements and usage of Docker there as well.
