Age of ‘Dock’ing


With software stacks becoming more and more complex, with new components and dependencies, the task of ‘docking’, or shipping, software is becoming increasingly difficult.

Where Docker comes into play

Docker is an open-source project for creating lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more. Docker abstracts the entire software ecosystem into something that is easy to handle: the whole idea is that developers build applications, ship them in containers and deploy those containers anywhere.

Virtualization with and without Docker


The real selling point of a Docker-enabled architecture is that we don’t have to waste hardware resources on running guest operating systems. The Docker Engine also saves us the hassle of maintaining multiple machine configurations just to reproduce an environment.

Features of Docker

  • Docker reduces the development footprint by shipping only a minimal slice of the operating system inside each container.
  • With containers, teams across different units, such as development, QA and operations, can work seamlessly across applications.
  • You can deploy Docker containers anywhere: on physical machines, on virtual machines and even in the cloud.
  • Since Docker containers are so lightweight, they are very easily scalable.

Docker Components

Docker for Mac/Linux/Windows

It allows one to run Docker containers on macOS, Linux or Windows, serving the purpose that multiple virtual machines once did.

Docker Engine

Used for building Docker images and creating Docker containers. The Docker Engine builds Dockerfiles into usable images and runs them as containers. It is the core of Docker; nothing else runs without it.

Steps involved:

  1. Pull the Dockerfile of the required image from Docker Hub to the local machine
  2. Build the image from the Dockerfile:
    • docker build -t my_image .
  3. Start a container based on the image:
    • docker run --name my_image_container -it my_image
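The steps above assume a Dockerfile exists for the image being built. A minimal sketch of one, with an illustrative base image and file name that are assumptions rather than part of the original:

```dockerfile
# Minimal illustrative Dockerfile (image and file names are assumptions)
FROM python:3.12-slim        # base image layer pulled from Docker Hub
WORKDIR /app                 # working directory inside the container
COPY app.py .                # add the application code as a new layer
CMD ["python", "app.py"]     # default command when the container starts
```

Each instruction adds a layer on top of the base image, which is why rebuilds after small changes are fast.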

Docker Hub


The Docker Hub is the official source of pre-built Docker images, providing public (free) and private (paid) repositories for images.

Docker Compose

Compose makes it simpler to assemble applications that consist of multiple components (and thus multiple containers): you declare all of them in a single configuration file and start them with one command.

E.g., docker-compose.yml is a Compose file that creates three instances of the Crate database and an instance of the PHP framework Laravel (with some extra configuration). Crucially, the containers are linked with the links configuration option.
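A sketch of what such a docker-compose.yml might look like; the service names, the Laravel stand-in image and the port mapping are illustrative assumptions, not taken from the original:

```yaml
version: "2"             # Compose file format that supports 'links'
services:
  crate1:
    image: crate         # three instances of the Crate database
  crate2:
    image: crate
  crate3:
    image: crate
  app:
    image: php:7-apache  # stand-in for a Laravel application image (assumption)
    ports:
      - "8080:80"
    links:               # link the app container to the database containers
      - crate1
      - crate2
      - crate3
```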


All these instances and their configuration can now be started by running the following command in the same directory as the docker-compose.yml file:

docker-compose up


How does Docker work?

Docker follows a client-server architecture. The Docker daemon, or server, manages containers; it receives commands from the Docker client via the CLI or REST API. The client and daemon can run on the same host or on different hosts.


Images are the basic building blocks in Docker. An image is a ‘prototype’ of a container, just as classes are prototypes of objects in object-oriented programming. Images can be configured with applications and used as templates for creating containers. Images are organized in layers: every change to an image is added as a new layer on top of it.

This layering itself enhances the modularity of our configuration. We can find many such images uploaded by the Docker community on Docker Hub, the image repository provided by Docker Inc. With Docker Hub, we can upload images to and download images from a central repository, maintaining both public and private images. It is essentially like using Git: you can create images on your laptop, commit them locally and push them to Docker Hub.

A container is the execution environment for Docker. Containers are created from images: a container is a writable layer on top of an image. You can package your application in a container, commit it and make it a golden image from which to build more containers. Two or more containers can be linked together to form a tiered application architecture. Containers can be started, stopped, committed and terminated; if you terminate a container without committing it, all the changes made inside it are lost.
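The lifecycle described above can be sketched with the CLI; the container and image names here are illustrative assumptions, and the commands require a running Docker daemon:

```shell
docker run --name web -d nginx              # start a container from an image
docker stop web                             # stop it
docker commit web my_golden_image           # commit its writable layer as a new 'golden' image
docker run --name web2 -d my_golden_image   # build more containers from that image
docker rm web web2                          # terminate; uncommitted changes are lost
```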


Since its release, Docker has seen a huge increase in both users and contributors, and this boom itself highlights the importance of the project. Another flagship feature of Docker is collaboration: images can be pushed to a repository and pulled down to any other host to run containers from them. Moreover, Docker Hub hosts thousands of user-created images that you can pull to your hosts based on your application's requirements. Essentially, Docker Hub is becoming the GitHub of Docker images.
