A Beginner’s Introduction to Docker

kartikkhk
4 min read · Feb 20, 2021


What is Docker?

Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels. (Wikipedia)

In layman’s terms, containers allow a software engineer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package.

Why Docker containers?

Docker containers enable engineers to provide reliable, reproducible environments for their applications. For example, a production server could be running Node v12.18.3 while the developer’s machine has Node v14.x. A piece of code that works on the dev machine might fail on the production server. A Docker container offers a fixed, reliable environment that avoids such mismatches.
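As a concrete sketch of this idea, the base image tag in a Dockerfile can pin the exact runtime version, so every container runs the same Node regardless of what is installed on the host (the version below is just illustrative, matching the example above):

```dockerfile
# Pin the exact Node version from the production server,
# so every build gets this runtime, not whatever the host has
FROM node:12.18.3
WORKDIR /app
```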

Virtual Machines vs. Docker Containers

Virtual Machines Pros:

  • Separated environments
  • Environment-specific configurations are possible
  • Environment configurations can be shared and reproduced reliably

Virtual Machine Cons:

  • Redundant Duplication and waste of space
  • Performance can be slow and boot times can be long
  • Reproducing the machine on another computer/server is possible, but still tricky

A hypervisor, also known as a virtual machine monitor or VMM, is software that creates and runs virtual machines (VMs). A hypervisor allows one host computer to support multiple guest VMs by virtually sharing its resources, such as memory and processing power.

So how exactly do Docker containers solve the problems we face with virtual machines?

  • With containers we still have our host operating system (Windows, macOS, Linux, etc.), but instead of installing separate machines on the system (which, as mentioned above, is quite computationally expensive), we utilise the OS’s built-in container support (which is something Docker takes care of) and run a tool called the Docker Engine (a container engine) on top of it.
  • On top of the Docker Engine we can spin up containers which contain our code and the crucial tools required to run that code. These containers don’t include a bloated operating system with tons of extra tools. They might have a small operating system layer inside, but even that will be a very lightweight version of the original OS.

So, basically, Docker containers can be used to spin up low-impact environments which take up minimal disk space. Sharing, rebuilding and distribution are really easy (done through images, which I will cover in subsequent articles). Moreover, with Docker we can encapsulate whole environments without adding anything “extra”.

Spinning up a simple Node.js container!

I have set up a simple Node.js server which listens on port 3000 for this example. Github link: link

As you can see above, I have also added a Dockerfile, which I’ll use to create a container for my application.

Dockerfile:

FROM node:14
WORKDIR /app
COPY package.json .
RUN npm install --silent
COPY . .
EXPOSE 3000
CMD ["npm", "start"]

The Dockerfile above uses Node v14 as its base layer, sets /app as the working directory and copies package.json into it. It then installs the dependencies, copies the rest of the code into the working directory, documents that the container listens on port 3000 (EXPOSE), and finally starts the server with npm start.
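One caveat with COPY . . is that it copies everything in the build context into the image, including local artifacts such as node_modules. A common optional refinement (not part of the original setup) is a .dockerignore file next to the Dockerfile:

```
# .dockerignore — files Docker skips when copying the build context
node_modules
npm-debug.log
.git
```

With this in place, npm install inside the container produces a fresh node_modules, rather than overlaying whatever was installed on the host.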

Now to build the image we run: docker build -t node-docker:latest .

Now we can run the container with the command:

docker run -p 3000:3000 node-docker

Please note that we are using the -p flag to publish port 3000 of the Docker container, mapping it to port 3000 of our machine so that we can send requests to that port.
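The two numbers in -p are host:container, so the host port does not have to match the container port. For example (assuming the node-docker image built above), we could publish the app on host port 8080 instead:

```shell
# map host port 8080 to container port 3000
docker run -p 8080:3000 node-docker
# the server is now reachable at http://localhost:8080
```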

Now send a simple curl request and it will look something like:

> curl http://localhost:3000 | json_pp
{
   "msg" : "AoK"
}
