Docker Tutorial: Explained Step by Step

Docker vs Container

Issues with the legacy approach of deploying applications on virtual machines:

Note: None of the above approaches is used today, as they were cumbersome, costly, and time-consuming.

A hypervisor is virtual machine software installed on a local machine to create virtual machines. Each virtual machine's guest OS is allocated dedicated memory and CPU from the host OS, whereas with Docker the applications run in containers that draw memory directly from the host OS, with no need for a dedicated allocation.

Docker replaces this virtual machine concept. Instead of installing a hypervisor, developers install the Docker software, called Docker Engine, so they can deploy and run their applications directly without creating any guest operating system.

What is Docker

Docker is a containerization platform used to automate the deployment of applications in lightweight containers.

What is a container

  • A container is a standard unit of software that packages the code, all the dependencies, and everything else required to run your application, so that the application runs quickly and reliably from one environment to another. One benefit is lower overhead: containers require fewer system resources than traditional hardware or virtual machine environments.
  • Containers also increase portability: applications running in containers can be deployed quickly to multiple operating systems and hardware platforms.
  • They offer more consistent operations: DevOps teams know that applications in containers will run the same regardless of where they are deployed.
  • They also improve efficiency: containers allow applications to be more rapidly deployed, patched, or scaled.
  • Containers are lightweight, consuming only a few MB of memory.

How is a container different from a virtual machine?

With containers we virtualize the operating system instead of the hardware.
A virtual machine, by contrast, provides an abstraction of the physical hardware: it turns one physical server into multiple servers by dividing the disk space and installing a separate operating system on each portion.

Docker


Steps in creating a Docker container

  • Docker has a repository where different images are stored. The repository is known as Docker Hub.
  • You can download images from Docker Hub.
  • To run these images on a machine, you need the Docker runtime, which you get by installing Docker Engine (the Docker host).
  • Create a container from the image.

Example:

  • Suppose you want to run a CentOS-based application on an Ubuntu server: first install Docker on the Ubuntu machine, then go to hub.docker.com and search for the CentOS image.
  • Download the CentOS image.
  • Create a container from the image (see the sketch after this list).
  • Deploy your application into the container.
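A minimal sketch of these steps on the Ubuntu host (the image tag and container name are illustrative):

docker pull centos   (downloads the CentOS image from Docker Hub)
docker images   (verifies the image is available locally)
docker run -dit --name my-centos centos   (creates and starts a container named my-centos from the image)
docker exec -it my-centos /bin/bash   (opens a shell inside the container to deploy your application)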

Note: an image cannot be deleted while a container created from it still exists; remove the container first (or force-remove the image).

How to install Docker
1. Launch an Ubuntu EC2 server.
2. sudo apt update  (updates the list of available packages)
3. sudo apt install docker.io
4. Check the Docker version with the command docker --version
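Put together, a minimal install session on the Ubuntu server might look like this (the systemctl step is an extra assumption, to make sure the Docker service is running and enabled on boot):

sudo apt update
sudo apt install -y docker.io
sudo systemctl enable --now docker   (starts Docker now and on every boot)
docker --version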

Docker Architecture

Steps in creating a Docker container:

Docker commands:
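Some commonly used Docker commands (a quick reference, not an exhaustive list):

docker pull <image>   (downloads an image from Docker Hub)
docker images   (lists images available locally)
docker run -dit --name <name> <image>   (creates and starts a container)
docker ps   (lists running containers; add -a to include stopped ones)
docker exec -it <container> /bin/bash   (opens a shell inside a running container)
docker stop <container> / docker start <container>   (stops or starts a container)
docker rm <container>   (removes a stopped container)
docker rmi <image>   (removes an image)
docker logs <container>   (shows a container's logs)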

Dockerfile

A Docker image is built with the help of a Dockerfile. A Dockerfile consists of specific instructions that describe how to build a particular Docker image.

The Dockerfile format is shown below. A Dockerfile contains all the commands necessary to build an image.

The ENTRYPOINT instruction defines the command that runs when the container starts; ENTRYPOINT allows specifying a command along with its parameters.

How to Build a Docker Image and Docker Container Using Dockerfile?

Mentioned below is the syntax of a Dockerfile:
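As a general sketch of the format (the instruction names are standard Dockerfile instructions; the arguments are placeholders):

# Comment
FROM <base-image>
COPY <source> <destination>
RUN <command>
CMD ["executable", "param1"]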

The specific commands you can use in a dockerfile are:

FROM, COPY, RUN, and CMD

  • FROM – Creates a base layer from an existing image, for example ubuntu:18.04
  • COPY (or ADD) – Adds files from your build context into the image
  • RUN – Executes commands while building the image, creating a new layer
  • CMD – Specifies the default command to run when the container starts

First of all, create a directory to hold the Dockerfile and its build context.
Now, we will create a directory named ‘test’ with the command:
mkdir test

Move into that directory and create a new empty file named Dockerfile in it:
cd test

touch Dockerfile

Open the file with an editor. In this example, we open the file using vi:
vi Dockerfile

Then, add the following content:
FROM ubuntu

MAINTAINER techcurious

RUN apt-get update

CMD ["echo", "Welcome to Docker learning"]

Save and exit the file.

Building Docker image with Docker file.

The docker build command takes the path to the directory containing the Dockerfile (here, the test directory). Its general syntax is:

docker build [OPTIONS] PATH | URL | -

Now, let’s build a basic image using a Dockerfile:

docker build [location of your dockerfile]

Now, by adding the -t flag, the new image can be tagged with a name (the trailing dot means the current directory is used as the build context):

docker build -t test_image .

How to create a container from the image:

docker run --name test_container test_image  (test_container is the container name; test_image is the image built above)

Docker Networking:

Containers inside an EC2 instance are assigned private IPs from the default range 172.17.0.0/16, and every container on that host gets an IP from this range.

Docker defines different types of networks for different use cases:

By default, applications hosted in containers on different EC2 instances cannot communicate, because each host's containers are on a separate network, even though they use the same IP range (172.17.0.0/16).

Docker network types:

  • Bridge Network
  • Overlay network
  • Host network

Bridge Network:

By default the Docker server creates a default bridge network, but you can create your own bridge network and assign your containers to that custom bridge when starting them. Bridge networks are usually used when your applications run in standalone containers that need to communicate.

Containers have their own private IP range, and the EC2 instance on which they are hosted has its own private IP range. The bridge network is enabled by default.

To check IP address of container run the command

docker inspect <container-name>

To go inside the container, run the command

docker exec -it <container name or ID prefix> /bin/bash

Bridge network mode is of 2 types:

  • Default – we can ping other containers only by container IP, not by container name.
  • Custom – we can ping other containers by both container name and container IP (see the sketch after this list).
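A quick sketch showing name resolution on a custom bridge (the network and container names are illustrative):

docker network create mynet   (creates a custom bridge network)
docker run -dit --name c1 --network mynet busybox
docker run -dit --name c2 --network mynet busybox
docker exec c1 ping -c 2 c2   (works, because custom bridges provide DNS resolution by container name)

On the default bridge, the last command would only work with c2's IP address.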

To see docker network types, run the command:

docker network ls

The default bridge network is automatically created when Docker is installed.

To create a custom bridge network, run the command

#docker network create --driver bridge net  (net is the name of the new network)

Creating a custom network allocates a new IP range for it, as shown in the diagram below:

#docker network inspect bridge  this will inspect the bridge network and show which containers are connected to it.

Next you can run your containers in this custom bridge mode network.

docker run -dit --network net --name ubuntu-net centos

Overlay network: used when containers run on multiple EC2 instances. A bridge network is suitable only when your application runs on a single EC2 instance. An overlay network works as if containers running on different Docker hosts were on the same network. It is used when you need containers on different Docker hosts to communicate, or when multiple applications work together using swarm services. It is recommended to use a separate overlay network for each application or group of applications that work together.

The docker_gwbridge connects the ingress network to the Docker host's network interface so that traffic can flow to and from swarm managers and workers. If you create swarm services and do not specify a network, they are connected to the ingress network.

Host network: if we select the host network, the container uses the Docker host's address range directly. For standalone containers, this removes network isolation between the container and the Docker host and uses the host's networking directly. The host network is used for standalone containers that bind directly to the Docker host's network, with no network isolation.

Docker compose

Docker Compose uses a YAML file in which we describe the containers (services) to create along with their required dependencies.

To run Docker Compose:

#docker compose up -d
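A minimal docker-compose.yml sketch (the service names, images, port, and password below are illustrative assumptions):

version: "3.8"
services:
  web:
    image: nginx:latest        # web server container
    ports:
      - "8080:80"              # host port 8080 mapped to container port 80
    depends_on:
      - db                     # start the db service before web
  db:
    image: mysql:latest        # database container
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder password, not for production use

Running docker compose up -d in the directory containing this file creates both containers.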

Docker Volume

Docker volumes provide persistent storage. Without a volume, the data inside a container is lost when the container goes down, which should not happen to important data; with a volume, the data persists. We can also attach multiple containers to the same volume.

A shared volume is created under Docker's data directory on the host where Docker is installed (by default /var/lib/docker/volumes).

Also, we can share a directory on the Docker host (EC2 instance) with multiple containers.

Bind mounts: a Docker host (EC2 instance) directory is bound to a container directory, so the same files are visible in both places. Use this when a developer wants any modification to their local files to be reflected immediately inside the container for testing.

The pwd command gives the current directory path.

command for Bind Mounts:
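A sketch of a bind mount command, assuming the files to share are in the current directory and httpd as the image (both are illustrative):

docker run -dit --name web -v "$(pwd)":/app httpd:latest   (binds the host's current directory to /app inside the container)

Any change made to the files in the current directory on the host is immediately visible at /app inside the container.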

Volume binding: we can connect the same volume to multiple containers.

#docker volume create test  this command will create a volume named test.

#docker volume ls  this command will list all the volumes.

#docker run -dit -v test:/app httpd:latest  this command will mount the volume into a container; the volume is mounted at /app inside the container.

So the same data can be shared with multiple containers by mounting the same volume.

Docker Swarm

Docker Swarm is a container orchestration tool that allows us to manage several containers deployed across several machines. Docker Swarm manages a cluster of instances (Docker hosts) on which containers run.

Docker Swarm ensures your application is always in a running state.

Docker Swarm also handles the management and monitoring of microservices-based applications.

Docker Swarm Architecture:

A Docker host can be either a manager or a worker.

Key points about Docker Swarm:

The manager maintains the state of the workers.

In a typical setup there are 3 manager nodes and the rest are worker nodes. Of the 3 managers, one is the leader node and it manages the worker nodes; the other 2 managers stay in sync with the leader.

Manager nodes assign tasks to worker nodes.

By default, a manager node can work as a worker node as well.

Docker recommends an odd number of managers (1, 3, 5, 7, etc.) and a maximum of 7 managers in a cluster.

Worker nodes receive the tasks assigned by the managers and execute them.

An agent runs on each worker node and reports the node's state to the manager nodes.

A cluster with n manager nodes tolerates the loss of at most (n-1)/2 managers. For example, with 7 managers, (7-1)/2 = 3, so up to 3 managers can go down without impacting the cluster.

Worker nodes need to authenticate themselves to the manager nodes using a TLS join token.

Docker Swarm commands:

To initialize the cluster: docker swarm init

docker node ls  this will show how many nodes are in the cluster.

docker swarm join --token <token> <manager-ip>:2377  (run on a node to join the cluster; the full command, including the token, is printed by docker swarm init). See the sketch below.
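A sketch of the full join workflow (the IP address and token are placeholders):

On the manager:
docker swarm init --advertise-addr <manager-ip>   (initializes the swarm and prints the worker join command)
docker swarm join-token worker   (re-prints the worker join command if needed)

On each worker:
docker swarm join --token <token> <manager-ip>:2377   (joins this node to the swarm as a worker)

Back on the manager:
docker node ls   (lists all nodes in the cluster and their status)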

  • #docker swarm init
  • #docker swarm join
  • #docker node ls  (this command cannot be run on worker nodes and can only be run on a swarm manager; worker nodes cannot be used to view or modify cluster state)
  • #docker swarm join-token worker  (if you have misplaced the original token from the manager)
  • #docker swarm join-token manager  (this will add the node as a manager)
  • To remove a node from the cluster: docker swarm leave
  • #docker info
  • #docker node rm <worker-id>  (to remove the worker node from the manager's node list)
  • #docker node rm -f worker1  (if we want to remove an active worker node from the manager)

Swarm Node promote and Demote

  • #docker node inspect <node-name>  will give detailed information about the node, including whether its role is worker or manager.
  • #docker node promote worker1 worker2  to promote worker nodes to managers.
  • #docker node demote worker1 worker2  to demote manager nodes to workers.

Docker Service(create,ls,logs)

A Docker service is the image for a microservice within the context of some larger application. Examples of services include an HTTP server, a database, or any other type of executable program that you wish to run in a distributed environment.

When you create a service, you specify which container image to use and which commands to execute inside running containers. You also define options for the service including:

  • the port where the swarm will make the service available outside the swarm
  • an overlay network for the service to connect to other services in the swarm
  • CPU and memory limits and reservations
  • a rolling update policy
  • the number of replicas of the image to run in the swarm

To create a service in the cluster, the service is defined on a manager node.

  • #docker service --help
  • #docker service create
  • Example: #docker service create -d alpine ping 192.168.25.10  (alpine is the image; ping 192.168.25.10 is the command that runs inside the containers)
  • #docker service ls  This command will list all the services created.
  • #docker service inspect <service-id>  This will give detailed information about the service.
  • #docker service logs <service-id>  This will show the service logs.
  • #docker service create --replicas 4 -d alpine ping 192.168.25.10  This will create 4 replicas of the service on the worker nodes.
  • #docker service ls  To check the status of the replicas.
  • #docker service ps <service-id>

Docker Service scale, port mapping

#docker service scale <service-id>=7  this command will scale the service to 7 replicas.

#docker service scale qv=5 9m=9  this will scale service qv to 5 replicas and service 9m to 9 replicas in the same command.

#docker service rm qv 9m  this will remove both services.

#docker service create -d -p 8090:80 nginx  (nginx is the image name) This will create the service and do the port mapping as well, so the service can be accessed on any node IP inside the cluster at the mapped port.
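For example, once the service above is running, the routing mesh lets you reach it from any node in the cluster (the IP is a placeholder):

curl http://<any-node-ip>:8090   (returns the nginx welcome page, even if the container is running on a different node)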

Docker Service Mode (Replicated, global)

Global mode: the swarm manager schedules one task of the service on every available node that meets the resource requirements and service constraints. Use it when you want a particular service to run on each node.

#docker service create -d --mode global alpine ping 8.8.8.8

If we add a new worker node to the cluster, the same task will automatically run on it as well.

Replicated services: the user specifies the desired number of replica tasks, and the swarm manager schedules that many tasks on the available nodes that satisfy the service constraints.

Docker Swarm Label and Constraint

A constraint is used when you want a particular service to run only on certain nodes.

#docker service create --replicas=3 --constraint="node.role==manager" alpine ping 192.168.25.30  This will create the service with 3 replicas only on the manager nodes.

#docker service create --replicas=7 --constraint="node.role==worker" alpine ping 192.168.25.30  This will create the service with 7 replicas only on the worker nodes.

Labels:

#docker node update --label-add ssd=true worker01  This will add the label ssd=true to worker01.

#docker service create --constraint="node.labels.ssd==true" --replicas 3 -d alpine ping 192.168.25.30

The service's tasks will be created only on nodes that match the label ssd=true.

Labels can be assigned at 2 levels: engine level and node level.

Docker Swarm Node Availability

Node availability is of 3 types: active, pause, and drain.

Active means the node is ready to take tasks from the manager.

Pause: no new tasks will be assigned while the node's status is paused.

Drain: when the status is drain, all containers on the node are shifted to other nodes. We can perform maintenance activities after moving a node to drain status.

#docker node update --availability pause worker2  this will pause worker2.

#docker node update --availability drain worker2  this will drain worker2.

Docker Swarm Service Create Options

We can reserve CPU and memory so that a service's containers are created only on instances that have the required resources, as sketched below.
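A sketch using the reservation and limit flags of docker service create (the values and image are illustrative):

docker service create -d --reserve-cpu 1 --reserve-memory 512M --limit-cpu 2 --limit-memory 1G nginx
(reserves 1 CPU and 512 MB of memory for each task, and caps each task at 2 CPUs and 1 GB)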

Docker secrets: if we have sensitive data that we do not want to transfer over the network in plain text, we use Docker secrets.

#docker secret create dbpass -  (the trailing - makes the command read the secret value from standard input: type the password, press Enter, then Ctrl+D). This will create the secret.

#docker secret ls

#docker secret inspect dbpass

or

#docker secret create mytestfile testpw  (here testpw is the file name; its contents become the secret mytestfile)

#docker service create -d --secret dbpass alpine ping 8.8.8.8  (dbpass is the secret name) With this command the service's containers are able to access the secret dbpass.

To create a DB service:

#docker service create -d --secret dbpass -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/dbpass mysql  (dbpass is the secret/file name; inside the container, secrets are always mounted under /run/secrets)

Docker Overlay Networking

Docker overlay networking allows communication between different Docker hosts and the containers and services running on them. The same IP network range is allocated across the different Docker hosts.

Overlay networking does not support windows operating system.

Ingress network: the ingress network is a type of overlay network with a built-in load balancer.

If a user tries to access a particular application on host 3 on port 80, even though the application is running only on hosts 1 and 2, the ingress network will route the request to one of the hosts where it is running (for example host 2).

The ingress network is created by default when you initialize the swarm.

#docker network create -d overlay test  (-d specifies the driver/network type; the available types can be seen with docker network ls)

If we do not specify which overlay network to attach when creating a service, it is by default attached to the ingress overlay network.

#docker service create -d --network test coolgaurav/sleeper traceroute

By default, standalone containers cannot be attached to an overlay network.

To attach containers to an overlay network, run the commands below:

#docker network create -d overlay --attachable test1

#docker container run -d --network=test1 <image name>

Inside the container, run ifconfig (or ip addr) to check the IPs allocated to eth0 and eth1. Whenever we attach an overlay network, the overlay network IP is assigned to eth0, while eth1 is attached to the docker_gwbridge network by default.

Whenever we publish a port, the ingress network is attached by default along with the overlay network.

Docker Stack

Docker Compose works on a single Docker host; it does not use swarm mode to deploy services to multiple nodes in a swarm. To deploy your application across the swarm, use docker stack deploy.

  • #docker stack deploy -c docker-compose.yml <stackname>  This will deploy a new stack or update an existing stack.
  • #docker stack ls  this will list the stacks.
  • #docker stack ps <stackname>  this will list the tasks in the stack.
  • #docker stack services <stackname>  this will list the services in the stack.
  • #docker stack rm <stackname>  this will remove the stack.
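With stacks, the compose file can also include a deploy section that swarm honors when you run docker stack deploy; a small sketch (the service name, image, port, and replica count are assumptions):

version: "3.8"
services:
  web:
    image: nginx:latest
    ports:
      - "8090:80"
    deploy:
      replicas: 3        # swarm schedules 3 tasks of this service across the cluster

Deploying it with docker stack deploy -c docker-compose.yml mystack spreads the replicas across the swarm nodes.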

Docker Events

To check events on the cluster:

#docker events

#docker events --filter 'event=create'  this will show only container-create events.

#docker events --filter 'container=<container-id>'  to see events only for a specific container

#docker events --filter 'image=ubuntu:14.04'
