Containerization with Docker

Containerization with Docker – Based on CentOS 7.2

*********************************************************************

1. Background

Containers and virtual machines are two technologies that have been around for some time now. Both aim to abstract the workload from the underlying hardware layer, but there are important differences between them. I briefly describe each of them first and then highlight the differences.

Virtualization

Virtualization is the creation of a virtual version of something, which in our case refers to the virtual creation of an OS. So operating system virtualization means abstracting the OS from the underlying hardware or software.

This happens by using a software layer called a hypervisor to emulate the underlying hardware such as CPU, RAM, I/O and network. So basically the guest OS (virtual machine) usually has no idea that it is not interacting with real hardware but with a software layer that emulates it. There are different OS virtualization approaches, such as installing the hypervisor directly on the hardware (called bare-metal) or running the hypervisor on top of another OS, as VirtualBox does.

There are several advantages of using OS virtualization:

  • Better hardware utilization (resource optimization)
  • Application isolation

Containerization

It is also called operating-system-level virtualization or container-based virtualization. I would say containerization is simply an alternative to full OS virtualization, but a very lightweight one that does not need a hypervisor. Let's get a feeling for what a container is.

A container consists of a complete runtime environment encapsulated into one package, with the goal of abstracting away the underlying infrastructure. So basically we encapsulate everything (such as a complete OS or an application) into a package.

As a result of this encapsulation, we are able to move our container between different systems, and it should work out of the box as long as an appropriate container technology like Docker exists on that system.

The following figure shows a good comparison between virtualization and containerization. As can be seen, a container technology like Docker uses key features of the real OS, such as the kernel and related features like namespaces.

[Figure: virtual machine stack vs container stack]

Virtualization vs Containerization

a. Both can be used for process isolation, with the following key differences:

  • Containers use far fewer resources, since they do not need a hardware emulator (hypervisor) to run another OS.
  • A virtual machine provides complete process isolation but is much heavier (more resources needed). It is possible to run thousands of containers on a host, which is not the case for VMs.
  • Security: isolation between the host and the container is not as strong as in virtualization, simply because containers share the kernel of the host. So I would say it is more probable that a process in one container penetrates into the kernel space of the host.

b. The start time of a container is much faster than that of a virtual machine, since all containers share the kernel of the host system. This gives them the advantage of being very fast, with almost zero performance overhead compared with VMs.

c. It is not possible to run a complete OS in containers as we do in virtualization. All containers share the same kernel, which belongs to the system on which the container technology is installed.

d. In virtualization, allocation of resources needs to happen beforehand, so the resources are guaranteed.

e. The type of containers that can be installed on the host must be compatible with the kernel of the host. This means we cannot install Linux containers on a Windows system or vice versa.

f. Maybe one of the best papers that highlights the comparison is “An Updated Performance Comparison of Virtual Machines and Linux Containers” from the IBM research group. The following figure shows a comparison of performance measured with Linpack.

[Figure: Linpack performance comparison of KVM and Docker]

Their results show that both KVM and Docker introduce negligible overhead for CPU and memory performance (except in extreme cases).

Conclusion: Full machine virtualization offers greater isolation at the cost of greater overhead, as each virtual machine runs its own full kernel and operating system instance. Containers, on the other hand, generally offer less isolation but lower overhead through sharing certain portions of the host kernel and operating system instance.

2. Containerization in Depth

The concept of containers in Linux, such as LinuX Containers (LXC) and runC, is not new. It has been used for years, especially by Platform-as-a-Service vendors. What highlighted the concept of containers in recent years is the emergence of new technologies like Docker, which, despite using LXC or runC under the hood, make it more robust and easier to implement. I would say we can use containers in general for two use cases:

a. Containerizing an OS: you can think of it as a virtual machine, except we do not have any hypervisor and the container shares its kernel with the host OS (basically containers do not have a kernel of their own). It has been designed to run multiple processes (services). It is useful when you want to run identical or different flavors of distros. The following figure illustrates the concept of OS containers.

[Figure: OS containers]

b. Containerizing an application: it is mainly designed to run a single process (service/application) in a container. Docker is a good example of this type, which I will discuss in detail later.

Let's have a short overview of LXC and runC, and then I will go completely through Docker. Let me define the term container one more time before I dive into depth:

Container: it is one process, or a collection of processes, that runs in an isolated environment directly on top of the host OS. It shares the kernel with the host OS (if it needs a kernel at all) and can obviously be constrained to use a defined amount of resources.

LinuX Containers (LXC)

It is an OS-level virtualization method (meaning it does not need a hypervisor) for running container(s) on a host OS without the need for a separate kernel. Although you can encapsulate any application/process in containers here, I would say the main use of LXC is running complete OSes (Linux distributions) in containers. The container shares the kernel with the host OS, so its processes and file system are completely visible from the host.

LXC uses two Linux kernel features to provide container functionality (process isolation):

  • namespace isolation: it provides complete isolation of an application's view of the operating environment, such as process trees, file systems and networking.
  • cgroups: they provide limitation and prioritization of resources such as CPU, memory and I/O used by processes.

Description: Linux implements several different namespaces (such as the network, user and PID namespaces) to wrap a particular global system resource in an abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of that global resource. One of the places where we can take advantage of namespaces is in implementing containers. The ‘chroot’ command, which is available in Linux and which I also used a lot during the xCAT implementation (xCAT thread), is a form of namespace that makes a process see a completely new file system root with no access to the original one.
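If you want a hands-on feeling for namespaces outside of any container technology, the ‘unshare’ tool from util-linux lets you start a shell in its own namespaces. This is just a minimal sketch; the exact flags assume the util-linux version shipped with CentOS 7, and /mnt/rootfs is a hypothetical directory holding a prepared root file system:

  • # start bash in new PID and mount namespaces; inside it, 'ps -ef' only
  • # sees processes of this namespace and bash itself appears as PID 1
  • [root@hadoop-master ~]# unshare --fork --pid --mount-proc /bin/bash
  • # the chroot form of isolation: make a process see /mnt/rootfs as its root
  • [root@hadoop-master ~]# chroot /mnt/rootfs /bin/bash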

Description: Control groups (cgroups) allow us to divide resources among user-defined groups of processes (tasks). We can monitor the cgroups we configure, deny cgroups access to certain resources, and even reconfigure them dynamically on a running system.
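As a minimal sketch of the same idea from the shell (assuming cgroup v1 mounted under /sys/fs/cgroup, which is the CentOS 7 default), we can create a memory cgroup by hand, cap it and place the current shell into it:

  • # create a cgroup called 'demo' under the memory controller
  • [root@hadoop-master ~]# mkdir /sys/fs/cgroup/memory/demo
  • # limit the group to 256 MB of memory (value is in bytes)
  • [root@hadoop-master ~]# echo 268435456 > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
  • # move the current shell (and its future children) into the group
  • [root@hadoop-master ~]# echo $$ > /sys/fs/cgroup/memory/demo/tasks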

runC

It is a lightweight, portable implementation of the Open Container Format (OCF) that provides a container runtime. Although it shares much of its low-level code with the Docker project, it is completely independent of the Docker platform. However, it is still possible to run Docker-formatted images with runC. I will not go into the details of this container platform; it is enough to know that you can install runC on a Linux distribution such as Red Hat and use it.

****************************************************************************

As I mentioned earlier, among all container technologies Docker became very famous and drew a lot of attention simply because of its simplicity of use. Let's understand Docker in depth.

Docker

Docker is one of the most famous containerization technologies and is written in the Go language. It takes advantage of several Linux kernel functionalities, such as namespaces and cgroups, and a specific file system called a union file system. Let's consider each of them shortly:

a. Docker uses namespaces to create containers. When we run a container, a set of namespaces is created for that container; they are completely isolated, and the container's access is limited to those namespaces. Docker uses several Linux namespaces, such as the following:

  • The pid namespace: Process isolation (PID: Process ID).
  • The net namespace: Managing network interfaces (NET: Networking).
  • The ipc namespace: Managing access to IPC resources (IPC: InterProcess Communication).
  • The mnt namespace: Managing filesystem mount points (MNT: Mount).
  • The uts namespace: Isolating kernel and version identifiers. (UTS: Unix Timesharing System).

b. cgroups allow Docker to share the available hardware resources among containers and, if needed, enforce limits on resource usage, such as capping the memory available to a specific container (a resource-limit example follows this list).

c. The union file system is a file system that works by creating different layers. Docker Engine uses UnionFS to provide the building blocks for containers.
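To illustrate point (b), Docker exposes these cgroup limits as flags of the docker run command. A small sketch (the image and the values here are just placeholders):

  • # cap the container at 256 MB of RAM and give it a reduced CPU share
  • [root@hadoop-master ~]# docker run -d --memory 256m --cpu-shares 512 ubuntu sleep 3600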

Important: the Docker daemon (engine) combines namespaces, cgroups and UnionFS into a wrapper called a container format; the default one at the moment is libcontainer. Docker might support other container formats in the future.

Goal

The goal is to create a Docker container. A Docker container consists of the complete runtime environment of an application, encapsulated into a complete file system, with the goal of abstracting away the underlying infrastructure (simply put, the application is packaged with everything it needs to run on the host server and is isolated from anything it does not need). What I mean by runtime environment is all the stuff an application needs in order to run, such as:

  • application/program itself
  • all related dependencies such as libraries, binaries and configuration files

So a Docker container basically guarantees that the software encapsulated into the container will run the same regardless of its environment.

Docker design

The architecture of Docker is based on a client-server architecture; client and server can be installed on the same server or separately. It has three major components, as can be seen in the following figure:

[Figure: Docker architecture – daemon, REST API and client]

a. Docker engine (Docker daemon): it is a process (long-running program) which is basically the heart of Docker. The daemon creates and manages Docker objects such as images, containers, networks and data volumes.

b. REST API: it is the interface through which the Docker client talks to the Docker engine. So basically Docker clients send their requests to the Docker engine through the REST API over HTTP (an example request is sketched after this list).

c. Docker client (command-line interface): it is used to interact with the Docker engine through the REST API.
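To get a feeling for that REST API, we can talk to the daemon's Unix socket directly. This is a hedged sketch: it assumes a curl build with Unix-socket support (curl 7.40 or newer, which is newer than the stock CentOS 7 curl):

  • # same information as 'docker version', but via the raw REST API
  • [root@hadoop-master ~]# curl --unix-socket /var/run/docker.sock http://localhost/v1.24/version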

Key Docker daemon parts

Before I go practically into Docker implementation and configuration, I will highlight three main parts of Docker that are essential for a proper understanding of the system.

a. Docker images

They are read-only templates (base images) consisting of a series of layers, from which containers are created. We can build an image from scratch or we can download and modify prepared images. What is really important is understanding that Docker uses a special file system called a union file system to combine these layers (which might be files or directories from separate file systems) into a single image (a single coherent file system). Because of this layering system, whenever we change something in one layer, the other layers remain unchanged. A good example is updating an application that is part of an image to a new version. In this case only a new layer is built, and it replaces only the layer it updates. As a result, if we want to distribute the update, only the updated/new layer needs to be transferred.

This layering strategy has several benefits such as:

  • Docker images stay lightweight
  • it speeds up the distribution of images

b. Docker container

It is built from a read-only image by running it, which adds a read-write layer on top of the image (using UnionFS). It uses the host machine's Linux kernel.

So in summary, a container is an image (the image defines the container's contents, which process to run when the container is launched, and a variety of other configuration details) plus all the stuff such as configuration files, metadata like networking info, and variables that are added on top of the image at runtime.

[Figure: relationship between Dockerfile, image and running container]

Description: a Dockerfile is basically a text file that contains, in order, all the commands used for creating a new image. Keep in mind that each command (each line of the Dockerfile) could also be executed directly on the command line; the purpose of putting them all in a text file is efficiency, running all commands in a row for automated builds. We use the “docker build” command for this purpose, which creates an automated build that executes several command-line instructions in succession.

The Docker daemon runs the instructions in the Dockerfile one by one, committing the result of each instruction to a new image if necessary, before finally outputting the ID of your new image. Note that each instruction is run independently and causes a new image to be created. Whenever possible, Docker will reuse the intermediate images (cache) to accelerate the docker build process significantly. This is indicated by the “Using cache” message in the console output.

Let's understand what happens when we run a container (via the docker run CLI command or the API), as in the above figure:

a. Loading stage: the Docker engine looks for the image, either locally or on Docker Hub.

b. Adding several layers:

  • a file system is created and a read/write layer is added on top of the image;
  • network/bridge interfaces are added (allowing the container to talk to the local host).

c. Configuring an IP address (taken from the available pool).

d. Any other process specified in the command is executed.

In summary: An image is a filesystem and parameters to use at runtime. It doesn’t have state and never changes. An image can start software as complex as a database, wait for you (or someone else) to add data, store the data for later use, and then wait for the next person. A container is a running instance of an image.

c. Docker registries

It is simply a place where you can find prepared images. There are public registries such as Docker Hub, which serves a huge collection of existing images and also lets us contribute our own. There are also marketplaces where we can buy or sell Docker images, such as the Docker Store.
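For example, we can search a registry from the command line and pull an image explicitly without running it (the search term and tag below are just examples):

  • [root@hadoop-master ~]# docker search centos
  • [root@hadoop-master ~]# docker pull centos:7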

Installation

1. Make sure the system is completely up to date; otherwise run ‘yum update’.

2. Create a repository file in /etc/yum.repos.d:

  • [root@hadoop-master yum.repos.d]# cat docker.repo
  • [dockerrepo]
  • name=Docker Repository
  • baseurl=https://yum.dockerproject.org/repo/main/centos/7/
  • enabled=1
  • gpgcheck=1
  • gpgkey=https://yum.dockerproject.org/gpg

3. Installing the docker-engine

  • [root@hadoop-master ~]# yum install docker-engine
  • Installed:
  • docker-engine.x86_64 0:1.12.2-1.el7.centos
  • Dependency Installed:
  • docker-engine-selinux.noarch 0:1.12.2-1.el7.centos
  • [root@hadoop-master ~]# docker version
  • Client:
  • Version: 1.12.2
  • API version: 1.24
  • Go version: go1.6.3
  • Git commit: bb80604
  • Built:
  • OS/Arch: linux/amd64
  • Server:
  • Version: 1.12.2
  • API version: 1.24
  • Go version: go1.6.3
  • Git commit: bb80604
  • Built:
  • OS/Arch: linux/amd64

4. Starting the docker service

  • [root@hadoop-master ~]# systemctl start docker.service
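Optionally, we can also make the service start at boot and check its state with the standard systemd commands:

  • [root@hadoop-master ~]# systemctl enable docker.service
  • [root@hadoop-master ~]# systemctl status docker.service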

Important: our systemd unit file for Docker is located at /usr/lib/systemd/system/ and was written automatically during installation. The content from my installation is as follows:

  • [root@hadoop-master ~]# cat /usr/lib/systemd/system/docker.service
  • [Unit]
  • Description=Docker Application Container Engine
  • Documentation=https://docs.docker.com
  • After=network.target
  • [Service]
  • Type=notify
  • # the default is not to use systemd for cgroups because the delegate issues still
  • # exists and systemd currently does not support the cgroup feature set required
  • # for containers run by docker
  • ExecStart=/usr/bin/dockerd
  • ExecReload=/bin/kill -s HUP $MAINPID
  • # Having non-zero Limit*s causes performance problems due to accounting overhead
  • # in the kernel. We recommend using cgroups to do container-local accounting.
  • LimitNOFILE=infinity
  • LimitNPROC=infinity
  • LimitCORE=infinity
  • # Uncomment TasksMax if your systemd version supports it.
  • # Only systemd 226 and above support this version.
  • #TasksMax=infinity
  • TimeoutStartSec=0
  • # set delegate yes so that systemd does not reset the cgroups of docker containers
  • Delegate=yes
  • # kill only the docker process, not all processes in the cgroup
  • KillMode=process
  • [Install]
  • WantedBy=multi-user.target

5. Verify docker is installed correctly by running a test image in a container

[root@hadoop-master ~]# docker run --rm hello-world

  • Unable to find image ‘hello-world:latest’ locally
  • latest: Pulling from library/hello-world
  • c04b14da8d14: Pull complete
  • Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
  • Status: Downloaded newer image for hello-world:latest
  • Hello from Docker!

This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:

  • 1. The Docker client contacted the Docker daemon.
  • 2. The Docker daemon pulled the “hello-world” image from the Docker Hub.
  • 3. The Docker daemon created a new container from that image which runs the
  • executable that produces the output you are currently reading.
  • 4. The Docker daemon streamed that output to the Docker client, which sent it
  • to your terminal.

To try something more ambitious, you can run an Ubuntu container with:

  • $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:

For more examples and ideas, visit:

You have new mail in /var/spool/mail/root

Description: hello-world is a prepared image that has already been built by Docker, Inc. and we can just use it. The “run” command is a subcommand that creates and runs a Docker container.

Important: the Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the root user, and other users can only access it with sudo. For this reason the Docker daemon always runs as the root user. To avoid this problem I run everything as the root user; to let regular users also use Docker commands I did the following step:

6. I need to add my user (hossein) to the docker group (if the docker group does not exist, simply create it with ‘groupadd docker’):

  • [root@hadoop-master ~]# usermod -aG docker hossein
  • [root@hadoop-master ~]# su - hossein
  • [hossein@hadoop-master ~]$ docker run hello-world
  • Hello from Docker!

This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:

  • 1. The Docker client contacted the Docker daemon.
  • 2. The Docker daemon pulled the “hello-world” image from the Docker Hub.
  • 3. The Docker daemon created a new container from that image which runs the
  • executable that produces the output you are currently reading.
  • 4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.

We can check the local images that exist on our system with the following command:

  • [hossein@hadoop-master ~]$ docker images
  • REPOSITORY   TAG       IMAGE  ID        CREATED            SIZE
  • hello-world     latest    c54a2cc56cbb    3 months ago     1.848 kB

Configuration

I hope the concepts regarding Docker are clear by now. I will go through more details with some practical examples, which will make everything clearer.

Example 1 – running a container

Let's start with an image called ‘whalesay’, a program that prints random or custom messages. It contains an adaptation of the Linux cowsay game, originally written in 1999 by Tony Monroe. This image is used by the Docker demo tutorial purely as a teaching tool. It is publicly available on Docker Hub; you can easily go to the Docker Hub website and search for ‘whalesay’. The image page should include information such as what kind of software the image contains and how to use it.

1. Since I don't have this image locally, I need to pull it from Docker Hub over the Internet and then run it. I use the following command to:

  • download the image to my local system
  • create a container out of the image
  • run the container and show the results

[hossein@hadoop-master ~]$ docker run docker/whalesay cowsay boo

  • Unable to find image ‘docker/whalesay:latest’ locally
  • latest: Pulling from docker/whalesay
  • e190868d63f8: Pull complete
  • 909cd34c6fd7: Pull complete
  • 0b9bfabab7c1: Pull complete
  • a3ed95caeb02: Pull complete
  • 00bf65475aba: Pull complete
  • c57b6bcc83e3: Pull complete
  • 8978f6879e2f: Pull complete
  • 8eed3712d2cf: Pull complete
  • Digest: sha256:178598e51a26abbc958b8a2e48825c90bc22e641de3d31e18aaf55f3258ba93b
  • Status: Downloaded newer image for docker/whalesay:latest
  • _____
  • < boo >
  • —–
  • \
  • \
  • \
  • ## .
  • ## ## ## ==
  • ## ## ## ## ===
  • /””””””””””””””””___/ ===
  • ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~
  • \______ o __/
  • \ \ __/
  • \____\______/

If we check the existing local images again, we will see that whalesay is there.

[hossein@hadoop-master ~]$ docker images

  • REPOSITORY TAG IMAGE ID CREATED SIZE
  • hello-world latest c54a2cc56cbb 3 months ago 1.848 kB
  • docker/whalesay latest 6b362a9f73eb 17 months ago 247 MB

If we run the above command again, Docker will not download the image to the local computer again; it simply uses the local copy (to fetch an updated version from Docker Hub we have to pull it explicitly, as shown below).
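  • [root@hadoop-master ~]# docker pull docker/whalesay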

Example 2 – running a container

In this example we will use an image called tutum/hello-world, a simple web server image that serves a ‘Hello World’ page on port 80.

  • [root@hadoop-master ~]# docker run tutum/hello-world

As before, if the image does not exist locally, Docker will first download it from Docker Hub and then run it. Open another session to the Docker host and check whether the container is running with the following command:

  • [root@hadoop-master ~]# docker ps
  • CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
  • 716cbb70ee7b tutum/hello-world “/bin/sh -c ‘php-fpm ” 3 minutes ago Up 3 minutes 80/tcp ecstatic_minsky

As we can see, port 80/tcp is used inside the container, but it is not yet connected to any port of our host system. So we need to specify explicitly which local host port the container's port 80 should be forwarded to. This is a very good feature, since we can start many instances of the same container and forward each to a different port of the local host. Here I change my run command for this purpose.

  • [root@hadoop-master ~]# docker run -p 8080:80 tutum/hello-world

docker: Error response from daemon: driver failed programming external connectivity on endpoint berserk_hodgkin (b0c217213ad00476096c7dfc6d731664e2771441d649147332fa9a1255713185): Error starting userland proxy: listen tcp 0.0.0.0:8080: bind: address already in use.

  • [root@hadoop-master ~]# docker run -p 8081:80 tutum/hello-world

At first I tried to forward it to port 8080, but that port is already taken, being used by our Ambari service for the Hadoop cluster. So I changed it to port 8081.

Let me highlight some useful commands:

We can run the container in the background using the -d flag.

  • [root@hadoop-master ~]# docker run -d -p 8081:80 tutum/hello-world

We can also give the container a name using the --name flag:

  • [root@hadoop-master ~]# docker run -d --name hrz1 -p 8081:80 tutum/hello-world

We can also stop or remove a container easily with “docker stop containerID” or “docker rm containerID”.

To delete an image we can use the following command:

  • [root@hadoop-master ~]# docker rmi hossein-whale

If we face a problem, we can force the removal by adding the -f flag.

To remove a container we can use the following command:

  • [root@hadoop-master ~]# docker rm docker-whale
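Note that docker ps only shows running containers; stopped ones are listed with -a. To clean up all stopped containers in one go, the following combination is handy:

  • [root@hadoop-master ~]# docker ps -a
  • # removes every stopped container; running ones are refused unless -f is used
  • [root@hadoop-master ~]# docker rm $(docker ps -aq)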

Example 3 – Building our own image

The target is to create a new, improved image from the existing ‘whalesay’ image that I pulled in example 1. Recall how we executed the run command in example 1:

  • [hossein@hadoop-master ~]$ docker run docker/whalesay cowsay boo

There we had to pass an arbitrary word like “boo” to get any output, and we also had to call the cowsay program explicitly. The target is to create an image from the whalesay image that requires fewer words to run.

a. We need to create a Dockerfile as follows:

  • [root@hadoop-master docker]# pwd
  • /root/docker
  • [root@hadoop-master docker]# cat Dockerfile
  • FROM docker/whalesay:latest
  • RUN apt-get -y update && apt-get install -y fortunes
  • CMD /usr/games/fortune -a | cowsay

Dockerfile instructions

  • The first instruction always starts with ‘FROM’ to determine the base image we build from. whalesay is cute and already has the cowsay program, so we'll start there.
  • The second instruction, ‘RUN apt-get’, is used to install packages. Always combine RUN apt-get update with apt-get install in the same RUN statement.
  • The third instruction, CMD, should be used to run the software contained in your image, along with any arguments. Here this line tells the fortune program to pass a random quote to the cowsay program, so we don't even need to mention cowsay in the ‘docker run’ command.

Description: CMD should almost always be used in the form CMD ["executable", "param1", "param2"...]. Thus, if the image is for a service, such as Apache or Rails, you would run something like CMD ["apache2", "-DFOREGROUND"]. Indeed, this form of the instruction is recommended for any service-based image.

In most other cases, CMD should be given an interactive shell such as bash, python or perl. For example, CMD ["perl", "-de0"], CMD ["python"], or CMD ["php", "-a"]. Using this form means that when you execute something like docker run -it python, you'll get dropped into a usable shell, ready to go.
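As a small illustration of the difference, using commands already mentioned above (this is not the Dockerfile we build below):

  • # shell form: wrapped in '/bin/sh -c', so shell features such as pipes work
  • CMD /usr/games/fortune -a | cowsay
  • # exec form: the executable runs directly as the container's main process
  • CMD ["apache2", "-DFOREGROUND"]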

b. Now we use the build command to create our new image. The build is run by the Docker daemon, not by the CLI. The first thing a build process does is send the entire context (recursively) to the daemon. In most cases it's best to start with an empty directory as the context and keep your Dockerfile in that directory, adding only the files needed for the build.

  • [root@hadoop-master docker]# pwd
  • /root/docker
  • [root@hadoop-master docker]# docker build -t hossein-whale .
  • Sending build context to Docker daemon 2.048 kB
  • Step 1 : FROM docker/whalesay:latest
  • —> 6b362a9f73eb
  • Step 2 : RUN apt-get -y update && apt-get install -y fortunes
  • —> Running in 3c79bd9a3d70
  • Ign http://archive.ubuntu.com trusty InRelease
  • Get:1 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB]
  • Get:2 http://archive.ubuntu.com trusty-security InRelease [65.9 kB]
  • Hit http://archive.ubuntu.com trusty Release.gpg
  • Hit http://archive.ubuntu.com trusty Release
  • Get:3 http://archive.ubuntu.com trusty-updates/main Sources [475 kB]
  • Get:4 http://archive.ubuntu.com trusty-updates/restricted Sources [5247 B]
  • Get:5 http://archive.ubuntu.com trusty-updates/universe Sources [213 kB]
  • Get:6 http://archive.ubuntu.com trusty-updates/main amd64 Packages [1137 kB]
  • Get:7 http://archive.ubuntu.com trusty-updates/restricted amd64 Packages [23.5 kB]
  • Get:8 http://archive.ubuntu.com trusty-updates/universe amd64 Packages [501 kB]
  • Get:9 http://archive.ubuntu.com trusty-security/main Sources [152 kB]
  • Get:10 http://archive.ubuntu.com trusty-security/restricted Sources [3944 B]
  • Get:11 http://archive.ubuntu.com trusty-security/universe Sources [52.1 kB]
  • Get:12 http://archive.ubuntu.com trusty-security/main amd64 Packages [673 kB]
  • Get:13 http://archive.ubuntu.com trusty-security/restricted amd64 Packages [20.2 kB]
  • Get:14 http://archive.ubuntu.com trusty-security/universe amd64 Packages [183 kB]
  • Hit http://archive.ubuntu.com trusty/main Sources
  • Hit http://archive.ubuntu.com trusty/restricted Sources
  • Hit http://archive.ubuntu.com trusty/universe Sources
  • Hit http://archive.ubuntu.com trusty/main amd64 Packages
  • Hit http://archive.ubuntu.com trusty/restricted amd64 Packages
  • Hit http://archive.ubuntu.com trusty/universe amd64 Packages
  • Fetched 3572 kB in 5s (604 kB/s)
  • Reading package lists…
  • Reading package lists…
  • Building dependency tree…
  • Reading state information…
  • The following extra packages will be installed:
  • fortune-mod fortunes-min librecode0
  • Suggested packages:
  • x11-utils bsdmainutils
  • The following NEW packages will be installed:
  • fortune-mod fortunes fortunes-min librecode0
  • 0 upgraded, 4 newly installed, 0 to remove and 87 not upgraded.
  • Need to get 1961 kB of archives.
  • After this operation, 4817 kB of additional disk space will be used.
  • Get:1 http://archive.ubuntu.com/ubuntu/ trusty/main librecode0 amd64 3.6-21 [771 kB]
  • Get:2 http://archive.ubuntu.com/ubuntu/ trusty/universe fortune-mod amd64 1:1.99.1-7 [39.5 kB]
  • Get:3 http://archive.ubuntu.com/ubuntu/ trusty/universe fortunes-min all 1:1.99.1-7 [61.8 kB]
  • Get:4 http://archive.ubuntu.com/ubuntu/ trusty/universe fortunes all 1:1.99.1-7 [1089 kB]
  • debconf: unable to initialize frontend: Dialog
  • debconf: (TERM is not set, so the dialog frontend is not usable.)
  • debconf: falling back to frontend: Readline
  • debconf: unable to initialize frontend: Readline
  • debconf: (This frontend requires a controlling tty.)
  • debconf: falling back to frontend: Teletype
  • dpkg-preconfigure: unable to re-open stdin:
  • Fetched 1961 kB in 1s (1100 kB/s)
  • Selecting previously unselected package librecode0:amd64.
  • (Reading database … 13116 files and directories currently installed.)
  • Preparing to unpack …/librecode0_3.6-21_amd64.deb …
  • Unpacking librecode0:amd64 (3.6-21) …
  • Selecting previously unselected package fortune-mod.
  • Preparing to unpack …/fortune-mod_1%3a1.99.1-7_amd64.deb …
  • Unpacking fortune-mod (1:1.99.1-7) …
  • Selecting previously unselected package fortunes-min.
  • Preparing to unpack …/fortunes-min_1%3a1.99.1-7_all.deb …
  • Unpacking fortunes-min (1:1.99.1-7) …
  • Selecting previously unselected package fortunes.
  • Preparing to unpack …/fortunes_1%3a1.99.1-7_all.deb …
  • Unpacking fortunes (1:1.99.1-7) …
  • Setting up librecode0:amd64 (3.6-21) …
  • Setting up fortune-mod (1:1.99.1-7) …
  • Setting up fortunes-min (1:1.99.1-7) …
  • Setting up fortunes (1:1.99.1-7) …
  • Processing triggers for libc-bin (2.19-0ubuntu6.6) …
  • —> b34e5ea0e3a2
  • Removing intermediate container 3c79bd9a3d70
  • Step 3 : CMD /usr/games/fortune -a | cowsay
  • —> Running in cdc3c1723583
  • —> 820f8aaf2fa1
  • Removing intermediate container cdc3c1723583
  • Successfully built 820f8aaf2fa1

So basically I changed to the directory where I have my Dockerfile and then ran the command, as can be seen here:

  • [root@hadoop-master docker]# docker build -t hossein-whale .

Instead of the dot at the end of the command, we can also pass the full path of the directory where the Dockerfile exists, like:

  • [root@hadoop-master docker]# docker build -t hossein-whale /root/docker

Description

The above command takes the Dockerfile in the current directory and builds an image called hossein-whale out of it. So if I check the current images on my system, I should see the newly created one.

  • [root@hadoop-master ~]# docker images
  • REPOSITORY TAG IMAGE ID CREATED SIZE
  • hossein-whale latest 820f8aaf2fa1 8 minutes ago 256.3 MB
  • hello-world latest c54a2cc56cbb 3 months ago 1.848 kB
  • tutum/hello-world latest 31e17b0746e4 10 months ago 17.79 MB
  • docker/whalesay latest 6b362a9f73eb 17 months ago 247 MB

c. Now we can run the newly created image. Remember that we no longer need to specify the cowsay program or any random words. Also, the image exists locally, so there is no need to download anything from Docker Hub.

  • [root@hadoop-master docker]# docker run hossein-whale
  • _______________________________
  • / Everyone is in the best seat. \
  • | |
  • \ — John Cage /
  • ——————————-
  • \
  • \
  • \
  • ## .
  • ## ## ## ==
  • ## ## ## ## ===
  • /””””””””””””””””___/ ===
  • ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~
  • \______ o __/
  • \ \ __/
  • \____\______/

Example 4 – running a container

This example is divided into several parts, all based on an Ubuntu image which is publicly available on Docker Hub.

a. We would like to download the Ubuntu OS image first from Docker Hub and then run a simple command inside the OS.

  • [root@hadoop-master ~]# docker images
  • REPOSITORY TAG IMAGE ID CREATED SIZE
  • hossein-whale latest 820f8aaf2fa1 31 minutes ago 256.3 MB
  • hello-world latest c54a2cc56cbb 3 months ago 1.848 kB
  • tutum/hello-world latest 31e17b0746e4 10 months ago 17.79 MB
  • docker/whalesay latest 6b362a9f73eb 17 months ago 247 MB

As can be seen, we don't have any Ubuntu image on the system.

  • [root@hadoop-master ~]# docker run ubuntu /bin/echo 'Hello Hossein'
  • ..
  • ..
  • ..
  • Status: Downloaded newer image for ubuntu:latest
  • Hello Hossein

So it first downloaded the Ubuntu image and then ran the simple echo command inside the Ubuntu OS.

  • [root@hadoop-master ~]# docker images
  • REPOSITORY TAG IMAGE ID CREATED SIZE
  • hossein-whale latest 820f8aaf2fa1 33 minutes ago 256.3 MB
  • ubuntu latest f753707788c5 6 days ago 127.1 MB
  • hello-world latest c54a2cc56cbb 3 months ago 1.848 kB
  • tutum/hello-world latest 31e17b0746e4 10 months ago 17.79 MB
  • docker/whalesay latest 6b362a9f73eb 17 months ago 247 MB

In the above example, when the container finishes running the echo command, it stops. So practically it just runs momentarily and then stops. However, it is also possible to keep the container up and running and interact with it whenever we need to. At the end we can stop it intentionally.

b. Run an interactive container. For this purpose we run the following command:

docker run -t -i ubuntu /bin/bash

  • ubuntu is the image we would like to run.
  • -t flag assigns a terminal inside the new container.
  • -i flag allows you to make an interactive connection by grabbing the standard input (STDIN) of the container.
  • /bin/bash launches a Bash shell inside our container.
  • [root@hadoop-master ~]# docker run -t -i ubuntu /bin/bash
  • root@45c04510b99b:/# ls
  • bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
  • root@45c04510b99b:/# ping 4.2.2.4
  • bash: ping: command not found

As you can see, by running the docker run command our shell is automatically redirected to ‘root@45c04510b99b:/#’, which is the Ubuntu shell inside the container. I then used ‘ls’ as an example so you can see the file system. At the end we can use the ‘exit’ command to leave the Ubuntu system.

As can be seen above, I tried to ping a public DNS server, but the ping command does not exist in our Ubuntu OS. Recalling example 3, we can create our own Ubuntu image in which the iputils-ping package is already installed.

For This I created following Dockerfile:

  • [root@hadoop-master ubuntu]# pwd
  • /root/docker/ubuntu
  • [root@hadoop-master ubuntu]# cat Dockerfile
  • FROM ubuntu:latest
  • RUN apt-get -y update && apt-get install -y iputils-ping

and then I build my Ubuntu image, which is called hossein-ubuntu:

  • [root@hadoop-master ubuntu]# docker build -t hossein-ubuntu .

So this time I run my new Ubuntu image and test the ping command one more time:

[root@hadoop-master ubuntu]# docker run -t -i hossein-ubuntu /bin/bash

  • root@fc6958d35b90:/# ping 4.2.2.4
  • PING 4.2.2.4 (4.2.2.4) 56(84) bytes of data.
  • 64 bytes from 4.2.2.4: icmp_seq=1 ttl=55 time=13.3 ms
  • 64 bytes from 4.2.2.4: icmp_seq=2 ttl=55 time=13.0 ms

c. In this part the target is to run our Ubuntu container as a daemon, which basically means the container runs as a background process rather than under the direct control of an interactive user (a detached container running in the background).

The command that I want the container to run is the following simple shell loop:

  • while true; do echo time is; date; sleep 5; done

There are 2 ways to run this command as a daemon inside our Ubuntu image. One way is to pass the command directly in the docker run command, as we can see here:

  • [root@hadoop-master ~]# docker run -d hossein-ubuntu /bin/sh -c "while true; do echo Time and Date is; date; sleep 5; done"
  • 4e87d7ae7fca563b09a9222caf82ae5ba3ddc0a99976222875f686cd28ac792a

We see that the output is a container ID. We can check that our daemon (the Ubuntu container running our command) is indeed running:

  • [root@hadoop-master ~]# docker ps
  • CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
  • 4e87d7ae7fca hossein-ubuntu “/bin/sh -c ‘while tr” About a minute ago Up About a minute dreamy_borg

Here Docker automatically assigned a name to our container. As we have seen before, we can also specify a name ourselves (--name X).

But how do we check the output of our daemon? We can use the ‘docker logs’ command, which shows the standard output of a container.

  • [root@hadoop-master ~]# docker logs dreamy_borg
  • Time and Date is
  • Thu Oct 20 15:35:54 UTC 2016
  • Time and Date is
  • Thu Oct 20 15:35:59 UTC 2016
  • Time and Date is
  • Thu Oct 20 15:36:04 UTC 2016

So the daemon keeps running in the background, and each time we use the docker logs command we get the latest output, which is the date printed after each 5-second sleep.
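Instead of re-running docker logs, we can also follow the output continuously with the -f flag (Ctrl+C stops following; the container itself keeps running):

  • [root@hadoop-master ~]# docker logs -f dreamy_borg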

We can also use the short Container ID.

  • [root@hadoop-master ~]# docker logs 4e87d7ae7fca

At the end we can just stop the container

  • [root@hadoop-master ~]# docker stop 4e87d7ae7fca

The second way is to make a new image on top of our previous image (hossein-ubuntu) that bakes our command into it as the default command. In this case we just need to run the new image and it will automatically run the command.

  • [root@hadoop-master ubuntu2]# pwd
  • /root/docker/ubuntu/ubuntu
  • [root@hadoop-master ubuntu2]# cat Dockerfile
  • FROM hossein-ubuntu:latest
  • CMD /bin/sh -c "while true; do echo Time and Date is; date; sleep 5; done"

and then we build our new image out of it, which is called hossein-ubuntu2:

  • [root@hadoop-master ubuntu2]# docker build -t hossein-ubuntu2 .

And then we run the image. In this case we have 2 choices: run the new image in daemon mode and check the output with docker logs, or run it in non-daemon mode and see the output in our terminal (not recommended for this example).

  • [root@hadoop-master ubuntu2]# docker run hossein-ubuntu2
  • Time and Date is
  • Thu Oct 20 16:00:42 UTC 2016
  • Time and Date is
  • Thu Oct 20 16:00:47 UTC 2016
  • or in daemon mode as we can see here
  • [root@hadoop-master ~]# docker run -d hossein-ubuntu2

Example 5

In this example the target is to run a specific web application inside a container. There is a public image on Docker Hub called training/webapp, which has a Python Flask application prepared; running this application displays ‘Hello world’. All requirements of this web application, like Flask, Jinja2 and other dependencies, are already built into the image. The Python file called ‘app.py’ is already written inside and looks as follows:

  • import os
  • from flask import Flask
  • app = Flask(__name__)
  • @app.route('/')
  • def hello():
  •     provider = str(os.environ.get('PROVIDER', 'world'))
  •     return 'Hello ' + provider + '!'
  • if __name__ == '__main__':
  •     # Bind to PORT if defined, otherwise default to 5000.
  •     port = int(os.environ.get('PORT', 5000))
  •     app.run(host='0.0.0.0', port=port)

Description: as we can see in the code, the default exposed port for the Python Flask app is 5000. This means we need access to port 5000 inside the container in order to see our web application.
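This is also why the -P flag used below works: the image declares its listening port with an EXPOSE instruction in its Dockerfile. I have not inspected the actual Dockerfile of training/webapp, but a minimal sketch of such an image could look like this (the package names are my assumption, not the real image):

  • FROM ubuntu:latest
  • # install Python and Flask (exact packages are an assumption)
  • RUN apt-get -y update && apt-get install -y python-pip && pip install flask
  • COPY app.py /webapp/app.py
  • WORKDIR /webapp
  • # declare the Flask port so that 'docker run -P' knows which port to publish
  • EXPOSE 5000
  • CMD ["python", "app.py"]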

So let's run the web application container. We run it as a daemon (-d) so it runs in the background and we can access it whenever we need. Obviously, as in the previous examples, the host system (our CentOS 7.2) needs access to the container, so we need some kind of port forwarding. There are 2 ways to do this, which I include in the docker run command:

a. Let Docker automatically and randomly expose our web application port to the host. We already know from the code that the web application listens on port 5000. So we can use the command in 2 ways here:

  • [root@hadoop-master ~]# docker run -d -P training/webapp python app.py

or

  • [root@hadoop-master ~]# docker run -d -p 5000 training/webapp python app.py

So with the first command we don't even need to know which port the web application listens on inside the image, which makes the process more automatic. With the second command we specify precisely that we want port 5000 of the container exposed on some port of the host system. In both cases we can see which host port Docker chose automatically with this command:

  • [root@hadoop-master ~]# docker ps
  • CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
  • 79b59e61c998 training/webapp “python app.py” 3 minutes ago Up 3 minutes 0.0.0.0:32769->5000/tcp stupefied_thompson
  • b884e7518863 training/webapp “python app.py” 45 minutes ago Up 45 minutes 0.0.0.0:32768->5000/tcp nauseous_yalow

As can be seen, the first container is forwarded to port 32769 of the local host and the second one to port 32768. So we can open our desired browser and type:

  • localhost:32769
  • localhost:32768

b. We decide ourselves which local host port should be mapped to the container's port, which is 5000 in our case. For this we can use the following command:

  • [root@hadoop-master ~]# docker run -d -p 8081:5000 training/webapp python app.py

So here we can access the web application from the local host through port 8081.
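We can confirm the mapping with the ‘docker port’ command and test the application from the host with curl (replace the container ID below with whatever docker ps shows for this run):

  • [root@hadoop-master ~]# docker port <container-id> 5000   # should print 0.0.0.0:8081
  • [root@hadoop-master ~]# curl http://localhost:8081        # should return: Hello world!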
