TSM - Docker, Build and Ship

Raluca Oanca-Boca - Full Stack Developer @Self-Employed

All programmers face, at some point, the desire to write code once and run it on different operating systems, independently. A good example would be writing a class in a programming language and being able to run that class on different operating systems.

The solution that Docker provides starts from the same premise, except that instead of writing code you configure servers in a customized way: choosing the operating system, customizing configuration files, installing certain programs, and so on.

What is Docker?

Docker helps you design a system, as isolated and standardized as you want, out of an application and all its dependencies, providing an independent working environment.

What Docker adds over virtual machines is that it is based on a system of containers. These containers run code, runtimes, system tools, system libraries or anything else that can be installed on a server in an independent working environment, and guarantee that they will always run the same way.

The difference between a virtual machine and a Docker container is that containers ultimately run under the same kernel and are not tied to any infrastructure or host operating system. Another difference is that Docker does not force you into memory limitations.

Docker provides a set of tools such as: Docker Engine, Docker Compose, Docker Hub, Docker Swarm, Docker Machine and Docker Trusted Registry. Each of these tools comes with its own set of commands.

Docker Engine

It provides most of the functionality needed to create an image and run a Docker container. It is the core of all the other tools. To install it on the host system we run the command:

  sudo apt-get install docker-engine

Starting the Docker daemon:

 sudo service docker start 

To check if it is installed:

 docker --version 

Once we have it installed we can begin to create our own image; to achieve this we need to create a file called  Dockerfile.

    FROM ubuntu:14.04
    MAINTAINER Raluca Onaca Boca
    RUN apt-get update
    RUN apt-get -y upgrade

    # Install apache, PHP, and supplementary programs. curl is for debugging the container.
    RUN DEBIAN_FRONTEND=noninteractive apt-get -y install apache2 php5 php5-mysql php5-gd php-pear php5-curl curl
    # Update the php.ini file, enable <? short tags.
    RUN sed -i "s/short_open_tag = Off/short_open_tag = On/" /etc/php5/apache2/php.ini
    # Enable apache mods.
    RUN a2enmod rewrite
    # Manually set up the apache environment variables
    ENV APACHE_RUN_USER www-data
    ENV APACHE_RUN_GROUP www-data
    ENV APACHE_LOG_DIR /var/log/apache2
    ENV APACHE_LOCK_DIR /var/lock/apache2
    ENV APACHE_PID_FILE /var/run/apache2.pid

    # Expose the site directory as a volume and the default HTTP port.
    VOLUME /var/www/site
    EXPOSE 80
    # Update the default apache site with the config we created.
    ADD apache-config.conf /etc/apache2/sites-enabled/000-default.conf
    # By default, simply start apache.
    CMD /usr/sbin/apache2ctl -D FOREGROUND

The commands in this file create a Docker image based on the Ubuntu operating system, with the Apache application server, PHP and a few libraries; they expose the project directory as a volume and add a virtual host configuration file.
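The ADD instruction in the Dockerfile expects an apache-config.conf next to the Dockerfile, so the build needs such a file to exist. A minimal sketch of a virtual host configuration (the ServerAdmin value is an assumption; the access-control syntax shown is for Apache 2.4):

```apache
<VirtualHost *:80>
  ServerAdmin webmaster@localhost
  DocumentRoot /var/www/site

  <Directory /var/www/site>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Require all granted
  </Directory>

  ErrorLog ${APACHE_LOG_DIR}/error.log
  CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
```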

To create this image:

sudo docker build -t ralucaonaca/first-site:v1 .

Running the previous command will first check whether we already have the ubuntu image locally; if not, it will start downloading it. The -t flag tags the newly created image as belonging to the user ralucaonaca, named first-site, with the tag v1.

A Docker container is created by instantiating this image with docker run.
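A minimal sketch, assuming the image built above (the container name is illustrative):

```shell
# Run a container in the background (-d), mapping host port 80
# to the port 80 exposed in the Dockerfile
sudo docker run -d -p 80:80 --name first-site ralucaonaca/first-site:v1
```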

Some of the most important commands: 

docker ps - lists the running containers
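Beyond docker ps, a few everyday commands for managing containers and images (the container name here is illustrative):

```shell
docker ps -a                           # list all containers, including stopped ones
docker images                          # list local images
docker logs first-site                 # show a container's output
docker stop first-site                 # stop a running container
docker rm first-site                   # remove a stopped container
docker rmi ralucaonaca/first-site:v1   # remove a local image
```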

Docker Compose

This tool helps us manage multiple containers. After creating the  Dockerfile , which gives us a local working environment, we also need a  docker-compose.yml  file in which we define all the services needed to run an application in an isolated system:

web:
  image: ralucaonaca/first-site:latest
  ports:
    - "5000:5000"
  volumes:
    - /var/www/site:/var/www/site
  links:
    - redis
    - db
redis:
  image: redis
db:
  image: mysql:latest

As you can see, we have defined a web container which has the base image ralucaonaca/first-site:latest and two links. To create the containers we run the command:

sudo docker-compose up

One advantage of using Docker Compose is that it creates multiple containers with a single command, all of them on the same host.

Some of the most important commands:

docker-compose ps - lists the containers

docker-compose kill - stops the containers

docker-compose rm - removes the stopped containers

Docker Hub

It is a cloud service that keeps a record of all the images we create. Some of its main features are:

- Image repository: search for both public and private images

- Automated builds: connects to GitHub and Bitbucket accounts and creates images automatically

- Webhooks: a feature of automated builds that lets us trigger certain actions after a push

A recap.

We can say that this tool behaves like Git, because it starts from the same premise, only it tracks changes at the system level.

Public images can be searched with the command  docker search ubuntu . To be able to push an image you first need to log in with docker login, and after logging in you can easily use docker push <user>/<image>:<tag>.
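Put together, a typical publish flow looks like this, using the repository name from earlier in this article:

```shell
sudo docker login                           # authenticate against Docker Hub
sudo docker push ralucaonaca/first-site:v1  # upload the tagged image
sudo docker search ubuntu                   # search public images
```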

In comparison with Vagrant

Vagrant is based on a virtual machine provider and offers several provisioning options: shell, Puppet, Ansible and many more, in order to create an environment isolated and independent of the host system, while Docker has no need of such a provider.

Provisioning in Vagrant can also be done through Docker containers, from which we can conclude that Docker is a complementary tool for Vagrant.
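A sketch of a Vagrantfile using the Docker provisioner to build and run the image from this article inside the Vagrant machine (the box name is an assumption):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.provision "docker" do |d|
    # Build the image from the Dockerfile in the shared project directory...
    d.build_image "/vagrant", args: "-t ralucaonaca/first-site:v1"
    # ...then run a container from it, mapping port 80
    d.run "first-site",
      image: "ralucaonaca/first-site:v1",
      args: "-p 80:80"
  end
end
```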

Conclusion: 

The fact that any Docker image can run on any machine changes the rules of the game a bit. Using the pull / push commands makes collaboration between software developers and devops easier. Another advantage is that, besides the application containers, we can create containers for the database or the system cache that can be shared between devops and developers.