Thursday, May 14, 2015

Series: How to create your own website based on Docker (Part 5 - Creating our Docker Compose file)

Let's implement our docker container architecture

This is part 5 of the series: How to create your own website based on Docker.

In the last part of the series, we planned and created our Docker container architecture. So now it's time to turn this architecture into reality - and that's what we need Docker Compose for.

Source code

All files mentioned in this series are available on GitHub, so you can play around with them! :)

What is Docker Compose?

Compose is a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running.

Compose is great for development environments, staging servers, and CI. We don't recommend that you use it in production yet. (Source: https://docs.docker.com/compose)

There are three steps involved when using Docker Compose:

  1. We need image files for each container (we'll start with that in the next chapter)
  2. Then we need to create a docker-compose.yml file that tells Docker Compose which containers must be started, including all options (like volumes, links, ports,...)
  3. Finally, we need to run docker-compose up to start our container architecture (the configuration from the YAML file)

Since we have just created our architecture, we're starting with step 2 now and will create the image files later. This chapter will show you how to create a Docker Compose YAML file based on our architecture.
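
Once the image directories and the docker-compose.yml exist (steps 1 and 2), the day-to-day workflow for step 3 boils down to a handful of commands. Here's a minimal sketch, run from the directory containing the docker-compose.yml (the exact flags may vary slightly with your Compose version):

# build (or rebuild) all images referenced in docker-compose.yml
docker-compose build
# start all containers in the background
docker-compose up -d
# show the state of all containers
docker-compose ps
# show the collected log output of all containers
docker-compose logs
# stop the containers again
docker-compose stop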

Implementing our container design

Let's recap - this is what our architecture looks like:


We're going to create a website called projectwebdev, so the following container names are based on that site name. In the diagram above we can see that we have the following containers and options:
  1. nginx reverse proxy
    • links:
      • nginx website
      • ioJS REST API
    • volumes:
      • log files (/opt/docker/logs)
  2. nginx web site
    • links:
      • none
    • volumes:
      • web site files (/opt/docker/projectwebdev/html)
      • log files (/opt/docker/logs)
  3. ioJS REST API
    • links:
      • mongoDB database
    • volumes:
      • ioJS application files (/opt/docker/projectwebdev-api/app)
      • log files (/opt/docker/logs)
  4. mongoDB database
    • links:
      • none
    • volumes:
      • mongoDB files (/opt/docker/mongodb/db)
      • log files (/opt/docker/logs)

The Docker directory structure on my VM

I will use the following folder structure on my Ubuntu VM to host all Docker images/containers:
/opt/docker/
├── logs
├── mongodb
├── nginx-reverse-proxy
├── projectwebdev
├── projectwebdev-api
├── ubuntu-base
└── docker-compose.yml

So the docker-compose.yml file will be in the root directory of all Docker image directories (which we will dive into later). With this setup, I can later just copy the /opt/docker/ folder onto another server and then just run docker-compose up to get everything up and running again.
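
Just to sketch what that could look like (the user name and host are made up, and your paths may differ):

# copy the whole Docker directory to the new server
scp -r /opt/docker/ user@newserver:/opt/
# log in and start the whole container architecture again
ssh user@newserver
cd /opt/docker
docker-compose up -d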

You can also see that this directory structure already contains a logs/ directory, which will be the collection point for all container logs we've been talking about in the last part of this series.

Create the Docker Compose file

If you've never heard of YAML before, let me just tell you what it is. YAML is a recursive acronym for "YAML Ain't Markup Language". Early in its development, YAML was said to mean "Yet Another Markup Language", but it was then reinterpreted (backronyming the original acronym) to distinguish its purpose as data-oriented, rather than document markup. YAML’s purpose is to have a human friendly data serialization standard for all programming languages. (see: http://yaml.org)
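
If that sounds abstract, here's a tiny made-up YAML snippet just to show the syntax - indentation defines the structure, a dash starts a list item, and key/value pairs are separated by a colon:

blog:
    title: projectwebdev
    published: true
    tags:
        - docker
        - nginx
        - iojs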

In our YAML file we will tell Docker Compose how our containers must be started, which volumes should be mounted, which containers should be linked together and what ports should be exposed. So it's basically everything from that list above.

Let's get into the details - this is what our docker-compose.yml file looks like:
ubuntubase:
    build: ./ubuntu-base
projectwebdev:
    build: ./projectwebdev
    expose:
        - "8081"
    volumes:
        - ./logs/:/var/log/nginx/
        - ./projectwebdev/html:/var/www/html:ro
projectwebdevapi:
    build: ./projectwebdev-api
    expose:
        - "3000"
    links:
        - mongodb:db
    volumes:
        - ./logs/:/var/log/pm2/
        - ./projectwebdev-api/app:/var/www/html
mongodb:
    build: ./mongodb
    expose:
        - "3333"
    volumes:
        - ./logs/:/var/log/mongodb/
        - ./mongodb/db:/data/db
nginxreverseproxy:
    build: ./nginx-reverse-proxy
    expose:
        - "80"
        - "443"
    links:
        - projectwebdev:blog
        - projectwebdevapi:blogapi
    ports:
        - "80:80"
    volumes:
        - ./logs/:/var/log/nginx/
Source: https://github.com/mastix/project-webdev-docker-demo/blob/master/docker-compose.yml

Let's pick the nginx reverse proxy to explain our settings. Of all the options Docker Compose supports in its YAML file, we'll only use build, expose, links, ports and volumes.

build: This is the path to the directory containing the Dockerfile for the image. We have supplied that value as a relative path, which means that it is interpreted relative to the location of the YAML file itself. This directory is also the build context that is sent to the Docker daemon. All files belonging to the nginx reverse proxy reside in the folder ./nginx-reverse-proxy, so we tell Docker Compose to build the image from the Dockerfile at /opt/docker/nginx-reverse-proxy/Dockerfile, which we're going to create later.

expose: This section specifies the ports to be exposed without publishing them to the host machine - they'll only be accessible to linked services. Only the internal port can be specified. In the architecture diagram above, these exposed ports are the ones with the purple background color.

links: Here we specify the links to containers in other services. You can either specify both the service name and a link alias (SERVICE:ALIAS), or just the service name (which will then also be used as the alias). In our design we'll use aliases, so whenever our containers need to talk to each other, they'll address the projectwebdev website as blog and our ioJS REST API as blogapi.
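
Just to illustrate the two notations (only the first one appears in our actual file), both of the following would link the mongodb service into a container - the first under the alias db, the second under its plain service name:

links:
    - mongodb:db    # SERVICE:ALIAS - reachable as "db" inside the container
    - mongodb       # SERVICE only - reachable as "mongodb"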

ports: The ports we want to expose to the Docker host - see the yellow port in the architecture diagram above. You can either specify both ports (HOST:CONTAINER), or just the container port (a random host port will be chosen). Since we want to make sure that it's always the same port (in our case port 80), we specify the HOST and the CONTAINER port explicitly (which in both cases is 80). If the nginx reverse proxy inside your container used port 8000 and you wanted that port to be accessible from outside via port 80, you'd specify it as "80:8000". Important: When mapping ports in the HOST:CONTAINER format, you may experience erroneous results when using a container port lower than 60, because YAML will parse numbers in the format xx:yy as sexagesimal (base 60). For this reason, Docker recommends always explicitly specifying your port mappings as strings.
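
So if our reverse proxy listened on port 8000 inside the container, the (hypothetical) mapping would look like this - quoted as a string, as recommended above:

ports:
    - "80:8000"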

volumes: This section lists all paths to mount as volumes, optionally specifying a path on the host machine (HOST:CONTAINER) and an access mode (HOST:CONTAINER:ro). The latter (:ro = read-only) is used in our projectwebdev container, since we don't want the container to change the files for any reason - only our host may provide the markup that is needed for the website.
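
For reference, these are the three volume notations we just discussed (the paths are only examples):

volumes:
    - /var/www/html                            # container path only - Docker picks a location on the host
    - ./projectwebdev/html:/var/www/html       # HOST:CONTAINER
    - ./projectwebdev/html:/var/www/html:ro    # HOST:CONTAINER:ro (read-only)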

We have now implemented our architecture with Docker Compose! Next, let's create each image and container so we can fire up Docker Compose. We'll start with our Ubuntu Base Image!

Comments:

  1. This is a really great blog with detailed information, I really like that you are taking a real use case and linking the containers. Looking forward to the next blog. Thanks for sharing.

  2. Is there any reason to specify ports 80 and 443 in the expose section for nginxreverseproxy?

  3. I guess internally the nginx container has exposed ports 80 and 443. But to access it from outside he has only mapped port 80 (host machine) to port 80 (container). In short, 443 is available from the container but not mapped to the external host to be accessible.
