Sunday, June 14, 2015

Series: How to create your own website based on Docker (Part 11 - Run the whole application stack with Docker Compose)

Manage all your created containers with Docker Compose

This is part 11 of the series: How to create your own website based on Docker.

Well, we have created our images now, so it's about time to get everything up and running. As mentioned in the first posting we're using Docker Compose for that.

This is the time where our docker-compose file from part 5 comes back into play. Docker Compose will use this file to figure out which containers need to be started and in what order. It will make sure that all ports are exposed correctly and will mount our directories.


Source code

All files mentioned in this series are available on Github, so you can play around with it! :)


Custom Clean Script

I've created a custom clean script that allows me to clean all existing images and containers. This comes in pretty handy when you're in the middle of development. I've used this script a thousand times already.
#!/bin/bash
# Remove all containers (running or not), then remove all images.
docker rm -f $(docker ps -q -a)
docker rmi -f $(docker images -q)
Source: https://github.com/mastix/project-webdev-docker-demo/blob/master/cleanAll

This script iterates through all containers first and removes them (no matter whether they are running or not) and then removes all images. Don't be afraid... the images will be re-created from the Dockerfiles. So if you've played around with these containers already, I recommend running this script before reading further. Important: Do not run this script in your production environment! :)
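If you only want to reset the current Compose project without touching other images and containers on the host, here's a gentler sketch that only uses standard docker-compose subcommands (run it from the directory that holds the docker-compose.yml):
docker-compose stop   # stop all services defined in docker-compose.yml
docker-compose rm -f  # remove the stopped service containers without asking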

Working with Docker Compose

Docker Compose will only do its magic when you're in a directory that contains a docker-compose.yml. As mentioned before, I've put all files into /opt/docker/, so we're operating in this directory only.

I'm currently using the following version:
/opt/docker$ docker-compose --version
docker-compose 1.2.0
Make sure that no containers are running:
/opt/docker$ docker-compose ps
Name   Command   State   Ports
------------------------------
Update: If you're running Docker Compose >= 1.3.x then you'll have to run the following command if you've checked out the code from my repository:
docker-compose migrate-to-labels
Since we have our docker-compose.yml ready, the only thing left to do is fire up Docker Compose via the following command:
docker-compose up -d
The -d flag makes sure that Docker Compose starts the containers in detached mode (in the background).

When you run this command you'll see that it builds the images from our Dockerfiles (e.g. running apt-get, copying files, ...) and starts the containers. It would be too much output to copy here, but you'll see soon how much it generates.
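For reference, a typical development round-trip looks like this (all of these are standard Docker Compose subcommands, run from /opt/docker):
docker-compose build  # (re)build all images from their Dockerfiles
docker-compose up -d  # create and start all containers in the background
docker-compose ps     # verify what is running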

Let's play around with our containers

Let's see if all containers are up and running:
/opt/docker$ docker-compose ps
           Name                         Command               State               Ports          
--------------------------------------------------------------------------------------------------
docker_mongodb_1             /tmp/entrypoint.sh mongod  ...   Up       3333/tcp                  
docker_nginxreverseproxy_1   /bin/sh -c /etc/nginx/conf ...   Up       443/tcp, 0.0.0.0:80->80/tcp
docker_projectwebdev_1       /bin/sh -c nginx                 Up       8081/tcp                  
docker_projectwebdevapi_1    pm2 start index.js --name  ...   Up       3000/tcp                  
docker_ubuntubase_1          bash                             Exit 0               
It's perfectly fine that our Ubuntu base container has already exited, since it has no long-running background task (unlike our nginx server, which has to reply to requests) - the other four services are up and running. You can also see that only our nginx reverse proxy exposes its port (80) to the public. All other ports are internal ports.
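If you want to double-check a port mapping, Compose can print it for you - a small sketch (note that it takes the service name from docker-compose.yml, not the container name):
/opt/docker$ docker-compose port nginxreverseproxy 80
0.0.0.0:80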

Let's see if our website is up and running:

Our Person REST API:


Our Person Demo page:



Just a short note: Don't try to access the URLs you see in these images. Since this is just a demo, I only uploaded it for demonstration purposes and stopped the containers right after taking these screenshots.

Let's see how much memory our Person API consumes:
/opt/docker$ docker stats docker_projectwebdevapi_1
CONTAINER                   CPU %               MEM USAGE/LIMIT       MEM %               NET I/O
docker_projectwebdevapi_1   0.00%               76.12 MiB/1.954 GiB   3.80%               3.984 KiB/1.945 KiB
Let's stop & start all containers - why? Because we can!
/opt/docker$ docker-compose restart
Restarting docker_projectwebdev_1...
Restarting docker_mongodb_1...
Restarting docker_projectwebdevapi_1...
Restarting docker_nginxreverseproxy_1...
Let's see the stacked images:
docker images --tree
Warning: '--tree' is deprecated, it will be removed soon. See usage.
└─1c3c252d48a5 Virtual Size: 131.3 MB
  └─66b5d995810b Virtual Size: 131.3 MB
    └─b7e7cde90a84 Virtual Size: 131.3 MB
      └─c6a3582257ff Virtual Size: 131.3 MB Tags: ubuntu:15.04
        └─beec7359d06b Virtual Size: 516 MB
          └─56f95e536056 Virtual Size: 516 MB
            └─2e6215be7f22 Virtual Size: 516 MB
              └─0da535016806 Virtual Size: 516 MB Tags: docker_ubuntubase:latest
                ├─22e3ad368e3d Virtual Size: 516.4 MB
                │ […]
                │   └─bc20ce213396 Virtual Size: 679 MB
                │     └─b20c90481a4e Virtual Size: 679 MB Tags: docker_mongodb:latest
                └─419a34bcfcfd Virtual Size: 516 MB
                  ├─2d2525cf28e1 Virtual Size: 537.1 MB
                  │ └─9c9f238dc62d Virtual Size: 558.2 MB
                  │   └─4bf8554af678 Virtual Size: 580.2 MB
                  │     └─9d6fdb379360 Virtual Size: 620.4 MB
                  │       └─02b3cd93208f Virtual Size: 638.1 MB
                  │         […]
                  │           └─aba65d0f0c06 Virtual Size: 706 MB
                  │             └─9b4b55e323e3 Virtual Size: 706 MB Tags: docker_projectwebdevapi:latest
                  └─466f9910439a Virtual Size: 543.9 MB
                    ├─008ffe8fa738 Virtual Size: 543.9 MB
                    │ └─476a45c16218 Virtual Size: 543.9 MB
                    │   […]
                    │     └─b53827f8ddfd Virtual Size: 543.9 MB Tags: docker_nginxreverseproxy:latest
                    └─aec75192e11a Virtual Size: 543.9 MB
                      └─eadec9140592 Virtual Size: 543.9 MB
                        └─27b6deeec60a Virtual Size: 543.9 MB
                          └─6f0f6661c308 Virtual Size: 543.9 MB Tags: docker_projectwebdev:latest
As you can see: all our images are based on our Ubuntu base image, and new layers are only added on top of the underlying base image.
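You can get a similar (non-deprecated) per-image view with docker history, which lists a single image's layers and their sizes:
docker history docker_mongodb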

Let's see all logs to console/stdout:
/opt/docker$ docker-compose logs
Attaching to docker_nginxreverseproxy_1, docker_projectwebdevapi_1, docker_mongodb_1, docker_projectwebdev_1, docker_ubuntubase_1
projectwebdevapi_1  | pm2 launched in no-daemon mode (you can add DEBUG="*" env variable to get more messages)
projectwebdevapi_1  | 2015-06-12 21:29:23: [PM2][WORKER] Started with refreshing interval: 30000
projectwebdevapi_1  | 2015-06-12 21:29:23: [[[[ PM2/God daemon launched ]]]]
[…]
nginxreverseproxy_1 | START UPDATING DEFAULT CONF
nginxreverseproxy_1 | CHANGED DEFAULT CONF
nginxreverseproxy_1 | upstream blog  {
nginxreverseproxy_1 |       server 172.17.0.29:8081; #Blog
nginxreverseproxy_1 | }
nginxreverseproxy_1 |
nginxreverseproxy_1 | upstream blog-api  {
nginxreverseproxy_1 |       server 172.17.0.54:3000; #Blog-API
nginxreverseproxy_1 | }
nginxreverseproxy_1 |
nginxreverseproxy_1 | ## Start blog.project-webdev.com ##
nginxreverseproxy_1 | server {
nginxreverseproxy_1 |     listen  80;
nginxreverseproxy_1 |     server_name  blog.project-webdev.com;
[…]
nginxreverseproxy_1 | }
nginxreverseproxy_1 | ## End blog.project-webdev.com ##
nginxreverseproxy_1 |
nginxreverseproxy_1 | ## Start api.project-webdev.com ##
nginxreverseproxy_1 | server {
nginxreverseproxy_1 |     listen  80;
nginxreverseproxy_1 |     server_name  api.project-webdev.com;
nginxreverseproxy_1 |
[…]
nginxreverseproxy_1 | }
nginxreverseproxy_1 | ## End api.project-webdev.com ##
nginxreverseproxy_1 |
[…]
nginxreverseproxy_1 |  END UPDATING DEFAULT CONF
As you can see, you'll get the stdout output of every container, prefixed with the container name at the beginning of each line... in our example we only get output from our ioJS container (projectwebdevapi_1, started with pm2) and our nginx reverse proxy (nginxreverseproxy_1).
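If the combined output is too noisy, you can also pass one or more service names to only see those logs (again, service names from docker-compose.yml, not container names):
/opt/docker$ docker-compose logs projectwebdevapi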

Let's check our log directory:
/opt/docker/logs$ ll
total 24
drwxrwxrwx 2 mastix docker 4096 Jun 14 21:21 ./
drwxr-xr-x 9 mastix docker 4096 Jun 12 17:39 ../
-rw-r--r-- 1 root   root      0 Jun 14 21:21 access.log
-rw-r--r-- 1 root   root      0 Jun 14 21:21 error.log
-rw-r--r-- 1 root   root   2696 Jun 14 21:21 mongodb-projectwebdev.log
-rw-r--r-- 1 root   root    200 Jun 14 21:21 nginx-reverse-proxy-blog.access.log
-rw-r--r-- 1 root   root    637 Jun 14 21:21 nginx-reverse-proxy-blog-api.access.log
-rw-r--r-- 1 root   root      0 Jun 14 21:21 nginx-reverse-proxy-blog-api.error.log
-rw-r--r-- 1 root   root      0 Jun 14 21:21 nginx-reverse-proxy-blog.error.log
-rw-r--r-- 1 root   root      0 Jun 14 21:21 pm2-0.log
-rw-r--r-- 1 root   root    199 Jun 14 21:21 project-webdev.access.log
-rw-r--r-- 1 root   root      0 Jun 14 21:21 project-webdev.error.log
Remember: We've told each container to log its files into our /opt/docker/logs directory on the Docker host... And now we have them all in one place.
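And since the logs live on the host, you can follow all of them at once with plain Unix tools:
tail -f /opt/docker/logs/*.log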

That's it. I hope you had fun learning Docker with this session. And if you find any bugs, I'm happy to fix them. Just add a comment or create an issue in the Github repository.

Greetz,

Sascha

Friday, June 12, 2015

Series: How to create your own website based on Docker (Part 10 - Creating the nginx reverse proxy Docker container)

Let's glue it all together

This is part 10 of the series: How to create your own website based on Docker.

Let's recap: We've created the backend containers consisting of a mongodb container and an ioJS/hapiJS container. We've also created the nginx/Angular 2.0 frontend container that makes use of the new backend.

As mentioned in the 4th part of this series we've defined that every request must go through an nginx reverse proxy. This proxy decides where the requests go to and what services are accessible from outside. So in order to make the whole setup available from the web, you need to configure the nginx reverse proxy that will route all requests to the proper docker container.

Source code

All files mentioned in this series are available on Github, so you can play around with it! :)


Talking about container links again

Remember that we have linked containers together? And that we don't know the IP addresses, because Docker takes care of that and reserves them while creating the container?

That's bad for a reverse proxy, since it needs to know where to route the requests to - but there's a good thing: IP addresses and ports are available as environment variables within each container. Bad thing: nginx configurations can't read environment variables! So using links makes creating this container harder and more complex than it actually should be - but there's a solution for that, of course, which we'll cover later in this post.

So what does this container have to do?

Well, that's pretty simple... it needs to glue everything together, so it needs to collect all services that should be accessible from outside.

So in our case we'd need an nginx configuration for:
  • Our REST-API container based on ioJS & hapiJS
  • Our frontend container based on nginx and Angular 2.0

We don't want to expose:
  • Our mongodb container
  • Our ubuntu base container

Let's get started - creating the nginx image

Creating the nginx image is basically the same every time. Let's create a new directory called /opt/docker/nginx-reverse-proxy/ and within this new directory we'll create other directories called config & html and our Dockerfile:
# mkdir -p /opt/docker/nginx-reverse-proxy/config/
# mkdir -p /opt/docker/nginx-reverse-proxy/html/
# > /opt/docker/nginx-reverse-proxy/Dockerfile
So just create your /opt/docker/nginx-reverse-proxy/Dockerfile with the following content:
# Pull base image.
FROM docker_ubuntubase
ENV DEBIAN_FRONTEND noninteractive
# Install Nginx.
RUN \
  add-apt-repository -y ppa:nginx/stable && \
  apt-get update && \
  apt-get install -y nginx && \
  rm -rf /var/lib/apt/lists/* && \
  chown -R www-data:www-data /var/lib/nginx
# Define mountable directories.
VOLUME ["/etc/nginx/certs", "/var/log/nginx", "/var/www/html"]
# Define working directory.
WORKDIR /etc/nginx
# Copy all config files
COPY config/default.conf /etc/nginx/conf.d/default.conf
COPY config/nginx.conf /etc/nginx/nginx.conf
COPY config/config.sh /etc/nginx/config.sh
RUN ["chmod", "+x", "/etc/nginx/config.sh"]
# Copy default webpage
RUN rm /var/www/html/index.nginx-debian.html
COPY html/index.html /var/www/html/index.html
COPY html/robots.txt /var/www/html/robots.txt
# Define default command.
CMD /etc/nginx/config.sh && nginx
Source: https://github.com/mastix/project-webdev-docker-demo/blob/master/nginx-reverse-proxy/Dockerfile

This Dockerfile also uses our Ubuntu base image, installs nginx and bakes our configuration into our nginx container.

What is this html folder for?

Before looking into the configuration we'll cover the easy stuff first. :)

While creating the directories you might have asked yourself why you need to create an html folder. Well, that's simple: since we're currently only developing api.project-webdev.com and blog.project-webdev.com, we need a place to go when someone visits www.project-webdev.com - that's what this folder is for. If you don't have such a use case, you can skip it - this is just a kind of fallback strategy.

The HTML page is pretty simple:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to this empty page!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to this empty page!</h1>
<p>If you see this page, you'll see that it is empty.</p>
<p><em>Will this change soon...? Hell yeah it will! ;)</em></p>
</body>
</html>
So let's put this code into the following file nginx-reverse-proxy/html/index.html.

The nginx configuration

Now it's getting interesting. :)

As mentioned before, our nginx container needs to route all requests based on the URL to our containers.

So we need two routes/locations:
  • api.project-webdev.com routes to my Docker REST API container
  • blog.project-webdev.com routes to my Docker blog container
Since we don't know the IP addresses to route to during development, we need to work with custom placeholders that we replace via a shell script once the container starts. In the following example you'll see that we're using two placeholders for our two exposed services:
  • BLOG_IP:BLOG_PORT
  • BLOGAPI_IP:BLOGAPI_PORT
We're going to replace these two placeholders with the correct value from the environment variables that docker offers us when linking containers together.

So you need a config file called /opt/docker/nginx-reverse-proxy/config/default.conf that contains your nginx server configuration:
upstream blog  {
      server BLOG_IP:BLOG_PORT; #Blog
}
upstream blog-api  {
      server BLOGAPI_IP:BLOGAPI_PORT; #Blog-API
}
## Start blog.project-webdev.com ##
server {
    listen  80;
    server_name  blog.project-webdev.com;
    access_log  /var/log/nginx/nginx-reverse-proxy-blog.access.log;
    error_log  /var/log/nginx/nginx-reverse-proxy-blog.error.log;
    root   /var/www/html;
    index  index.html index.htm;
    ## send request back to blog ##
    location / {
     proxy_pass  http://blog;
     proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
     proxy_redirect off;
     proxy_buffering off;
     proxy_set_header        Host            $host;
     proxy_set_header        X-Real-IP       $remote_addr;
     proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
   }
}
## End blog.project-webdev.com ##
## Start api.project-webdev.com ##
server {
    listen  80;
    server_name  api.project-webdev.com;
    access_log  /var/log/nginx/nginx-reverse-proxy-blog-api.access.log;
    error_log  /var/log/nginx/nginx-reverse-proxy-blog-api.error.log;
    ## send request back to blog api ##
    location / {
     proxy_pass  http://blog-api;
     proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
     proxy_redirect off;
     proxy_buffering off;
     proxy_set_header        Host            $host;
     proxy_set_header        X-Real-IP       $remote_addr;
     proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;

     # send the CORS headers
     add_header 'Access-Control-Allow-Credentials' 'true';
     add_header 'Access-Control-Allow-Origin'      'http://blog.project-webdev.com';
   }
}
## End api.project-webdev.com ##

This configuration file contains two server blocks: one that proxies to our blog (upstream blog {[..]}) and one that proxies to our API (upstream blog-api {[...]}). As mentioned before, we're going to replace the IP and the port soon. :)

So what happens is that every request against blog.project-webdev.com will be passed to the corresponding upstream:
server {
    listen  80;
    server_name  blog.project-webdev.com;
   [...]
    location / {
     proxy_pass  http://blog;
   [...]
   }
}
The same works for the REST API:
server {
    listen  80;
    server_name  api.project-webdev.com;
   [...]
    location / {
     proxy_pass  http://blog-api;
   [...]
   }
}

Using docker's environment variables

In order to use Docker's environment variables we need to run a script every time the container starts. This script checks the default.conf file that you have just created and replaces your placeholders with the values from the environment variables; after that, nginx is started.

See the last line of the Dockerfile that triggers the script execution:
CMD /etc/nginx/config.sh && nginx
Let's recap quickly: As mentioned in previous posts, Docker creates environment variables with IP and port information when you link containers together. These variables contain all information that you need to access your containers - and that's exactly what we want to do here.

The following script will replace our custom placeholders in our default.conf file with the corresponding values from the environment variables that Docker has created for us, so let's create the aforementioned /opt/docker/nginx-reverse-proxy/config/config.sh file:
#!/bin/bash
# Using environment variables to set nginx configuration
# Settings for the blog
echo "START UPDATING DEFAULT CONF"
[ -z "${BLOG_PORT_8081_TCP_ADDR}" ] && echo "\$BLOG_PORT_8081_TCP_ADDR is not set" || sed -i "s/BLOG_IP/${BLOG_PORT_8081_TCP_ADDR}/" /etc/nginx/conf.d/default.conf
[ -z "${BLOG_PORT_8081_TCP_PORT}" ] && echo "\$BLOG_PORT_8081_TCP_PORT is not set" || sed -i "s/BLOG_PORT/${BLOG_PORT_8081_TCP_PORT}/" /etc/nginx/conf.d/default.conf
[ -z "${BLOGAPI_PORT_3000_TCP_ADDR}" ] && echo "\$BLOGAPI_PORT_3000_TCP_ADDR is not set" || sed -i "s/BLOGAPI_IP/${BLOGAPI_PORT_3000_TCP_ADDR}/" /etc/nginx/conf.d/default.conf
[ -z "${BLOGAPI_PORT_3000_TCP_PORT}" ] && echo "\$BLOGAPI_PORT_3000_TCP_PORT is not set" || sed -i "s/BLOGAPI_PORT/${BLOGAPI_PORT_3000_TCP_PORT}/" /etc/nginx/conf.d/default.conf
echo "CHANGED DEFAULT CONF"
cat /etc/nginx/conf.d/default.conf
echo "END UPDATING DEFAULT CONF"
This script uses the basic sed (stream editor) command to replace the strings.

See the following example, that demonstrates how the IP address for the blog is being replaced:
[ -z "${BLOG_PORT_8081_TCP_ADDR}" ] && echo "\$BLOG_PORT_8081_TCP_ADDR is not set" || sed -i "s/BLOG_IP/${BLOG_PORT_8081_TCP_ADDR}/" /etc/nginx/conf.d/default.conf

  • First it checks whether the BLOG_PORT_8081_TCP_ADDR exists as environment variable
  • If that is true, it will call the sed command, which looks for BLOG_IP in the /etc/nginx/conf.d/default.conf file (which has been copied from our Docker host into the image - see Dockerfile)
  • And will then replace it with the value from the environment variable BLOG_PORT_8081_TCP_ADDR.

And that's all the magic! ;)
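You can try the pattern in isolation before baking it into a container - here's a tiny standalone demo (all names and values are made up for illustration):
echo "server DEMO_IP:80;" > /tmp/demo.conf
export DEMO_PORT_8081_TCP_ADDR=172.17.0.99   # stands in for a Docker link variable
[ -z "${DEMO_PORT_8081_TCP_ADDR}" ] && echo "\$DEMO_PORT_8081_TCP_ADDR is not set" || sed -i "s/DEMO_IP/${DEMO_PORT_8081_TCP_ADDR}/" /tmp/demo.conf
cat /tmp/demo.conf   # -> server 172.17.0.99:80;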

So when the script has run, it will have replaced the placeholders in our config file so that it looks like this:
CHANGED DEFAULT CONF
upstream blog  {
      server 172.17.0.14:8081; #Blog
}
nginxreverseproxy_1 |
upstream blog-api  {
      server 172.17.0.10:3000; #Blog-API
}
... and therefore our nginx reverse proxy is ready to distribute our requests to our containers, since it now knows their ports and IP addresses! :)
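Once the whole stack is running (we'll start it in the next part), you can verify the rendered configuration from the outside with docker exec (available since Docker 1.3):
docker exec docker_nginxreverseproxy_1 cat /etc/nginx/conf.d/default.conf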

Ok, since we have now created all our containers it's about time to start them up in the next post! :)

Wednesday, June 10, 2015

Series: How to create your own website based on Docker (Part 9 - Creating the nginx/Angular 2 web site Docker container)

It's about time to add some frontend logic to our project

This is part 9 of the series: How to create your own website based on Docker.

In the last two parts we've created the whole backend (REST API + database), so it's about time to create the website that makes use of it. Since we have a simple person REST API (see part 8), we need a site that can list all persons as well as create new ones. Since Angular 2.0 has achieved "Developer Preview" status, this sounds like a perfect demo for Angular 2.0!

A word of advice: Angular 2.0 is not production ready yet! This little project is perfect for playing around with the latest version of Angular 2.0, but please do not start new projects with it, since a lot of changes are going to be introduced until the framework is officially released. If you have ever played around with the Angular 2.0 alpha, you probably don't want to use it for production anyway... It's still very, very unstable and the hassle with the typings makes me sad every time I use Angular 2.0. But this will change some time soon and then we'll be able to work with Angular like pros! :)

Source code

All files mentioned in this series are available on Github, so you can play around with it! :)

Technologies to be used

Our website will be built with nginx and Angular 2.0 (and to make sure that we have full support of everything, I recommend using the latest Chrome).

First things first - creating the nginx image

Creating the nginx image is basically the same every time. Let's create a new directory called /opt/docker/projectwebdev/ and within this new directory we'll create other directories called config and html as well as our Dockerfile:
# mkdir -p /opt/docker/projectwebdev/config/
# mkdir -p /opt/docker/projectwebdev/html/
# > /opt/docker/projectwebdev/Dockerfile
After creating the fourth Dockerfile, you should now be able to understand what this file is doing, so I'm not going into details - as a matter of fact, this is a pretty easy one.
# Pull base image.
FROM docker_ubuntubase

ENV DEBIAN_FRONTEND noninteractive

# Install Nginx.
RUN \
  add-apt-repository -y ppa:nginx/stable && \
  apt-get update && \
  apt-get install -y nginx && \
  rm -rf /var/lib/apt/lists/* && \
  chown -R www-data:www-data /var/lib/nginx

# Define working directory.
WORKDIR /etc/nginx

# Copy all config files
COPY ./config/default.conf /etc/nginx/conf.d/default.conf
COPY ./config/nginx.conf /etc/nginx/nginx.conf

# Define default command.
CMD nginx
Source: https://github.com/mastix/project-webdev-docker-demo/blob/master/projectwebdev/Dockerfile

I've created the following config files that you have to copy to the /opt/docker/projectwebdev/config/ folder - the Dockerfile will make sure that these files get copied into the image:

default.conf:
## Start www.project-webdev.com ##
server {
    listen  8081;
    server_name  _;
    access_log  /var/log/nginx/project-webdev.access.log;
    error_log  /var/log/nginx/project-webdev.error.log;
    root   /var/www/html;
    index  index.html;
}
## End www.project-webdev.com ##
nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
daemon off;

events {
        worker_connections 768;
}

http {
  ##
  # Basic Settings
  ##
  sendfile on;
  tcp_nopush on;
  tcp_nodelay off;
  keepalive_timeout 65;
  types_hash_max_size 2048;
  server_tokens off;
  server_names_hash_bucket_size 64;
  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  ##
  # Logging Settings
  ##
  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  ##
  # Gzip Settings
  ##
  gzip on;
  gzip_disable "msie6";
  gzip_http_version 1.1;
  gzip_proxied any;
  gzip_min_length 500;
  gzip_types text/plain text/xml text/css
  text/comma-separated-values text/javascript
  application/x-javascript application/atom+xml;

  ##
  # Virtual Host Configs
  ##
  include /etc/nginx/conf.d/*.conf;
  include /etc/nginx/sites-enabled/*;
}

These two files are the basic configuration for our frontend server; they make sure that it listens on port 8081 (see architecture: 4-digit ports are not exposed to the web) and that gzip is enabled. They contain basic settings and should be adjusted to your personal preferences when creating your own files.

That's it... your nginx frontend server is ready to run!
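As a quick sanity check you can let nginx validate the baked-in configuration in a throwaway container (assuming the image has already been built by Compose under the name docker_projectwebdev):
docker run --rm docker_projectwebdev nginx -t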

Let's create the frontend application

Since I don't want to re-invent the wheel, I'm going to use a pretty cool Angular 2.0 seed project by Minko Gechev. This project uses gulp (which I'm using for pretty much every frontend project) as its build system. We're going to use it as the bootstrapped application and add our person-demo-specific code to it - for the sake of simplicity I've tried not to alter the original code/structure much.

Our demo will do the following:

  • List all persons stored in our mongodb
  • Create new persons and store them in our mongodb

Important: This demo will not validate any input data nor does it implement any performance optimizations, it's just a very very little Angular 2.0 demo application that talks to our Docker REST container.

Let's get started with the basic project setup

To get started, I've created a fork of the original Angular 2.0 seed repository and you can find the complete application source code right here! So grab the source if you want to play around with it! :)

The gulp script in this project offers several tasks that build the project for us by collecting all files, minifying them and preparing our HTML markup. The task we're going to use is gulp build.prod (production build, which minifies all files). You can also use gulp build.dev (development build) if you want to be able to debug the generated JavaScript files in your browser. Whenever you run the build, all files the project needs are generated and copied to dist/prod/ - the files in this directory represent the website that you'll copy to your Docker host later; we'll cover that below.

Although I've said that I'm not going to alter the original code much, I've included Twitter Bootstrap via Bower - for those who don't know: Bower is a frontend dependency management tool. Usually you would call bower install to install all dependencies, but I've added this call to the package.json file, so all you have to do is call npm install (which you have to do anyway when downloading my project from Github), and it will call bower install afterwards.

The model

We've covered the basic project setup, so let's get started with the code.

Since we're using TypeScript, we can make use of types. So we're creating our own type, which represents our Person, and thanks to TypeScript and ES6 we can use a class for that. This model consists of the same properties that we've used for our mongoose schema in our REST API (id, firstname, lastname). For that I have created a models directory, and within that directory I've added a file called Person.ts which contains the following code:
export class Person {
    private id:string;
    private firstname:string;
    private lastname:string;
    constructor(theId:string, theFirstname:string, theLastname:string) {
        this.id = theId;
        this.firstname = theFirstname;
        this.lastname = theLastname;
    }
    public getFirstName() {
        return this.firstname;
    }
    public getLastName() {
        return this.lastname;
    }
    public getId() {
        return this.id;
    }
}
Source: https://github.com/mastix/person-demo-angular2-seed/blob/master/app/models/Person.ts

The service

No matter if you're working with AngularJS, jQuery, ReactJS or Angular 2.0, you always have to make sure that you move your logic into a service or some other detached component that can be replaced if something changes. In Angular 2.0 we don't have the concept of factories, services and providers like in AngularJS - everything is a @Component. So we're creating a PersonService class that allows us to read and store our data by firing XMLHttpRequests (XHR) at our REST API (api.project-webdev.com).

Since this service needs to work with our Person model, we need to import our model to our code. In TypeScript/ES6 we can use the import statement for that.
import {Person} from '../models/Person';
export class PersonService {
    getAllPersons() {
        var personService = this;
        return new Promise(function (resolve, reject) {
            personService.getJSON('http://api.yourdomain.com/person').then(function (retrievedPersons) {
                if (!retrievedPersons || retrievedPersons.length == 0) {
                    reject("ERROR fetching persons...");
                }
                resolve(retrievedPersons.map((p)=>new Person(p.id, p.firstname, p.lastname)));
            });
        });
    }
    addPerson(thePerson:Person) {
        this.postJSON('http://api.yourdomain.com/person', thePerson).then((response)=>alert('Added person successfully! Click list to see all persons.'));
    }
    getJSON(url:string) {
        return new Promise(function (resolve, reject) {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', url);
            xhr.onreadystatechange = handler;
            xhr.responseType = 'json';
            xhr.setRequestHeader('Accept', 'application/json');
            xhr.send();
            function handler() {
                if (this.readyState === this.DONE) {
                    if (this.status === 200) {
                        resolve(this.response);
                    } else {
                        reject(new Error('getJSON: `' + url + '` failed with status: [' + this.status + ']'));
                    }
                }
            }
        });
    }
    postJSON(url:string, person:Person) {
        return new Promise(function (resolve, reject) {
            var xhr = new XMLHttpRequest();
            var params = `id=${person.getId()}&firstname=${person.getFirstName()}&lastname=${person.getLastName()}`;
            xhr.open("POST", url, true);
            xhr.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
            xhr.onreadystatechange = handler;
            xhr.responseType = 'json';
            xhr.setRequestHeader('Accept', 'application/json');
            xhr.send(params);
            function handler() {
                if (this.readyState === this.DONE) {
                    if (this.status === 201) {
                        resolve(this.response);
                    } else {
                        reject(new Error('getJSON: `' + url + '` failed with status: [' + this.status + ']'));
                    }
                }
            }
        });
    }
}
Source: https://github.com/mastix/person-demo-angular2-seed/blob/master/app/services/PersonService.ts

The application

Since we can read and store persons now, it's about time to take care of the UI. The point where everything starts is the Angular 2.0 application itself: it creates the whole application by gluing logic and UI together. The file we're talking about here is app.ts.
import {Component, View, bootstrap, NgFor} from 'angular2/angular2';
import {RouteConfig, RouterOutlet, RouterLink, routerInjectables} from 'angular2/router';
import {PersonList} from './components/personlist/personlist';
import {PersonAdd} from './components/personadd/personadd';
@Component({
    selector: 'app'
})
@RouteConfig([
    {path: '/', component: PersonList, as: 'personlist'},
    {path: '/personadd', component: PersonAdd, as: 'personadd'}
])
@View({
    templateUrl: './app.html?v=<%= VERSION %>',
    directives: [RouterOutlet, RouterLink]
})
class App {
}
bootstrap(App, [routerInjectables]);

Source: https://github.com/mastix/person-demo-angular2-seed/blob/master/app/app.ts


We need four different imports in our application:
  • Everything that is needed in order to render the application correctly
  • Everything that is needed in order to route our links correctly
  • Our page that allows us to list all stored persons
  • Our page that allows us to create a new person

Let's have a look at some snippets:
@Component({
    selector: 'app'
})
You can think of Angular apps as a tree of components. This root component we've been talking about acts as the top level container for the rest of your application. The root component's job is to give a location in the index.html file where your application will render through its element, in this case <app>. There is also nothing special about this element name; you can pick it as you like. The root component loads the initial template for the application that will load other components to perform whatever functions your application needs - menu bars, views, forms, etc.
@RouteConfig([
    {path: '/', component: PersonList, as: 'personlist'},
    {path: '/personadd', component: PersonAdd, as: 'personadd'}
])
Our demo application will have two links. One that loads a page which lists all stored persons and another one that allows to create a new one. So we need two routes in this case. Each route is directly linked to components, which we'll cover in a sec.
@View({
    templateUrl: './app.html?v=<%= VERSION %>',
    directives: [RouterOutlet, RouterLink]
})
The @View annotation defines the HTML that represents the component. The component I've developed uses an external template, so it specifies a templateUrl property with the path to the HTML file. Since we need to iterate over our stored persons, we need to inject the NgFor directive that we have imported. You can only use directives in your markup if they are specified here. Just ignore the <%= VERSION %> portion, as this is part of the original Angular 2.0 seed project and not changed in our application.
bootstrap(App, [routerInjectables]);
At the bottom of our app.ts, we call the bootstrap() function to load our new component into the page. The bootstrap() function takes a component and our injectables as parameters, enabling the component (as well as any child components it contains) to render.

The index.html

The index.html represents the outline which will be filled with our components later. This is a pretty basic HTML file that uses the aforementioned <app> tag (see our app.ts above) - you might also want to use a name like <person-app> or something, but then you'd need to adjust your app.ts. This is the hook point for our application.
<!DOCTYPE html>
<head>
[…]
</head>
<body>
[…]
<div class="jumbotron">
    <div class="container">
        <h1>Person Demo</h1>
        <p>This is just a little demonstration about how to use Angular 2.0 to interact with a REST API that we have
            created in the following series: <a
                    href="http://project-webdev.blogspot.com/2015/05/create-site-based-on-docker-part1.html"
                    target="_blank">Series: How to create your own website based on Docker </a>
    </div>
</div>
<div class="container">
    <app>Loading...</app>
    [...]
</div>
<!-- inject:js -->
<!-- endinject -->
<script src="./init.js?v=<%= VERSION %>"></script>
</body>
</html>
That's it... we've created our base application. Now everything we'll create will be rendered in the <app> portion of the page.

Creating the person list

Since encapsulation is very important in huge deployments, we're adding all self-contained components into the following folder: /app/components/ so in terms of the person list, this is going to be a folder called /app/components/personlist/.

Each component consists of:
  • the component code itself
  • the template to use

As mentioned before, everything in Angular 2.0 is a component and so the structure of our personlist.ts pretty much looks like the app.ts.
import {Component, View,NgFor} from 'angular2/angular2';
// import the person list, which represents the array that contains all persons.
import {PersonService} from '../../services/PersonService';
//import our person model that represents our person from the REST service.
import {Person} from '../../models/Person';
@Component({
    selector: 'personlist',
    appInjector: [PersonService]
})
@View({
    templateUrl: './components/personlist/personlist.html?v=<%= VERSION %>',
    directives: [NgFor]
})
export class PersonList {
    personArray:Array<string>;
    constructor(ps:PersonService) {
        ps.getAllPersons().then((array)=> {
            this.personArray = array;
        });
    }
}
Source: https://github.com/mastix/person-demo-angular2-seed/blob/master/app/components/personlist/personlist.ts

As you can see, we're importing the following:

  • Standard Angular components to render the page (the NgFor directive is used to iterate through our list later)
  • Our Person service
  • Our Person model

We need to inject our PersonService into our component and the NgFor directive into our View, so we can use them later (e.g. see https://github.com/mastix/person-demo-angular2-seed/blob/master/app/components/personlist/personlist.html).

The real logic happens in the PersonList class itself - ok, there's not much here... but it's important. The constructor of this class uses the PersonService to fetch all persons (the service fires a request to our API to fetch the list of persons) and stores them in an array. This array is then accessible in the view, so we can iterate over it.
<table class="table table-striped">
    <tr>
        <th>ID</th>
        <th>FIRST NAME</th>
        <th>LAST NAME</th>
    </tr>
    <tr *ng-for="#person of personArray">
        <td>{{person.id}}</td>
        <td>{{person.firstname}}</td>
        <td>{{person.lastname}}</td>
    </tr>
</table>
Source: https://github.com/mastix/person-demo-angular2-seed/blob/master/app/components/personlist/personlist.html

We're using a table to represent the list of persons. So the only thing we need to do is iterate over the personArray that we have created in our PersonList component. In every iteration we're creating a row (tr) with 3 fields (td) that contain the person's id, first name and last name.

Creating the person add page

Ok, since we can now list all persons, let's add the possibility to create a new one. We're following the same pattern here and create a personadd component (/app/components/personadd) that consists of some logic and a view as well.
import {Component, View, NgFor} from 'angular2/angular2';
// import the person list, which represents the array that contains all persons.
import {PersonService} from '../../services/PersonService';
//import our person model that represents our person from the REST service.
import {Person} from '../../models/Person';
@Component({
    selector: 'personadd',
    appInjector: [PersonService]
})
@View({
    templateUrl: './components/personadd/personadd.html?v=<%= VERSION %>',
})
export class PersonAdd {
    addPerson(theId, theFirstName, theLastName) {
        new PersonService().addPerson(new Person(theId, theFirstName, theLastName));
    }
}
Source: https://github.com/mastix/person-demo-angular2-seed/blob/master/app/components/personadd/personadd.ts

I'm not going to cover the annotations here, since they follow pretty much the same pattern as the PersonList. What's important here is that the PersonAdd class offers a method called addPerson(), which takes three parameters: id, firstname, lastname. Based on these parameters, we can create our Person model and call our PersonService to store it on our server (in our mongodb Docker container via our ioJS REST Docker container).

Important: Usually you would add some validation here, but for the sake of simplicity I've skipped that.

As mentioned before, everything that we specify in the class will be available in the View, so this method can later be called from the HTML markup.
<form>
    <div class="form-group">
        <label for="inputId">ID</label>
        <input #id type="number" class="form-control" id="inputId" placeholder="Enter ID">
    </div>
    <div class="form-group">
        <label for="inputFirstName">First name</label>
        <input #firstname type="text" class="form-control" id="inputFirstName" placeholder="First name">
    </div>
    <div class="form-group">
        <label for="inputLastName">First name</label>
        <input #lastname type="text" class="form-control" id="inputLastName" placeholder="Last name">
    </div>
</form>
<button class="btn btn-success" (click)="addPerson(id.value, firstname.value, lastname.value)">Add Person</button>
Source: https://github.com/mastix/person-demo-angular2-seed/blob/master/app/components/personadd/personadd.html

I could have used angular2/forms here, but believe me, it is not ready to work with... I've struggled so much that I've decided to skip it (e.g. I'd have to update my type definitions and so on...). But what's really important here is that we can call our addPerson() method from our PersonAdd component and pass the values from our fields. Pretty easy, right?

Now we can build our project by running gulp build.prod and copy the contents of the newly created dist/prod/ folder to our Docker host. Remember: in our Docker Compose file we've specified that our /opt/docker/projectwebdev/html folder will be mounted into our container (as /var/www/html). So we can easily update our HTML files and the changes will be reflected on our website on-the-fly.
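A minimal sketch of that copy step (host name and user are placeholders - adjust them to your own setup):
gulp build.prod
scp -r dist/prod/* user@dockerhost:/opt/docker/projectwebdev/html/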

So when you've copied all files, the directory structure should look like that:
├── config
│   ├── default.conf
│   └── nginx.conf
├── Dockerfile
└── html
    ├── app.html
    ├── app.js
    ├── bootstrap
    │   └── dist
    │       └── css
    │           └── bootstrap.min.css
    ├── components
    │   ├── personadd
    │   │   └── personadd.html
    │   └── personlist
    │       └── personlist.html
    ├── index.html
    ├── init.js
    ├── lib
    │   └── lib.js
    └── robots.txt

Here is what it looks like later

Adding a new person


Listing all persons



That's it... we have the backend and the frontend now... it's about time to create our nginx reverse proxy to make them all accessible!


Wednesday, May 20, 2015

Series: How to create your own website based on Docker (Part 8 - Creating the ioJS REST API Docker container)

It's about time to add some application logic to our project

This is part 8 of the series: How to create your own website based on Docker.

In the last part of the series, we created our "dockerized" mongodb noSQL database server to read our persisted entries from, and based on our architecture we have decided that only the REST API (which will be based on ioJS) is allowed to talk to the database container.

So now it's about time to create the actual REST API that can be called via our nginx reverse proxy (using api.project-webdev.com) to read a person object from our database. We'll also create a very simple way to create a person as well as list all available persons. As soon as you've understood how things work, you'll be able to implement more features of the REST API yourself - so consider this a pretty easy example.

Source code

All files mentioned in this series are available on Github, so you can play around with it! :)


Technologies to be used

Our REST API will use the following technologies:
  • ioJS as JavaScript application server
  • hapiJS as REST framework
  • mongoose as mongoDB driver, to connect to our database container
  • pm2 to run our nodejs application (and restart it if it crashes for some reason)

First things first - creating the ioJS image

Creating the ioJS image is basically the same every time. Let's create a new directory called /opt/docker/projectwebdev-api/ and within this new directory we'll create another directory called app and our Dockerfile:
# mkdir -p /opt/docker/projectwebdev-api/app/
# > /opt/docker/projectwebdev-api/Dockerfile
The new Dockerfile is based on the official ioJS Dockerfile, but I've added some application/image specific information, so that we can implement our ioJS application:

  • Added our ubuntu base image (we're not using debian wheezy like in the official image)
  • Installed the latest NPM, PM2 and gulp (for later; we're not using gulp for this little demo)
  • Added our working directories
  • Added some clean up code
  • Added PM2 as CMD (we'll talk about that soon)

So just create your /opt/docker/projectwebdev-api/Dockerfile with the following content:
# Pull base image.
FROM docker_ubuntubase

ENV DEBIAN_FRONTEND noninteractive

RUN apt-get update
RUN apt-get update --fix-missing
RUN curl -sL https://deb.nodesource.com/setup_iojs_2.x | bash -

RUN apt-get install -y iojs gcc make build-essential openssl node-gyp
RUN npm install -g npm@latest
RUN npm install -g gulp
RUN npm install -g pm2@latest
RUN apt-get update --fix-missing

RUN mkdir -p /var/log/pm2
RUN mkdir -p /var/www/html

# Cleanup
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN apt-get autoremove -y
RUN ln -s /usr/bin/nodejs /usr/local/bin/node

WORKDIR /var/www/html

CMD ["pm2", "start", "index.js","--name","projectwebdevapi","--log","/var/log/pm2/pm2.log","--watch","--no-daemon"]
Source: https://github.com/mastix/project-webdev-docker-demo/blob/master/projectwebdev-api/Dockerfile

Adding our REST API code to our container

Now let's create a simple application that listens to a simple GET request and returns an entry from our mongoDB container. Just to prove that it works, I'll create a REST API that returns a simple Person object that contains an id as well as a first and a last name.

In order to get this object later, I'd have to call http://api.project-webdev.com/person/{id} and it will return that object in JSON format. We'll also add a route to return all persons as well as a route that allows adding a new person - but we'll cover that in a second.

Since PM2 will only start (and not build) our ioJS application, we have to make sure that NPM (packaged with ioJS or nodeJS) is installed on the server, so that we can build the project there.

So here is my simple flow:

  • I create the ioJS application on my local machine
  • Then I upload the files to my server
  • On my server I use npm install to fetch all dependencies
  • PM2 restart the application automatically if it detects changes

In a later blog posting I will explain how you can setup a Git Push-To-Deploy mechanism which will take care of this automatically, but for this simple application we're doing it manually.

To get started, I'll create a new directory on my local machine (which has ioJS installed) and create a basic application:
# mkdir -p /home/mastixmc/development/projectwebdev-api && cd $_
# npm init
# npm install hapi mongoose --save
npm init will ask you a bunch of questions and then write a package.json for you. It attempts to make reasonable guesses about what you want things to be set to, and then writes a package.json file with the options you've selected. (Info: every nodeJS/ioJS application needs a package.json as descriptor.)

npm install hapi mongoose --save will download/install hapiJS and mongoose and will save the dependency in our package.json file, so our server can download it later as well.

Creating the application

In our new directory, we'll create a file called index.js, with the following contents (we'll get into details afterwards):
var hapi = require('hapi');
var mongoose = require('mongoose');
// connect to database
mongoose.connect('mongodb://'+process.env.MONGODB_1_PORT_3333_TCP_ADDR+':'+process.env.MONGODB_1_PORT_3333_TCP_PORT+'/persons', function (error) {
    if (error) {
        console.log("Connecting to the database failed!");
        console.log(error);
    }
});
// Mongoose Schema definition
var PersonSchema = new mongoose.Schema({
    id: String,
    firstName: String,
    lastName: String
});
// Mongoose Model definition
var Person = mongoose.model('person', PersonSchema);
// Create a server with a host and port
var server = new hapi.Server();
server.connection({
    port: 3000
});
// Add the route to get a person by id.
server.route({
    method: 'GET',
    path:'/person/{id}',
    handler: PersonIdReplyHandler
});
// Add the route to get all persons.
server.route({
    method: 'GET',
    path:'/person',
    handler: PersonReplyHandler
});
// Add the route to add a new person.
server.route({
    method: 'POST',
    path:'/person',
    handler: PersonAddHandler
});
// Return all users in the database.
function PersonReplyHandler(request, reply){
    Person.find({}, function (err, docs) {
        reply(docs);
    });
}
// Return a certain user based on its id.
function PersonIdReplyHandler(request, reply){
    if (request.params.id) {
        Person.find({ id: request.params.id }, function (err, docs) {
            reply(docs);
        });
    }
}
// add new person to the database.
function PersonAddHandler(request, reply){
    var newPerson = new Person();
    newPerson.id = request.payload.id;
    newPerson.lastName = request.payload.lastname;
    newPerson.firstName = request.payload.firstname;
    newPerson.save(function (err) {
        if (!err) {
            reply(newPerson).created('/person/' + newPerson.id);    // HTTP 201
        } else {
            reply("ERROR SAVING NEW PERSON!!!"); // HTTP 403
        }
    });
}
// Start the server
server.start();
Disclaimer: Since this is just a little example, I hope you don't mind that I've put everything into one file - in a real project, I'd recommend structuring the project properly, so that it scales in larger deployments - but for now, we're fine. Also, I did not add any error-checking whatsoever to this code, as it's just for demonstration purposes.

Now we can copy our index.js and package.json files to our server (/opt/docker/projectwebdev-api/app/), ssh into our server and run npm install within that directory. This will download all dependencies and create a node_modules folder for us. You'll then have a fully deployed ioJS application on your Docker host, which can be used by the projectwebdev-api container, since this directory is mounted into it.
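A minimal sketch of that manual deployment flow (user and host are placeholders):
scp index.js package.json user@dockerhost:/opt/docker/projectwebdev-api/app/
ssh user@dockerhost "cd /opt/docker/projectwebdev-api/app && npm install"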

Explaining the REST-API code

So what does this file do? Pretty simple:

HapiJS creates a server that will listen on port 3000 - I've also added the following routes including their handlers:

  • GET to /person, which will then call a PersonReplyHandler function, that uses Mongoose to fetch all persons stored in our database.
  • GET to /person/{id}, which will then call a PersonIdReplyHandler function, that uses Mongoose to fetch a person with a certain id from our database.
  • POST to /person, which will then call a PersonAddHandler function, that uses Mongoose to store a person in our database.

A Person consists of the following fields (we're using the Mongoose Schema here):
// Mongoose Schema definition
var PersonSchema = new mongoose.Schema({
    id: String,
    firstName: String,
    lastName: String
});
So the aforementioned handlers (e.g. PersonAddHandler) will make sure that this information is served or stored from/to the database.

Later, when you have set up your nginx reverse proxy, you'll be able to use the following requests to GET or POST persons. But we'll get into that in the last part!

Add a new person:
curl -X POST -H "Accept: application/json" -H "Content-Type: multipart/form-data" -F "id=999" -F "firstname=Sascha" -F "lastname=Sambale" http://api.project-webdev.com/person
Result:
[{
    "_id": "555c827959a2234601c5ddfa",
    "firstName": "Sascha",
    "lastName": "Sambale",
    "id": "999",
    "__v": 0
}]
Get all persons:
curl -X GET -H "Accept: application/json" http://api.project-webdev.com/person/
Result:
[{
    _id: "555c81f559a2234601c5ddf9",
    firstName: "John",
    lastName: "Doe",
    id: "15",
    __v: 0
}, {
    _id: "555c827959a2234601c5ddfa",
    firstName: "Sascha",
    lastName: "Sambale",
    id: "999",
    __v: 0
}]
Get a person with id 999:
curl -X GET -H "Accept: application/json" http://api.project-webdev.com/person/999
Result:
[{
    "_id": "555c827959a2234601c5ddfa",
    "firstName": "Sascha",
    "lastName": "Sambale",
    "id": "999",
    "__v": 0
}]
You'll be able to do that as soon as you've reached the end of this series! ;)

Explaining the database code

I guess the most important part of the database code is how we establish the connection to our mongodb container.
// connect to database
mongoose.connect('mongodb://'+process.env.MONGODB_1_PORT_3333_TCP_ADDR+':'+process.env.MONGODB_1_PORT_3333_TCP_PORT+'/persons', function (error) {
    if (error) {
        console.log("Connecting to the database failed!");
        console.log(error);
    }
});
Since we're using container links, we cannot know which IP our mongodb container will get when it is started. So we have to use the environment variables that Docker provides us.

Docker uses a prefix of the form <name>_PORT_<port>_<protocol> to define three distinct environment variables:

  • The prefix_ADDR variable contains the IP Address from the URL, for example WEBDB_PORT_8080_TCP_ADDR=172.17.0.82.
  • The prefix_PORT variable contains just the port number from the URL for example WEBDB_PORT_8080_TCP_PORT=8080.
  • The prefix_PROTO variable contains just the protocol from the URL for example WEBDB_PORT_8080_TCP_PROTO=tcp.

If the container exposes multiple ports, an environment variable set is defined for each one. This means, for example, that if a container exposes 4 ports, Docker creates 12 environment variables - 3 for each port.

In our case the environment variables look like this:

  • MONGODB_1_PORT_3333_TCP_ADDR
  • MONGODB_1_PORT_3333_TCP_PORT
  • MONGODB_1_PORT_3333_TCP_PROTO

Where MONGODB is the service name and 3333 is the port number we've specified in our docker-compose.yml file:
mongodb:
    build: ./mongodb
    expose:
      - "3333"

    volumes:
        - ./logs/:/var/log/mongodb/
        - ./mongodb/db:/data/db
Docker Compose also creates environment variables prefixed with DOCKER_MONGODB, which we are not going to use, as it might happen that we switch from Docker Compose to something else in the future.

So Docker provides the environment variables and ioJS uses the process.env object to access them. We can therefore create a mongodb connection URL that looks like this:
mongodb://172.17.0.82:3333/persons
... which will be the link to our Docker container that runs mongodb on port 3333... Connection established!
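If you want to double-check what Docker actually injected, you can inspect the environment of the running API container (assuming the stack from the last part of this series is up):
docker exec docker_projectwebdevapi_1 env | grep MONGODB_1_PORT_3333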

Running ioJS in production mode

As mentioned before, in order to start our REST API application (and automatically restart it when we update the application files or it crashes for some reason), we're using PM2, which is configured via command line parameters in our CMD instruction (see our Dockerfile):
CMD ["pm2", "start", "index.js","--name","projectwebdevapi","--log","/var/log/pm2/pm2.log","--watch","--no-daemon"]
So what does this command do?

  • "pm2", "start", "index.js" starts our application from within our WORKDIR (/var/www/html/).
  • "--name","projectwebdevapi" names our application projectwebdevapi.
  • "--log","/var/log/pm2/pm2-project.log" logs everything to /var/log/pm2/pm2-project.log (and since this is a mounted directory it will be stored on our docker host in /opt/docker/logs - see our docker-compose.yml file).
  • "--watch" watches our WORKDIR (/var/www/html/) for changes and will restart the application if something has changed. So you'll be able to update the application on your docker host and the changes will be reflected on the live site automatically.
  • "--no-daemon" runs PM2 in the foreground so the container does not exit and keeps running.

That's pretty much it - now, whenever your container starts (in our case Docker Compose will start it), PM2 will start your application and make sure that it keeps running.
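You can see the --watch behaviour in action by touching a file in the mounted app directory and then watching the logs (a quick sketch, assuming the stack is running):
touch /opt/docker/projectwebdev-api/app/index.js
docker-compose logs projectwebdevapi   # PM2 reports the restart here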

In the next part we'll create the frontend application that calls our new REST-API!

Friday, May 15, 2015

Series: How to create your own website based on Docker (Part 7 - Creating the mongodb Docker container)


Creating our mongodb database image

This is part 7 of the series: How to create your own website based on Docker.

It's about time to create our first image/container that is part of the real application stack. This container acts as the persistence layer for the REST API (which we will create in the next part of this series), so the only component that talks to the database is the ioJS REST API container. In this part of the series, we'll have a look at how you can create your own mongodb container based on the official mongodb image.

Source code

All files mentioned in this series are available on Github, so you can play around with it! :)

Let's get started

Let's create a new directory called /opt/docker/mongodb/ and within this new directory we'll create two folders and one file:
# mkdir -p /opt/docker/mongodb/config/
# mkdir /opt/docker/mongodb/db/
# > /opt/docker/mongodb/Dockerfile
Since I don't want to re-invent the wheel, we'll have a look at the official mongodb Docker image and basically use the same mongodb 3.0 Dockerfile for our design. Since we want to run this mongodb database on our own Ubuntu base container, we need to make some changes to the official mongodb Docker image.

The official mongodb Dockerfile uses Debian wheezy as base image, which is not what we want:
FROM debian:wheezy
We are going to use our own Ubuntu base image for the mongodb image, and since we use Docker Compose, we must specify the correct base image name, which is a concatenation of "docker_" and the service name that we have specified in our docker-compose.yml - so in our case that would be "docker_ubuntubase". So we're changing the aforementioned line to use our base image:
# Pull base image.
FROM docker_ubuntubase 
The original Dockerfile only allows us to mount /data/db as a volume, so we're extending it to also allow mounting the mongodb log directory:

Replace the following line:
VOLUME /data/db
With this line:
VOLUME ["/data/db","/var/log/mongodb/"]
I'd like to have my configurations in a subfolder called "config", so we need to adjust a few more lines:

Replace the following lines:
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
With these lines:
COPY ./config/docker-entrypoint.sh /tmp/entrypoint.sh
RUN ["chmod", "+x", "/tmp/entrypoint.sh"]
ENTRYPOINT ["/tmp/entrypoint.sh"]
These lines copy a script called docker-entrypoint.sh from our config directory to the /tmp/ folder in the container, make it executable and register it as the entrypoint that runs when the container starts. You can find the docker-entrypoint.sh file in the official mongodb Docker repository on GitHub. Just copy that file into the config directory, which you have to create if you haven't done so already.

Let's create our own mongodb configuration file to set some parameters.

To do so, create a file called /opt/docker/mongodb/config/mongodb.conf and add the following lines (important: YAML does not accept tabs; use spaces instead!):
systemLog:
   destination: file
   path: "/var/log/mongodb/mongodb-projectwebdev.log"
   logAppend: true
storage:
   journal:
      enabled: true
net:
   port: 3333
   http:
       enabled: false
       JSONPEnabled: false
       RESTInterfaceEnabled: false
Now add the following lines to your Dockerfile to copy our new custom config file to our image:
RUN mkdir -p /var/log/mongodb && chown -R mongodb:mongodb /var/log/mongodb
COPY ./config/mongodb.conf /etc/mongod.conf
Since we want to load our custom config now, we need to change the way we start mongodb, so we change the following line from
CMD ["mongod"]
to
CMD ["mongod", "-f", "/etc/mongod.conf"]
Our folder structure must look like this now:
/opt/docker/mongodb
├── config
│   ├── docker-entrypoint.sh
│   └── mongodb.conf
├── db
└── Dockerfile
Another thing we can remove is the EXPOSE instruction, since we already specified that in our docker-compose.yml.

So the complete Dockerfile will look like this now:
# Pull base image.
FROM docker_ubuntubase

# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r mongodb && useradd -r -g mongodb mongodb

RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates curl \
numactl \
&& rm -rf /var/lib/apt/lists/*

# grab gosu for easy step-down from root
RUN gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
RUN curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture)" \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture).asc" \
&& gpg --verify /usr/local/bin/gosu.asc \
&& rm /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu

# gpg: key 7F0CEB10: public key "Richard Kreuter <richard@10gen.com>" imported
RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys 492EAFE8CD016A07919F1D2B9ECBEC467F0CEB10

ENV MONGO_MAJOR 3.0
ENV MONGO_VERSION 3.0.3

RUN echo "deb http://repo.mongodb.org/apt/debian wheezy/mongodb-org/$MONGO_MAJOR main" > /etc/apt/sources.list.d/mongodb-org.list

RUN set -x \
&& apt-get update \
&& apt-get install -y mongodb-org=$MONGO_VERSION \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf /var/lib/mongodb \
&& mv /etc/mongod.conf /etc/mongod.conf.orig

RUN mkdir -p /data/db && chown -R mongodb:mongodb /data/db
RUN mkdir -p /var/log/mongodb && chown -R mongodb:mongodb /var/log/mongodb

VOLUME ["/data/db","/var/log/mongodb/"]

COPY ./config/docker-entrypoint.sh /tmp/entrypoint.sh
COPY ./config/mongodb.conf /etc/mongod.conf
RUN ["chmod", "+x", "/tmp/entrypoint.sh"]

ENTRYPOINT ["/tmp/entrypoint.sh"]

CMD ["mongod", "-f", "/etc/mongod.conf"]
Source: https://github.com/mastix/project-webdev-docker-demo/blob/master/mongodb/Dockerfile

This is pretty much it. That's all we need to create our mongodb database container, which will run on port 3333 - but will only be accessible from the REST API, since we linked it to the ioJS REST API container only; see our docker-compose.yml file again:
projectwebdevapi:
    build: ./projectwebdev-api
    expose:
        - "3000"
    links:
        - mongodb:db

    volumes:
        - ./logs/:/var/log/supervisor/
        - ./projectwebdev-api/app:/var/www/html
In the next chapter it's getting more interesting: let's create our REST API container that talks to the mongodb container!