Sunday, June 14, 2015

Series: How to create your own website based on Docker (Part 11 - Run the whole application stack with Docker Compose)

Manage all your created containers with Docker Compose

This is part 11 of the series: How to create your own website based on Docker.

Well, we have created our images now, so it's about time to get everything up and running. As mentioned in the first posting we're using Docker Compose for that.

This is where our docker-compose file from part 5 comes back into play. Docker Compose will use this file to figure out which containers need to be started and in what order. It will make sure that all ports are exposed correctly and that our directories are mounted.
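For reference, here's a shortened sketch of what such a docker-compose.yml can look like. This is not the actual file from part 5 - the service names and the published port match the output further below, but the build directories of the MongoDB/API services, the link aliases and the container-side log paths are assumptions, so grab the real file from the repository:
# Sketch of /opt/docker/docker-compose.yml (Compose 1.x format) - paths marked "assumed" are not from the original article
ubuntubase:
  build: ./ubuntu-base                 # assumed directory name
mongodb:
  build: ./mongodb                     # assumed directory name
  volumes:
    - ./logs:/var/log/mongodb          # assumed container-side path
projectwebdevapi:
  build: ./projectwebdev-api           # assumed directory name
  links:
    - mongodb:db                       # assumed link alias
  volumes:
    - ./logs:/var/log/pm2              # assumed container-side path
projectwebdev:
  build: ./projectwebdev
  volumes:
    - ./projectwebdev/html:/var/www/html
    - ./logs:/var/log/nginx            # assumed container-side path
nginxreverseproxy:
  build: ./nginx-reverse-proxy
  ports:
    - "80:80"                          # the only port published to the outside
  links:
    - projectwebdev:blog               # provides the BLOG_PORT_8081_TCP_* variables
    - projectwebdevapi:blogapi         # provides the BLOGAPI_PORT_3000_TCP_* variables
  volumes:
    - ./logs:/var/log/nginx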


Source code

All files mentioned in this series are available on Github, so you can play around with it! :)


Custom Clean Script

I've created a custom clean script that allows me to remove all existing images and containers. This comes in pretty handy when you're in the middle of development. I've used this script a thousand times already.
#!/bin/bash
docker rm -f $(docker ps -q -a)
docker rmi -f $(docker images -q)
Source: https://github.com/mastix/project-webdev-docker-demo/blob/master/cleanAll

This script first removes all containers (it doesn't care whether they are running or not) and then removes all images. Don't be afraid... the images will be re-created from the Dockerfiles. So if you've played around with these containers already, I recommend running this script before reading further. Important: Do not run this script in your production environment! :)
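By the way, if you only want to get rid of the containers that belong to this Compose project (and keep unrelated containers and all images), the following is enough - a small sketch, assuming you run it from /opt/docker/ where the docker-compose.yml lives:
docker-compose stop    # stop the containers defined in ./docker-compose.yml
docker-compose rm -f   # remove them without asking for confirmation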

Working with Docker Compose

Docker Compose will only do its magic when you're in a directory that contains a docker-compose.yml. As mentioned before, I've put all files into /opt/docker/, so we're operating in this directory only.

I'm currently using the following version:
/opt/docker$ docker-compose --version
docker-compose 1.2.0
Make sure that no containers are running:
/opt/docker$ docker-compose ps
Name   Command   State   Ports
------------------------------
Update: If you're running Docker Compose >= 1.3.x then you'll have to run the following command if you've checked out the code from my repository:
docker-compose migrate-to-labels
Since we have our docker-compose.yml ready, the only thing to do is fire docker-compose via the following command:
docker-compose up -d
The -d flag makes sure that Docker Compose starts the containers in detached mode (in the background).

When you run this command you'll see that it builds the images from our Dockerfiles (e.g. running apt-get, copying files, ...) and then starts the containers. It would be too much output to copy here, but you'll see soon how much it generates.
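If you'd rather watch the image builds on their own (or force a rebuild after changing a Dockerfile), you can run the build step separately before bringing the stack up:
docker-compose build   # (re)build all images from the Dockerfiles
docker-compose up -d   # then start the containers in the background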

Let's play around with our containers

Let's see if all containers are up and running:
/opt/docker$ docker-compose ps
           Name                         Command               State               Ports          
--------------------------------------------------------------------------------------------------
docker_mongodb_1             /tmp/entrypoint.sh mongod  ...   Up       3333/tcp                  
docker_nginxreverseproxy_1   /bin/sh -c /etc/nginx/conf ...   Up       443/tcp, 0.0.0.0:80->80/tcp
docker_projectwebdev_1       /bin/sh -c nginx                 Up       8081/tcp                  
docker_projectwebdevapi_1    pm2 start index.js --name  ...   Up       3000/tcp                  
docker_ubuntubase_1          bash                             Exit 0               
It's fine that our ubuntu container has already exited, since there is no background task running in it (unlike our nginx server, which has to reply to requests) - the other four services are up and running. You can also see that only our nginx reverse proxy exposes its port (80) to the public. All other ports are internal ports.
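If you want to double-check which ports are actually published on the Docker host, you can ask Docker directly, using the container name from the output above. It should print something like this:
docker port docker_nginxreverseproxy_1
80/tcp -> 0.0.0.0:80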

Let's see if our website is up and running:

Our Person REST API:


Our Person Demo page:



Just a short note: don't try to access the URLs you see in these screenshots. Since this is just a demo, I only brought it up for demonstration purposes and stopped the containers right after taking the screenshots.

Let's see how much memory our Person API consumes:
/opt/docker$ docker stats docker_projectwebdevapi_1
CONTAINER                   CPU %               MEM USAGE/LIMIT       MEM %               NET I/O
docker_projectwebdevapi_1   0.00%               76.12 MiB/1.954 GiB   3.80%               3.984 KiB/1.945 KiB
Let's stop & start all containers - why? Because we can!
/opt/docker$ docker-compose restart
Restarting docker_projectwebdev_1...
Restarting docker_mongodb_1...
Restarting docker_projectwebdevapi_1...
Restarting docker_nginxreverseproxy_1...
Let's see the stacked images:
docker images --tree
Warning: '--tree' is deprecated, it will be removed soon. See usage.
└─1c3c252d48a5 Virtual Size: 131.3 MB
  └─66b5d995810b Virtual Size: 131.3 MB
    └─b7e7cde90a84 Virtual Size: 131.3 MB
      └─c6a3582257ff Virtual Size: 131.3 MB Tags: ubuntu:15.04
        └─beec7359d06b Virtual Size: 516 MB
          └─56f95e536056 Virtual Size: 516 MB
            └─2e6215be7f22 Virtual Size: 516 MB
              └─0da535016806 Virtual Size: 516 MB Tags: docker_ubuntubase:latest
                ├─22e3ad368e3d Virtual Size: 516.4 MB
  […]
                  └─bc20ce213396 Virtual Size: 679 MB
                │   └─b20c90481a4e Virtual Size: 679 MB Tags: docker_mongodb:latest
                └─419a34bcfcfd Virtual Size: 516 MB
                  ├─2d2525cf28e1 Virtual Size: 537.1 MB
                  │ └─9c9f238dc62d Virtual Size: 558.2 MB
                  │   └─4bf8554af678 Virtual Size: 580.2 MB
                  │     └─9d6fdb379360 Virtual Size: 620.4 MB
                  │       └─02b3cd93208f Virtual Size: 638.1 MB
           […]    
   └─aba65d0f0c06 Virtual Size: 706 MB
                  │           └─9b4b55e323e3 Virtual Size: 706 MB Tags: docker_projectwebdevapi:latest
                  └─466f9910439a Virtual Size: 543.9 MB
                    ├─008ffe8fa738 Virtual Size: 543.9 MB
                    │ └─476a45c16218 Virtual Size: 543.9 MB
                    […]
                    │   └─b53827f8ddfd Virtual Size: 543.9 MB Tags: docker_nginxreverseproxy:latest
                    └─aec75192e11a Virtual Size: 543.9 MB
                      └─eadec9140592 Virtual Size: 543.9 MB
                        └─27b6deeec60a Virtual Size: 543.9 MB
                          └─6f0f6661c308 Virtual Size: 543.9 MB Tags: docker_projectwebdev:latest
As you can see: all our images are based on the Ubuntu base image, and each image only adds new layers on top of the underlying base image.
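Since --tree is deprecated, a less pretty but still supported way to look at the layers of a single image is docker history, for example:
docker history docker_projectwebdev:latest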

Let's see all logs to console/stdout:
/opt/docker$ docker-compose logs
Attaching to docker_nginxreverseproxy_1, docker_projectwebdevapi_1, docker_mongodb_1, docker_projectwebdev_1, docker_ubuntubase_1
projectwebdevapi_1  | pm2 launched in no-daemon mode (you can add DEBUG="*" env variable to get more messages)
projectwebdevapi_1  | 2015-06-12 21:29:23: [PM2][WORKER] Started with refreshing interval: 30000
projectwebdevapi_1  | 2015-06-12 21:29:23: [[[[ PM2/God daemon launched ]]]]
[…]
nginxreverseproxy_1 | START UPDATING DEFAULT CONF
nginxreverseproxy_1 | CHANGED DEFAULT CONF
nginxreverseproxy_1 | upstream blog  {
nginxreverseproxy_1 |       server 172.17.0.29:8081; #Blog
nginxreverseproxy_1 | }
nginxreverseproxy_1 |
nginxreverseproxy_1 | upstream blog-api  {
nginxreverseproxy_1 |       server 172.17.0.54:3000; #Blog-API
nginxreverseproxy_1 | }
nginxreverseproxy_1 |
nginxreverseproxy_1 | ## Start blog.project-webdev.com ##
nginxreverseproxy_1 | server {
nginxreverseproxy_1 |     listen  80;
nginxreverseproxy_1 |     server_name  blog.project-webdev.com;
[…]
nginxreverseproxy_1 | }
nginxreverseproxy_1 | ## End blog.project-webdev.com ##
nginxreverseproxy_1 |
nginxreverseproxy_1 | ## Start api.project-webdev.com ##
nginxreverseproxy_1 | server {
nginxreverseproxy_1 |     listen  80;
nginxreverseproxy_1 |     server_name  api.project-webdev.com;
nginxreverseproxy_1 |
[…]
nginxreverseproxy_1 | }
nginxreverseproxy_1 | ## End api.project-webdev.com ##
nginxreverseproxy_1 |
[…]
nginxreverseproxy_1 |  END UPDATING DEFAULT CONF
As you can see, you'll get the stdout output of each container, with the container name at the beginning of each line... in our example we only get output from our ioJS container (projectwebdevapi_1, started with pm2) and our nginx reverse proxy (nginxreverseproxy_1).

Let's check our log directory:
/opt/docker/logs$ ll
total 24
drwxrwxrwx 2 mastix docker 4096 Jun 14 21:21 ./
drwxr-xr-x 9 mastix docker 4096 Jun 12 17:39 ../
-rw-r--r-- 1 root   root      0 Jun 14 21:21 access.log
-rw-r--r-- 1 root   root      0 Jun 14 21:21 error.log
-rw-r--r-- 1 root   root   2696 Jun 14 21:21 mongodb-projectwebdev.log
-rw-r--r-- 1 root   root    200 Jun 14 21:21 nginx-reverse-proxy-blog.access.log
-rw-r--r-- 1 root   root    637 Jun 14 21:21 nginx-reverse-proxy-blog-api.access.log
-rw-r--r-- 1 root   root      0 Jun 14 21:21 nginx-reverse-proxy-blog-api.error.log
-rw-r--r-- 1 root   root      0 Jun 14 21:21 nginx-reverse-proxy-blog.error.log
-rw-r--r-- 1 root   root      0 Jun 14 21:21 pm2-0.log
-rw-r--r-- 1 root   root    199 Jun 14 21:21 project-webdev.access.log
-rw-r--r-- 1 root   root      0 Jun 14 21:21 project-webdev.error.log
Remember: We've told each container to log its files into our /opt/docker/logs directory on the Docker host... And now we have them all in one place.
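And since these are plain files on the Docker host, you can watch them with the usual tools, e.g.:
tail -f /opt/docker/logs/nginx-reverse-proxy-blog.access.log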

That's it. I hope you had fun learning Docker with this session. And if you find any bugs, I'm happy to fix them. Just add a comment or create an issue in the Github repository.

Greetz,

Sascha

Friday, June 12, 2015

Series: How to create your own website based on Docker (Part 10 - Creating the nginx reverse proxy Docker container)

Let's glue it all together

This is part 10 of the series: How to create your own website based on Docker.

Let's recap: we've created the backend containers, consisting of a mongodb container and an ioJS/hapiJS container. We've also created the nginx/Angular 2.0 frontend container that makes use of the new backend.

As mentioned in the 4th part of this series we've defined that every request must go through an nginx reverse proxy. This proxy decides where the requests go to and what services are accessible from outside. So in order to make the whole setup available from the web, you need to configure the nginx reverse proxy that will route all requests to the proper docker container.

Source code

All files mentioned in this series are available on Github, so you can play around with it! :)


Talking about container links again

Remember that we have linked containers together? And that we don't know the IP addresses, because Docker takes care of that and reserves them while creating the container?

That's bad for a reverse proxy, since it needs to know where to route the requests to - but there's a good thing: IP addresses and ports are available as environment variables within each container. The bad thing: nginx configurations can't read environment variables! So using links makes creating this container harder and more complex than it should be - but of course there's a solution for that, which we will cover later in this post.
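To see what Docker actually injects, you can print the environment inside the running reverse proxy container once the whole stack is up (see part 11). With the link aliases used later in this post (blog and blogapi) you should see variables like these - the IP addresses will of course differ on your machine:
docker exec docker_nginxreverseproxy_1 env | grep BLOG
BLOG_PORT_8081_TCP_ADDR=172.17.0.29
BLOG_PORT_8081_TCP_PORT=8081
BLOGAPI_PORT_3000_TCP_ADDR=172.17.0.54
BLOGAPI_PORT_3000_TCP_PORT=3000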

So what does this container have to do?

Well, that's pretty simple... it needs to glue everything together, so it needs to collect all services that should be accessible from outside.

So in our case we'd need an nginx configuration for:
  • Our REST-API container based on ioJS & hapiJS
  • Our frontend container based on nginx and Angular 2.0

We don't want to expose:
  • Our mongodb container
  • Our ubuntu base container

Let's get started - creating the nginx image

Creating the nginx image is basically the same every time. Let's create a new directory called /opt/docker/nginx-reverse-proxy/ and within this new directory we'll create two more directories called config & html as well as our Dockerfile:
# mkdir -p /opt/docker/nginx-reverse-proxy/config/
# mkdir -p /opt/docker/nginx-reverse-proxy/html/
# > /opt/docker/nginx-reverse-proxy/Dockerfile
So just create your /opt/docker/nginx-reverse-proxy/Dockerfile with the following content:
# Pull base image.
FROM docker_ubuntubase
ENV DEBIAN_FRONTEND noninteractive
# Install Nginx.
RUN \
  add-apt-repository -y ppa:nginx/stable && \
  apt-get update && \
  apt-get install -y nginx && \
  rm -rf /var/lib/apt/lists/* && \
  chown -R www-data:www-data /var/lib/nginx
# Define mountable directories.
VOLUME ["/etc/nginx/certs", "/var/log/nginx", "/var/www/html"]
# Define working directory.
WORKDIR /etc/nginx
# Copy all config files
COPY config/default.conf /etc/nginx/conf.d/default.conf
COPY config/nginx.conf /etc/nginx/nginx.conf
COPY config/config.sh /etc/nginx/config.sh
RUN ["chmod", "+x", "/etc/nginx/config.sh"]
# Copy default webpage
RUN rm /var/www/html/index.nginx-debian.html
COPY html/index.html /var/www/html/index.html
COPY html/robots.txt /var/www/html/robots.txt
# Define default command.
CMD /etc/nginx/config.sh && nginx
Source: https://github.com/mastix/project-webdev-docker-demo/blob/master/nginx-reverse-proxy/Dockerfile

This Dockerfile also uses our Ubuntu base image, installs nginx and bakes our configuration into our nginx container.

What is this html folder for?

Before looking into the configuration we'll cover the easy stuff first. :)

While creating the directories you might have asked yourself why you need to create an html folder?! Well, that's simple: Since we're currently only developing api.project-webdev.com and blog.project-webdev.com we need a place to go when someone visits www.project-webdev.com - that's what this folder is for. If you don't have such a use case, you can also skip it - so this is kind of a fallback strategy.

The HTML page is pretty simple:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to this empty page!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to this empty page!</h1>
<p>If you see this page, you'll see that it is empty.</p>
<p><em>Will this change soon...? Hell yeah it will! ;)</em></p>
</body>
</html>
So let's put this code into the following file nginx-reverse-proxy/html/index.html.

The nginx configuration

Now it's getting a bit more difficult, but also more interesting. :)

As mentioned before, our nginx container needs to route all requests based on the URL to our containers.

So we need two routes/locations
  • api.project-webdev.com routes to my Docker REST API container
  • blog.project-webdev.com routes to my Docker blog container
Since we don't know the IP addresses to route to during development, we need to work with custom placeholders that we replace via a shell script once the container starts. In the following example you'll see that we're using two placeholders for our two exposed services:
  • BLOG_IP:BLOG_PORT
  • BLOGAPI_IP:BLOGAPI_PORT
We're going to replace these two placeholders with the correct values from the environment variables that Docker provides when linking containers together.

So you need a config file called /opt/docker/nginx-reverse-proxy/config/default.conf that contains your nginx server configuration:
upstream blog  {
      server BLOG_IP:BLOG_PORT; #Blog
}
upstream blog-api  {
      server BLOGAPI_IP:BLOGAPI_PORT; #Blog-API
}
## Start blog.project-webdev.com ##
server {
    listen  80;
    server_name  blog.project-webdev.com;
    access_log  /var/log/nginx/nginx-reverse-proxy-blog.access.log;
    error_log  /var/log/nginx/nginx-reverse-proxy-blog.error.log;
    root   /var/www/html;
    index  index.html index.htm;
    ## send request back to blog ##
    location / {
     proxy_pass  http://blog;
     proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
     proxy_redirect off;
     proxy_buffering off;
     proxy_set_header        Host            $host;
     proxy_set_header        X-Real-IP       $remote_addr;
     proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
   }
}
## End blog.project-webdev.com ##
## Start api.project-webdev.com ##
server {
    listen  80;
    server_name  api.project-webdev.com;
    access_log  /var/log/nginx/nginx-reverse-proxy-blog-api.access.log;
    error_log  /var/log/nginx/nginx-reverse-proxy-blog-api.error.log;
    ## send request back to blog api ##
    location / {
     proxy_pass  http://blog-api;
     proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
     proxy_redirect off;
     proxy_buffering off;
     proxy_set_header        Host            $host;
     proxy_set_header        X-Real-IP       $remote_addr;
     proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;

     # send the CORS headers
     add_header 'Access-Control-Allow-Credentials' 'true';
     add_header 'Access-Control-Allow-Origin'      'http://blog.project-webdev.com';
   }
}
## End api.project-webdev.com ##

This configuration file contains two server blocks: one that proxies requests to our blog (upstream blog {[...]}) and one that proxies to our API (upstream blog-api {[...]}). As mentioned before, we're going to replace the IP and the port soon. :)

So what's happening is that every request against blog.project-webdev.com will be passed on to the corresponding upstream:
server {
    listen  80;
    server_name  blog.project-webdev.com;
   [...]
    location / {
     proxy_pass  http://blog;
   [...]
   }
}
The same works for the REST API:
server {
    listen  80;
    server_name  api.project-webdev.com;
   [...]
    location / {
     proxy_pass  http://blog-api;
   [...]
   }
}

Using docker's environment variables

In order to use Docker's environment variables we need to run a script every time the container starts. This script checks the default.conf file that you have just created, replaces your placeholders with the values from the environment variables and then starts nginx.

See the last line of the Dockerfile that triggers the script execution:
CMD /etc/nginx/config.sh && nginx
Let's recap quickly: As mentioned in previous posts, Docker creates environment variables with IP and port information when you link containers together. These variables contain all information that you need to access your containers - and that's exactly what we want to do here.

The following script will replace our custom placeholders in our default.conf file with the corresponding values from the environment variables that Docker has created for us, so let's create the aforementioned /opt/docker/nginx-reverse-proxy/config/config.sh file:
#!/bin/bash
# Using environment variables to set nginx configuration
# Settings for the blog
echo "START UPDATING DEFAULT CONF"
[ -z "${BLOG_PORT_8081_TCP_ADDR}" ] && echo "\$BLOG_PORT_8081_TCP_ADDR is not set" || sed -i "s/BLOG_IP/${BLOG_PORT_8081_TCP_ADDR}/" /etc/nginx/conf.d/default.conf
[ -z "${BLOG_PORT_8081_TCP_PORT}" ] && echo "\$BLOG_PORT_8081_TCP_PORT is not set" || sed -i "s/BLOG_PORT/${BLOG_PORT_8081_TCP_PORT}/" /etc/nginx/conf.d/default.conf
[ -z "${BLOGAPI_PORT_3000_TCP_ADDR}" ] && echo "\$BLOGAPI_PORT_3000_TCP_ADDR is not set" || sed -i "s/BLOGAPI_IP/${BLOGAPI_PORT_3000_TCP_ADDR}/" /etc/nginx/conf.d/default.conf
[ -z "${BLOGAPI_PORT_3000_TCP_PORT}" ] && echo "\$BLOGAPI_PORT_3000_TCP_PORT is not set" || sed -i "s/BLOGAPI_PORT/${BLOGAPI_PORT_3000_TCP_PORT}/" /etc/nginx/conf.d/default.conf
echo "CHANGED DEFAULT CONF"
cat /etc/nginx/conf.d/default.conf
echo "END UPDATING DEFAULT CONF"
This script uses the basic sed (stream editor) command to replace the strings.

See the following example, which demonstrates how the IP address for the blog is replaced:
[ -z "${BLOG_PORT_8081_TCP_ADDR}" ] && echo "\$BLOG_PORT_8081_TCP_ADDR is not set" || sed -i "s/BLOG_IP/${BLOG_PORT_8081_TCP_ADDR}/" /etc/nginx/conf.d/default.conf

  • First it checks whether the BLOG_PORT_8081_TCP_ADDR exists as environment variable
  • If that is true, it will call the sed command, which looks for BLOG_IP in the /etc/nginx/conf.d/default.conf file (which has been copied from our Docker host into the image - see Dockerfile)
  • And will then replace it with the value from the environment variable BLOG_PORT_8081_TCP_ADDR.

And that's all the magic! ;)

So when the script has run, it will have replaced the placeholders in our config file so that it looks like this:
CHANGED DEFAULT CONF
upstream blog  {
      server 172.17.0.14:8081; #Blog
}
upstream blog-api  {
      server 172.17.0.10:3000; #Blog-API
}
... and therefore our nginx reverse proxy is ready to distribute our requests to our containers, since it now knows their IP addresses and ports! :)

Ok, since we have now created all our containers it's about time to start them up in the next post! :)

Wednesday, June 10, 2015

Series: How to create your own website based on Docker (Part 9 - Creating the nginx/Angular 2 web site Docker container)

It's about time to add some frontend logic to our project

This is part 9 of the series: How to create your own website based on Docker.

In the last two parts we've created the whole backend (REST API + database), so it's about time to create the website that makes use of it. Since we have a simple person REST API (see part 8), we need a site that can list all persons as well as create new ones. Since Angular 2.0 has achieved "Developer Preview" status, this sounds like a perfect demo for Angular 2.0!

A word of advice: Angular 2.0 is not production ready yet! This little project is perfect for playing around with the latest version of Angular 2.0, but please do not start new projects with it, since a lot of changes are still going to be introduced before the framework is officially released. If you have ever played around with the Angular 2.0 alpha, you probably don't want to use it for production anyway... it's still very, very unstable, and the hassle with the typings makes me sad every time I use Angular 2.0. But this will change soon, and then we'll be able to work with Angular like pros! :)

Source code

All files mentioned in this series are available on Github, so you can play around with it! :)

Technologies to be used

Our website will use the following technologies (and to make sure that we have full support of everything, I recommend using the latest greatest Chrome):
  • Angular 2.0 (Developer Preview) with TypeScript
  • gulp as build system and Bower for frontend dependencies (e.g. Twitter Bootstrap)
  • nginx to serve the static files
First things first - creating the nginx image

Creating the nginx image is basically the same every time. Let's create a new directory called /opt/docker/projectwebdev/ and within this new directory we'll create other directories called config and html as well as our Dockerfile:
# mkdir -p /opt/docker/projectwebdev/config/
# mkdir -p /opt/docker/projectwebdev/html/
# > /opt/docker/projectwebdev/Dockerfile
After creating the fourth Dockerfile, you should now be able to understand what this file is doing, so I'm not going into details here - as a matter of fact, this is a pretty easy one.
# Pull base image.
FROM docker_ubuntubase

ENV DEBIAN_FRONTEND noninteractive

# Install Nginx.
RUN \
  add-apt-repository -y ppa:nginx/stable && \
  apt-get update && \
  apt-get install -y nginx && \
  rm -rf /var/lib/apt/lists/* && \
  chown -R www-data:www-data /var/lib/nginx

# Define working directory.
WORKDIR /etc/nginx

# Copy all config files
COPY ./config/default.conf /etc/nginx/conf.d/default.conf
COPY ./config/nginx.conf /etc/nginx/nginx.conf

# Define default command.
CMD nginx
Source: https://github.com/mastix/project-webdev-docker-demo/blob/master/projectwebdev/Dockerfile

I've created the following config files that you have to copy to the /opt/docker/projectwebdev/config/ folder - the Dockerfile will make sure that these files get copied into the image:

default.conf:
## Start www.project-webdev.com ##
server {
    listen  8081;
    server_name  _;
    access_log  /var/log/nginx/project-webdev.access.log;
    error_log  /var/log/nginx/project-webdev.error.log;
    root   /var/www/html;
    index  index.html;
}
## End www.project-webdev.com ##
nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
daemon off;

events {
        worker_connections 768;
}

http {
  ##
  # Basic Settings
  ##
  sendfile on;
  tcp_nopush on;
  tcp_nodelay off;
  keepalive_timeout 65;
  types_hash_max_size 2048;
  server_tokens off;
  server_names_hash_bucket_size 64;
  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  ##
  # Logging Settings
  ##
  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  ##
  # Gzip Settings
  ##
  gzip on;
  gzip_disable "msie6";
  gzip_http_version 1.1;
  gzip_proxied any;
  gzip_min_length 500;
  gzip_types text/plain text/xml text/css
  text/comma-separated-values text/javascript
  application/x-javascript application/atom+xml;

  ##
  # Virtual Host Configs
  ##
  include /etc/nginx/conf.d/*.conf;
  include /etc/nginx/sites-enabled/*;
}

These two files are the basic configuration for our frontend server and will make sure that it listens on port 8081 (see architecture: 4-digit-ports are not exposed to the web) and enables gzip. These files contain basic settings and should be adjusted to your personal preferences when creating your own files.

That's it... your nginx frontend server is ready to run!

Let's create the frontend application

Since I don't want to re-invent the wheel, I'm going to use a pretty cool Angular 2.0 seed project by Minko Gechev. This project uses gulp (which I'm using for pretty much every frontend project) as its build system. We're going to use it as a bootstrapped application and add our person-demo-specific code to it - for the sake of simplicity I've tried not to alter the original code/structure much.

Our demo will do the following:

  • List all persons stored in our mongodb
  • Create new persons and store them in our mongodb

Important: This demo will not validate any input data nor does it implement any performance optimizations, it's just a very very little Angular 2.0 demo application that talks to our Docker REST container.

Let's get started with the basic project setup

To get started, I've created a fork of the original Angular 2.0 seed repository and you can find the complete application source code right here! So grab the source if you want to play around with it! :)

The gulp script in this project offers several tasks that build the project for us by collecting all files, minifying them and preparing our HTML markup. The task we're going to use is gulp build.prod (production build, which will minify the files). You can also use gulp build.dev (development build) if you want to be able to debug the generated JavaScript files in your browser. Whenever you run the build, all generated files the project needs are copied to dist/prod/ - so the files in this directory represent the website that you need to copy to your Docker host later; we'll cover that below.

Although I've said that I'm not going to alter the original code much, I've included Twitter Bootstrap via Bower - for those who don't know: Bower is a frontend dependency management framework. Usually you would call bower install to install all dependencies, but I added this call to the package.json file, so all you have to do is call npm install (which you have to do anyways when downloading my project from github), which will call bower install afterwards.
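So getting a local build of the frontend boils down to something like this (assuming node/npm and a globally installed gulp):
git clone https://github.com/mastix/person-demo-angular2-seed.git
cd person-demo-angular2-seed
npm install        # also triggers bower install via package.json
gulp build.prod    # production build, output ends up in dist/prod/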

The model

We've covered the basic project setup, so let's get started with the code.

Since we're using TypeScript, we can make use of types. So we're creating our own type which represents our Person, and thanks to TypeScript and ES6 we can use a class for that. This model consists of the same properties that we've used for our mongoose schema in our REST API (id, firstname, lastname). For that I have created a models directory, and within that directory I've added a file called Person.ts which contains the following code:
export class Person {
    private id:string;
    private firstname:string;
    private lastname:string;
    constructor(theId:string, theFirstname:string, theLastname:string) {
        this.id = theId;
        this.firstname = theFirstname;
        this.lastname = theLastname;
    }
    public getFirstName() {
        return this.firstname;
    }
    public getLastName() {
        return this.lastname;
    }
    public getId() {
        return this.id;
    }
}
Source: https://github.com/mastix/person-demo-angular2-seed/blob/master/app/models/Person.ts

The service

No matter if you're working with AngularJS, jQuery, ReactJS or Angular 2.0 you always have to make sure that you outsource your logic into a service or any other detached component that can be replaced if something changes. In Angular 2.0 we don't have a concept of Factories, Services and Providers like in AngularJS - everything is a @Component. So we're creating our PersonService class that allows us to read and store our data by firing XMLHttpRequests (XHR) to our REST API (api.project-webdev.com).

Since this service needs to work with our Person model, we need to import our model to our code. In TypeScript/ES6 we can use the import statement for that.
import {Person} from '../models/Person';
export class PersonService {
    getAllPersons() {
        var personService = this;
        return new Promise(function (resolve, reject) {
            personService.getJSON('http://api.yourdomain.com/person').then(function (retrievedPersons) {
                if (!retrievedPersons || retrievedPersons.length == 0) {
                    reject("ERROR fetching persons...");
                }
                resolve(retrievedPersons.map((p)=>new Person(p.id, p.firstname, p.lastname)));
            });
        });
    }
    addPerson(thePerson:Person) {
        this.postJSON('http://api.yourdomain.com/person', thePerson).then((response)=>alert('Added person successfully! Click list to see all persons.'));
    }
    getJSON(url:string) {
        return new Promise(function (resolve, reject) {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', url);
            xhr.onreadystatechange = handler;
            xhr.responseType = 'json';
            xhr.setRequestHeader('Accept', 'application/json');
            xhr.send();
            function handler() {
                if (this.readyState === this.DONE) {
                    if (this.status === 200) {
                        resolve(this.response);
                    } else {
                        reject(new Error('getJSON: `' + url + '` failed with status: [' + this.status + ']'));
                    }
                }
            }
        });
    }
    postJSON(url:string, person:Person) {
        return new Promise(function (resolve, reject) {
            var xhr = new XMLHttpRequest();
            var params = `id=${person.getId()}&firstname=${person.getFirstName()}&lastname=${person.getLastName()}`;
            xhr.open("POST", url, true);
            xhr.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
            xhr.onreadystatechange = handler;
            xhr.responseType = 'json';
            xhr.setRequestHeader('Accept', 'application/json');
            xhr.send(params);
            function handler() {
                if (this.readyState === this.DONE) {
                    if (this.status === 201) {
                        resolve(this.response);
                    } else {
                        reject(new Error('getJSON: `' + url + '` failed with status: [' + this.status + ']'));
                    }
                }
            }
        });
    }
}
Source: https://github.com/mastix/person-demo-angular2-seed/blob/master/app/services/PersonService.ts

The application

Since we can read and store persons now, it's about time to take care of the UI and the point where everything starts is the Angular 2.0 application itself. It creates the whole application by glueing logic and UI together. The file we're talking about here is the app.ts.
import {Component, View, bootstrap, NgFor} from 'angular2/angular2';
import {RouteConfig, RouterOutlet, RouterLink, routerInjectables} from 'angular2/router';
import {PersonList} from './components/personlist/personlist';
import {PersonAdd} from './components/personadd/personadd';
@Component({
    selector: 'app'
})
@RouteConfig([
    {path: '/', component: PersonList, as: 'personlist'},
    {path: '/personadd', component: PersonAdd, as: 'personadd'}
])
@View({
    templateUrl: './app.html?v=<%= VERSION %>',
    directives: [RouterOutlet, RouterLink]
})
class App {
}
bootstrap(App, [routerInjectables]);

Source: https://github.com/mastix/person-demo-angular2-seed/blob/master/app/app.ts


We need four different imports in our application:
  • Everything that is needed in order to render the application correctly
  • Everything that is needed in order to route our links correctly
  • Our page that allows us to list all stored persons
  • Our page that allows us to create a new person

Let's have a look at some snippets:
@Component({
    selector: 'app'
})
You can think of Angular apps as a tree of components. This root component we've been talking about acts as the top level container for the rest of your application. The root component's job is to give a location in the index.html file where your application will render through its element, in this case <app>. There is also nothing special about this element name; you can pick it as you like. The root component loads the initial template for the application that will load other components to perform whatever functions your application needs - menu bars, views, forms, etc.
@RouteConfig([
    {path: '/', component: PersonList, as: 'personlist'},
    {path: '/personadd', component: PersonAdd, as: 'personadd'}
])
Our demo application will have two links: one that loads a page which lists all stored persons and another one that allows us to create a new one. So we need two routes in this case. Each route is directly linked to a component, which we'll cover in a sec.
@View({
    templateUrl: './app.html?v=<%= VERSION %>',
    directives: [RouterOutlet, RouterLink]
})
The @View annotation defines the HTML that represents the component. The component I've developed uses an external template, so it specifies a templateUrl property including the path to the HTML file. Since we need to iterate over our stored persons, we need to inject the NgFor directive that we have imported. You can only use directives in your markup if they are specified here. Just skip the <%= VERSION %> portion, as this is part of the original Angular 2.0 seed project and not changed in our application.
bootstrap(App, [routerInjectables]);
At the bottom of our app.ts we call the bootstrap() function to load our new component into its page. The bootstrap() function takes the component and our injectables as parameters, enabling the component (as well as any child components it contains) to render.

The index.html

The index.html represents the outline which will be filled with our components later. This is a pretty basic html file, that uses the aforementioned <app> tag (see our app.ts above) - you might also want to use a name like <person-app> or something, but then you need to adjust your app.ts. This is the hook point for our application.
<!DOCTYPE html>
<head>
[…]
</head>
<body>
[…]
<div class="jumbotron">
    <div class="container">
        <h1>Person Demo</h1>
        <p>This is just a little demonstration about how to use Angular 2.0 to interact with a REST API that we have
            created in the following series: <a
                    href="http://project-webdev.blogspot.com/2015/05/create-site-based-on-docker-part1.html"
                    target="_blank">Series: How to create your own website based on Docker </a>
    </div>
</div>
<div class="container">
    <app>Loading...</app>
    [...]
</div>
<!-- inject:js -->
<!-- endinject -->
<script src="./init.js?v=<%= VERSION %>"></script>
</body>
</html>
That's it... we've created our base application. Now everything we'll create will be rendered in the <app> portion of the page.

Creating the person list

Since encapsulation is very important in huge deployments, we're adding all self-contained components into the following folder: /app/components/ so in terms of the person list, this is going to be a folder called /app/components/personlist/.

Each component consists of
  • the component code itself
  • the template to use

As mentioned before, everything in Angular 2.0 is a component and so the structure of our personlist.ts pretty much looks like the app.ts.
import {Component, View,NgFor} from 'angular2/angular2';
// import the person list, which represents the array that contains all persons.
import {PersonService} from '../../services/PersonService';
//import our person model that represents our person from the REST service.
import {Person} from '../../models/Person';
@Component({
    selector: 'personlist',
    appInjector: [PersonService]
})
@View({
    templateUrl: './components/personlist/personlist.html?v=<%= VERSION %>',
    directives: [NgFor]
})
export class PersonList {
    personArray:Array<string>;
    constructor(ps:PersonService) {
        ps.getAllPersons().then((array)=> {
            this.personArray = array;
        });
    }
}
Source: https://github.com/mastix/person-demo-angular2-seed/blob/master/app/components/personlist/personlist.ts

As you can see, we're importing the following:

  • Standard Angular components needed to render the page (the NgFor directive is used to iterate through our list later)
  • Our Person service
  • Our Person model

We need to inject our PersonService into our component and the NgFor directive into our View, so we can use them later (e.g. see https://github.com/mastix/person-demo-angular2-seed/blob/master/app/components/personlist/personlist.html).

The real logic happens in the PersonList class itself - ok, there's not much here... but it's important. The constructor of this class uses the PersonService to fetch all Persons (the service will then fire a request to our API to fetch the list of persons) and to store them in an array. This array will then be accessible in the view, so we can iterate over it.
<table class="table table-striped">
    <tr>
        <th>ID</th>
        <th>FIRST NAME</th>
        <th>LAST NAME</th>
    </tr>
    <tr *ng-for="#person of personArray">
        <td>{{person.id}}</td>
        <td>{{person.firstname}}</td>
        <td>{{person.lastname}}</td>
    </tr>
</table>
Source: https://github.com/mastix/person-demo-angular2-seed/blob/master/app/components/personlist/personlist.html

We're using a table to represent the list of persons. So the only thing we need to do is to iterate over the personArray that we have created in our PersonList component. In every iteration we're creating a row (tr) with 3 fields (td) that contains the person's id, first name and last name.

Creating the person add page

Ok, since we can now list all persons, let's add the possibility to create a new one. We're following the same pattern here and create a personadd component (/app/components/personadd) that consists of some logic and a view as well.
import {Component, View, NgFor} from 'angular2/angular2';
// import the person list, which represents the array that contains all persons.
import {PersonService} from '../../services/PersonService';
//import our person model that represents our person from the REST service.
import {Person} from '../../models/Person';
@Component({
    selector: 'personadd',
    appInjector: [PersonService]
})
@View({
    templateUrl: './components/personadd/personadd.html?v=<%= VERSION %>',
})
export class PersonAdd {
    addPerson(theId, theFirstName, theLastName) {
        new PersonService().addPerson(new Person(theId, theFirstName, theLastName));
    }
}
Source: https://github.com/mastix/person-demo-angular2-seed/blob/master/app/components/personadd/personadd.ts

I'm not going to cover the annotations here, since they follow pretty much the same pattern as the PersonList. What's important here is that the PersonAdd class offers a method called addPerson(), which takes three parameters: id, firstname and lastname. Based on these parameters we can create our Person model and call our PersonService to store it on our server (in our mongodb Docker container, via our ioJS REST Docker container).

Important: Usually you would add some validation here, but for the sake of simplicity I've skipped that.

As mentioned before, everything that we specify in the class will be available in the View, so this method can later be called from the HTML markup.
<form>
    <div class="form-group">
        <label for="inputId">ID</label>
        <input #id type="number" class="form-control" id="inputId" placeholder="Enter ID">
    </div>
    <div class="form-group">
        <label for="inputFirstName">First name</label>
        <input #firstname type="text" class="form-control" id="inputFirstName" placeholder="First name">
    </div>
    <div class="form-group">
        <label for="inputLastName">First name</label>
        <input #lastname type="text" class="form-control" id="inputLastName" placeholder="Last name">
    </div>
</form>
<button class="btn btn-success" (click)="addPerson(id.value, firstname.value, lastname.value)">Add Person</button>
Source: https://github.com/mastix/person-demo-angular2-seed/blob/master/app/components/personadd/personadd.html

I could have used angular2/forms here, but believe me, it is not ready to work with... I've struggled so much that I've decided to skip it (e.g. I'd have to update my type definitions and so on...). But what's really important here is that we can call our addPerson() method from our PersonAdd component and pass the values from our fields. Pretty easy, right?

Now we can build our project by running gulp build.prod and copy the contents of the newly created dist/prod/ folder to our Docker host. Remember: in our docker-compose file we've specified that our /opt/docker/projectwebdev/html folder will be mounted in our container (as /var/www/html). So we can easily update our HTML files and the changes will be reflected on our website on-the-fly.
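A minimal deployment step could therefore look like this - assuming you build directly on the Docker host (otherwise replace the cp with scp/rsync to the host):
gulp build.prod
cp -r dist/prod/* /opt/docker/projectwebdev/html/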

So when you've copied all files, the directory structure should look like this:
├── config
│   ├── default.conf
│   └── nginx.conf
├── Dockerfile
└── html
    ├── app.html
    ├── app.js
    ├── bootstrap
    │   └── dist
    │       └── css
    │           └── bootstrap.min.css
    ├── components
    │   ├── personadd
    │   │   └── personadd.html
    │   └── personlist
    │       └── personlist.html
    ├── index.html
    ├── init.js
    ├── lib
    │   └── lib.js
    └── robots.txt

Here is what it looks like later

Adding a new person


Listing all persons



That's it... we have the backend and the frontend now... it's about time to create our nginx reverse proxy to make them all accessible!