Jul 29 2017
 
Serverless Computing AKA FaaS

While looking into what “Serverless Computing” actually means (the term sounds like an oxymoron), I created a presentation for my workplace’s “Knowledge Sharing” series. A total of 140 people attended; last year’s Docker presentation drew about 200.

I removed the counter and translation example for cost reasons. The example code is on GitHub for everyone to play with. Here are the slides.

 

Feb 19 2017
 
HAProxy, docker-compose and Let's Encrypt

Linode started to offer US$5 VMs and they are available in Tokyo (9 ms ping as opposed to 122 ms to California), so I could not resist getting another one and using it for some experiments which I simply don’t dare to do on this very blog server (and my wife is using it for work too).

 

Goal

  • 2 web servers serving web content (a simple static index.html will do here)
  • 3 virtual web addresses:
    • first web server
    • second web server
    • round-robin web server which serves data from the first and second web server
  • Everything with SSL without any warning from web browsers

Ingredients

  • 1 VM with a public IP address
  • 3 FQDNs pointing all to above IP address
    • www1.qw2.org
    • www3.qw2.org
    • www4.qw2.org
  • 2 web servers (www1, www3)
  • 1 load-balancer (www4)
  • 3 certificates for above FQDNs

Let’s Encrypt Certificates

I use acme.sh for my few certificate needs. This is how to get a new certificate issued:

./acme.sh --issue --dns dns_linode --dnssleep 1200 -d www4.qw2.org

This uses Linode’s DNS API (Linode hosts my DNS records); see more details here. acme.sh creates the required TXT record and removes it again afterwards. I found that a 1200-second wait works; 900 does not always. In practice I use 10 seconds, suspend the acme.sh command (shell ^Z), run “dig -t TXT _acme-challenge.www4.qw2.org” until it returns a TXT record, and then resume the suspended acme.sh command.
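For reference, that manual dance looks roughly like this (a sketch of the interactive session, not a script):

./acme.sh --issue --dns dns_linode --dnssleep 10 -d www4.qw2.org
# ^Z once acme.sh has created the TXT record and starts waiting
dig -t TXT _acme-challenge.www4.qw2.org +short    # repeat until the TXT record shows up
fg                                                # resume the suspended acme.sh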

You should then have a new directory www4.qw2.org in your acme.sh directory with those files:

harald@blue:~/.acme.sh$ ls -la www4.qw2.org/ 
total 36 
drwxr-xr-x  2 harald users 4096 Feb 19 10:16 . 
drwx------ 17 harald users 4096 Feb 19 00:26 .. 
-rw-r--r--  1 harald users 1647 Feb 19 10:16 ca.cer 
-rw-r--r--  1 harald users 3436 Feb 19 10:16 fullchain.cer 
-rw-r--r--  1 harald users 1789 Feb 19 10:16 www4.qw2.org.cer 
-rw-r--r--  1 harald users  517 Feb 19 10:16 www4.qw2.org.conf 
-rw-r--r--  1 harald users  936 Feb 19 00:26 www4.qw2.org.csr 
-rw-r--r--  1 harald users  175 Feb 19 00:26 www4.qw2.org.csr.conf 
-rw-r--r--  1 harald users 1675 Feb 19 00:26 www4.qw2.org.key

You’ll need the fullchain.cer and the private key www4.qw2.org.key later.

Repeat for www1 and www3 too.

Note that the private key is world-readable. The .acme.sh directory is therefore secured with 0700 permissions.

Setting up the Web Servers

I’m using lighttpd here. The full directory structure:

harald@lintok1:~$ tree lighttpd 
lighttpd 
├── 33100 
│   ├── etc 
│   │   ├── lighttpd.conf 
│   │   ├── mime-types.conf 
│   │   ├── mod_cgi.conf 
│   │   ├── mod_fastcgi.conf 
│   │   ├── mod_fastcgi_fpm.conf 
│   │   └── www1.qw2.org 
│   │       ├── combined.pem 
│   │       └── fullchain.cer 
│   └── htdocs 
│       └── index.html 
├── 33102 
│   ├── etc 
│   │   ├── lighttpd.conf 
│   │   ├── mime-types.conf 
│   │   ├── mod_cgi.conf 
│   │   ├── mod_fastcgi.conf 
│   │   ├── mod_fastcgi_fpm.conf 
│   │   └── www3.qw2.org 
│   │       ├── combined.pem 
│   │       └── fullchain.cer 
│   └── htdocs 
│       └── index.html 
└── docker-compose.yml

Writing the lighttpd.conf is simple and can be done in 5 or 10 minutes. The part that enables HTTPS is this:

$SERVER["socket"] == ":443" {
    ssl.engine    = "enable"
    ssl.pemfile   = "/etc/lighttpd/www1.qw2.org/combined.pem"
    ssl.ca-file   = "/etc/lighttpd/www1.qw2.org/fullchain.cer"
}

fullchain.cer is the one you get from the Let’s Encrypt run. “combined.pem” is created via

cat fullchain.cer www1.qw2.org.key > combined.pem

Here is the content of docker-compose.yml:

lighttpd-33100: 
  image: sebp/lighttpd 
  volumes: 
    - /home/harald/lighttpd/33100/htdocs:/var/www/localhost/htdocs 
    - /home/harald/lighttpd/33100/etc:/etc/lighttpd 
  ports: 
    - 33100:80 
    - 33101:443
  restart: always
 
lighttpd-33102: 
  image: sebp/lighttpd 
  volumes: 
    - /home/harald/lighttpd/33102/htdocs:/var/www/localhost/htdocs 
    - /home/harald/lighttpd/33102/etc:/etc/lighttpd 
  ports: 
    - 33102:80 
    - 33103:443
  restart: always

To start those 2 web servers, use docker-compose:

docker-compose up

If you want a reboot to automatically restart the containers, run “docker-compose start” afterwards, which installs it as a service.

To test, access: http://www1.qw2.org:33100, https://www1.qw2.org:33101, http://www3.qw2.org:33102, https://www3.qw2.org:33103

They should all work, and the https pages should show a proper security status (valid certificate, no name mismatch, etc.).
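If you prefer the command line, a quick curl check against all four URLs (same hosts and ports as above) could look like this:

for u in http://www1.qw2.org:33100 https://www1.qw2.org:33101 \
         http://www3.qw2.org:33102 https://www3.qw2.org:33103 ; do
    echo "== $u" ; curl -sI "$u" | head -1
done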

Adding HAProxy

HAProxy (1.7.2 at the time of writing) can act as the SSL termination point, in which case traffic between HAProxy and the web servers is unencrypted (resp. can be encrypted via another method), or HAProxy can simply forward the encrypted traffic. Which one is preferable depends on the application. In my case it makes the most sense to let HAProxy handle SSL.

First the full directory structure:

haproxy 
├── docker-compose.yml 
└── etc 
    ├── errors 
    │   ├── 400.http 
    │   ├── 403.http 
    │   ├── 408.http 
    │   ├── 500.http 
    │   ├── 502.http 
    │   ├── 503.http 
    │   ├── 504.http 
    │   └── README 
    ├── haproxy.cfg 
    └── ssl 
        └── private 
            ├── www1.qw2.org.pem 
            ├── www3.qw2.org.pem 
            └── www4.qw2.org.pem

The www{1,3}.qw2.org.pem files were copied from the lighttpd setup.
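HAProxy expects certificate chain and private key in one PEM file, so www4.qw2.org.pem is assembled the same way the combined.pem files were (a sketch, assuming the acme.sh directory layout from above):

cat ~/.acme.sh/www4.qw2.org/fullchain.cer ~/.acme.sh/www4.qw2.org/www4.qw2.org.key \
    > ~/haproxy/etc/ssl/private/www4.qw2.org.pem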

haproxy.cfg:

harald@lintok1:~/haproxy/etc$ cat haproxy.cfg  
global 
        user nobody 
        group users 
        #daemon 
 
        # Admin socket 
        stats socket /var/run/haproxy.sock mode 600 level admin 
        stats timeout 2m 
 
        # Default SSL material locations 
        #ca-base /usr/local/etc/haproxy/ssl/certs 
        #crt-base /usr/local/etc/haproxy/ssl/private 
 
        # Default ciphers to use on SSL-enabled listening sockets. 
        # For more information, see ciphers(1SSL). 
        tune.ssl.default-dh-param 2048 
 
        ssl-default-bind-options no-sslv3 no-tls-tickets 
        ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
 
        ssl-default-server-options no-sslv3 no-tls-tickets 
        ssl-default-server-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
 
defaults 
    log     global 
    mode    http 
    option  dontlognull 
    timeout connect 5000 
    timeout client  50000 
    timeout server  50000 
    errorfile 400 /usr/local/etc/haproxy/errors/400.http 
    errorfile 403 /usr/local/etc/haproxy/errors/403.http 
    errorfile 408 /usr/local/etc/haproxy/errors/408.http 
    errorfile 500 /usr/local/etc/haproxy/errors/500.http 
    errorfile 502 /usr/local/etc/haproxy/errors/502.http 
    errorfile 503 /usr/local/etc/haproxy/errors/503.http 
    errorfile 504 /usr/local/etc/haproxy/errors/504.http 
    stats enable 
    stats uri /stats 
    stats realm Haproxy\ Statistics 
    stats auth admin:SOME_PASSWORD 
 
frontend http-in 
    bind *:80 
    acl is_www1 hdr_end(host) -i www1.qw2.org 
    acl is_www3 hdr_end(host) -i www3.qw2.org 
    acl is_www4 hdr_end(host) -i www4.qw2.org 
    use_backend www1 if is_www1 
    use_backend www3 if is_www3 
    use_backend www4 if is_www4 
 
frontend https-in 
    bind *:443 ssl crt /usr/local/etc/haproxy/ssl/private/ 
    reqadd X-Forward-Proto:\ https 
    acl is_www1 hdr_end(host) -i www1.qw2.org 
    acl is_www3 hdr_end(host) -i www3.qw2.org 
    acl is_www4 hdr_end(host) -i www4.qw2.org 
    use_backend www1 if is_www1 
    use_backend www3 if is_www3 
    use_backend www4 if is_www4 
 
backend www1 
    balance roundrobin 
    option httpclose 
    option forwardfor 
    server s1 www1.qw2.org:33100 maxconn 32 
 
backend www3 
    balance roundrobin 
    option httpclose 
    option forwardfor 
    server s3 www3.qw2.org:33102 maxconn 32 
 
backend www4 
    balance roundrobin 
    option httpclose 
    option forwardfor 
    server s4-1 www1.qw2.org:33100 maxconn 32 
    server s4-3 www3.qw2.org:33102 maxconn 32 
 
listen admin 
    bind *:1936 
    stats enable 
    stats admin if TRUE

Replace “SOME_PASSWORD” with an admin password for the admin user who can stop/start backends via the Web UI.

Here is the docker-compose.yml file to start HAProxy:

harald@lintok1:~/haproxy$ cat docker-compose.yml  
haproxy: 
  image: haproxy:1.7 
  volumes: 
    - /home/harald/haproxy/etc:/usr/local/etc/haproxy 
  ports: 
    - 80:80 
    - 443:443 
    - 1936:1936
  restart: always

To start haproxy, do:

docker-compose up

The Result

Now http://www1.qw2.org as well as https://www1.qw2.org work. No need for specific ports like 33100 or 33101 anymore. The same goes for www3.qw2.org. www4.qw2.org is a round-robin of www1 and www3, and it uses the www4 certificate for https. In all cases HAProxy terminates the SSL connections and presents the correct certificates.
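To watch the round-robin in action (assuming the two index.html files are distinguishable), a couple of requests against www4 should alternate between the two backends:

for i in 1 2 3 4 ; do curl -s https://www4.qw2.org/ ; done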

Related: on http://www4.qw2.org:1936/haproxy?stats you can see the statistics of HAProxy.

Connecting it all

Running the 2 web servers plus the load-balancer with all of them connected internally, and only the load-balancer visible on ports 80 resp. 443, needs a new docker-compose.yml (changed to version 3 syntax) and a small matching change in the haproxy.cfg file:

harald@lintok1:~/three$ cat docker-compose.yml 
version: '3' 
 
services: 
  lighttpd-33100: 
    image: sebp/lighttpd 
    volumes: 
      - /home/harald/lighttpd/33100/htdocs:/var/www/localhost/htdocs 
      - /home/harald/lighttpd/33100/etc:/etc/lighttpd 
    expose: 
      - 80 
    restart: always 
 
  lighttpd-33102: 
    image: sebp/lighttpd 
    volumes: 
      - /home/harald/lighttpd/33102/htdocs:/var/www/localhost/htdocs 
      - /home/harald/lighttpd/33102/etc:/etc/lighttpd 
    expose: 
      - 80 
    restart: always 
 
  haproxy: 
    image: haproxy:1.7 
    volumes: 
      - /home/harald/three/haproxy/etc:/usr/local/etc/haproxy 
    ports: 
      - 80:80 
      - 443:443 
      - 1936:1936 
    restart: always

No need for lighttpd to handle SSL anymore (port 443 no longer needs to be exposed there at all). Only HAProxy is visible from the outside. Small changes are needed in haproxy.cfg, but only in the backend sections:

[...]
backend www1 
    balance roundrobin 
    option httpclose 
    option forwardfor 
    server s1 lighttpd-33100:80 maxconn 32 
 
backend www3 
    balance roundrobin 
    option httpclose 
    option forwardfor 
    server s3 lighttpd-33102:80 maxconn 32 
 
backend www4 
    balance roundrobin 
    option httpclose 
    option forwardfor 
    server s4-1 lighttpd-33100:80 maxconn 32 
    server s4-3 lighttpd-33102:80 maxconn 32
 [...]

And with “docker ps” we can see what’s happening under the hood of docker-compose:

harald@lintok1:~/three$ docker ps 
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS         
     PORTS                                                              NAMES 
742a2e5388f2        sebp/lighttpd       "lighttpd -D -f /e..."   4 minutes ago       Up 3 minutes   
     80/tcp                                                             three_lighttpd-33100_1
9d4c61e6c162        sebp/lighttpd       "lighttpd -D -f /e..."   4 minutes ago       Up 3 minutes   
     80/tcp                                                             three_lighttpd-33102_1 
2e41dfa26ac9        haproxy:1.7         "/docker-entrypoin..."   4 minutes ago       Up 3 minutes   
     0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:1936->1936/tcp   three_haproxy_1 

 

 

 

Jul 09 2016
 

Since moving Docker containers is easy, and Amazon keeps patching and rebooting my little AWS server, it’s time to put the theory into practice and move to some other place.

The move had 4 parts:

  • Copy the data volumes over
  • Change DNS
  • Configure the load balancer
  • Start everything in the new place

The first one needed some scripting since moving data volumes is not a built-in Docker function. And that needs a corresponding restore function, of course. Here are the scripts, first the backup:

#!/bin/bash
# Does a backup from one or many Docker volumes
# Harald Kubota 2016-07-09

if [[ $# -eq 0 ]] ; then
  echo "Usage: $0 docker_volume_1 [docker_volume_2 ...]"
  echo "will create tar.bz2 files of the docker volumes listed"
  exit 10
fi

today=`date +%Y-%m-%d`

for vol in "$@" ; do
  backup_file=$vol-${today}.tar.bz2
  echo "Backing up docker volume $vol into $backup_file"
  # tar up the volume contents from inside a throwaway container ...
  docker run --rm -v $vol:/voldata -v $(pwd):/backup debian:8 \
    tar cf /backup/$vol.tar.tmp /voldata
  # ... and compress on the host (the debian:8 image has no bzip2, see the comment further down)
  bzip2 <$vol.tar.tmp >$backup_file && rm -f $vol.tar.tmp
done

and here the restore:

#!/bin/bash
# Restore a tar dump back into a Docker data volume
# tar file can be .tar or .tar.bz2
# Name is always VOLUME-NAME-20YY-MM-DD.tar[.bz2]
# Harald Kubota 2016-07-09

if [[ $# -ne 1 ]] ; then
  echo "Usage: $0 tarfile"
  echo "will create a new volume derived from the tarfile name and restore the tarfile data into it"
  exit 10
fi

today=`date +%Y-%m-%d`

for i in "$@" ; do
  volumename=$(echo $i | sed 's/\(.*\)-20[0-9][0-9]-[0-9][0-9]-[0-9][0-9]\.tar.*/\1/')
  docker volume create --name $volumename

  # if .bz2 then decompress first
  length=${#i}
  last4=${i:length-4}
  tar_name=$i
  delete_tar=0

  echo "Restoring tar file $i into volume $volumename"

  if [[ ${last4} == ".bz2" ]] ; then
    tar_name=$(basename $i .bz2)
    bunzip2 <$i >$tar_name
    delete_tar=1
  fi
  #echo "tar_name=$tar_name, delete_tar=$delete_tar"

  docker run --rm -v $volumename:/voldata -v $(pwd):/backup debian:8 \
    tar xfv /backup/$tar_name
  if [[ $delete_tar -eq 1 ]] ; then
    rm $tar_name
  fi
done

With this, moving Docker volumes is a piece of cake, and it doubles as a universal backup mechanism too.
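A complete move then looks roughly like this (a sketch; the script names backup-volume.sh and restore-volume.sh are just placeholders for the two scripts above, and “newhost” is the target machine):

# on the old host
./backup-volume.sh blog-mysql-data wordpress-data
scp blog-mysql-data-2016-07-09.tar.bz2 wordpress-data-2016-07-09.tar.bz2 newhost:
# on the new host
./restore-volume.sh blog-mysql-data-2016-07-09.tar.bz2
./restore-volume.sh wordpress-data-2016-07-09.tar.bz2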

Updating DNS is trivial, as is the Load Balancer part which maps the popular port 80 to the correct Docker instance. Starting is a simple “docker-compose up”.

Mar 06 2016
 
Using HAProxy with Web Mirrors

Setting up dockerized web mirrors is easy with this:


docker run -e web_source="http://www.theregister.co.uk/" -e port=3402 --expose=3402 -P --name=mirror_theregister -d hkubota/webmirror

and you’ll get a decent copy of The Register’s web page on port 3402. Thus access is via http://this.host:3402.

Access via a virtual hostname (e.g. theregister-mirror.this.host) would be nicer. That’s where HAProxy comes into play: as a reverse proxy. I found the basics here.

Here is the complete start script:

#!/bin/bash
# Create a valid and usable config
port_blog=`docker inspect mirror_haraldblog | jq -r '.[0].NetworkSettings.Ports."3401/tcp"[0].HostPort'`
port_reg=`docker inspect mirror_theregister | jq -r '.[0].NetworkSettings.Ports."3402/tcp"[0].HostPort'`

cat <<_EOF_ >~/haproxy/config/haproxy/haproxy.cfg
global
 user haproxy
 group users
 # Admin socket 
 stats socket /var/run/haproxy.sock mode 600 level admin 
 stats timeout 2m
 
 #daemon

 # Default SSL material locations
 #ca-base /etc/ssl/certs
 #crt-base /etc/ssl/private

# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL).
#ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL

defaults
 log global
 mode http
 option httplog
 option dontlognull
 timeout connect 5000
 timeout client 50000
 timeout server 50000
 errorfile 400 /etc/haproxy/errors/400.http
 errorfile 403 /etc/haproxy/errors/403.http
 errorfile 408 /etc/haproxy/errors/408.http
 errorfile 500 /etc/haproxy/errors/500.http
 errorfile 502 /etc/haproxy/errors/502.http
 errorfile 503 /etc/haproxy/errors/503.http
 errorfile 504 /etc/haproxy/errors/504.http
 stats enable 
 stats uri /stats 
 stats realm Haproxy\ Statistics 
 stats auth admin:ADMIN_PASSWORD
 
frontend http-in
 bind *:80
 acl is_site1 hdr_end(host) -i www1.studiokubota.com
 acl is_site2 hdr_end(host) -i www2.studiokubota.com
 use_backend site1 if is_site1
 use_backend site2 if is_site2

backend site1
 balance first
 option httpclose
 option forwardfor
 server realblog harald.studiokubota.com:80 maxconn 32 check
 server mirrorblog studiokubota.com:${port_reg} maxconn 32 check
 stats admin if true
backend site2
 balance roundrobin
 option httpclose
 option forwardfor
 server s2 studiokubota.com:${port_blog} maxconn 32

listen admin
 bind *:1936
 stats enable
 stats admin if TRUE
_EOF_

docker run -v ~/haproxy/config:/config -p 80:80 -d -p 1936:1936 hkubota/haproxy

Via HAProxy you can now access one mirror at http://www1.studiokubota.com and another at http://www2.studiokubota.com, which is much nicer to use.

This setup is probably something for docker-compose if this were one application.

Update 2016-05-03: Enable admin interface: disable/enable backend servers via http://host:1936/haproxy?stats

 

Jan 24 2016
 
AWS does not like my Docker :-(

My WordPress/MySQL Docker instances on AWS are unhappy: after some time, they stop. No idea why. All I see is

Failed Units: 1
 polkit.service

when I log in after WordPress seems to have died. And no Docker process is running. While I can restart them easily, it’s annoying to do so manually. At this point I have 2 possibilities: find the root cause, or work around it via monitoring and automatic restarting when needed.

And a load balancer to keep the service available even when AWS kills my Docker containers would be nice. A load balancer has to do monitoring anyway, and since this topic came up at work too, it’s doubly useful. So a load balancer it is!

A nice article, although a bit dated, is here. The author knows what he’s talking about: he invented elastic binary trees, wrote HAProxy, and made it saturate a 10G link back in 2009.

nginx is a popular HTTP load balancer (among other things it can do), but since HAProxy can handle TCP in general, it’s more universally useful for me, so that’s what it’ll be.

Step 1 is to have something to load balance. Because this blog runs on WordPress with its MySQL instance, doubling it is non-trivial: WordPress keeps files in its web directory, and the MySQL instances would need a multi-master replication setup.

Instead I’ll have a static copy of the web page…and then use a load balancer to switch to the static page when the primary is dead.

PS: To be fair AWS might not be at fault here. Truth is: I don’t know what’s causing the problem. On Google Cloud I had no such issue, but that’s about all I can say. I don’t expect problems in the Docker containers as they are quite literally identical. Even the version of CoreOS is identical.

Anyway, this all sounds like a good excuse to set up some load balancing and monitoring.

 

Jan 09 2016
 

Docker has a PID 1 problem: on normal Unix systems this is the init process, which does 3 important things (among others):

  1. Adopt orphans
  2. Reap zombies
  3. Forward signals

There are 2 ways to start processes in a Docker container:

  1. CMD <command> <param1> <param2> ...
  2. CMD ["executable", "param1", "param2", ...]

In the first case a shell (/bin/sh) runs your program, so PID 1 is /bin/sh. In the 2nd case your executable gets PID 1. Neither is good as neither can do what init normally does.

A fix is to run a proper init system (systemd, SysV init etc.) but that’s way more than you need. A more appropriate fix is to use a simple or dumb init. Like this: https://github.com/Yelp/dumb-init

A nice write-up from the Yelp engineering blog: http://engineeringblog.yelp.com/2016/01/dumb-init-an-init-for-docker.html
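A minimal sketch of a Dockerfile using it (the release URL and version are assumptions, check the dumb-init releases page for the current ones; the CMD is a placeholder):

FROM debian:8
# dumb-init binary from the GitHub releases page (URL/version are an assumption)
ADD https://github.com/Yelp/dumb-init/releases/download/v1.0.0/dumb-init_1.0.0_amd64 /usr/local/bin/dumb-init
RUN chmod +x /usr/local/bin/dumb-init
# dumb-init runs as PID 1, forwards signals and reaps zombies for the real command
ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]
CMD ["/your/server", "--and-its-args"]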

Note that this is not needed if

  • Your process runs as PID 1 and does not spawn new processes, or
  • Your containers are short-lived, so that the volume of potential zombie processes won’t matter, and
  • You don’t write any data, so a sudden SIGTERM from Docker won’t cause issues with data consistency.

Jan 02 2016
 
Docker Backup

I slightly changed the way I start my containers. Now they always mount the backup volume too:

docker run --name blog-mysql -v blog-backup:/backup \
-v blog-mysql-data:/var/lib/mysql -d mysql/mysql-server:5.7
docker run --name blog-app -v blog-backup:/backup \
-e WORDPRESS_DB_PASSWORD=DBPASSWORD -e WORDPRESS_DB_USER=WPDBACCOUNT \
--link blog-mysql:mysql -v wordpress-data:/var/www/html -p 80:80 \
-w /var/www/html/wordpress -d wordpress:latest

The reason is that I can run a backup job inside the container. That’s important for the DB backup, as now I can use mysqldump. Before it was: stop MySQL, tar up the DB files, start MySQL again.

Making a backup in each container:

tar -C /var/www/html -czf /backup/blog-wp-`date +%Y-%m-%d`.tar.gz .

resp.

mysqldump -pROOTMYSQLPASSWORD --all-databases | gzip >/backup/blog-db-`date +%Y-%m-%d`.tar.gz
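To run both backups from the Docker host (e.g. via cron) instead of logging into each container, something like this should do (container names as above; a sketch, untested):

docker exec blog-app sh -c 'tar -C /var/www/html -czf /backup/blog-wp-$(date +%Y-%m-%d).tar.gz .'
docker exec blog-mysql sh -c 'mysqldump -pROOTMYSQLPASSWORD --all-databases | gzip >/backup/blog-db-$(date +%Y-%m-%d).tar.gz'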

Now the problem is how to get those files out and to where…

Dropbox/Google Cloud don’t offer access via ftp/scp/etc. Time to look into the various storage offerings from Google/Amazon/Azure.

Jan 02 2016
 
Moving dockerized WordPress

One advantage of Docker is the easy move to other Docker providers. In my case: moving from Google Cloud to Amazon’s AWS. Same instance size, same OS (CoreOS 877). The main difference is the location: Google’s is in Taiwan, AWS’s is in Japan.

So I moved my MySQL/WordPress instance from here to AWS:

  1. Create a t1.micro instance with CoreOS 877 (1 shared CPU, 0.6 GB RAM)
  2. Set up ssh authentication for the main user (core for CoreOS, root usually)
  3. Create the 3 Docker data volumes and populate them with the last backup of the current MySQL/WordPress instance.
    1. Stop WordPress/MySQL. Mount the data volumes to the backup volume. Run a backup.
    2. Start MySQL/WordPress again
  4. Copy those 2 tar files to the new CoreOS AWS server.
  5. Restore the data volumes.
  6. Here I wish that Docker would allow copying data volumes from A to B.
  7. Run the very same docker commands to run MySQL and WordPress.
  8. Change DNS (or test by faking a DNS change on the desktop client which runs the browser)

And lo-and-behold, it worked as expected. No issue at all.

When you can read this, this blog has already moved to AWS. I’ll keep it here for a while.

Dec 27 2015
 
Wordpress in a Docker

When you read this, this blog has moved to a (currently) Google Cloud VM instance using Docker.

There are a lot of examples of how to set up containers for the combination of WordPress and MySQL, but the migration from an existing installation is mentioned nowhere. Thus here are my notes, from the end to the start:

Run this blog in Docker

  1. To start I need
    1. a server which can run Docker
    2. Docker 1.9.x as I need the volume command
    3. a MySQL dump of the cold copied DB files: mysql-backup.dump
    4. a tar archive from the wordpress directory: wordpress-backup.tar.gz
    5. a DNS entry to point to the Docker host
    6. Port 80 on the Docker host exposed to a public IP
  2. On the Docker host, create 3 empty data volumes which will host the persistent DB data, wordpress directory and backups:
    docker volume create --name blog-mysql-data
    docker volume create --name wordpress-data
    docker volume create --name blog-backup
  3. Populate blog-backup with the backup files:
    docker run -it -v blog-backup:/backup -v /home/harald:/data debian:8 bash
    cp /data/mysql-backup.dump /data/wordpress-backup.tar.gz /backup
    exit
  4. blog-backup is a volume which contains backups in /backup of the wordpress directory as well as the (cold) mysql DB. Extract like this:
    docker run -it -v blog-backup:/backup \
    -v blog-mysql-data:/var/lib/mysql \
    -v wordpress-data:/var/www/html debian:8 bash
    cd /var/lib
    tar xfv /backup/mysql-backup.dump
    cd /var/www/html
    tar xfv /backup/wordpress-backup.tar.gz
    exit
  5. Start MySQL Docker container first
    docker run --name blog-mysql \
    -v blog-mysql-data:/var/lib/mysql \
    -d \
    mysql/mysql-server:5.7
  6. Now start WordPress (and replace the passwords and account of course)
    docker run --name blog-app \
    -e WORDPRESS_DB_PASSWORD=MY_PASSWORD \
    -e WORDPRESS_DB_USER=MY_ACCOUNT \
    --link blog-mysql:mysql \
    -v wordpress-data:/var/www/html \
    -p 80:80 \
    -w /var/www/html/wordpress \
    -d \
    wordpress:latest

Google Cloud Configuration

When creating the VM which runs Docker, make sure you get Docker 1.9 or newer, as the docker volume command does not exist in older versions. For now (December 2015) that means choosing the beta CoreOS image for your Google Cloud VM.

Be able to copy files to the VM.

Besides this, make http and https traffic externally visible and remember the IP assigned to the VM.

DNS

My DNS is on linode.com, so I have to change the blog DNS entry there. TTL is now 5min (instead of 1h default) to make testing a bit faster.

Alternatively, during testing it’s sufficient to make the machine which runs the browser point the FQDN to the current IP. The Docker host and the containers don’t care about their external IP.
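On the client machine that means a single line in /etc/hosts (the IP is of course a placeholder for the VM’s external address):

203.0.113.10   harald.studiokubota.com    # placeholder IP, blog FQDN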

Initial Population of the DB Filesystem

The first population of the MySQL data was a bit tedious and manual due to the unexpected upgrade that was needed.

  1. Have a dump from the SQL DB:
     mysqldump -pMY_ADMINPW --all-databases > mysql-backup.dump
  2. Run mysql with the dump being available. Mount the directory which has the dump in /data2 and the (empty) blog-mysql-data under /var/lib/mysql and set an initial DB root password:
    docker run -it \
    -v /home/harald/dockerstuff/mysql-data:/data2 \
    -v blog-mysql-data:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=mypw -P mysql/mysql-server:5.7
  3. Since we did not name this container, we have to find its name or container ID:
    CONTAINERID=`docker ps --format "{{.ID}} {{.Image}}" | grep mysql-server | awk '{print $1}'`
    docker exec -it $CONTAINERID bash
  4. Inside the mysql container now run the import:
    mysql -u root -pmypw </data2/mysql-backup.dump
  5. and do a DB upgrade and stop mysql:
    mysql_upgrade -u root -pmypw
    pkill -SIGTERM mysqld
  6. By now the DB root password is changed to the one from the dump, and blog-mysql-data now has a working MySQL DB with the last dump we took.

 

Initial Population of the WordPress Filesystem

I initially tried to use a plain vanilla WordPress Docker image as unmodified as possible, but since I needed to add plugins and themes, I tried to find a programmatic way to add them. Thinking about future updates of WordPress and its plugins made me realize that a separate data volume for the wordpress directory is in order. The alternative would have been to rewrite /entrypoint.sh in the original WordPress Docker container.

  1. Start the WordPress Docker container with no modifications but let it connect to the MySQL container:
    docker volume create --name wordpress-data
    docker run -it --name blog-app \
    -e WORDPRESS_DB_PASSWORD=MY_PASSWORD \
    -e WORDPRESS_DB_USER=MY_ACCOUNT \
    --link blog-mysql:mysql \
    -v wordpress-data:/var/www/html \
    -p 80:80 \
    -w /var/www/html/wordpress wordpress:latest
  2. The /entrypoint.sh script will populate /var/www/html/ with a complete WordPress instance. Changing the WORKDIR to /var/www/html/wordpress puts those files where I need them, as that’s where the files are on the old server.
  3. Now you can stop the WordPress container. The data files are kept.
  4. I had to put a lot of my uploaded images back:
    docker run -it -v /home/harald:/data2 \
    -v wordpress-data:/var/www/html debian:8 bash
  5. Inside the Debian container copy the files to wordpress/wp-content/uploads
    cp -pr /data2/wordpress/wp-content/uploads/* /var/www/html/wordpress/wp-content/uploads/
  6. The MySQL container was running at this time. Now start the WordPress container again:
    docker run -it --name blog-app \
    -e WORDPRESS_DB_PASSWORD=MY_PASSWORD \
    -e WORDPRESS_DB_USER=MY_ACCOUNT \
    --link blog-mysql:mysql \
    -v wordpress-data:/var/www/html \
    -p 80:80 \
    -w /var/www/html/wordpress wordpress:latest
  7. For testing, edit /etc/hosts of the machine with the browser to make the FQDN of the blog point to the IP of the Docker host.
  8. Now in a browser I was able to see everything from my blog, log in, update Akismet, install YAPB and install the theme Suffusion.
  9. Stop the container, mount the data volume as before, and create a tar dump of /var/www/html/wordpress as wordpress-backup.tar.gz
    docker stop blog-app
    docker run -it -v wordpress-data:/var/www/html \
    -v /home/harald:/data2 debian:8 bash
    cd /var/www/html
    tar cfvz /data2/wordpress-backup.tar.gz wordpress
    exit

At this point wordpress-data contains the complete wordpress directory and I have a tar archive of it.

Outstanding Items

Backup

MySQL

MySQL is easy enough manually:

docker exec -it blog-mysql bash

Inside, run a mysqldump. Then transfer the dump to an off-site place. Automate it by not starting bash but instead calling a script which makes the backup. Or run mysqldump from another server, but I think that causes more network traffic, and I’d need to expose the MySQL ports for that.
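The non-interactive variant could be as simple as this, run from the Docker host (a sketch; password placeholder as used earlier):

docker exec blog-mysql sh -c 'mysqldump -pMY_ADMINPW --all-databases' | gzip > blog-db-$(date +%Y-%m-%d).sql.gz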

WordPress

WordPress directory is equally easy:

docker exec -it blog-app bash

Inside again run a tar and/or rsync to a remote site.

Potential Issues

  • MySQL and WordPress currently must run on the same Docker host. To have them on separate hosts, a network needs to be created to connect them.
    However I would have been OK with having MySQL and WordPress in one container since I don’t plan to scale. Right now I use the micro instance of Google Cloud and I’m fine with this.
  • Disk space on the Docker host is limited: 10 GB is what Google gives me (resp. what I assigned myself).
    Volumes use all the disk space they can get, so the backup volume WILL grow if I do daily dumps inside with no expiration. I plan to move them off-site though, so I can delete old backups quickly (see the sketch after this list).
  • If the Docker hosts fails/restarts, I have to manually restart my 2 containers.
  • CPU/RAM of the f1-micro instance (1 shared CPU, 0.6GB RAM): it’s enough, but memory is used up:
    total       used       free     shared    buffers     cached 
    Mem:        607700     580788      26912      33404      12284     122960 
    -/+ buffers/cache:     445544     162156 
    Swap:            0          0          0
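For the backup expiration mentioned above, a sketch of a cleanup via a throwaway container (here: delete dumps older than 14 days):

docker run --rm -v blog-backup:/backup debian:8 \
    find /backup -name 'blog-*' -mtime +14 -delete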

Comments

  • Note that the debian:8 image contains neither bzip2 nor xz.

Dec 06 2015
 
Google Cloud and Docker

Using the developer console of Google Cloud, deploying a CoreOS VM, connecting to it, and using docker commands to start a Docker container was easy.

Here is now the command line version:

# Might need this once:
ssh-agent bash
ssh-add google_compute_engine
gcloud components update
gcloud config set compute/zone asia-east1-b
# Start instance
gcloud compute instances start instance-1
# See what we got
gcloud compute instances describe instance-1
# Set DNS
gcloud dns record-sets transaction start --zone="kubota7"
gcloud dns record-sets transaction add --zone="kubota7" --name="mq.kubota7.info." --ttl=300 --type=A "104.144.197.212"
gcloud dns record-sets transaction execute --zone="kubota7"
# Add firewall ports to open (port 1883)
gcloud compute firewall-rules create "mqtt" --allow tcp:1883 --description "MQTT traffic" --network "default" --source-ranges "0.0.0.0/0"

Now the fun part! We have a DNS record for a host which can run Docker containers. We allowed tcp:1883. Now let’s start it:

gcloud compute ssh instance-1 --command "docker run -d -p 1883:1883 hkubota/mosquitto:1.4.5"

Done!
Now tearing it all down:

gcloud compute ssh instance-1 --command "docker stop 3f262028d7abd0b9a5efa3b6bfc69c04e378244d8878f5fdf6e81c2ec38b8631"
yes | gcloud compute firewall-rules delete "mqtt"  
gcloud dns record-sets transaction start --zone="kubota7"
gcloud dns record-sets transaction remove --zone="kubota7" --name="mq.kubota7.info." --ttl 300 --type A "104.144.197.212"
gcloud dns record-sets transaction execute --zone="kubota7"
gcloud compute instances stop instance-1