Dec 31 2015
 
Synology git

Running git on GitHub is great, but since all repositories are public, there’s a certain danger of publishing passwords, API keys or similar on it. See here.

Since I got a Synology NAS, I can install a git server package! It does not do a lot, but it’s enough to get started and have my repositories on the NAS. One central location. One location where making a backup is much easier than on random devices.

Requirements for this to work:

  • Enable ssh access on the NAS
  • Have a home directory with .ssh and .ssh/authorized_keys so you can log in via ssh without needing to enter a password (a sketch of how to set this up follows below)
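A minimal sketch of the key setup, assuming the NAS is reachable as ds.lan (as used further down) and your account already exists there:

# On the client: create a key pair if you don't have one yet
ssh-keygen -t rsa
# Copy the public key to the NAS account (creates ~/.ssh/authorized_keys there)
ssh-copy-id YOURACCOUNT@ds.lan
# Test: this should log in without a password prompt
ssh YOURACCOUNT@ds.lan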

Now to set up a git repository:

  • Install the git server Synology package
  • Log in as root on the NAS
  • cd /volumeX (x=1 in my case since I have only one volume)
    mkdir git-repos ; chown YOURACCOUNT:users git-repos
  • Log in as you on the NAS
  • cd git-repos
    mkdir dockerstuff; cd dockerstuff ; git init --bare --shared ; cd ..
  • Repeat for other repositories/directories

Now on a git client do:

  • git clone ds.lan:/volume1/git-repos/dockerstuff
  • put files in here
  • git add * ; git commit -a -m "Initial population" ; git push
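If a repository already exists locally, pointing it at the NAS works too. A sketch, using the same paths as above and a placeholder project name:

cd existing-project
git remote add origin ds.lan:/volume1/git-repos/dockerstuff
git push -u origin master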

 

Dec 27 2015
 
WordPress in Docker

By the time you read this, this blog will have moved to a (currently) Google Cloud VM instance using Docker.

There are a lot of examples of how to set up containers for this combination of WordPress and MySQL, but the migration from an existing installation is nowhere mentioned. So here are my notes, from the end to the start:

Run this blog in Docker

  1. To start I need
    1. a server which can run Docker
    2. Docker 1.9.x as I need the volume command
    3. a MySQL dump of the cold-copied DB files: mysql-backup.dump
    4. a tar archive from the wordpress directory: wordpress-backup.tar.gz
    5. a DNS entry to point to the Docker host
    6. Port 80 on the Docker host exposed to a public IP
  2. On the Docker host, create 3 empty data volumes which will host the persistent DB data, wordpress directory and backups:
    docker volume create --name blog-mysql-data
    docker volume create --name wordpress-data
    docker volume create --name blog-backup
  3. Populate blog-backup with the backup files:
    docker run -it -v blog-backup:/backup -v /home/harald:/data debian:8 bash
    cp /data/mysql-backup.dump /data/wordpress-backup.tar.gz /backup
    exit
  4. blog-backup is a volume which contains, in /backup, backups of the wordpress directory as well as the (cold) MySQL DB. Extract them like this:
    docker run -it -v blog-backup:/backup \
    -v blog-mysql-data:/var/lib/mysql \
    -v wordpress-data:/var/www/html debian:8 bash
    cd /var/lib
    tar xfv /backup/mysql-backup.dump
    cd /var/www/html
    tar xfv /backup/wordpress-backup.tar.gz
    exit
  5. Start MySQL Docker container first
    docker run --name blog-mysql \
    -v blog-mysql-data:/var/lib/mysql \
    -d \
    mysql/mysql-server:5.7
  6. Now start WordPress (and replace the passwords and account of course)
    docker run --name blog-app \
    -e WORDPRESS_DB_PASSWORD=MY_PASSWORD \
    -e WORDPRESS_DB_USER=MY_ACCOUNT \
    --link blog-mysql:mysql \
    -v wordpress-data:/var/www/html \
    -p 80:80 \
    -w /var/www/html/wordpress \
    -d \
    wordpress:latest
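A quick sanity check once both containers run (nothing specific to this setup, just standard Docker commands):

docker ps
docker logs blog-mysql
docker logs blog-app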

Google Cloud Configuration

When creating the VM which runs Docker, make sure you get Docker 1.9 or newer, as the docker volume command does not exist in older versions. For now (December 2015) that means choosing the beta CoreOS image for your Google Cloud VM.

You also need to be able to copy files to the VM.
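One way to do that, sketched here under the assumption that the Cloud SDK is set up as in the gcloud examples further down and that the instance is called instance-1 (newer SDK versions call this subcommand gcloud compute scp):

gcloud compute copy-files mysql-backup.dump wordpress-backup.tar.gz instance-1:~/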

Besides this, allow HTTP and HTTPS traffic from outside and remember the IP assigned to the VM.

DNS

My DNS is hosted on linode.com, so I have to change the blog’s DNS entry there. The TTL is now 5 min (instead of the 1 h default) to make testing a bit faster.

Alternatively, during testing it’s sufficient to make the machine which runs the browser point the FQDN to the current IP. The Docker host and the containers do not care about their external IP.
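For example, a hypothetical /etc/hosts entry on the browser machine (IP and hostname are placeholders):

203.0.113.10   blog.example.org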

Initial Population of the DB Filesystem

The first population of the MySQL data was a bit tedious and manual due to an unexpectedly needed DB upgrade.

  1. Have a dump from the SQL DB:
     mysqldump -pMY_ADMINPW --all-databases > mysql-backup.dump
  2. Run MySQL with the dump available: mount the directory which contains the dump as /data2 and the (empty) blog-mysql-data volume under /var/lib/mysql, and set an initial DB root password:
    docker run -it \
    -v /home/harald/dockerstuff/mysql-data:/data2 \
    -v blog-mysql-data:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=mypw -P mysql/mysql-server:5.7
  3. Since we did not name this container, we have to find its name or container ID:
    CONTAINERID=`docker ps --format "{{.ID}} {{.Image}}" | grep mysql-server | awk '{print $1}'`
    docker exec -it $CONTAINERID bash
  4. Inside the mysql container now run the import:
    mysql -u root -pmypw </data2/mysql-backup.dump
  5. and do a DB upgrade and stop mysql:
    mysql_upgrade -u root -pmypw
    pkill -SIGTERM mysqld
  6. By now the DB root password is changed to the one from the dump, and blog-mysql-data contains a working MySQL DB with the last dump we took.
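An optional sanity check (a sketch; the container name blog-mysql-check is just a throwaway placeholder): start a DB container on the now populated volume and list the databases, then clean up again:

docker run --name blog-mysql-check -v blog-mysql-data:/var/lib/mysql -d mysql/mysql-server:5.7
docker exec -it blog-mysql-check mysql -u root -p -e "SHOW DATABASES;"
docker stop blog-mysql-check ; docker rm blog-mysql-check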

 

Initial Population of the WordPress Filesystem

I initially tried to use a plain vanilla WordPress Docker image, as unmodified as possible, but since I needed to add plugins and themes, I tried to find a programmatic way to add them. While thinking about future updates of WordPress and its plugins, I realized that a separate data volume for the wordpress directory is in order. The alternative would have been to rewrite /entrypoint.sh in the original WordPress Docker container.

  1. Start the WordPress Docker container with no modifications but let it connect to the MySQL container:
    docker volume create --name wordpress-data
    docker run -it --name blog-app \
    -e WORDPRESS_DB_PASSWORD=MY_PASSWORD \
    -e WORDPRESS_DB_USER=MY_ACCOUNT \
    --link blog-mysql:mysql \
    -v wordpress-data:/var/www/html \
    -p 80:80 \
    -w /var/www/html/wordpress wordpress:latest
  2. The /entrypoint.sh script will populate /var/www/html/ with a complete WordPress instance. Changing the WORKDIR to /var/www/html/wordpress puts those files where I need them, as that’s where the files are on the old server.
  3. Now you can stop the WordPress container. The data files are kept.
  4. I had to put a lot of my uploaded images back:
    docker run -it -v /home/harald:/data2 \
    -v wordpress-data:/var/www/html debian:8 bash
  5. Inside the Debian container, copy the files to wordpress/wp-content/uploads:
    cp -pr /data2/wordpress/wp-content/uploads/* /var/www/html/wordpress/wp-content/uploads/
  6. The MySQL container was running at this time. Now start the WordPress container again:
    docker run -it --name blog-app \
    -e WORDPRESS_DB_PASSWORD=MY_PASSWORD \
    -e WORDPRESS_DB_USER=MY_ACCOUNT \
    --link blog-mysql:mysql \
    -v wordpress-data:/var/www/html \
    -p 80:80 \
    -w /var/www/html/wordpress wordpress:latest
  7. For testing, edit /etc/hosts of the machine with the browser to make the FQDN of the blog point to the IP of the Docker host.
  8. Now in a browser I was able to see everything from my blog, log in, update Akismet, install YAPB and install the Suffusion theme.
  9. Stop the container, mount the data volume as before, and create a tar archive of /var/www/html/wordpress as wordpress-backup.tar.gz:
    docker stop blog-app
    docker run -it -v wordpress-data:/var/www/html \
    -v /home/harald:/data2 debian:8 bash
    cd /var/www/html
    tar cfvz /data2/wordpress-backup.tar.gz wordpress
    exit

At this point wordpress-data contains the complete wordpress directory and I have a tar archive of it.

Outstanding Items

Backup

MySQL

MySQL is easy enough manually:

docker exec -it blog-mysql bash

Inside, run a mysqldump, then transfer the dump to a place off-site. Automate it by not starting bash but instead calling a script which makes the backup (a sketch follows below). Or run mysqldump from another server, but I think that causes more network traffic, and I’d need to expose the MySQL ports for the latter.
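A sketch of what the automated variant could look like, run from the Docker host via cron (password, backup paths and the off-site copy are placeholders):

# Dump all databases from inside the blog-mysql container into a dated file on the host
docker exec blog-mysql sh -c 'exec mysqldump -u root -pMY_ADMINPW --all-databases' \
  > /home/harald/backups/mysql-$(date +%Y%m%d).dump
# Then copy the file off-site, e.g. via scp or rsync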

WordPress

The WordPress directory is equally easy:

docker exec -it blog-app bash

Inside, again run a tar and/or rsync to a remote site (a sketch follows below).
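A sketch for the WordPress side (backup paths and destination are again placeholders):

# Stream a tar of the wordpress directory out of the running container onto the host
docker exec blog-app tar cfz - -C /var/www/html wordpress \
  > /home/harald/backups/wordpress-$(date +%Y%m%d).tar.gz
# Then transfer it off-site like the DB dump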

Potential Issues

  • MySQL and WordPress currently must run on the same Docker host. To have them on separate hosts, a network needs to be created to connect them.
    However, I would have been OK with having MySQL and WordPress in one container, as I do not plan to scale. Right now I use the micro instance of Google Cloud and I’m fine with this.
  • Disk space on the Docker host. It’s limited: 10 GB is what Google gives me (resp. what I assigned myself).
    Volumes use all the disk space they can get, so the backup volume WILL grow if I do daily dumps inside it with no expiration. I plan to move them off-site though, so I can delete old backups quickly.
  • If the Docker host fails/restarts, I have to manually restart my 2 containers (a possible mitigation is sketched after this list).
  • CPU/RAM of the f1-micro instance (1 shared CPU, 0.6GB RAM): it’s enough, but memory is used up:
    total       used       free     shared    buffers     cached 
    Mem:        607700     580788      26912      33404      12284     122960 
    -/+ buffers/cache:     445544     162156 
    Swap:            0          0          0
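For the restart issue, one option (a sketch, not what I currently do) is to start both containers with a Docker restart policy so the Docker daemon brings them back up after a reboot:

docker run --restart=always --name blog-mysql \
-v blog-mysql-data:/var/lib/mysql \
-d \
mysql/mysql-server:5.7
# plus the same --restart=always flag on the blog-app "docker run" from above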

Comments

  • Note that the debian:8 image contains neither bzip2 nor xz.
Dec 06 2015
 
Google Cloud and Docker

Using the developer console of Google Cloud, deploying a CoreOS VM, connecting to it, and starting a Docker container with docker commands was easy.

Here now is the command-line version:

# Might need this once:
ssh-agent bash
ssh-add google_compute_engine
gcloud components update
gcloud config set compute/zone asia-east1-b
# Start instance
gcloud compute instances start instance-1
# See what we got
gcloud compute instances describe instance-1
# Set DNS
gcloud dns record-sets transaction start --zone="kubota7"
gcloud dns record-sets transaction add --zone="kubota7" --name="mq.kubota7.info." --ttl=300 --type=A "104.144.197.212"
gcloud dns record-sets transaction execute --zone="kubota7"
# Add firewall ports to open (port 1883)
gcloud compute firewall-rules create "mqtt" --allow tcp:1883 --description "MQTT traffic" --network "default" --source-ranges "0.0.0.0/0"

Now the fun part! We have a DNS record for a host which can run Docker containers. We allowed tcp:1883. Now let’s start it:

gcloud compute ssh instance-1 --command "docker run -d -p 1883:1883 hkubota/mosquitto:1.4.5"

Done!
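A quick test from any machine with the mosquitto client tools installed (a sketch; assumes the new DNS record has already propagated):

# Subscribe in the background, then publish a test message
mosquitto_sub -h mq.kubota7.info -t test &
mosquitto_pub -h mq.kubota7.info -t test -m "hello"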
Now tearing it all down:

gcloud compute ssh instance-1 --command "docker stop 3f262028d7abd0b9a5efa3b6bfc69c04e378244d8878f5fdf6e81c2ec38b8631"
yes | gcloud compute firewall-rules delete "mqtt"  
gcloud dns record-sets transaction start --zone="kubota7"
gcloud dns record-sets transaction remove --zone="kubota7" --name="mq.kubota7.info." --ttl 300 --type A "104.144.197.212"
gcloud dns record-sets transaction execute --zone="kubota7"
gcloud compute instances stop instance-1
Dec 05 2015
 
Docker! Docker! Docker!

Played a bit with Docker. I read about it (hard to miss, isn’t it?) and wanted to test it out a bit: set up a Docker container, deploy it in a cloud, and see what you can do.

Turns out: this is not only easy, it also quickly becomes clear why it’s great for dev-ops style work. And why it’s security-wise difficult to use when dev and ops are separated for regulatory reasons.

Important discoveries and notes:

  • You need an underlying OS to run docker containers. Anything will do as long as it’s Linux with support for needed features:
    • kernel 3.10 or newer
    • aufs
    • cgroups
  • A good choice is CoreOS. Tested, and worked. Very small too. Ubuntu is fine too. As is CentOS. Anything modern really. The only thing this OS does is start/stop docker containers. Plus monitoring them if you want to. And store their images.
  • Your app runs all alone in that container. Thus you can use root, but to protect you from yourself, running as a normal user is preferred. In the Dockerfile, this will do it:
USER harald
WORKDIR /home/harald
ENTRYPOINT ["/bin/bash"]

 

  • Sharing local filesystems or having local data volumes in containers is easy:
# Create a volume and mountpoint src
docker create -v /src --name src debian:8 /bin/true
# View what you did
docker volume ls
# Use via
docker run -it --volumes-from src --user root hkubota/debian-dev:1

  • If you want to have non-local and persistent modifiable filesystems, you are out of luck. Unsolved problem. NFS is one possibility, with known drawbacks.
  • If you build an image, it has layers and the build is automatable and reproducible. If you cannot build one, a commit of a running container creates an image too.
  • https://imagelayers.io/ shows nicely how images are built with all those layers.
  • Development creates a lot of images. Many are temporary, but they are kept anyway. Use this to clean them up, run via cron (original found here):
docker rm -v $(docker ps -a -q -f status=exited)
docker rmi $(docker images -f "dangling=true" -q)
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-v /var/lib/docker:/var/lib/docker --rm martin/docker-cleanup-volumes
  • Since your container only runs your stuff, it does not run things like cron out of the box. Or any other daemon for that matter. If you need it, have it be part of the container and make sure crond (in this case) starts as part of your container start.
  • If you use e.g. Debian 8 as the main Docker container OS, have a development system to create binaries with and for it. Using Alpine Linux as the OS and using pre-compiled binaries won’t work if glibc is expected and non-existent. A re-compile on Alpine will of course solve this.
    So stick to one container OS.
  • Security is a problem if you usually don’t allow people to be root. If you can log in on the container host and use Docker, becoming root on it is trivial.
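To illustrate the last point (a sketch): any user who may talk to the Docker daemon can mount the host’s root filesystem into a container and chroot into it, which amounts to a root shell on the host:

docker run --rm -it -v /:/host debian:8 chroot /host /bin/bash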

Pending:

  • How to run a DB? It needs persistent storage. How to do replication? How do I do backup?
  • MQTT similarly needs persistent storage. A small data volume container will do.
  • Networking. While exposing a port is easy, I did not yet need interconnections between containers.