Docker has been the hyped technology in the backend world recently. I had no chance to use it at work, since the company I serve moved to a private PaaS years ago. Fortunately, I ran into problems with my own VPS, so I decided to try Docker, and now I want to share the process and my thoughts.
So far, I have used a Linode VPS to deploy my websites. I update and deploy my personal website frequently, and I use a GitHub webhook to trigger the update script. With the deterioration of China's web environment, I also have to run a VPN and Shadowsocks for myself, which is essential for productivity.
At present, there are several services running on the VPS:
It does not look complicated, but I did encounter problems with ops and security.
VPS provisioning is not automated, and provisioning by hand is tedious and boring.
In July, the VPS was attacked and I had to rebuild my Linode. I then changed the default SSH port, and rebuilding everything by hand took tens of minutes. Setting up the VPN server in particular is not easy and is error-prone.
In addition to the aforementioned attack, there are other kinds of security risks, for example shell injection; in fact, I just fixed one in my projects.
Different programs may conflict with each other. For example, the VPN needs iptables, and a wrong iptables configuration may block Nginx's ports (see the sketch below).
Also, the VPS is not only a production server but is also used for development, which leaves clutter in system directories.
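To illustrate the iptables conflict, here is a hypothetical sketch (the rules are illustrative, not my actual setup): a VPN setup script that tightens the default INPUT policy silently drops traffic to Nginx until its ports are explicitly re-opened.

# Hypothetical conflict: the VPN setup tightens the firewall...
iptables -P INPUT DROP
iptables -A INPUT -p udp --dport 500 -j ACCEPT    # IKE for the VPN
iptables -A INPUT -p udp --dport 4500 -j ACCEPT   # IPsec NAT traversal
# ...and Nginx stops answering until HTTP/HTTPS are allowed again:
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT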
Therefore, my requirements are:
Docker is a major containerization tool: it packs an application into an image and runs it in a container. The definition of a container is:
Using containers, everything required to make a piece of software run is packaged into isolated containers. Unlike VMs, containers do not bundle a full operating system - only libraries and settings required to make the software work are needed. This makes for efficient, lightweight, self-contained systems and guarantees that software will always run the same, regardless of where it’s deployed.[3]
Docker meets my requirements precisely because of its container model. I can run the independent parts in different containers and rebuild a new Linode efficiently by starting all the containers again. I could even scale my applications with Docker.
Every cutting-edge, popular (and somewhat over-hyped) technology solves real problems, and Docker is the trend in the infrastructure field.
I want to keep my current VPS node and build with Docker on it, as the latency between Tokyo and Beijing is good, with a ping of about 80 ms. Luckily, Linode has supported Docker since 2014.[4]
My Linux image is Ubuntu 14.04 LTS, and installation is quite easy;[5] simply execute:
curl -sSL https://get.docker.com/ | sh
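To check that the engine is actually up after the script finishes, a quick sanity check (standard Docker commands, nothing Linode-specific):

sudo docker version          # both client and server versions should be reported
sudo docker run hello-world  # pulls a tiny test image and runs it once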
P.S.: I used the Ubuntu 16.04 LTS image at first, since it is the latest, but it is not recommended: that version has a conflict with systemd. As a consequence, I got stuck starting docker-engine, though I eventually found the solution on Stack Overflow.
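For anyone hitting the same wall, the usual way to see why the engine refuses to start under systemd (generic systemd commands, not the specific fix I applied) is:

sudo systemctl status docker                        # shows whether the unit failed and why
sudo journalctl -u docker --no-pager | tail -n 50   # recent docker-engine logs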
Based on the “products” above, I need to build four containers:
An image is described by a Dockerfile, which is a shell-like description file. Actually, there are ready-made images for me on Docker Hub, which is the GitHub equivalent for Docker. This shows one of the advantages of Docker: we can distribute software as images.
I found hwdsl2/docker-ipsec-vpn-server; setting up an IPsec VPN server is so complicated that using an existing image saves me a lot of time.
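Running it is roughly a matter of pulling the image and starting a privileged container with the IPsec ports published. The sketch below follows the image's README from memory (the env file holds the PSK and user credentials), so check the upstream docs for the exact flags and variable names:

docker pull hwdsl2/ipsec-vpn-server
docker run --name ipsec-vpn-server -d --privileged \
    -p 500:500/udp -p 4500:4500/udp \
    --env-file ./vpn.env \
    hwdsl2/ipsec-vpn-server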
For simplicity, I built the Shadowsocks image myself.
It is not hard for a shell user to get started with Dockerfiles, which consist of a few declarations and installation steps. I simply got started by reading the reference.
FROM ubuntu:trusty
MAINTAINER David Zhang <crispgm@gmail.com>
RUN apt-get update \
    && apt-get install -y python-pip \
    && pip install shadowsocks
COPY etc/shadowsocks.json /etc/shadowsocks.json
EXPOSE 2968
CMD /usr/local/bin/ssserver -c /etc/shadowsocks.json -d start
The image is based on ubuntu:trusty, whose version does not have to match the host OS. Shadowsocks is distributed via pip, so executing pip install is the only installation step. Finally, don't forget to COPY the configuration file into the image and EXPOSE the inner port to the host OS. The CMD instruction is the starting point of the container.
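For completeness, a minimal sketch of what etc/shadowsocks.json could look like (placeholder values, not my real config); server_port has to match the EXPOSE line above:

{
    "server": "0.0.0.0",
    "server_port": 2968,
    "password": "change-me",
    "method": "aes-256-cfb",
    "timeout": 300
}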
Build the image from the Dockerfile:
docker build -t crisp/shadowsocks .
Start the container in detached mode:
docker run --name ssserver -d -p 2968:2968 crisp/shadowsocks
Then I ran docker ps, but found that the container had shut down: no containers were listed.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
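The stopped container is still visible with the -a flag, and its output explains the exit (standard docker subcommands):

docker ps -a          # lists exited containers as well, with their exit status
docker logs ssserver  # shows what the process printed before the container stopped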
Docker keeps a container alive only while there is a foreground process; a process that daemonizes itself makes the container exit. Shadowsocks offers a foreground mode, but typical server programs run as daemons. Here is a trick:
CMD /usr/local/bin/ssserver -c /etc/shadowsocks.json -d start \
&& tail -f /var/log/shadowsocks.log
As a result, the container keeps the tail command running in the foreground.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
81f57c6c0710 crisp/shadowsocks "/bin/sh -c '/usr/loc" 22 hours ago Up 22 hours 0.0.0.0:2967->2968/tcp ssserver
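An alternative, assuming Shadowsocks' foreground mode behaves as documented, is to drop the -d flag so that ssserver itself stays in the foreground and no tail trick is needed:

# Hypothetical alternative CMD: run ssserver in the foreground instead of daemonizing
CMD /usr/local/bin/ssserver -c /etc/shadowsocks.json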
So far, the containerization progress bar has reached 100%. Here are the Dockerfiles.
Compared to VMs, which work at a lower level, Docker focuses on virtualizing applications and produces deployable software. We can quickly build and ship applications.
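This is what makes the rebuild requirement from earlier realistic: a fresh node only needs Docker itself plus one run command per container. A hypothetical rebuild script (assuming the Dockerfiles are already cloned onto the new node; names and images are illustrative, not my exact setup) would look like:

#!/bin/sh
# Rebuild sketch for a fresh Linode: install Docker, then start every container.
curl -sSL https://get.docker.com/ | sh

docker build -t crisp/shadowsocks .
docker run --name ssserver -d -p 2968:2968 crisp/shadowsocks
docker run --name ipsec-vpn-server -d --privileged \
    -p 500:500/udp -p 4500:4500/udp --env-file ./vpn.env \
    hwdsl2/ipsec-vpn-server
docker run --name web -d -p 80:80 -p 443:443 nginx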
In addition, Docker plays an important role in the CI/CD field, and I will dive into that in the future.
1. Enable HTTPS with Let’s Encrypt. https://crispgm.com/page/enable-https-with-letsencrypt.html
2. Prepare for removal of PPTP VPN before you upgrade to iOS 10 and macOS Sierra. https://support.apple.com/en-us/HT206844
3. What is Docker. https://www.docker.com/what-docker
4. Docker on Linode. https://blog.linode.com/2014/01/03/docker-on-linode/
5. Docker Quick Reference. https://www.linode.com/docs/applications/containers/docker-quick-reference-cheat-sheet