Photo by Vishnu Mohanan
Check out Part 1 of this series if you haven't already.
This is a follow-up to my previous post. In this post I'm going to explore a few systems I didn't cover last time: self-hosted analytics (not Google Analytics), and our system for automatically updating the Docker container on our DigitalOcean server when a new version is pushed to our container registry. So let's dive into it.
Plausible Analytics
Plausible Analytics is extremely cool open source software that gives you basic web analytics for your site. What I like about Plausible versus Google Analytics is not only that it can be self-hosted and simple, but that it's also very privacy-focused. Since I'm simply building a blog site... I don't need to know everything about my users. I pretty much just want to know how many users came to my site, when, and from where, and Plausible solves this for me.
Additionally, it has an extremely simple UI, which was another big selling point. I used Google Analytics back in the day, and it was overwhelmingly complicated, especially when all I really want is some stupid-simple stats about my page. The other cool thing about Plausible is that if you don't want to host it yourself, you can pay for hosting on their platform. But as you know from my last post, I'm doing a lot of this stuff not for convenience, but to learn something and have fun along the way. So in this post we'll do things "the hard way".
Setting Up Plausible
First off, Plausible has great docs here. I'll show you a few things here as a supplement to those docs. First things first, you'll want to set up an HTTPS server for Plausible using Nginx.
My Nginx conf looks like this:
server {
    listen 80;
    listen [::]:80;

    server_name changeme.com;

    access_log /var/log/nginx/plausible.access.log;
    error_log /var/log/nginx/plausible.error.log;

    # Redirect plain-HTTP requests to our TLS origin.
    location / {
        return 301 https://$host$request_uri;
    }
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name changeme.com;
    server_tokens off;

    ssl_certificate /etc/letsencrypt/live/changeme.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/changeme.com/privkey.pem;

    ssl_buffer_size 8k;
    ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
    ssl_ecdh_curve secp384r1;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8;

    location / {
        proxy_pass http://localhost:8000; # this is our Plausible service
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header Referrer-Policy "no-referrer-when-downgrade" always;
        add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
    }
}
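Once the conf is in place, you'll want to enable it and reload Nginx. As a rough sketch, assuming a Debian-style layout with sites-available/sites-enabled (and "plausible" as the file name I picked), that looks something like:

# Enable the site, validate the config, then reload Nginx
sudo ln -s /etc/nginx/sites-available/plausible /etc/nginx/sites-enabled/plausible
sudo nginx -t            # catches syntax errors before they take the server down
sudo systemctl reload nginx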
So the first server block is what listens on port 80 for non-TLS-encrypted HTTP traffic. If a request comes in over HTTP, it simply redirects the user to our TLS origin. I'm using Let's Encrypt's Certbot to create my TLS cert, which is a free and fairly easy way to get TLS support on your site, which you absolutely should have. Most browsers will make a big stink if your site does not support TLS.
This is a fantastic tutorial on how to set up wildcard certificates with Let's Encrypt and Certbot on DigitalOcean DNS. Wildcard certs are super useful because they allow you to create one TLS certificate that covers all subdomains of a domain name. So in my case, anything like example.awhb.dev is covered under the same certificate as awhb.dev. If you don't use wildcard certs, you'll have to manually create certificates for each subdomain, which is painful.
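For reference, issuing a wildcard cert with Certbot's DigitalOcean DNS plugin looks roughly like this (a sketch based on the certbot-dns-digitalocean plugin; the credentials path is a placeholder, and the file it points at holds your dns_digitalocean_token):

# Request one cert covering the apex domain and all subdomains
sudo certbot certonly \
  --dns-digitalocean \
  --dns-digitalocean-credentials ~/certbot-creds.ini \
  -d 'awhb.dev' -d '*.awhb.dev'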
Here's what my docker-compose.yaml file for this server looks like:
version: '3.3'
services:
  mail:
    image: bytemark/smtp
    restart: always
  plausible_db:
    # supported versions are 12, 13, and 14
    image: postgres:14-alpine
    restart: always
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=postgres
  plausible_events_db:
    image: clickhouse/clickhouse-server:23.3.7.5-alpine
    restart: always
    volumes:
      - event-data:/var/lib/clickhouse
      - ./clickhouse/clickhouse-config.xml:/etc/clickhouse-server/config.d/logging.xml:ro
      - ./clickhouse/clickhouse-user-config.xml:/etc/clickhouse-server/users.d/logging.xml:ro
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
  plausible:
    image: plausible/analytics:v2.0
    restart: always
    command: sh -c "sleep 10 && /entrypoint.sh db createdb && /entrypoint.sh db migrate && /entrypoint.sh run"
    depends_on:
      - plausible_db
      - plausible_events_db
      - mail
    ports:
      - 127.0.0.1:8000:8000
    env_file:
      - plausible-conf.env

volumes:
  db-data:
    driver: local
  event-data:
    driver: local
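The compose file references plausible-conf.env. As a minimal sketch (the variable names come from Plausible's self-hosting docs; the values here are placeholders you should replace), it looks roughly like:

# plausible-conf.env — placeholders only;
# generate SECRET_KEY_BASE yourself, e.g. with: openssl rand -base64 64
BASE_URL=https://changeme.com
SECRET_KEY_BASE=changeme-long-random-string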
Note this does not include my Nginx service. You can run your Nginx server either in Docker or on your host. Honestly, it's probably easier to run it on your host, but I went a little wild and am running mine in Docker, mostly because I didn't want to install and configure a bunch of Linux dependencies. In retrospect, I think it was probably more work the way I configured things, because my Nginx instance serves several services at once, running across multiple docker-compose files.
Another really important note here...
Be sure to do this:
    ports:
      - 127.0.0.1:8000:8000
127.0.0.1:8000:8000 ensures that we don't expose this service outside of our local machine, meaning you cannot connect directly to your server's public IP, for example http://my-public-IP:8000. If you don't set 127.0.0.1 here, your service would be accessible that way, which is a big security hole, since you want people connecting to your page through your HTTPS address and not on this port.
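A quick way to sanity-check this (commands assume a typical Linux box; my-public-IP is a placeholder):

# On the server: the listener should show 127.0.0.1:8000, not 0.0.0.0:8000
ss -tlnp | grep 8000

# From any other machine: this should fail to connect
curl --max-time 5 http://my-public-IP:8000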
Auto-Updating Our Blog's Docker Container
Now let's switch gears and talk about how we can update our blog container automatically when a new version gets pushed to our registry. Note this works both with a self-hosted registry and with Docker Hub. The way it works is very simple: we use a Docker service called Watchtower. Watchtower basically watches all of your running Docker containers and checks on a fixed interval to see if there's a new version of them. You can customize this behavior to run as frequently or infrequently as you want, and you can also have it only check for updates on a subset of your running containers if you want. In my case I just changed the check frequency to every 10 minutes, since its default of every 24 hours is too long for me; I want to see my blog changes go live pretty quickly after they're built.
Here's how my docker-compose.yaml looks:
version: '3.9'
services:
  watchtower:
    image: containrrr/watchtower
    environment:
      - WATCHTOWER_POLL_INTERVAL=600
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
You can also optionally start this without docker-compose, but I find Compose a lot easier than running containers straight from the command line. The environment variable WATCHTOWER_POLL_INTERVAL is important: it's the frequency, in seconds, at which Watchtower checks whether there are any new containers to pull. So in my case, 600 gives me my 10 minutes.
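If you only want Watchtower to manage a subset of containers, as mentioned above, one way is its label-based filtering. Roughly (a sketch; the blog service and image are placeholders):

services:
  watchtower:
    image: containrrr/watchtower
    environment:
      - WATCHTOWER_POLL_INTERVAL=600
      # only update containers that opt in via the label below
      - WATCHTOWER_LABEL_ENABLE=true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  blog:
    image: registry.example.com/blog:latest  # placeholder image
    labels:
      - com.centurylinklabs.watchtower.enable=true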
Note that Watchtower restarts your Docker services with the same configuration they were started with, so you don't have to worry about them being misconfigured after an update. And this service completes our very stupid-simple CI system, since it automates the other end of the equation.
We could theoretically do this another way: run another dumb web server, like the simple GitHub hook server I run on my Raspberry Pi, but this one would handle a call from the Raspberry Pi server when our deploy script finishes, which would trigger a docker pull and restart. That's potentially a more efficient way of doing things, but there's some complexity there... what if the network connection is down on either server, and things like that. I just didn't want to deal with that, but it could be a fun little project.
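If you did go that route, the script the hook server runs could be as small as something like this (purely hypothetical; the service name and compose directory are placeholders):

#!/bin/sh
# Hypothetical redeploy script a webhook handler would invoke.
cd /srv/blog                 # placeholder: directory containing the compose file
docker compose pull blog     # fetch the newly pushed image
docker compose up -d blog    # recreate the container with the new image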