…Because I know you care 🙂

When last we left our intrepid Pi, I had listed the software I had running on it. Here’s a quick recap of that install, with a few bits I missed out:

Pi 4 Model B with 8 GB of RAM
500 GB external drive

    • Ubuntu 21.04
    • Shairport Sync music server
    • Nginx Proxy Manager
    • Webhost Apache/php
    • MySQL server
    • Pi-Hole
    • Nextcloud server
    • Samba File Server

But after watching a few videos (Novaspirit Tech’s Pi series) and playing around with Docker a bunch more, I basically redid the whole thing from scratch. So that makes the above info completely redundant 😉

SD backups

A word about installs and backups. One of the things I am especially happy about with the move to a Pi is the ability to image SD cards. It took me a while to get my methodology straightened out, but here it is.

Step 1: Install base operating system

I currently have two base installs:

  • Ubuntu 21.04 — this is a GUI-less (command line only) install of the Ubuntu Linux distro. It has become my main system.
  • Ubuntu 20.04 Mate — this is the Ubuntu install with the MATE desktop and apps suite. I have pretty much abandoned this.

Set up server (this is for the headless Ubuntu):

  • change host name
  • add user
  • set static IP
  • install and set up Samba
  • install and set up Docker & Docker Compose
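
On a stock Ubuntu Server install, those setup steps look roughly like this (the host name and user name are placeholders, and the static IP is done through netplan, so I’ve only sketched that part):

```shell
# Change the host name
sudo hostnamectl set-hostname pi-server

# Add a user and give it sudo rights (name is a placeholder)
sudo adduser admin
sudo usermod -aG sudo admin

# Static IP: Ubuntu Server uses netplan — edit the yaml in /etc/netplan
# to your network's addresses, then apply it
sudo netplan apply

# Samba
sudo apt install samba

# Docker (convenience script) and Docker Compose
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker admin
sudo apt install docker-compose
```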

Step 2: Create an image

After I had everything installed and set up, it was time to make an image of the SD card. Shut down the computer and pop the SD card into the MacBook’s SD slot.

Open Terminal and use dd to make an image:

sudo dd bs=1m if=/dev/rdisk2 of=/Users/admin/Desktop/pi-server.img

To parse this, it is basically saying: using root permission (sudo), make an image (dd) using 1-megabyte blocks (bs=1m) of the disk labeled disk2 to a file on the desktop named pi-server.img. (The rdisk2 form is the “raw” version of the device, which copies noticeably faster on a Mac than plain disk2.)
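
A word of caution: dd will happily overwrite whichever disk you point it at, so confirm the device name first. On my Mac the SD card shows up as disk2, but yours may differ:

```shell
# List all disks and find the SD card (match by size)
diskutil list

# Unmount the card before imaging so dd has exclusive access
diskutil unmountDisk /dev/disk2
```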

This can take a long time depending on the size of the disk. With a lot of trial and error I settled on using my 32 GB SD card to make these images, and it takes around 400 seconds (6.7 minutes). When I tried with the 64 GB card, or even worse the 500 GB hard drive, the time was sometimes in the hours. Which was ridiculous, because most of that copy time was spent copying blank space.
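
One common workaround for the blank-space problem (which I haven’t adopted, but is worth knowing) is to pipe dd through gzip, so empty blocks compress to almost nothing on disk:

```shell
# Compress on the fly; blank space shrinks to almost nothing
sudo dd bs=1m if=/dev/rdisk2 | gzip > ~/Desktop/pi-server.img.gz

# Restoring just reverses the pipe
gunzip -c ~/Desktop/pi-server.img.gz | sudo dd bs=1m of=/dev/rdisk2
```

It doesn’t make the read any faster, but the image file ends up far smaller.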

What this means is I have to make all my changes to the install on the 32 GB card, which may mean redoing them since I generally make them first on the hard drive install. But it works for me, as it forces me to a) document the changes I am making and b) do them several times, which helps ingrain them in my memory.

This leaves me with a 32 GB disk image on my laptop that I then re-image back to the 500 GB hard drive, which is a pretty quick process.
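
The restore is just dd with if and of swapped (again, double-check the device name first, since the hard drive won’t necessarily be disk2):

```shell
# Write the saved image back onto the card or drive
sudo dd bs=1m if=/Users/admin/Desktop/pi-server.img of=/dev/rdisk2
```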

I repeat this process whenever I make a major change to my setup so I can revert anytime I screw something up (which thankfully is happening less and less).

The new install

This time I started with the base install described above (Ubuntu, Samba for file sharing and Docker to host containers), made an image and then moved on.


As I said in my Drop Dropbox? post, Docker is a sort of virtual containerizing system. Rather than a true virtual machine (which acts as a completely different physical machine, complete with OS and hardware configs), a container is an isolated “box” into which you install your applications and whatever settings they need. The Docker host manages the network interfaces and whatever other I/O you need. The beauty of this is you can install all sorts of support programs or drivers without affecting — or in my case, screwing up — the rest of the system. If you don’t like it or make a mistake, you just shut down the container and delete it and poof, it and all its resources are gone.


One of the things that made me switch to an almost completely Docker-based setup was the discovery of Portainer, which is a web-based GUI that allows you to manage all your Docker containers without having to understand the often complex and arcane Terminal commands used to invoke and manage them.

Oddly enough, Portainer itself is a Docker container, so that means you have to do things the hard way at least once. Create a Docker volume to store the app data (docker volume create portainer_data) and then run:

docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce

And you are done. Sign into the system by typing http://<insert IP address>:9000 and you should be good to go. Here’s a screenshot of all my containers at the moment:

NGINX Proxy Manager

Next up was a reinstall of NGINX Proxy Manager to manage SSL and routing. You can read about it in the NGINX Proxy Manager post.

One difference in this install was that I used Portainer to install it as a Stack (a group of associated programs). This meant the NGINX program and its associated MariaDB database were installed in separate containers that were linked together so they could be managed as a unit.
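
For reference, the Stack I pasted into Portainer was along these lines — a standard docker-compose file for NGINX Proxy Manager plus its MariaDB container (the passwords here are obviously placeholders):

```yaml
version: "3"
services:
  app:
    image: jc21/nginx-proxy-manager:latest
    restart: always
    ports:
      - "80:80"    # HTTP traffic
      - "81:81"    # admin web UI
      - "443:443"  # HTTPS traffic
    environment:
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "npm"
      DB_MYSQL_PASSWORD: "changeme"
      DB_MYSQL_NAME: "npm"
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
  db:
    image: jc21/mariadb-aria:latest
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "changeme"
      MYSQL_DATABASE: "npm"
      MYSQL_USER: "npm"
      MYSQL_PASSWORD: "changeme"
    volumes:
      - ./data/mysql:/var/lib/mysql
```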

Then I reinstalled the Portainer container so it was using the same virtual network as NGINX, and now they all talk to each other securely and happily.
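
If you do this by hand rather than through Portainer’s UI, joining an existing container to the stack’s network is one command. The network name depends on what Portainer called the stack, so check first:

```shell
# See what networks exist
docker network ls

# Attach the running Portainer container to the stack's network
# ("npm_default" is a guess — use the name from the listing above)
docker network connect npm_default portainer
```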


Home

An app called Home was recommended by several sources, so I decided to give it a go. Home is a lightweight dashboard that gives me a convenient starting place for what is rapidly becoming my HomeLab. It runs off a simple YAML (“YAML Ain’t Markup Language,” originally “Yet Another Markup Language”) text file.

You just add an entry like

- name: "Boat Search"
  logo: "assets/tools/swan.png"
  subtitle: "Utility to search Yachtworld"
  tag: "pythonapp"
  url: ""
  target: "_blank"

And it adds another item to your dashboard.

Then I added a Samba entry to allow me to change the config.yml file and drop in additional icons.
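
That Samba entry is just a share in /etc/samba/smb.conf pointed at wherever Home’s config and assets live (the path and user here are assumptions — yours will differ):

```ini
[home-dashboard]
   path = /home/admin/docker/home
   browseable = yes
   read only = no
   valid users = admin
```

Restart Samba (sudo systemctl restart smbd) and the folder shows up on the network.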



LAMP stack

With that done I installed my LAMP stack again (Linux, Apache, MySQL, and PHP) to redo my test websites. I decided not to do this in a container as I figured it wasn’t going to change.
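
For the record, the non-containerized LAMP install on Ubuntu is just a handful of apt packages:

```shell
sudo apt update
sudo apt install apache2 mysql-server php libapache2-mod-php php-mysql

# Quick sanity check that everything landed
php -v
apache2 -v
```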



Shairport Sync

Currently I am having issues with the dockerized version of this. The sound cuts out every once in a while and I can’t figure out why. I may go back to the direct install from What’s your Pi doing? and see if that solves the issue. But for now I have it as a Docker container, but disabled.


Nextcloud

(See Drop Dropbox?) Again, this is now purely Docker based and the install was super easy using Stacks. I can turn it off and on as I like and see how much of my Pi’s resources it is using.

YouTube DL

This one is new. I had this app installed on my Mac mini, as it allowed me to download favourite YouTube videos for offline watching. But now I can use it with a web interface. I haven’t played with it much, but I expect it will be much more convenient than the command line version. (Note: I have subsequently had problems with this, leading me to discover that YouTube is throttling the app. The first few videos go fine, then it drops down to a snail’s pace, rendering it almost unusable.)



Python Apps

I built a Python/Flask app to calculate Bus Fares for L and decided to see if I could dockerize it. It worked out pretty well, so I added it to the mix so L could use it. Again, the ability to manage it and turn it off and on will be a bit of a godsend as I develop it further.

The apps needed Python and a web server (I tried to use NGINX but ended up going back to Gunicorn — I was having trouble with uWSGI in the container). It took some fussing, as the resulting containers were initially 800 MB, but eventually I knocked them down to a sixteenth of that size (around 50 MB).
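
Most of the size win came from switching to a slim base image and skipping pip’s cache. A sketch of the kind of Dockerfile I ended up with (the file names and the app:app module reference are placeholders for your own Flask app):

```dockerfile
FROM python:3.9-slim

WORKDIR /app

# Install only what the app needs, without keeping pip's download cache
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Serve the Flask app with Gunicorn ("app:app" = module:variable)
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```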

I also decided since that worked so well I would take my boat search app (If you can’t boat, dream…) that was hosted on the Google Cloud and move it to the Pi. And in the end I was able to share the python container between the two apps.


Grafana & Prometheus

Last but not least I wanted to be able to track some data about usage and load since I was now adding quite a bit onto the poor ~$100 Pi. A bit of research, one last Stack, and I was good to go.

Prometheus is an application that collects data from your machine. It can organize and store a ton of stuff and is so complex I ended up just stealing others’ setups. It uses exporters (I installed cAdvisor and node_exporter) to collect specific data. The two I have monitor the Docker containers and things like the CPU/temperature/network stats from the Pi.
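
The heart of the borrowed setup is prometheus.yml, which tells Prometheus where to scrape. The target host names here assume the exporters run as containers on the same Docker network, on their default ports:

```yaml
scrape_configs:
  # Pi-level stats: CPU, temperature, network, disk
  - job_name: "node"
    static_configs:
      - targets: ["node_exporter:9100"]

  # Per-container stats from cAdvisor
  - job_name: "cadvisor"
    static_configs:
      - targets: ["cadvisor:8080"]
```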

Grafana is a web interface that graphically displays a dashboard with whatever data you managed to collect with Prometheus. Also complex, and the config is again “borrowed.”

Très cool.


So that’s it for now

Everything is humming along splendidly now. I still have the issue of Shairport to deal with, but that is minor. I can go back to any of the multiple images I have and start over from there, or simply delete a container and not have to worry about residual crud interfering with future installs.

I have tested Docker containers for things like my Calibre library, which I might potentially move off my Mac mini, and am looking into OpenVPN and an online FileBrowser. But that’s all for the future.

I am a happy camper. But I wonder if I should get another Pi… hmmm…