Auto mount network shares: autofs

Apparently there is a BSD utility called autofs that mounts network drives on demand. And with OS X’s unix underpinnings this means it works on your Mac.

This is revolutionary. If you’ve worked in a server environment or tried to store your iTunes or iPhoto library on an external drive, you will know that network outages, reboots, or even taking your laptop offsite mean you have to reconnect your shares manually, which, while not arduous, is a bit annoying and often hard to explain to users.

In my case I run Calibre-Web on my Mac mini server but house the Calibre db on my personal machine. Which meant I resorted to writing a script to reconnect every time I rebooted something…which seemed to be pretty often.

The code

Disclaimer: this is the code for my old mac mini which is stuck on High Sierra 10.13.6. I have read (see links below) that it works slightly differently for newer versions of OS X.

First off edit the auto_master file to insert the auto_smb line and comment out the /net line:

sudo nano /etc/auto_master

# Automounter master map
+auto_master            # Use directory service
#/net                   -hosts          -nobrowse,hidefromfinder,nosuid
/home                   auto_home       -nobrowse,hidefromfinder
/Network/Servers        -fstab
/-                      -static
/-                      auto_smb        -nosuid,noowners

Then you will create the config file you specified above (auto_smb):

sudo nano /etc/auto_smb

The first bit is the mount point for the share. In this case I called it calibre and wanted it to mount in the Volumes folder with all the rest of the regular mounts.

Then you need to add the login information, including your username and password, and the network location. An IP will work just as well if you are using a static one.

/../Volumes/calibre     -fstype=smbfs,soft,noowners,nosuid,rw ://username:password@Mac%20mini%202020._smb._tcp.local$
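Note the %20s: spaces in the server’s Bonjour name have to be URL-encoded in the map. If you ever need to encode a different name, plain shell will do it (the hostname here is just the one from the entry above):

```shell
# Percent-encode spaces so the server name is valid in an auto_smb entry:
host="Mac mini 2020"
printf '%s\n' "$host" | sed 's/ /%20/g'   # -> Mac%20mini%202020
```

On High Sierra at least, running sudo automount -vc afterwards makes the automounter re-read the maps without a reboot.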

Other Uses

At some point I am going to move L’s ever-growing music library to something like a NAS (network attached storage) and this will be a godsend if it works the way it has so far. Fingers crossed.


Automount network shares on Mac OS for use in iTunes

automount not working after macOS Catalina updates

NGINX Proxy Manager


What’s it all about?

My home server was just revolutionized! I’ve run several websites on my home network for years for testing purposes. Recently I was doing some work for hire and I needed to open them up to the wider internet. In the past I would just open up a bunch of port forwards and be happy.

Port forwarding: web traffic generally travels through various devices on port 80 (http) or port 443 (https). You can open up other ports on your router and forward them to specific devices, e.g. external traffic sent to  -> internal route

This results in opening a bunch of ports on your router (insecure) and having to give clients and others odd-looking urls like 

And recently Shaw has upgraded their routers to use a fancy web interface that actually removes functionality in the name of making things easier. So my Linux server, which had a virtual NIC (network interface card) with a separate IP, didn’t show up on their management site and I was unable to forward any external traffic to it.

But up until this week it was a c’est la vie sort of thing as I struggled to figure out how to get the virtual NIC to appear on the network. And then I saw this video about self hosting that talked about setting up a reverse proxy server.

NGINX Proxy Manager

Find it here:

Turns out this was what I was supposed to be doing all along. A reverse proxy inspects incoming traffic and routes it not by the port but by the DNS name. So now that I have it set up I can just add a CNAME to my dns setup like and it will send it to my home IP on the normal port 80. My router lets it through and passes it to the proxy server, which then parses the name and sends it on to the proper machine/service. So then whenever I set up a new project I can go and add and the proxy server will send it to where it belongs on my internal setup.
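For the curious, the routing NPM manages boils down to one nginx server block per host name. A hand-rolled equivalent would look something like this (the host name and internal address are made-up placeholders):

```nginx
server {
    listen 80;
    server_name calibre.example.com;          # the CNAME clients type

    location / {
        proxy_pass http://192.168.0.50:8090;  # internal machine:port
        proxy_set_header Host $host;          # forward the original name
    }
}
```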

So cool.

My Set Up

I used to have some ports going to my Mac mini server and some ports to my Linux machine. Now all traffic is directed to the linux box. It runs NGINX Proxy Manager (NPM) on a Docker container and receives traffic on port 80. I moved the two websites hosted on that box to ports 8090 and NPM now sorts them based on the various CNAMEs I added to my hosting.


CNAMEs are canonical names, akin to forwarding in a weird way. is a CNAME for So if for some reason the IP address changes for then will still go to the right place. If I set up a domain which points to the IP that is assigned to our house by our ISP (Shaw, Telus etc.) I can then set up the CNAME which will be handled internally. If our IP ever changes (which it used to do quite often) I now only have to change the one record and all the CNAMEs will still work.
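In zone-file terms the arrangement looks something like this (names and IP are placeholders for illustration):

```
home.example.com.      A      203.0.113.25        ; the house IP from the ISP
calibre.example.com.   CNAME  home.example.com.   ; one CNAME per project
projects.example.com.  CNAME  home.example.com.
```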


Docker is a virtualized container system. I haven’t a lot of experience with it, but this iteration of the NGINX proxy is a GUI-based implementation of the command line version, and the developer decided to set it up as a container (sort of a mini virtual computer) so he could easily roll out updates as necessary. So my poor old Linux box is now running virtualized software on top of being a web server and a Linux sandbox. Not bad for something from 2009. I will start playing a bit more with Docker because it allows you to build a container and implement it with all sorts of things without affecting the main machine and, best of all, throw out any changes and start again. We will see if the old PC is up to it or not.

I also installed docker-compose so that the containers can be defined in a file and run “headless” in the background.

Here’s a good video on the process:


The Process


(From the video)

Update the Linux system and install Docker:
sudo apt update
sudo apt upgrade
sudo apt install docker.io

sudo systemctl start docker
sudo systemctl enable docker
sudo systemctl status docker

Check to see if it’s working by checking the version: docker -v

Then test by installing a test container:
sudo docker run hello-world


Then install docker-compose:
sudo apt install docker-compose

To verify: docker-compose version

Then check permissions:
docker container ls
If you are denied:
sudo groupadd docker
sudo gpasswd -a ${USER} docker
su - $USER
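After the su - $USER above (or a full log out and back in), you can confirm the group change took; docker should now appear in your group list:

```shell
# List the groups for the current user; "docker" should be among them
# once the change has taken effect:
id -nG
```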

NGINX Reverse Proxy

Make a directory (make sure you have permissions on it)

  • sudo mkdir nginx_proxy_manager

I had to change the ownership of the directory. Then create a file in the directory:

nano docker-compose.yaml

Copy the setup text from and change passwords

  • Be sure to change the passwords
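For reference, the compose file from the NPM quick-start docs at the time looked roughly like this; treat it as a sketch (the npm passwords are placeholders you should change, and the project’s current docs are the authoritative source):

```yaml
version: "3"
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'    # public http
      - '443:443'  # public https
      - '81:81'    # admin GUI
    environment:
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "npm"
      DB_MYSQL_PASSWORD: "npm"   # change me
      DB_MYSQL_NAME: "npm"
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
  db:
    image: 'jc21/mariadb-aria:latest'
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'npm'  # change me
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'npm'
      MYSQL_PASSWORD: 'npm'       # change me
    volumes:
      - ./data/mysql:/var/lib/mysql
```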

Then compose:
docker-compose up -d

This grabs the specified Docker images, sets up the program and database, and starts the containers running the NGINX Reverse Proxy server.

You should be able to access the GUI at []

Set up

At this point it is a simple matter of adding a proxy host. Be sure to take advantage of the free SSL offered through Let’s Encrypt ( a non profit Certificate Authority).

  1. Click Add Proxy Host
  2. Add domain name (the CNAME), IP to forward it to and the port
  3. Go to SSL tab
  4. Select “Request a New Certificate” from the dropdown
  5. Select Force SSL (this will auto-forward all http requests to https), agree to the terms and add a contact email

You should be good to go. Go ahead and add as many proxies as you have CNAMEs and servers.


And remember to close down all the ports on your router if, like me, you’d opened a bunch. Now you should only need 80 (http) and 443 (https).

Like I said—it’s been life changing for organizing my environment.

Tweet not…

As I haven’t been posting a lot of interesting content over the last few years, and since I have been automatically uploading my tweets on a weekly basis, the blog has started to look a bit unappealing. So I decided to block all the tweet reposts from the main feed. You can still find them all here:  or in the menu under Categories, and they will continue to accumulate in the background.

Hopefully the blog will now look a little bit more like a blog.

Here’s a cat pic to seal the deal.


Mostly because I don’t like other entities controlling my content. So I repost all my Twitter and Instagram posts on my own server. At some point I intend to do the same thing for Facebook but it isn’t as easy due to their security etc. I do however download all my content from Facebook and store a copy in my own archives. Paranoid? No, but I do like to be in control 🙂

Gmail and Filters

Further to my previous post about Apple Mail Issue, I have been having issues on my new Mac with threading conversations. Normally this isn’t much of an issue, but I subscribe to the Standard Ebooks Google Group because that is what they use to track projects, and keeping the various projects grouped together is pretty important.

Normally what one does is create a rule on the server (iCloud, your webmail etc.) and the server will automatically sort the mail before it gets to your desktop or phone. For example I have all my LinkedIn emails go straight into a LinkedIn folder, and anything related to ebook purchases routed to an Ebook folder. This means they don’t bing my phone and aren’t sitting in my inbox, and I can check them later at my leisure. But for some reason Google had to be different. For the longest time I had the rule on my laptop, which was always on; it would sort the gmail emails and then synch that back up to the cloud. A bit of a hack, but I couldn’t be bothered to try and figure out what Gmail was doing. But the new mini goes into a deeper sleep and doesn’t sort, so I decided to figure out the actual correct solution.

I will spare you all my swearing at Google. Suffice it to say that, against all conventions, Gmail does not use simple folders but has this weird-assed system of labels, and a given email can exist in the inbox and in the label at the same time, which is exactly what I didn’t want.

To Fix it

Go to and sign in to your account

Go to Settings (the gear in the upper right)

Click See all settings

Go to/Click Labels

Click Create new label
Be sure “Show in IMAP” is checked

Then go to Filters and blocked addresses

Click Create a new filter

Add your criteria. I wanted all emails from to move to a new folder so I selected From: and entered that address; but I could have selected Subject: etc. to filter by whatever made sense…

Click Create filter

Check Skip the Inbox (Archive it)
Check Apply the label: Whatever you chose in the step above

Then Click Create Filter

This will “archive” the email (basically removing it from the inbox without marking it as read) and then label it with whichever “folder” you want it to appear in. Then by the time your desktop or phone synchs with the server, the email will be moved and not appear in your inbox.

SOOO convoluted. As an aside I find most of what Google apps (gmail, sheets, etc.) do is to make a simple thing more complicated rather than a complicated thing more simple. But then again I prefer a computer does what I tell it to rather than what some anonymous programmer decides is simplest, so maybe it’s just me.

Update to Apple Mail Issue

In Apple Mail Issue I had talked about sorting conversations and threading correctly and frankly rebuilding the mailboxes only worked for a while. Now I have deleted the gmail account entirely and added it back as an IMAP account rather than using Apple & Google’s “secure method.” This entails changing the security setting to allow “less secure apps” and manually adding the IMAP account. So far so good, but we will have to wait and see if this works any better.

Apple Mail Issue

For future reference…

I was having an issue in which emails in a thread were not displaying the correct contents. This was happening primarily with my gmail IMAP account from the Standards Ebook mailing list which made it particularly frustrating.

I tried deleting and/or rebuilding the mailboxes and even deleted the whole gmail mailbox (~/Library/Mail/v7/AFD4138D-113E-4798-BA9B-A928C0A9EC44/), all to no avail.

Finally I came across this: Mail shows wrong message body (finding the right term to Google makes it so much easier…)

The Solution:

  • Quit Mail.
  • Go to ~/Library/Mail/v7/MailData/
  • Delete
    Envelope Index
    and any variants
  • Restart Mail and let it rebuild (this will take some time).

So far this seems to be working…

Standard ebooks, January 2021

It’s 2021 and that means books published in 1925 in the US are now in the PD (public domain). Of course some of these are already in the public domain in Canada, but Standard Ebooks follows American copyright law. So I added a few new stories to my Ukridge Stories, the Jeeves Stories, and my Mack Reynolds Short Fiction, and the updated versions can be downloaded on the site.

I have also added a few new texts to the corpus.

Old Posts

Links to previous posts about the books I have worked on:

ebook Update 2020

ebook Update

Calibre Web 2021

A recent update to Calibre-Web (version 0.6) added the series info, so there is no longer a need to add that to the templates. And they did make a few changes to the code for the publishing date.

Now to add the pub date so my python web scraping program can access it:

At around line 69 in /cps/templates/shelf.html:

        {% if (entry.pubdate|string)[:10] != '0101-01-01' %}
         <p class="publishing-date">{{entry.pubdate|formatdate}} </p>
        {% endif %}

…just after the {% endif %} for the series section.
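With that in place, the date can be scraped out of the rendered page by its class name. A minimal sketch using grep and sed (the sample markup is a stand-in for what the template emits):

```shell
# Stand-in sample of the markup the template produces:
printf '<p class="publishing-date">Jan 1, 2021 </p>\n' > shelf.html

# Extract the date by matching the class and stripping the tags:
grep -o '<p class="publishing-date">[^<]*</p>' shelf.html |
  sed -e 's/<[^>]*>//g' -e 's/ *$//'   # -> Jan 1, 2021
```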

Remember to go to Admin and restart Calibre before exporting.

Computer Specs

Because I keep forgetting the specs of the various machines kicking around the house whilst shopping for new toys.

Surprisingly my old Linux box isn’t as underpowered as I thought. It just needs some RAM 🙂 But then again the 2020 machine looks on paper to be the weakest link of the bunch, so I guess the old way of counting such things is a bit passé.

chip                      | cores | speed  | RAM  | cache    | Geekbench (single / multi / compute)
i5-5287U (2015)           | 2     | 2.9GHz | 8GB  | L3 3MB   | 795 / 1463 / 3035
i7-2620M (2011)           | 2     | 2.7GHz | 8GB  | L3 4MB   | 672 / 1455 / ?
AMD Athlon X2 7850 (2010) | 2     | 2.8GHz | 2GB  | L2 512KB | 375 / 724 / ?
i5-4260U (2014)           | 2     | 1.4GHz | 4GB  | L3 3MB   |
i5-5350U (2017)           | 2     | 1.8GHz | 8GB  | L3 3MB   |
i3-1000NG4 (2020)         | 2     | 1.1GHz | 8GB  | L3 4MB   |
i7-8700B (2018)           | 6     | 3.2GHz | 16GB | L3 12MB  | 1082 / 5483 / n/a

EDIT: I added the Geekbench scores and the real story emerges about my tired old PC. Current fast chips are scoring 1400+ (1600 for the ultra high-end ones) and the best Macs are coming in around 1200 (pre-M1).

My new Linux install

The hardware

I have had a PC/Windows box sitting by my desk for decades; the latest I bought in 2009. I’ve been a Mac guy since I started in graphic design but never really let go of the Windows system. I played games on the PC and used it for the occasional foray into the Windows ecosystem if I needed something that wasn’t yet cross-platform. It was set up to use Synergy (a software-based KVM) to share my keyboard and mouse, and I just flipped the secondary monitor from HDMI to DVI whenever I wanted to use it.

In 2015 it was happily running dual-boot Windows 7 and 8—then I turned it off and left for our sabbatical on the coast. And I never really turned it back on again (aside from one time to retrieve some files)—seems a year of just working (and playing) on my MacBook had finally converted me. And there was Windows 10 to contend with and I wasn’t a fan of the new desktop. Two years later I chanced to fire it up and it just beeped angrily at me and shut down. I was too lazy to track down the issue, so I moved on. Two more years later (about a month ago) I decided to rescue the poor thing and maybe try to set it up as a linux box.


  • AMD Athlon X2 7850 dual-core, 2.8 GHz (2009)
  • Asus M3A78-CM motherboard
  • 2 GB RAM
  • 500 GB SATA hard drive

Oh yah, the angry beeps

When I decided to try and resuscitate the poor old thing I took the sides of the case off and saw the processor’s heat sink and fan dangling from its wires. Somewhere along the way the plastic clips that held the heat sink to the processor had come off (broken actually). As soon as I re-secured it, the PC fired right up and booted into Windows. But I was committed to the Linux experiment so I forged on.

Attempt #1

I downloaded the Linux Mint Cinnamon ISO and a utility to flash it to a USB drive on my Mac and proceeded to follow the instructions. My knowledge of boot partitions is shaky since I hadn’t done any playing with installing hard drives since the early 2000s, so I just did what the internet told me to do. Silly me, I know.

Step two was to boot the PC from the flash drive. The theory was I just went into the BIOS, selected the external USB as the primary boot drive and Bob would be my uncle. Unfortunately for me, I was stuck with Uncle Snafu. My BIOS was very old and it took a significant amount of experimentation to even get the USB drives to show up. But I did it and hit restart.

The BIOS splash screen came up. And that’s where the whole thing stopped.

After a bunch more tries, a search on the internet for an updated BIOS (2010), and some scary moments flashing the new BIOS, I tried again. And again. And again, this time walking away from the stuck screen for a couple of hours to see if it was just (glacially) slow. Nope. Nope. Nope. It was not going to boot from a USB drive no matter what I did.

I tweaked settings, re-downloaded files, switched USB ports and sticks, tried different hardware configs and did a lot of googling.

After about 2 days of this frustration, I gave in, pulled the harddrive for re-use, mournfully posted a picture of the poor thing on Instagram for posterity and moved the defunct box to the head of the stairs to be sent out for recycling.

And there it sat.

A Glutton for punishment
aka “Attempt #2”

I couldn’t take the failure. I wanted to play with Linux and the virtual machine on my Mac mini server just wasn’t cutting it. A search online for a Raspberry Pi or other such NUCs kept coming back to a ~$200 investment for what was ostensibly just a toy. And there sat that black box, with a perfectly good CPU and hard drive…just mocking me. Stupid motherboard and its decrepit, old BIOS. I tried pricing out a new motherboard but it quickly became one of those money pits where you need to upgrade pretty much everything in turn, and the $$ count kept rising.

So I decided to try again. Just once more. Because I really, really hate it when a computer wins.

I hauled it back to my desk, plugged it all in, inserted the USB stick and hit the power button. It booted. I mean it booted all the way to the Linux install menu and then right into the Linux system itself. What the everloving f___? A bunch of head scratching, playing around and experimenting and a lot of muttering later, and I figured it out. There was no internal hard drive anymore. Something in the BIOS must not have liked the USB competing with the internal SATA system and just hung there. Success? Well, even though I had successfully booted the installer, without a hard drive to install it to, I was still stuck.

Or was I?

The game’s afoot!

OK. SATA: bad. USB: good. And I had an external USB drive that I used for Time Machine backups on the Mac. Odds were it was just a SATA drive in a USB case. I took it apart and yup, just a SATA case. Now we were cooking with gas. I swapped out my 2 TB back-up for the 500 GB Windows drive and plugged it into the PC. I had to go back into the BIOS a bunch of times to make sure it was going to boot off the USB stick, but after that it all fired up smoothly and I could see the “external” 500 GB drive right there on the desktop. I WIN! Take that, computer!

Merrily I followed the instructions and ran the installer. And then I hit a weensie bit of a hiccup. Remember the Windows 7 and 8 installs? Those partitions were right there along with a smaller boot partition. The Mint installer was giving me a choice of which partition I wanted to install to. All I had to do was select one. I say this all pretty confidently now, but at the time I was mucho confused, and my fumbling around pretty much ended up making all the various partitions unusable by the time I was done f@cking around. Tony* would be so proud.

Fast forward at least a day of screwing around that involved grub errors and something called an “invalid arch independent ELF magic” error. Gotta love Linux programmers.

I finally got the drive cleaned off. I stripped it back to the bare bones and got it repartitioned properly with a working boot sector and then did a clean install of Linux Mint and lo and behold: it worked. I had a fresh install of Linux working and connected to the internet and my network.

Remotely interesting,
but too schtupid to be true

The purpose of this refurbished beauty was to sit somewhere out of the way and let me flail away on it. To that end I didn’t want to waste a keyboard/mouse/monitor on it. So I needed to remote in. I got SSH working fine so I could remote in via terminal (i.e. command line stuff like old school DOS commands but unix/linux ones instead) but ran into some snags when setting up VNC (Virtual Network Computing — a way to remote access and control a computer from another computer).

Long story short: x11vnc, which is the recommended VNC server for Linux Mint Cinnamon (itself a recommended install of Linux for beginners), had a few issues with headless operation (i.e. no monitor attached). For one, it worked perfectly fine when you booted the computer with a monitor attached, but introduced a 3-second delay on every mouse movement or click when you booted without one. As you can imagine this took me a while to discover, as all my preliminary setup work was done with a monitor attached. On the other hand, all the other VNC servers I tried didn’t seem to play very nice with Cinnamon. Welcome to the freedom and tyranny of choice; there is a reason most people stick to Mac or Windows OSes. The issue, after a lot of online searches, turned out to be screen compositing (whatever the hell that is) and was a known bug…if I had known to look for it. Unfortunately Cinnamon has compositing baked in and you can’t turn it off like most of the “helpful” websites suggested.

Eventually I came to the conclusion that Cinnamon and I weren’t going to work out. Tragic really. So I tried Xfce, which is a lightweight desktop environment for Mint. Blech. Way too lightweight. Then came MATE, which is a fork of GNOME 2, which was a variation of GNOME, which was one of the original Linux desktop environments. And it came with a handy switch to turn off compositing. And just like Goldilocks, I found it just right.

In conclusion

I have my old PC up and running. It has three desktop environments to choose from, but I boot straight into MATE. I can access it via terminal or the Mac’s built-in screen sharing. It has been running for 3 days now and hasn’t crashed, locked me out, or reset its compositing (which it was doing for a while after I installed MATE). I have now installed a ton of stuff (see below) and it is humming along (I forgot how loud the fans were) beside me, connected only by an ethernet cable.


I am pretty stoked. I can now move the box downstairs and offload some of my background processes to it and never really worry if I manage to screw it up, because it really is a testing server. And Linux has some cool ways to “capture” a system image that you can revert to if you get too cute typing in commands you have no idea the purpose of, which I often do.

VNC and terminal, side-by-side, both looking remotely at the linux box.

Stay tuned.

Software to date

  • ssh: to allow secure remote terminal access
  • LAMP (Linux, Apache, MySQL, PHP):
    • Apache is the web server
    • MySQL is the database software
    • PHP is the language that lets web pages talk to the MySQL databases
  • Samba: to allow the linux box to share files with Windows and Mac computers
  • VSCode: for programming and file editing
  • Midori: a lightweight web browser in lieu of Firefox or Chrome
  • Mate Tweak: to adjust a few additional settings
  • Handbrake: for ripping DVDs. I tried a couple and while it works it is very slow, so I won’t likely use it for that purpose.

There are tons more things I could add like GIMP and Inkscape, but I don’t intend to use it as a workstation and it is a bit slow for any heavy duty lifting. Mint also has a lot of stuff preinstalled like a firewall, python3, video codecs, Firefox, VLC, ImageMagick, Libre Office and a ton of handy utilities —most of which I won’t use but are nice to have.

* Anthony (Tony) was our IT support person from Avante Garde Technologies back in the day and he made a tidy profit coming in and fixing the things I managed to screw up under the guise of trying to save money and time. Great guy.

Silly Assumptions

I’ve been doing a bunch of video work lately. Some quick and dirty editing of videos for NYCSS so they could do boat briefings remotely, a bunch more for a friend who is an instructor at NAIT and needed to do demos remotely, and finally I am just starting on some for L and her COVID-mutated instructional semester.


This has involved over 30 or so videos to date and well over 250 gig of data.

The Need for Speed

My 2015 MacBook Pro has been working like a champ, and I really only ran into issues when copying massive files off the internal drive (500 gig with barely 100 gig left as working space) back and forth to my externals, and rendering a few of the files with lots of adjustments. And I really didn’t think I could do anything to speed any of that up without a huge investment of $$.

Turns out I was wrong.

Issue 1

I was using my external 2 terabyte USB/SATA drive as both a repository for my cache files and storage for completed work. Copying a 4 gig finished file took 5 or 6 minutes, and if I left it too long and had to transfer 15 or 20 gig it was better if I just left and went and had a coffee.

I don’t remember what it was that got me looking at my USB hub, but at some point I noticed it was a cheapo one I had bought years ago and was strictly USB 2.0. That is to say 60 megabytes a second. I did a quick “About this Computer” and lo and behold, the MacBook Pro’s physical USB ports were USB 3.0, rated for 500 megabytes a second. Um. But I still didn’t do anything about it because, well, math is hard. Then one of my 2 USB ports on the MacBook stopped working.
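For the record, the arithmetic I finally did (rated effective throughput, not real-world speeds):

```shell
# USB 2.0 moves roughly 60 MB/s, USB 3.0 roughly 500 MB/s:
echo "speed-up: ~$((500 / 60))x"   # integer division -> ~8x
```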

$37 later and now things are really screaming. Silly, silly boy.

Issue 2

My trusty externals have been chuggin’ away like champs, but because they are mechanical hard drives they can really only do one thing at a time. Imagine something remotely like a turntable: the read/write arm has to move back and forth across a physical “platter” every time it performs an operation.

What’s the alternative? An SSD (solid state drive: essentially just a big flash drive), which is purely electronic and has no moving parts. But SSDs are really expensive, right?

Ummm. No.

At least not any more. I picked up a couple of high speed Samsung 500 gig SSD drives for ~$130/each. Scream-ing Fast. (See above screen shot.)

And so tiny!

So now I can fire stuff back and forth quickly and have an extra terabyte of space. And best of all if I dump my working cache files to one of them, it can also read/write asynchronously which helps to dramatically improve performance when I am working in Premiere and After Effects.

In summation

I made some pretty silly assumptions. The core of which was that technology, and most especially technology pricing, would stand still, with the corollary that my 5-year-old machine couldn’t be made to work faster and harder. I am especially chagrined by the USB hub debacle. $37. 8X faster (I finally did the math). Duh.

As an Aside

All this work has really been an awesome learning experience. I have honed my After Effects skills some more, learned to deal with a new kind of workflow, and best of all got to try out Adobe’s cloud sharing to work collaboratively with others. We must have moved 300+ gig of files back and forth over the cloud.

I should set up shop doing fancy video effects for all those remote teachers and professors 🙂