Total of 314 posts

Herein you’ll find articles on a very wide variety of topics about technology in the consumer space (mostly) and items of personal interest to me. I have also participated in and created several podcasts, most notably Pragmatic and Causality, all of which can be found at The Engineered Network.

Pushover and PodPing from RSS

In my efforts to support the Podcasting 2.0 initiative, I thought I should see how easy it was to incorporate their new PodPing concept, which is effectively a distributed RSS notification system specifically tailored for Podcasts. The idea is that when a new episode goes live, you notify the PodPing server and it then adds that notification to the distributed Hive blockchain system and then any apps can simply watch the blockchain and this can trigger the download of the new episode in the podcast client.
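
At its core the notification is just an authenticated HTTP GET against the PodPing endpoint, which is what the script further down does; a rough equivalent with curl (the token and user agent are placeholders you obtain from the PodPing folks) would be:

# Hypothetical example only: tell PodPing that your feed has updated
curl -s "https://podping.cloud/?url=https://yoursite.example/index.xml" \
  -H "Authorization: <TOKEN HERE>" \
  -H "User-Agent: <USER AGENT HERE>"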

PodPing grew predominantly out of their attempts to leverage existing technology in WebSub; however when I tried the WebSub angle a few months ago the results were very disappointing, with minutes or even hours passing before a notification was seen, and in some cases it wasn’t seen at all.

I leveraged parts of an existing Python script I’ve been using for years for my RSS social media poster, but stripped it down to the bare minimum. It consists of two files: checkfeeds.py, which just creates an instance of the RssChecker class, and rss.py, which contains the actual code.

The beauty of this approach is that it will work on ANY site’s RSS feed. Ideally, if you have a dynamic system, you could trigger the GET request on an episode-posting event; however my sites are statically generated and posts are created ahead of time (and hence don’t appear until a site build runs after the post’s go-live time), so it’s problematic to create a trigger from the static site generator.

Whilst I’m an Electrical Engineer, I consider myself a software developer across many different languages and platforms, but for Python I see myself more as a hacker and a slasher. Yes, there are better ways of doing this. Yes, I know already. Thanks in advance for keeping that to yourself.

Both are below for your interest/re-use or otherwise:

from rss import RssChecker

rssobject=RssChecker()

checkfeeds.py

CACHE_FILE = '<Cache File Here>'
CACHE_FILE_LENGTH = 10000
POPULATE_CACHE = 0
RSS_URLS = ["https://RSS FEED URL 1/index.xml", "https://RSS FEED URL 2/index.xml"]
TEST_MODE = 0
PUSHOVER_ENABLE = 0
PUSHOVER_USER_TOKEN = "<TOKEN HERE>"
PUSHOVER_API_TOKEN = "<TOKEN HERE>"
PODPING_ENABLE = 0
PODPING_AUTH_TOKEN = "<TOKEN HERE>"
PODPING_USER_AGENT = "<USER AGENT HERE>"

from collections import deque
import feedparser
import os
import os.path
import pycurl
import json
from io import BytesIO

class RssChecker():
    feedurl = ""

    def __init__(self):
        '''Initialise'''
        self.feedurl = RSS_URLS
        self.main()
        self.parse()
        self.close()

    def getdeque(self):
        '''return the deque'''
        return self.dbfeed

    def main(self):
        '''Main of the FeedCache class'''
        if os.path.exists(CACHE_FILE):
            with open(CACHE_FILE) as dbdsc:
                dbfromfile = dbdsc.readlines()
            dblist = [i.strip() for i in dbfromfile]
            self.dbfeed = deque(dblist, CACHE_FILE_LENGTH)
        else:
            self.dbfeed = deque([], CACHE_FILE_LENGTH)

    def append(self, rssid):
        '''Append a rss id to the cache'''
        self.dbfeed.append(rssid)

    def clear(self):
        '''Clear the cache'''
        self.dbfeed.clear()

    def close(self):
        '''Close the cache'''
        with open(CACHE_FILE, 'w') as dbdsc:
            dbdsc.writelines((''.join([i, os.linesep]) for i in self.dbfeed))

    def parse(self):
        '''Parse the Feed(s)'''
        if POPULATE_CACHE:
            self.clear()
        for currentfeedurl in self.feedurl:
            currentfeed = feedparser.parse(currentfeedurl)

            if POPULATE_CACHE:
                for thefeedentry in currentfeed.entries:
                    self.append(thefeedentry.get("guid", ""))
            else:
                for thefeedentry in currentfeed.entries:
                    if thefeedentry.get("guid", "") not in self.getdeque():
#                        print("Not Found in Cache: " + thefeedentry.get("title", ""))
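                        # Push a notification via Pushover's message API for the newly found episode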
                        if PUSHOVER_ENABLE:
                            crl = pycurl.Curl()
                            crl.setopt(crl.URL, 'https://api.pushover.net/1/messages.json')
                            crl.setopt(pycurl.HTTPHEADER, ['Content-Type: application/json' , 'Accept: application/json'])
                            data = json.dumps({"token": PUSHOVER_API_TOKEN, "user": PUSHOVER_USER_TOKEN, "title": "RSS Notifier", "message": thefeedentry.get("title", "") + " Now Live"})
                            crl.setopt(pycurl.POST, 1)
                            crl.setopt(pycurl.POSTFIELDS, data)
                            crl.perform()
                            crl.close()

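                        # Notify PodPing (Podcasting 2.0) that this feed has a new item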
                        if PODPING_ENABLE:
                            crl2 = pycurl.Curl()
                            crl2.setopt(crl2.URL, 'https://podping.cloud/?url=' + currentfeedurl)
                            crl2.setopt(pycurl.HTTPHEADER, ['Authorization: ' + PODPING_AUTH_TOKEN, 'User-Agent: ' + PODPING_USER_AGENT])
                            crl2.perform()
                            crl2.close()

                        if not TEST_MODE:
                            self.append(thefeedentry.get("guid", ""))

rss.py

The basic idea is:

  1. Create a cache file that keeps a list of all of the RSS entries you already have and are already live
  2. Connect up PushOver (if you want push notifications, or you could add your own if you like)
  3. Connect up PodPing (ask @[email protected] or @[email protected] for a posting API TOKEN)
  4. Set it up as a repeating task on your device of choice (preferably a server, but should work on a Synology, a Raspberry Pi or a VPS)

VPS

I built this initially on my Macbook Pro using the Homebrew installed Python 3 development environment, then installed the same on a CentOS7 VPS I have running as my Origin web server. Assuming you already have Python 3 installed, I added the following so I could use pycurl:

yum install -y openssl-devel
yum install python3-devel
yum group install "Development Tools"
yum install libcurl-devel
python3 -m pip install wheel
python3 -m pip install --compile --install-option="--with-openssl" pycurl

Whether you like “pycurl” or not, obviously there are other options but I stick with what works. Rather than refactor for a different library I just jumped through some extra hoops to get pycurl running.

Finally I wrap checkfeeds.py in a simple bash script and call it from a cron job every 10 minutes.
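
For anyone wanting to replicate that, a minimal sketch of the wrapper and cron entry might look like the following (the paths, user and Python binary location are assumptions for your own environment):

#!/bin/sh
# run-checkfeeds.sh: hypothetical wrapper around checkfeeds.py; adjust paths to suit
cd /home/youruser/rss-notifier || exit 1
/usr/bin/python3 checkfeeds.py

# crontab entry (crontab -e) to run it every 10 minutes:
# */10 * * * * /home/youruser/rss-notifier/run-checkfeeds.sh >/dev/null 2>&1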

Job done.

Enjoy.

Fun With Apple Podcasts Connect

Apple Podcasts subscriptions will shortly open to the public, but podcasters like me have already been having fun with Apple’s first major update to their podcasting backend in several years, and it hasn’t really been that much fun. Before talking about why I’m putting so much time and effort into this at all, I’ll go through the highlights of my experiences to date.

Fun Times at the Podcasts Connect Mk2

Previously I’d used the Patreon/Breaker integration, but that fell apart when Breaker was acquired by Twitter, and the truth was that very, very few Patrons utilised the feature and the Breaker app was never big enough to attract any new subscribers. I’ve since removed the Breaker audio integration and content, even though the service was taken over (to an extent), as it was one less thing for me to upload content to. In a way…this has been a bit of déjà-vu and “here we go again…” 1

The back-catalogue of ad-free episodes as well as bonus content between Sleep, Pragmatic, Analytical and Causality adds up to 144 individual episodes.

For practically every one I had the original project files, which I restored, re-exported in WAV format and then uploaded via the updated Apple Podcasts interface. (The format must be WAV or FLAC and stereo, which is funny for a mono podcast like mine, and added up to about 50GB of audio.) It’s straightforward enough, although there were a few annoying glitches that were still unresolved after 10 days of use. The key issues I encountered were as follows (there were others, but some were resolved at the time of writing so I’ve excluded those):

  1. Ratings and Reviews made a brief appearance then disappeared and still haven’t come back (I’m sure they will at some point)
  2. Not all show analytics time spans work (Past 60 days still doesn’t work, everything is blank)
  3. Archived shows appear in the Podcast drop-down list but not in the main overview, even when displaying ‘All’
  4. The order in which you save and upload audio files changes the episode date: if you create the episode meta-data, set the date, then upload the audio, the episode date defaults to today’s date. It does this AFTER you leave the page, so it’s not obvious, but if you upload the audio THEN set the date it’s fine.
  5. The audio upload hit/miss ratio for me was about 8 out of 10, meaning for every 10 episodes I uploaded, 2 got stuck. What do I mean? The episode WAV file uploads, completes and then the page shows the following:

Initial WAV Upload Attempt

…and the “Processing Audio” never actually finishes. Hoping this was just a backlog issue from high user demand, I uploaded everything and came back minutes, hours, then days later; finally, after waiting five days, I set about trying to unstick it.

Can’t Publish! Five Days of Waiting and seeing this I gave up waiting for it to resolve itself…

The obvious thing to try: select “Edit”, delete, then re-upload the audio. Simple enough; it keeps the meta-data intact (except the date, which I had to re-save after every audio re-upload). Then I waited another few days. Same result. Okay, so that didn’t work at all.

Next thing to try, re-create the entire episode again from scratch! So I did that for the 30 episodes that were stuck. Finally I see this (in some cases up to an hour later):

Blitz

And sure enough…

Blitz

Of course, that only worked for 25 episodes out of the 30 I uploaded a second time. I then had to wash-rinse-repeat for the 5 that had failed for a second time and repeated until they all worked. I’d hate to think about doing this on a low-bandwidth connection like I had a decade ago. Even at 40Mbps up it took a long time for the 2GB+ episodes of Pragmatic. The entire exercise has probably taken me 4 work-days of effort end to end, or about 32 hours of my life. There’s no way to delete the stuck episodes either so I now have a small collection of “Archived” non-episodes. Oh well…

Why John…Why?

I’ve read a lot of differing opinions from podcasters about Apple’s latest move and frankly I think the people most dismissive are those with significant existing revenue streams for their shows, or those that have already made their money and don’t need/want income for their show(s). You can reduce fees by using Stripe with your own website integration, by using Memberful or Patreon, or more recently by streaming Satoshis (very cool BTW), but all of these have barriers to entry for the podcast creator that cannot be ignored.

For me, I’m a geek and I love that stuff so sure, I’ll have a crack at that (looks over at the Raspberry Pi Lightning Node on desk with a nod) but not everyone is like me (probably a good thing on balance).

So far as I can tell, Apple Podcasts is currently the most fee-expensive way for podcasters to get support from listeners. It’s also a walled garden2, but then so are Patreon, Spotify/Anchor (if you’re eligible, and I’m not…for now) and Breaker, while building your own system with Memberful or Stripe website integration requires developer chops most don’t have, so for many it isn’t an option. By far the easiest (once you figure out Bitcoin/Lightning and set up your own node) is actually streaming Sats, but that knowledge ramp is tough and lots of people HATE Bitcoin. (That’s another, more controversial story.)

Apple Podcasts has one thing going for it: it’s going to be the quickest, easiest way for someone to support your show, coupled with the biggest audience in a single podcasting ecosystem. You can’t and shouldn’t ignore that, and that’s why I’m giving this a chance. The same risks apply to Apple as to all the other walled gardens (Patreon, Breaker, Spotify/Anchor etc): you could be kicked off the platform; they could slowly stop supporting it, sell it off or shut it down entirely; and if any of that happens, your supporters will mostly disappear with it. That’s why no-one should rely on it as the sole pathway for support.

It’s about being present and assessing after 6-12 months. If you’re not in it, then you might miss out on supporters who love your work and want to support it, and for whom this is the only way they’re comfortable doing that. So I’m giving this a shot; when it launches for beta testing I’ll be looking for any fans that want to give it a try so I can tweak anything that needs tweaking, and I’ll post publicly when it goes live for all. Hopefully all of my efforts (and Apple’s) are worth it for all concerned.

Time will tell. (It always does)


  1. Realistically, if every podcasting walled garden offers something like this (as Breaker did and Spotify is about to) then at some point podcasters have to draw a line of effort vs reward. Right now I’m uploading files to two places, and with Apple that will be a third. If I add Spotify, Facebook and Breaker then I’m up to triple my current effort to support five walled gardens. Eventually, if a platform isn’t popular then it’s not going to be worth that effort. Apple is worth considering because its platform is significant. The same won’t always be true for the “next walled garden”, whatever that may be. ↩︎

  2. To be crystal clear, I love walled gardens as in actual GARDENS, but I don’t mean those ones, I mean closed ecosystems aka ‘walled gardens’, before you say that. Actually no geek thought that, that’s just my sense of humour. Alas. ↩︎

Causality Transcriptions

Spurred on by Podcasting 2.0 and reflecting on my previous attempt at transcriptions, I thought it was time to have another crack at this. The initial attempts were basic TXT files that weren’t time-synced or proofed, made using a very old version of Dragon Dictate I had lying around.

This time around my focus is on making Causality as good as it possibly can be. From the PC2.0 guidelines:

SRT: The SRT format was designed for video captions but provides a suitable solution for podcast transcripts. The SRT format contains medium-fidelity timestamps and is a popular export option from transcription services. SRT transcripts used for podcasts should adhere to the following specifications.

Properties:

  • Max number of lines: 2
  • Max characters per line: 32
  • Speaker names (optional): Start a new card when the speaker changes. Include the speaker’s name, followed by a colon.
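
For illustration, a single caption card that meets those constraints might look like this (the timing, wording and speaker name are made up):

1
00:00:00,000 --> 00:00:04,500
John: Welcome to Causality,
a show about cause and effect.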

These are closely related to the defaults I found using Otter.ai, but that’s not free if you want time-synced SRT files. So my workflow uses YouTube (for something useful)…

STEPS:

  1. Upload the episode to YouTube as a video, converted directly from the original public audio file (I use Ferrite to create a video export). Previously I was using LibSyn’s YouTube destination, which also works.
  2. Wait a while. It can take anywhere from a few minutes to a few hours. Then go to your YouTube Studio, pick an episode, open Video Details, and under the section “Language, subtitles, and closed captions” select “English by YouTube (automatic)”, click the three vertical dots, then “Download” (NOTE BELOW). Alternatively select Subtitles, and next to DUPLICATE AND EDIT click the three dots, then Download, then .srt.
  3. If you can only get the SBV file: open it (untitled.sbv) in a raw text editor, select all, copy and paste it into DCMP’s website, click Convert, select all of the output, then create a new blank file (untitled.srt) and paste in the converted result.
  4. If you now have the SRT but don’t have the source video (e.g. if it was created by LibSyn automatically and you don’t have a local copy), download the converted YouTube video using the embed link for the episode via SaveFrom, or use a YouTube downloader if you prefer.
  5. Download the Video in low-res and put all into a single directory.
  6. I’m using Subtitle Studio; it’s not free, but it was the easiest for me to get my head around and it works for me. Open the SRT file you just created/downloaded, then drag the video for the episode in question onto the new window.
  7. Visually skim and fix obvious errors before you press play (Title Case, ends of Sentences, words for numbers, MY NAME!)
  8. Export the SRT file and add to the website and RSS Feed!

NOTE: In 1 case out of 46 uploads it thought I was speaking in Russian for some reason. The resulting Russian transcription was funny but not useful; for all the others it correctly transcribed the audio into English and the quality of the conversion is quite good.

I’ve also flattened the SRT into a fixed text file, which is useful for full text search. The process for that takes a few steps:

  1. Upload the file to Happy Scribe and select “Text File” as the output format.
  2. Open the downloaded file in a text editor, select all the text, then go to Tool Slick’s line merge tool, paste the text into the Input Text box, click “Join Lines”, then select everything in the Output Joined Lines box and paste it over what you had in your local text file.
  3. Rename the file and add to the website and RSS Feed!
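
As an aside, if you’d rather not round-trip through those websites, the same flattening can be roughly approximated locally with standard command-line tools; a sketch (filename assumed) that strips the SRT indices and timestamps and joins the remaining text onto one line:

# Hypothetical local alternative: drop index/timestamp lines, then join the text
grep -vE '^[0-9]+[[:space:]]*$|-->' episode.srt | tr -s '\r\n' ' ' > episode.txt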

As of publishing I’ve only done the SRT and TXT subtitles for two episodes, but I will continue to churn my way through them as time permits until they’re all done.

Of course you could save yourself a bit of effort and use Otter, and save yourself even more effort by not proof-reading the automatically converted text. If I wasn’t so much of a stickler for detail I’d probably do that myself, but it’s that refusal to just accept that which makes me the Engineer I am, I suppose.

Enjoy!

Building A Synology Hugo Builder

I’ve been using GoHugo (Hugo) as a static site generator on all of my sites for about three years now and I love its speed and its flexibility. That said, a recent policy change at a VPS host had me reassessing my options, and now that I have my own Synology with Docker capability I was looking for a way to go ultra-slim and run my own builder, using a lightweight (read: VERY low spec) OpenVZ VPS as the Nginx front-end web server behind a CDN like CloudFlare. Previously I’d used Netlify, but their rebuild limitations on the free tier were getting a touch much.

I regularly create content that I want to set to release automatically in the future at a set time and date. In order to accomplish this, Hugo needs to rebuild the site periodically in the background so that when new pages are ready to go live, they are automatically built and available to the world to see. When I’m debugging or writing articles I’ll run the local environment on my Macbook Pro, and only when I’m happy with the final result will I push to the Git repo. Hence I need a set-and-forget automatic build environment. I’ve done this on spare machines (of which I currently have none), on a beefier VPS using cron jobs and scripts, and on my Synology as a virtual machine using the same (which wasn’t reliable), before settling on this design.

Requirements

The VPS needed to be capable of serving Nginx from folders that are RSync’d from the Synology. I searched through LowEnd Stock looking for deals with 256MB of RAM and SSD storage at a cheap annual rate, and at the time got the “Special Mini Sailor OpenVZ SSD” for $6 USD/yr, which came with that amount of RAM and 10GB of SSD space, running CentOS7. (Note: These have sold out but there’s plenty of others around that price range at time of writing)

Setting up RSync, Nginx, SSH etc. is beyond the scope of this article; however it is relatively straightforward. Some guides here might be helpful if you’re interested.

My sites are controlled via a Git workflow, which is quite common for managing static sites. In my case I’ve used GitHub and GitLab, and most recently settled on the lightweight and solid Gitea, which I now also self-host on my Synology. Any of the above would work fine, but having the repos on the same device makes the Git clone very fast; you can adjust that step if you’re using an external hosting platform.

I also had three sites I wanted to build from the same platform. The requirements roughly were:

  • Must stay within Synology DSM Docker environment (no hacking, no portainer which means DroneCI is out)
  • Must use all self-hosted, owned docker/system environment
  • A single docker image to build multiple websites
  • Support error logging and notifications on build errors
  • Must be lightweight
  • Must be an updated/recent/current docker image of Hugo

The Docker Image And Folders

I struggled for a while with different images because I needed one that included RSync, Git and Hugo, and allowed me to modify the startup script. Some of the Hugo build images out there are quite restricted to a set workflow, like running up the local server to serve from memory, or they assume you have a single website. The XdevBase / HugoBuilder was perfect for what I needed. Preinstalled it has:

  • rsync
  • git
  • Hugo (Obviously)

Search for “xdevbase” in the Docker Registry and you should find it. Select it and download the latest; at the time of writing it’s very lightweight, taking up only 84MB.

XDevBase

After this, open “File Station” and start building the supporting folder structure you’ll need. I had three websites: TechDistortion, The Engineered Network and SlipApps, hence I created three folders. Firstly, under the Docker folder (which you should already have if you’ve played with Synology Docker before), create a sub-folder for Hugo; I imaginatively called mine “gohugo”. Under that I created a sub-folder for each site, plus one for my logs.

Folders

Under each website folder I also created two more folders: “src” for the website source I’ll be checking out of Gitea, and “output” for the final publicly generated Hugo website output from the generator.
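
For my three sites (abbreviated as ten, td and slip throughout the scripts that follow) the resulting structure under the shared Docker folder looks roughly like this:

docker/
  gohugo/
    logs/
    ten/
      output/
      src/
    td/
      output/
      src/
    slip/
      output/
      src/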

Scripts

I spent a fair amount of time perfecting the scripts below. The idea was to have an over-arching script that builds each site one after the other in a never-ending loop, with a mandatory wait time between builds. If you attempt to run independent containers each on its own timer and any other task runs on the Synology, the two or three independently running containers will eventually overlap, leading to an overload condition the Synology will not recover from. The only viable option is to serialise the builds, and synchronising those builds is easiest using a single container as I have.

Using the “Text Editor” on the Synology (or your text editor of choice, copying the files across to the correct folder afterwards), create a main build.sh file and as many build-xyz.sh files as you have sites to build.

#!/bin/sh
# Main build.sh

# Stash the current time and date in the log file and note the start of the docker
current_time=$(date)
echo "$current_time :: GoHugo Docker Startup" >> /root/logs/main-build-log.txt

while :
do
	current_time=$(date)
	echo "$current_time :: TEN Build Called" >> /root/logs/main-build-log.txt
	/root/build-ten.sh
	current_time=$(date)
	echo "$current_time :: TEN Build Complete, Sleeping" >> /root/logs/main-build-log.txt
	sleep 5m

	current_time=$(date)
	echo "$current_time :: TD Build Called" >> /root/logs/main-build-log.txt
	/root/build-td.sh
	current_time=$(date)
	echo "$current_time :: TD Build Complete, Sleeping" >> /root/logs/main-build-log.txt
	sleep 5m

	current_time=$(date)
	echo "$current_time :: SLIP Build Called" >> /root/logs/main-build-log.txt
	/root/build-slip.sh
	current_time=$(date)
	echo "$current_time :: SLIP Build Complete, Sleeping" >> /root/logs/main-build-log.txt
	sleep 5m
done

current_time=$(date)
echo "$current_time :: GoHugo Docker Build Loop Ungraceful Exit" >> /root/logs/main-build-log.txt
curl -s -F "token=xxxthisisatokenxxx" -F "user=xxxthisisauserxxx1" -F "title=Hugo Site Builds" -F "message=\"Ungraceful Exit from Build Loop\"" https://api.pushover.net/1/messages.json

# When debugging is handy to jump out into the Shell, but once it's working okay, comment this out:
#sh

This will create a main build log file and calls each sub-script in sequence. If it ever jumps out of the loop, I’ve set up a Pushover API notification to let me know.

Since all three sub-scripts are effectively identical except for the directories and repositories for each, The Engineered Network script follows:

#!/bin/sh

# BUILD The Engineered Network website: build-ten.sh
# Set Time Stamp of this build
current_time=$(date)
echo "$current_time :: TEN Build Started" >> /root/logs/ten-build-log.txt

rm -rf /ten/src/* /ten/src/.* 2> /dev/null
current_time=$(date)
if [[ -z "$(ls -A /ten/src)" ]];
then
	echo "$current_time :: Repository (TEN) successfully cleared." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Repository (TEN) not cleared." >> /root/logs/ten-build-log.txt
fi

# The following is easy since my Gitea repos are on the same device. You could also set this up to Clone from an external repo.
git --git-dir /ten/src/ clone /repos/engineered.git /ten/src/ --quiet
success=$?
current_time=$(date)
if [[ $success -eq 0 ]];
then
	echo "$current_time :: Repository (TEN) successfully cloned." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Repository (TEN) not cloned." >> /root/logs/ten-build-log.txt
fi

rm -rf /ten/output/* /ten/output/.* 2> /dev/null
current_time=$(date)
if [[ -z "$(ls -A /ten/output)" ]];
then
	echo "$current_time :: Site (TEN) successfully cleared." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Site (TEN) not cleared." >> /root/logs/ten-build-log.txt
fi

hugo -s /ten/src/ -d /ten/output/ -b "https://engineered.network" --quiet
success=$?
current_time=$(date)
if [[ $success -eq 0 ]];
then
	echo "$current_time :: Site (TEN) successfully generated." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Site (TEN) not generated." >> /root/logs/ten-build-log.txt
fi

rsync -arvz --quiet -e 'ssh -p 22' --delete /ten/output/ [email protected]:/var/www/html/engineered
success=$?
current_time=$(date)
if [[ $success -eq 0 ]];
then
	echo "$current_time :: Site (TEN) successfully synchronised." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Site (TEN) not synchronised." >> /root/logs/ten-build-log.txt
fi

current_time=$(date)
echo "$current_time :: TEN Build Ended" >> /root/logs/ten-build-log.txt

The above script can be broken down into several steps as follows:

  1. Clear the Hugo Source directory
  2. Pull the current released Source code from the Git repo
  3. Clear the Hugo Output directory
  4. Hugo generate the Output of the website
  5. RSync the output to the remote VPS

Each step has a pass/fail check and logs the result either way.

Your SSH Key

For this to work you need to confirm that RSync works and that you can push to the remote VPS securely. For that, extract the id_rsa key (preferably generate a fresh key-pair) and place it in the /docker/gohugo/ folder on the Synology, ready for the next step. As they say it should “just work”, but you can test whether it does once your container is running. Open the GoHugo container, go to the Terminal tab and Create–>Launch with command “sh”, then select the “sh” terminal window. In there enter:

ssh [email protected] -p22

That should log you in without a password, securely via ssh. Once it’s working you can exit that terminal and smile. If not, you’ll need to dig into the SSH keys which is beyond the scope of this article.
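
If you do need to generate a fresh key-pair, a typical OpenSSH approach is sketched below; run it from any machine with OpenSSH, install the public half on the VPS, then copy the private id_rsa file into /docker/gohugo/ (the host name and user are placeholders):

# Hypothetical key setup for the builder; the resulting id_rsa goes into /docker/gohugo/
ssh-keygen -t rsa -b 4096 -f ./id_rsa -N ""
ssh-copy-id -i ./id_rsa.pub -p 22 user@yourvps.example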

Gitea Repo

This is specific to my use case. You could clone your repo from any other location, but for me it was quicker, easier and simpler to map my repo from the Gitea Docker folder location. If you’re like me and running your own Gitea on the Synology, you’ll find that repo directory under the /docker/gitea sub-directories at …data/git/repositories/ and that’s it. Of course many won’t be doing that, but setting up external Git cloning isn’t too difficult; it’s just beyond the scope of this article.
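
If your repo does live elsewhere, swapping the local clone in the build script for a shallow clone over HTTPS (the URL is a placeholder) is one simple option:

# Hypothetical external clone, replacing the local Gitea path in build-ten.sh
git clone --depth 1 --quiet https://git.example.com/you/engineered.git /ten/src/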

Configuring The Docker Container

Under the Docker –> Image section, select the downloaded image then “Launch” it, set the Container Name to “gohugo” (or whatever name you want…doesn’t matter) then configure the Advanced Settings as follows:

  • Enable auto-restart: Checked
  • Volume: (See below)
  • Network: Leave it as bridge is fine
  • Port Settings: Since I’m using this as a builder I don’t care about web-server functionality so I left this at Auto and never use that feature
  • Links: Leave this empty
  • Environment –> Command: /root/build.sh (Really important to set this start-up command here and now, since thanks to Synology’s DSM Docker implementation, you can’t change this after the Docker container has been created without destroying and recreating the entire docker container!)

There are a lot of little things to add here to make this work for all the sites. In future, if you want to add more sites, stopping the container, adding folders and modifying the scripts is straightforward.

Add the following Files: (where xxx, yyy, zzz are the script names representing the sites we created above, and aaa is your local repo folder name)

  • docker/gohugo/build-xxx.sh map to /root/build-xxx.sh (Read-Only)
  • docker/gohugo/build-yyy.sh map to /root/build-yyy.sh (Read-Only)
  • docker/gohugo/build-zzz.sh map to /root/build-zzz.sh (Read-Only)
  • docker/gohugo/build.sh map to /root/build.sh
  • docker/gohugo/id_rsa map to /root/.ssh/id_rsa (Read-Only)
  • docker/gitea/data/git/repositories/aaa map to /repos (Read-Only) (only needed for a locally hosted Gitea repo)

Add the following Folders:

  • docker/gohugo/xxx/output map to /xxx/output
  • docker/gohugo/xxx/src map to /xxx/src
  • docker/gohugo/yyy/output map to /yyy/output
  • docker/gohugo/yyy/src map to /yyy/src
  • docker/gohugo/zzz/output map to /zzz/output
  • docker/gohugo/zzz/src map to /zzz/src
  • docker/gohugo/logs map to /root/logs

When finished and fully built the Volumes will look something like this:

Volumes

Apply the Advanced Settings then Next and select “Run this container after the wizard is finished” then Apply and away we go.

Of course, you can put whatever folder structure and naming you like, but I like keeping my abbreviations consistent and brief for easier coding and fault-finding. Feel free to use artistic licence as you please…
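
If it helps to see the whole mapping in one place, a rough docker run equivalent of the DSM wizard settings would be something like this (the image tag and host volume paths are assumptions; substitute the image you actually downloaded and your own volume):

# Hypothetical CLI equivalent of the DSM container settings above (for one site, 'ten')
docker run -d --name gohugo --restart unless-stopped \
  -v /volume1/docker/gohugo/build.sh:/root/build.sh \
  -v /volume1/docker/gohugo/build-ten.sh:/root/build-ten.sh:ro \
  -v /volume1/docker/gohugo/id_rsa:/root/.ssh/id_rsa:ro \
  -v /volume1/docker/gohugo/logs:/root/logs \
  -v /volume1/docker/gohugo/ten/src:/ten/src \
  -v /volume1/docker/gohugo/ten/output:/ten/output \
  -v /volume1/docker/gitea/data/git/repositories/aaa:/repos:ro \
  <xdevbase-hugo-image> /root/build.sh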

Away We Go!

At this point the container should be periodically regenerating your Hugo websites like clockwork. I’ve had this setup running for many weeks now without a single hiccup, and on rebooting it comes back to life and just picks up and runs without any issues.

As a final bonus you can also configure the Synology Web Server to point at each Output directory and double-check what’s being posted live if you want to.

Enjoy your automated Hugo build environment that you completely control :)