My goals were:
I’m the proud owner of a Synology, it can run Docker, and you can run Bitcoin and Lightning in Docker containers. Okay then…this should be easy enough, right?
BitCoin Node Take 1
I set up the kylemanna/bitcoind container on my Synology and started it syncing to the Mainnet blockchain. About a week later I was sitting at 18% complete, averaging 1.5% per day and dropping. Reading up on this, the problem was two-fold: validating the blockchain is a CPU and HDD/SSD intensive task and my Synology was short on both. I threw more RAM at it (3GB out of the 4GB it had) with no difference in performance, set the CPU restrictions to give the container the most performance possible with no difference, and basically ran out of levers to pull.
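For context, getting that first node running is essentially a single command. This is only a sketch of the sort of invocation I mean: the container name and the Synology volume path are assumptions, and 8333/8332 are just Bitcoin’s default peer-to-peer and RPC ports.

# sketch only: persistent data folder on the Synology, default Bitcoin ports exposed
docker run -d --name bitcoind \
  -v /volume1/docker/bitcoind:/bitcoin \
  -p 8333:8333 -p 127.0.0.1:8332:8332 \
  kylemanna/bitcoind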
I then learned it’s possible to copy a blockchain from one device to another, and the Raspberry Pis sold as ready-made private nodes come with the blockchain pre-synced (up to the point they’re shipped) so they don’t take too long to catch up to the front of the chain. So I downloaded Bitcoin Core for macOS and set it running. After two days it had finished (much better) and I copied the directories to the Synology, only to find that Bitcoin Core was set to “prune” the blockchain after validation, meaning the entire blockchain was no longer stored on my Mac, and the docker container would need to start over.
Ugh.
So I disabled pruning on the Mac and started again. The blockchain was about 300GB (so I was told) and with the 512GB SSD in my MBP I thought that would be enough, but alas no. As the free space diminished at a rapid rate of knots I madly off-loaded and deleted what I could, finishing with about 2GB to spare; the entire blockchain and associated files weighed in at 367GB.
Transferring them to the Synology and firing up the container…it worked! Although it had to revalidate the 6 most recent blocks (taking about 26 minutes EVERY time the Bitcoin container restarted) it sprang to life nicely. I had a Bitcoin node up and running!
Lightning Node Take 1
There are several docker containers to choose from, the two most popular seemed to be LND and c-Lightning. Without understanding the differences I went with the container that was said to be more lightweight and work better on a Synology: c-Lightning.
Later I was to discover that many plugins, applications, GUIs and relays (Sphinx for example) only work with LND and require LND Macaroons, which c-Lightning doesn’t support. Not only that, design decisions by the c-Lightning developers to only permit single connections between nodes make building liquidity problematic when you’re starting out. (More on that later…)
After messing around with RPC for the c-Lightning container to communicate with the kylemanna/bitcoind container, I realised that I needed ZMQ support, since RPC username and password authentication was being phased out in favour of cookie (token) authentication through a shared folder.
UGH
I was so frustrated at losing 26 minutes every time I had to change a single setting in the Bitcoin container, and in an incident overnight both containers crashed, didn’t restart, and then took over a day to catch up to the blockchain again. At this point I had more or less decided to give up on it.
SSD or don’t bother
Interestingly my oldest son pointed out that all of the kits for sale used SSDs for the Bitcoin data storage - even the cheapest versions. A bit more research and it turns out that crunching through the blockchain is less of a CPU-intensive exercise and more of a data store read/write-intensive exercise. I had a 512GB Samsung USB-3 SSD lying around and in a fit of insanity decided to try connecting it to the Synology’s rear USB port, shift the entire content of the docker shared folders (that contained all of the blocks and indexes) to that SSD and try it again.
Oh My God it was like night and day.
Both docker containers started, synced and were running in minutes. Suddenly I was interested again!
Bitcoin Node Take 2
With renewed interest I returned to my previous headache - linking the docker containers properly. The LNCM/Bitcoind container had precompiled support for ZMQ and it was surprisingly easy to set up the shared docker folder to expose the authentication token the c-Lightning container needed. It started up referencing the same docker folder (now mounted on the SSD) and honestly, seemed to “just work” straight up. So far so good.
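For anyone attempting the same linkage, the bitcoind side boils down to a couple of lines of configuration. This is only a minimal sketch: the ZMQ port numbers are the commonly used defaults rather than anything mandatory, the shared-folder arrangement is an assumption, and the .cookie file bitcoind writes into its data directory is what the Lightning container reads for authentication.

# bitcoin.conf (in the shared bitcoind data folder) - sketch only
server=1
# publish raw blocks and transactions over ZMQ for the Lightning node
zmqpubrawblock=tcp://0.0.0.0:28332
zmqpubrawtx=tcp://0.0.0.0:28333
# cookie-based auth: bitcoind writes a .cookie file into this data directory;
# mount the same folder into the Lightning container so it can read that file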
Lightning Node Take 2
This time I went for the better-supported LND, picked a container by Guggero that was quite popular, and spun it up rather quickly. My funds on my old c-Lightning node would simply have to remain trapped until I could figure out how to recover them in future.
Host-Network
The instructions I had read all related to TestNet, and advised not to use money you weren’t prepared to lose.
My node wasn’t published on an external address, so I needed to recreate and resync the containers to open the required ports, then port-forward to expose Lightning’s external port 9735 and allow incoming connections.
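In container terms that port mapping is a one-liner. The sketch below is purely illustrative: the image name and volume path are placeholders (not the actual image I used), /root/.lnd is LND’s default data directory, and the matching 9735 forward still has to be added on the router.

# sketch only: publish Lightning's peer-to-peer port from the container
docker run -d --name lnd \
  -v /volumeUSB1/usbshare/lnd:/root/.lnd \
  -p 9735:9735 \
  some-lnd-image:placeholder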
How Much Do I Need?
Everything you do requires sats. It costs sats to fund a channel. It costs sats to close a channel. I couldn’t find a way to determine the minimum amount of sats needed to open a channel without first opening one. I only had a limited number of sats to play with so I had to choose carefully. Most channels wanted 10,000 or 20,000 but I managed to find a few that only required 1,000. The advice was to open as many channels as you could and then make some transactions; your inbound liquidity will improve as others in the network transact.
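For the record, opening a channel from the command line looks roughly like the following. This is a sketch using LND’s lncli; the pubkey, host and amount are placeholders, and the on-chain fee to confirm the funding transaction comes on top of whatever you commit.

# connect to the peer first (pubkey@host:port is a placeholder)
lncli connect 02abc...def@203.0.113.10:9735
# then fund a channel with 20,000 sats of local balance
lncli openchannel --node_key 02abc...def --local_amt 20000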
Services exist to help build that inbound liquidity, without which, you can’t accept payments from anyone else.
Anything On-Chain Is Slow and Expensive
For a technology that’s supposed to be reducing fees overall, Lightning seems to cost you a bit up-front to get into it, and anytime you want to shuffle things around, it costs SATS.
I initially bought into it wishing to fund my own node and try for that oft-touted “self-sovereignty” of Bitcoin, but to achieve that you have to invest a lot of money to get started.
Playing with BitCoin today feels like programming COBOL for a bank in the 80s
Did you know that, as of 2017, COBOL was behind nearly half of all financial transactions? Yes, and the world is gradually ripping it out (thankfully).
IDENTIFICATION DIVISION.
PROGRAM-ID. CONDITIONALS.
DATA DIVISION.
WORKING-STORAGE SECTION.
*> I’m not joking, Lightning-cli and Bitcoin-cli make me think I’m programming for a bank
01 NUM1 SATSJOHNHAS 0(0).
PROCEDURE DIVISION.
MOVE 20000 TO NUM1.
IF NUM1 > 0 THEN
DISPLAY ‘YAY I HAZ 20000 SATS!’
END-IF
*> I’d like to make all of my transactions using the command line, just like when I do normal banking…oh wait…
EVALUATE TRUE
WHEN SATS = 0
DISPLAY ‘NO MORE SATS NOW :(’
END-EVALUATE.
STOP RUN.
There is no doubt there’s a bit of geek-elitism amongst many of the people involved with Bitcoin. Comments like “Don’t use a GUI, to understand it you MUST use the command line…” remind me of those that whined about the Macintosh in 1984 having a GUI. A “real” computer used DOS. OMFG seriously?
A real financial system is as painless for the user as possible. And frankly, setting up your own node that you control is not easy, it’s not intuitive, and it will make you think about things like inbound liquidity that you never thought you’d need to know about, since you’re a geek - not an investment banker.
I suppose the point is that owning your own bank means you have to learn a bit about how a bank needs to work and that takes time and effort. If you’re happy to just pay someone else to build and operate a node for you then that’s fine and that’s just what you’re doing today with any bank account.
I spent weeks learning just how much I don’t want to be my own bank - thank you very much.
Synology as a Node Verdict
Docker was not reliable enough either. In some instances I would modify a single container’s configuration file and restart the container only to get “Docker API failed”. Sometimes I could recover by picking the container I thought had caused the failure (most likely the one I modified, but not always), clearing the container and restarting it. Other times I had to completely reboot the Synology to recover it, and sometimes I had to do both before Docker would restart. Every restart of the Bitcoin container cost another half an hour, after which the container would “go backwards” to 250 blocks behind, taking a further 24-48 hours of resynchronising with the blockchain before the Lightning container could resynchronise with it. All the while the node is offline.
Money is Gone
The truth is, in an attempt to get incoming channel opens working, I flicked between Bridge and Host networking and back again, opened different ports, hit Socks failed errors, and finally gave up when, after many hours, the LND container just wouldn’t connect via ZMQ any more.
And with that my $100 AUD investment is stuck on two virtual wallets.
To solve the problems above there are a few key angles being tackled: Search, Discoverability and Monetisation.
I’d like to add a fourth key angle to that, one which at the time I didn’t think should be listed as its own. However, having listened to Episodes 16 and 17 and their intention to add XML tags for IRC/Chat Room integration, I think I should add the fourth key angle: Interactivity.
Interactivity
The problem with broadcast historically is that audience participation is difficult given the tools and effort required. Pick up the phone, make a call - you need a big incentive (think cash prizes, competitions, discounts, something!) or audiences just don’t participate. Broadcast is less personal, and with less of a personal connection the desire for listeners to engage is much weaker.
Podcasting, as an internet-first application and a far more personal medium, sets the bar differently, and we can think of real-time feedback methods as verbal, via a dial-in/patch-through to the live show, or written, via messaging like a chat room. There are also non-real-time methods, predominantly webforms and email. With contact emails already in the RSS XML specification, adding a webform submission entry might be of some use (perhaps < podcast:contactform > with a url=“https://contact.me/form”), but real-time is far more interesting.
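For completeness, if such a non-real-time tag ever existed I imagine it would be a single feed-level element along these lines; the tag and its attribute are purely hypothetical, floated above, and not part of the namespace:

<!-- hypothetical, not in the Podcasting 2.0 namespace -->
<podcast:contactform url="https://contact.me/form" />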
Real Time Interactivity
In podcasting initially (like so many internet-first technology applications) geeks that understood how it worked led the way. That is to say, with podcasts there was originally a way for a percentage of the listeners to use IRC as a chat room (Pragmatic did this for the better part of a year in 2014, as did other far more popular shows like ATP, Back To Work etc.) to get real-time listener interaction during a podcast recording, albeit with a slight delay between audio out and listener response in the chat room.
YouTube introduced live streaming and live chat with playback that integrated the chat room with the video content to lower the barrier of entry for their platform. For equivalent podcast functionality to go beyond the geek-% of the podcast listeners, podcast clients will need to do the same. In order for podcast clients to be pressured to support it, standardisation of the XML tags and backend infrastructure is a must.
The problem with interactivity is that whilst it starts with the tag, it must end with the client applications otherwise only the geek-% of listeners will use it as they do now.
From my own experiences with live chat rooms during my own and other podcasts, the proportion of people able to tune in to a live show and be present (lots of people just “sit” in a channel and aren’t actually present) is about 1-2% of your overall downloads, and that’s for a technical podcast with a high geek-%. I’ve also found there are timezone effects, such that podcasting live at different times of the day or night directly impacts those percentages even further (it’s 1am somewhere in the world right now, so if your listeners live in that timezone chances are they won’t be listening live).
The final concern is that chat rooms only work for a certain kind of podcast. For me, it could only potentially work with Pragmatic, and in my experience I wanted Pragmatic to be focussed and chat rooms turned out to be a huge distraction. Over and over again my listeners reiterated that one of the main attractions of podcasts was the ability to time-shift and listen to them when they wanted to listen to them. Being live, to them, was a minus not a plus.
For these reasons I don’t see that this kind of interactivity will uplift the podcasting ecosystem for the vast majority of podcasters, though it’s certainly nice to have and attempt to standardise.
Crowd-sourced Chapters
Previously I wrote:
The interesting opportunity that Adam puts forward with chapters is he wants the audience to be able to participate with crowd-sourced chapters as a new vector of audience participation and interaction with podcast creators.
Whilst I looked at this last time from the practical standpoint of “how would I as a podcaster use this?”, concluding that I wouldn’t use it since I’m a self-confessed control-freak, I didn’t fully appreciate the angle of audience interaction. I think for podcasts that have a truly significant audience, with listeners that really want to help out (but can’t help financially), this feature provides a potential avenue to assist in a non-financial way, which is a great idea.
Crowd-source Everything?
(Except recording the show!)
From pre-production to post-production, any task in the podcast creation chain could be outsourced to an extent. The pre-production dilemma could be addressed with a feed-level XML tag < podcast:proposedtopics > pointing to a planned topic list (popular podcasts currently use Twitter #Tags like #AskTheBobbyMcBobShow), to cut centralised platforms like Twitter out of the creation chain in the long term. Again, only useful for certain kinds of shows, but it could also include a URL link to a shared document (probably a JSON file) and an episode index reference (i.e. the currently released episode is 85, proposed topics are for Episode 86; it could also be an array covering multiple episodes).
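Again, this tag is purely my speculation and nothing like it exists in the namespace today, but sketched out, a feed-level element of that shape might look something like this (the attribute names are illustrative only):

<!-- hypothetical feed-level tag, not part of any published phase -->
<podcast:proposedtopics url="https://example.com/topics-86.json" episode="86" />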
The post-production dilemma generally consists of show notes, chapters (solution in progress) and audio editing. Perhaps a similar system to crowd-sourced chapters could be used for show notes that could include useful/relevant links for the current episode that aren’t/can’t be easily embedded as Chapter markers.
In either case there’s no reason why it couldn’t work the same way as crowd-sourced chapter markers. The podcaster could also have (with sufficient privileges) the administrative access to add/modify/remove content from either of these, with guests also having read/write access. With an appropriate client tool this would then eliminate the plethora of different methods in use today: shared Google documents, quite popular with many podcasters today, will not be around indefinitely.
All In One App?
Of course the more features we pile into the podcasting client app, the more difficult it becomes to write and maintain. Previously an excellent programmer, podcaster and audiophile like Marco Arment could create Overcast. With Lightning network integration, plus crowd-sourced chapters, shared document support (notes etc) and a text chat client (IRC), the application quickly becomes much heavier and more complex, with fewer developers having the knowledge in every dimension to create an all-in-one client app.
The need for better frameworks to make feature integration easier for developers is obvious. There may well be a need for two classes of app, or at least two views - the listener view and the podcaster view - or simply multiple apps for different purposes. Either way it’s interesting to see where the Tag + Use Case + Tool-chain can lead us.
Interestingly when I visited Houston in late 2019, pre-COVID19, my long-time podfriend Vic Hudson suggested I catch up with Adam as he lived nearby, referring to him as the “Podfather.” I had no idea who Adam was at that point and thought nothing of it at the time, and although I caught up with Manton Reece at the IndieWeb Meetup in Austin I ran out of time for much else. Since then a lot has happened and I’ve come across Podcasting 2.0, and thus began my somewhat belated self-education in the podcasting history that predates my own involvement, of which I had clearly been ignorant until recently.
In the first episode of Podcasting 2.0, “Episode 1: We are upgrading podcasting” on the 29th of August, 2020, at about 17 minutes in, Adam tells the story of when Apple and Steve Jobs wooed him over podcasting, as he handed over his own podcast index as it stood at the time to Apple as the new custodians. He refers to Steve Jobs' appearance at D3 where, at 17:45, Steve defined podcasting as iPod + Broadcasting = Podcasting, further describing it as “Wayne’s World for Podcasting”, and even played a clip of Adam Curry complaining about the unreliability of his Mac.
The approximate turn of events thereafter: Adam hands over the podcast index to Apple; Apple builds podcasting into iTunes and their iPod line-up and becomes the largest podcast index; many other services launch, but indies and small networks dominate podcasting for the most part, and for the longest time Apple didn’t do much at all to extend podcasting. Apple added a few RSS feed namespace tags here and there but did not attempt to monetise podcasting, even as many others came into the podcasting space, bringing big names from conventional media and, with them, many companies starting or attempting to convert podcast content into something that wasn’t as open as it had been, with “exclusive” pay-for content.
What Do I Mean About Open?
To be a podcast by its original definition it must have an RSS feed that can be hosted on any machine serving pages to the internet, readable by any other machine on the internet, with an audio enclosure tag referring to an audio file that can be streamed or downloaded by anyone. A subscription podcast requires login credentials of some kind, usually associated with a payment scheme, in order to listen to the audio of those episodes. Some people draw the line at free = open (and nothing else), others are happy with the occasional authenticated feed that’s still available on any platform/player as that still presents an ‘open’ choice, but much further beyond that (won’t play in any player, not everyone can find/get the audio) and things start becoming a bit more closed.
Due to their open nature, tracking of podcast listeners, demographics and such is difficult. Whilst advertisers see this as a minus, most privacy conscious listeners see this as a plus.
Back To The History Lesson
With big money and big names a new kind of podcast emerged, one behind a paywall with features and functionality that other podcast platforms didn’t or couldn’t have with a traditional open podcast using current namespace tags. With platforms scaling and big money flowing into podcasting, it effectively brought down the average ad-revenue across the board in podcasting and introduced more self-censorship and forced-censorship of content that previously was freely open.
With Spotify and Amazon gaining traction, more multi-million dollar deals and a lack of action from Apple, it’s become quite clear to me that podcasting as I’ve known it in the past decade is in a battle with more traditional, radio-type production companies with money from their traditional radio, movie and music businesses behind them. The larger the more closed podcast eco-systems become, the harder it then becomes for those that aren’t anointed by those companies as being worthy, to be heard amongst them.
Instead of spending time and energy on highly targeted advertising, carefully selecting shows (and podcasters) individually to attract their target demographic, advertisers start dealing only with the bigger companies in the space, since they want demographics from user tracking. With the bigger companies claiming a large slice of the audience, they then over-sell their ad-inventory, leading to lower-value DAI and less-personal advertising, further driving down ad-revenues.
(Is this starting to sound like radio yet? I thought podcasting was supposed to get us away from that…)
Finally another issue emerged: that of controversial content. What one person finds controversial another person finds acceptable. With many countries around the world, each with different laws regarding freedom of speech, and with people of many different belief systems, having a way to censor content within a fundamentally open ecosystem (albeit with partly centralised search) was a lever that would inevitably be pulled at some point.
As such many podcasts have been removed from different indexes/directories for different reasons, some more valid than others perhaps, however that is a subjective measure and one I don’t wish to debate here. If podcasts are no longer open then their corporate controller can even more easily remove them in part or in whole as they control both the search and the feed.
To solve the problems above there are a few key angles being tackled: Search, Discoverability and Monetisation.
Search
Quick and easy: the Podcast Index is a complete list of every podcast currently available that’s been submitted. It isn’t censored, and it is operated and maintained by the support of its users. As it’s independent, there is no hierarchy to pressure the removal of content from it.
Monetisation
The concept here is ingenious but requires a leap of faith (of a sort): Bitcoin, or rather Lightning, which is a micro-transaction layer that sits on top of Bitcoin. If you are already au fait with having a Bitcoin node, Lightning node and wallet then there’s nothing for me to add, but the interesting concept is this: by publishing your node address in your podcast RSS feed (using the podcast:value tag) a compliant podcast player can then optionally use the KeySend Lightning command to send a periodic payment “as you listen.” It’s voluntary but it’s seamless.
The podcaster sets a suggested rate in Sats (Satoshis) per minute of podcast played (recorded minute - not played minute if you’re listening at 2x, and the rate is adjustable by the listener) to directly compensate the podcast creator for their work. You can also “Boost” and provide one-off payments via a similar mechanism to support your podcast creator.
The transactions are so small and carry such minimal transaction fees that effectively the entire amount is transferred from listener to podcaster without any significant middle-person skimming off the top in a manner that both reflects the value in time listened vs created and without relying on a single piece of centralised infrastructure.
Beyond this the podcaster can choose additional splits, so the sats a listener streams can also go to their co-hosts, to the podcast player app-developer and more. Imagine being able to compensate audio editors, artwork contributors and hosting providers, all directly and fairly, based on listeners actually consuming the content in real time.
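To make that concrete, here is roughly what the value block looks like in a feed as I understand the draft spec; the node addresses, split percentages and suggested amount below are placeholders rather than real values:

<podcast:value type="lightning" method="keysend" suggested="0.00000005000">
  <podcast:valueRecipient name="Host" type="node" address="<host node pubkey>" split="80" />
  <podcast:valueRecipient name="Co-host" type="node" address="<co-host node pubkey>" split="15" />
  <podcast:valueRecipient name="App developer" type="node" address="<app node pubkey>" split="5" />
</podcast:value>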
This allows a more balanced value distribution and protects against the fragility of the current non-advertising podcast-funding model: a support platform like Patreon (oh, I mean Memberful, but that’s actually Patreon). When Patreon goes out of business all of those supportive audiences will be partly crippled as their creators scramble to migrate their users to an alternative. The question is: will it be another centralised platform or service, or a decentralised system like this?
That’s what’s so appealing about the Podcasting 2.0 proposition: it’s future-proof, balanced and sensible, and it avoids the centralisation problems that have stifled creativity in the music and radio industries in the past. There’s only one problem and it’s a rather big one: the lack of adoption of Lightning and Bitcoin. Only Sphinx supports podcast KeySend at the time of publishing, though adding more client applications to that list of one is an easier problem to solve than listener mass adoption of Bitcoin/Lightning.
Adam is betting that podcasting might be the gateway to mass adoption of Bitcoin and Lightning, and if he’s going to have a chance of making that bet self-fulfilling, he will need the word spread far and wide to drive that outcome.
As of the time of writing I have created a Causality Sphinx Tribe for those that wish to contribute by listening or via Boosting. It’s already had a good response and I’m grateful to those that are supporting Causality via that means, or any other for that matter.
Discoverability
This is by far the biggest problem to solve, and if we don’t improve it dramatically the only people and content that will be ‘findable’ will be the big names with big budgets/networks behind them, leaving the better creators without such backing out in the cold. It should be just as easy to find an independent podcast with amazing content made by one person as it is to find a multi-million dollar podcast made by an entire production company. (And if the independent show has better content, then the sats should flow to them…)
Current efforts are focussed on the addition of better tags in the Podcasting NameSpace to allow automated and manual searches for relevant content, and to add levers to improve promotability of podcasts.
They are sensibly splitting the namespace into Phases, each Phase containing a small group of tags, progressively agreeing several tags at a time with the primary focus of closing out one Phase before embarking on too much detail for the next. The first phase (now released) included the following:
I’ve implemented those that I see as having a benefit for me, which is all of them (soundbite is a WIP for Causality), with the exception of Chapters. The interesting opportunity that Adam puts forward with chapters is he wants the audience to be able to participate with crowd-sourced chapters as a new vector of audience participation and interaction with podcast creators. They’re working with HyperCatcher’s developer to get this working smoothly but for now at least I’ll watch from a safe distance. I think I’m just too much of a control freak to hand that out on Causality to others to make chapter suggestions. That said it could be a small time saver for me for Pragmatic…maybe.
The second phase (currently a work in progress) is tackling six more:
Whilst there are many more in Phase 3 which is still open, the most interesting is the aforementioned < podcast:value > where the podcaster can provide a Lightning Node ID for payment using the KeySend protocol.
TEN Makes It Easy
This is my “that’s fine for John” moment, where I point out that incorporating these into the fabric of The Engineered Network website hasn’t taken too much effort. TEN runs on GoHugo as a static site generator, and whilst it was based on a very old fork of Castanet, I’ve re-written and extended so much of it that it’s now barely recognisable.
I already had person name tags, person name files, funding, subscribe-to links for other platforms, social media tags and transcripts (for some episodes) in the Markdown YAML front-matter and templates, so adding them into the RSS XML template was extremely quick and easy and required very little additional work.
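For those wondering what “adding them into the RSS XML template” amounts to in Hugo, it’s a couple of template lines per tag inside the item block, driven by the front-matter. This is only a sketch of the idea; the front-matter keys (transcript, persons and their fields) are names I’ve assumed for illustration, not necessarily what TEN actually uses:

{{ with .Params.transcript }}
  <podcast:transcript url="{{ . }}" type="text/html" />
{{ end }}
{{ range .Params.persons }}
  <podcast:person role="{{ .role }}" href="{{ .href }}">{{ .name }}</podcast:person>
{{ end }}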
The most intensive tags are those that require additional meta-data to make them work. Namely, Location only makes sense to implement on Causality, but it took me about four hours of Open Street Map searching to compile about 40 episodes’ worth of location information. The other one is soundbite (WIP), where searching for one or more choice quotes retrospectively is time-consuming.
Not everyone out there is a developer (part or full-time), hence most rely on services to support these tags. There’s a relatively well maintained list at Podcast Index and at time of writing: Castopod, BuzzSprout, Fireside, Podserve and Transistor support one or more tags, with Fireside (thank you Dan!) supporting an impressive six of them: Transcript, Locked, Funding, Chapters, Soundbite and Person.
Moving Forward
I’ve occasionally chatted with the lovely Dave Jones on the Fediverse (Adam’s co-host and the developer working on many aspects of 2.0) and listen to 2.0 via Sphinx when I can (unfortunately I can’t on my mobile/iPad as the app has been banned by my company’s remote device management policy) and I’ve implemented the majority of their proposed tags thus far on my shows. I’m also in the process of setting up my own private BitCoin/Lightning Node.
For the entire time I’ve been involved in the podcasting space, I’ve never seen a concerted effort like this take place. It’s both heartening and exciting and feels a bit like the early days of Twitter (before Twitter went public, bought some of the apps and effectively killed the rest, and pushed the algorithmic timeline, thus ruining Twitter to an extent). It’s a coalition of concerned creators, collaborating to create a better outcome for future podcast creators.
They’ve seen where podcasting has come from, where it’s going and if we get involved we can help deliver our own destiny and not leave it in the hands of corporations with questionable agendas to dictate.
Not wishing for a repeat of this, I purchased an 8TB external USB hard drive and installed BackBlaze. The problem for me though was that BackBlaze was an ongoing expense, could only be used for a single machine and couldn’t really do anything other than be an offsite backup. I’d been considering a Network Attached Storage for years now and the thinking was, if I had a NAS then I could have backup redundancy1 plus a bunch of other really useful features and functionality.
The trigger was actually a series of crashes and disconnects of the 8TB USB HDD, and with the OS’s limited ability to troubleshoot HDD hardware-specific issues via USB, I knew from my previous set of HDD failures many years ago that this is how it all starts. So I gathered together a bunch of smaller HDDs and copied across all the data to them while I still could, and resolved to get a better solution: hence the NAS.
Looking at both QNAP and Synology and my desire to have as broad a compatibility as possible, I settled on an Intel-based Synology, which in Synology-speak, means a “Plus” model. Specifically the DS918+ presented the best value for money with 4 Bays and the ability to extend with a 5 Bay external enclosure if I really felt the need in future. I waited until the DS920+ was released and noted that the benchmarks on the 920 weren’t particularly impressive and hence I stuck with the DS918+ and got a great deal as it had just become a clearance product to make way for the new model.
The external drives I had been using to hold an interim copy of my data were: a 4TB 3.5", a 4TB 2.5" (at that time I thought it was a drive in an enclosure you could extract), and a 2TB 3.5" drive, as well as, of course, my 8TB drive which I wasn’t sure was toast yet. The goal was to reuse as many of my existing drives as possible and not spend even more money on more, new HDDs. I’d also given a disused but otherwise healthy 3.5" 4TB drive to my son for his PC earlier in the year and he hadn’t actually used it, so I reclaimed it temporarily for this exercise.
Here’s how it went down:
STEP 1: Insert 8TB Drive and in Storage Manager, Drive Info, run an Extended SMART test…and hours later…hundreds of bad sectors. To be honest, that wasn’t too surprising since the 8TB drive was periodically disconnecting and reconnecting and rebuilding its file tables - but now I had the proof. The Synology refused to let me create a Storage Pool or a Volume or anything so I resigned myself to buying 1 new drive: I saw that SeaGate Barracudas were on sale so I grabbed one from UMart and tried it.
STEP 2: Insert the new 4TB Barracuda and in Storage Manager, Drive Info, run an Extended SMART test…and hours later…it worked perfectly! (As you’d expect.) Though the test took a VERY long time, I was happy, so I created a Storage Pool (Synology Hybrid RAID) and a Volume (BTRFS, because it came highly recommended), and then began copying over the first 4TB’s worth of data to the new Volume. So far, so good.
STEP 3: Insert my son’s 4TB drive and extend the SHR Storage Pool to include it. The Synology allowed me to do this, and for some reason I did so without running an Extended SMART test on it first (it let me, so that should be fine, right?). Turns out, this was a terrible idea.
STEP 4: Once all data was copied off the 4TB data drive and to the Synology Volume, wipe that drive, extract the 3.5" HDD and insert the reclaimed 4TB 3.5" into the Synology and in Storage Manager, Drive Info, run an Extended SMART test…and hours later…hundreds of bad sectors. Um, okay. That’s annoying. So I might be up for yet another HDD since I have 9TB to store.
OH DEAR MOMENT: As I was re-running the drive check, the Synology began reporting that the Volume was Bad and the Storage Pool was unhealthy. I looked into the HDD manager and saw that my son’s reclaimed 3.5" drive was also full of bad sectors, as the Synology had run a periodic test while data was still copying. I also attempted to extract the 2.5" drive from its external enclosure, only to discover that it was a fully integrated controller/drive/enclosure and couldn’t be extracted without breaking it. (So much for that.) Since I still had a copy of my 4TB of data in BackBlaze at this point I wasn’t worried about losing data, but the penny dropped: stop trying to save money and just buy the right drives. So I went to Computer Alliance and bought three shiny new 4TB SeaGate IronWolf drives.
STEP 5: Insert all three new 4TB IronWolfs and in Storage Manager, Drive Info, run an Extended SMART test…and hours later…the first drive was perfect! The second and third drives however…had bad sectors. Bad sectors. On new drives? And not just any drives: NAS-specific, high-reliability drives? John = not impressed. I extended the Storage Pool (Barracuda + 1 IronWolf) and after running a Data Scrub it still threw up errors, despite the fact both drives appeared to be fine and were brand new.
This is not what you want to see on a brand new drive…
TROUBLESHOOTING:
So I did what all good geeks do and got out of the DSM GUI and hit SSH and the Terminal. I ran “btrfs check --repair” followed by the rescue options, super-recover and chunk-recover, and ultimately the chunk tree recovery failed. I read that I had to stop everything running and accessing the Pool, so I painstakingly killed every process and re-ran the recovery, but ultimately it still failed after a 24 hour long attempt. There was nothing for it - it was time to start copying the data that was on there (what I could read) back on to a 4TB external drive, blow it all away and start over.
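For anyone facing the same situation, the commands in question look roughly like this. The device path is an example only (on a Synology the BTRFS volume usually sits on an md/LVM device rather than a raw disk), and the volume needs to be unmounted with nothing accessing it first:

# check and attempt repair of the filesystem metadata (destructive - last resort)
btrfs check --repair /dev/mapper/cachedev_0
# recover using a good superblock copy
btrfs rescue super-recover /dev/mapper/cachedev_0
# rebuild the chunk tree (this is the step that ultimately failed for me)
btrfs rescue chunk-recover /dev/mapper/cachedev_0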
STEP 6: In the midst of a delusion that I could still recover the data without having to recopy the lot of it off the NAS (a two day exercise), I submitted a return request for the first failed IronWolf while I re-ran the SMART test on the other potentially broken drive. The return policy stated that they needed to test the HDD and that could take a day or two, and Computer Alliance is a two hour round trip from my house. Fortunately I met a wonderfully helpful and accommodating support person at CA that day: he took the Synology screenshot of the bad sector count and serial number as confirmation I wasn’t pulling a switch on him, and simply handed me a replacement IronWolf on the spot! (Such a great guy - give him a raise.) I returned home, this time treating the HDD like a delicate egg the whole trip, inserted it and in Storage Manager, Drive Info, ran an Extended SMART test…and hours later…perfect!
STEP 7: By this time I’d given up all hope of recovering the data and with three shiny new drives in the NAS, my 4TB of original data restored to my external drive (I had to pluck 5 files that failed to copy back from my BackBlaze backup) I wiped all the NAS drives…and started over. Not taking ANY chances I re-ran the SMART tests on all three and when they were clean (again) recreated the Pool, new Volume, and started copying my precious data back on to the NAS all over again.
STEP 8: I went back to Computer Alliance to return the second drive and this time I met a different support person, someone far more “by the book”, who accepted the drive and asked me to come back another day once they’d tested it. I returned home and hours later they called and said “yeah it’s got bad sectors…” (you don’t say?) and unfortunately due to personal commitments I couldn’t return until the following day. I grabbed the replacement drive, drove on eggshells, added it to the last free bay and in Storage Manager, Drive Info, ran an Extended SMART test…and hours later…perfect! (FINALLY)
STEP 9: I copied all of the data across from all of my external drives on to the Synology. The Volume was an SHR with 10.9TB of usable space spread across 4x 4TB drives (3x IronWolf and 1x Barracuda). The Data Scrub passed, the SMART tests passed, and the IronWolf-specific Health Management tests all passed with flying colours (all green, oh yes!). It was time to repurpose the 4TB 2.5" external drive as my offline backup for the fireproof safe. I reformatted it to ExFAT and set up HyperBackup for my critical files (home videos, videos of the family, my entire photo library), backed them up and put that in the safe.
CONCLUSION:
Looking back, the mistake was that I never should have extended the storage pool before the Synology had run a SMART test and flagged the bad sectors. In doing so it wrote data to those bad sectors and there were just too many for BTRFS to recover in the end. In addition I never should have tried to do this on the cheap; I should have just bought new drives from the get-go. Not only that, I should have bought NAS-specific drives from the get-go as well. Despite the bad sectors and bad luck of getting two out of three bad IronWolf drives, in the end they have performed very well and completed their SMART tests faster than the Barracuda, with online forums suggesting a desktop-class HDD (the Barracuda) is a bad choice for a NAS. I now have my own test example to see if the Barracuda is actually suitable as a long-term NAS drive, since I ended up with both types in the same NAS, same age, same everything else, so I’ll report back in a few years to see which starts failing first.
Ultimately I also stopped using BackBlaze. It was slowing down my MacBook Pro, I found the video compression on data recovery frustrating, and even with the 512GB SSD in the MBP with everything on it, I would often get errors about a lack of space for backups to BackBlaze. Whilst financially the total lifecycle cost of the NAS and the drives is far more than BackBlaze (or an equivalent backup service) would cost me, the NAS can also do so many more things than just back up my data via Time Machine.
But that’s another story for another article. In the end the NAS plus drives cost me $1.5k AUD, 6 trips to two different computer stores and 6 weeks from start to finish, but it’s been running now since August 2020 and hasn’t skipped a beat. Oh…my…NAS.
Redundancy against the failure of an individual HDD ↩︎
As in my previous Lens Shootout, I tested sharpness indoors with controlled lighting, setting the D500 on a tripod with a timer, keeping a constant shutter speed of 1/200th of a second with Auto ISO, and tweaking the exposure in post to try and equalise the light level between exposures.
I set the back of some packaging, with a mixture of text and symbols, as the target, with the tripod at the same physical distance for each test photo.
I took photos across the aperture range at f/5.6, f/8 and f/11, cropped to 1,000 x 1,000 pixels in both the dead-center of the frame and the bottom-right edge of the frame.
200mm
200mm Center Crop f/5.6
200mm Center Crop f/8
200mm Center Crop f/11
200mm Edge Crop f/5.6
200mm Edge Crop f/8
200mm Edge Crop f/11
300mm
300mm Center Crop f/5.6
300mm Center Crop f/8
300mm Center Crop f/11
300mm Edge Crop f/5.6
300mm Edge Crop f/8
300mm Edge Crop f/11
400mm
400mm Center Crop f/5.6
400mm Center Crop f/8
400mm Center Crop f/11
400mm Edge Crop f/5.6
400mm Edge Crop f/8
400mm Edge Crop f/11
500mm
500mm Center Crop f/5.6
500mm Center Crop f/8
500mm Center Crop f/11
500mm Edge Crop f/5.6
500mm Edge Crop f/8
500mm Edge Crop f/11
What I wanted to test the most were the differences between edge and centre sharpness, as well as the effect of different apertures. For me, I think the sensor is starting to battle ISO grain at f/11 and this is impacting the apparent sharpness. In the field I’ve tried stopping down the aperture to try and get a wider zone of focus across the frame, but it’s tough the further out you zoom, and the images above support this observation.
My conclusions, in terms of the questions I was seeking answers to, are: firstly, there’s no noticeable change in sharpness from the centre to the edge at the shortest zoom, irrespective of aperture. The edge starts to soften only slightly as you zoom in towards 500mm, and that is independent of aperture.
The thing I didn’t expect was the sharpness at f/5.6 being so consistent throughout the zoom range. If you’re isolating a subject at the extremes of zoom then it’s probably not worth stopping down the aperture, and in future when I’m shooting I’ll just keep that aperture as wide open as I can unless I’m at the 200mm end of the zoom spectrum.
It’s a truly amazing lens for the money and whilst I realise there are many other factors to consider, I at least answered my own questions.
I was curious just how much I could see with my D500 (1.5x as it’s a DX Crop-sensor) using the lens at 500mm maximum (750mm effective). The first step was to mount my kit on my trusty 20 year old, ultra-cheap, aluminium tripod. Guess what happened?
The bracket that holds the camera to the tripod base snapped under the weight of the lens and DSLR and surprising even myself, in the pitch dark, I miraculously caught them before they hit the tiles, by mere inches. Lucky me, in one sense, not so lucky in another - my tripod was now broken.
Not to be defeated, I applied my many years of engineering experience to strap it together with electrical tape…because…why not?
Using this combination I attempted several shots of the heavens and discovered a few interesting things. My PixelPro wireless shutter release did not engage the Image Stabilisation in the zoom lens. I suppose they figured that if you’re using the remote, you’ve probably got a tripod anyhow so who needs IS? Well John does, because his Tripod was a piece of broken junk that was swaying in the breeze - no matter how gentle that breeze was…
Hence I ended up ditching the Tripod and opted instead for handheld, using the IS in the Zoom Lens. The results were (to me at least) pretty amazing!
I photographed the Moon through all of its phases, culminating in the above Full Moon image. It is by far the easiest thing to take a photo of, and in 1.3x crop mode on the D500 it practically filled the frame. Excellent detail and an amazing photograph.
Of course, I didn’t stop there. It was time to turn my attention to the planets and luckily for me several planets are at or near opposition at the moment. (Opposition is one of those astronomy terms I learned recently, where the planet appears at its largest and brightest, and is above the horizon for most of the night)
Jupiter and its moons, the cloud band stripes are just visible in this photo. Stacked two images, one exposure of the Moons and one of Jupiter itself. No colour correction applied.
Saturn’s rings are just visible in this image.
Mars is reddish and not as interesting unfortunately.
The ISS image above clearly shows the two large solar arrays on the space station.
What’s the problem?
Simple. It’s not a telescope…is the problem. Zoom lenses are simply designed for a different purpose than maximum reach for taking photos of planets. I’ve learned through research that the better option is to use a T-Ring adaptor and connect your DSLR to a telescope. If you’re REALLY serious you shouldn’t use a DSLR either, since most have a filter that cuts deep-red light, which changes the appearance of nebulae; you need to use a digital camera that’s specifically designed for astrophotography (or hack your DSLR to remove the filter, on some models, if you’re crazy enough).
If you’re REALLY, REALLY interested in the best photos you can take, you need an AltAz or Altitude-Azimuth mount that automatically moves the camera in opposition to Earth’s rotation to keep the camera pointing at the same spot in the night sky for longer exposures. And if you’re REALLY, REALLY, REALLY serious you want to connect that to a guide scope that further ensures the auto-guided mount is tracking as precisely as possible. And if you’re REALLY, REALLY, REALLY, REALLY serious you’ll take many, many exposures, including Bias Frames, Light Frames, Dark Frames and Flat Frames, and image-stack them to reduce noise in the photo.
How Much Further Can You Go With a DSLR and Lenses?
Not much further, that’s for sure. I looked at adding teleconverters, particularly the TC-14E (1.4x) and then the TC-20E (2x), which would give me an effective focal length of 1,050mm and 1,500mm respectively. The problem is that you lose a lot of light in the process, and whilst you could get a passable photo at 1,050mm, at 1,500mm on this lens you’re down to an aperture of f/11 which is, frankly, terrible. Not only that, but reports seem to indicate that coma and chromatic aberration are pretty bad with the 2x teleconverter coupled with this lens. The truth is that teleconverters are meant for fast primes (f/4 or better), not an f/5.6 zoom.
Going to an FX camera body wouldn’t help since you’d lose the 1.5x effective reach of the DX sensor. Although you might pick up a few extra pixels, the sensor on my D500 is pretty good in low light, so you’re not going to get a much better low-light sensor for this sort of imaging. (Interestingly, comparing the pixel density of the D500 DX and D850 FX sensors leaves my camera with 6.7% more pixels per square cm, so it’s still the better choice.)
How Many Pixels Can You See?
Because I’m me, I thought: let’s count some pixels. Picking Jupiter because it’s big, bright and easy to photograph (as planets go), with my current combination it’s 45 pixels across. Adding a 1.4x teleconverter gets me to an estimated 63 pixels, and a 2.0x to 90 pixels in diameter. Certainly that would be nicer, but probably still wouldn’t be enough detail to make out the red spot with any real clarity.
Just a Bit of Fun
Ultimately I wanted to see if it was possible to use my existing camera equipment for astronomy. The answer was: kinda, but don’t expect more than the Moon to look any good. If you want pictures somewhere between these above and what the Hubble can do, expect to spend somewhere between $10k and $30k AUD on a large-aperture, long focal length telescope, heavy duty AltAz mount, tracking system and specialised camera, and add in a massive dose of patience waiting for the clearest possible night too.
If nothing else for me at least, it’s reawakened a fascination that I haven’t felt since I was a teenager about where we sit in the Universe. With inter-planetary probes and the Hubble Space Telescope capturing amazing images, and CGI making it harder to pick real from not-real planets, suns and solar systems, it’s easy to become disconnected from reality. Looking at images of the planets in ultra-high resolution almost doesn’t feel as real as when you use your own equipment and see them with your own eyes.
So I’ve enjoyed playing around with this but not because I was trying to get amazing photographs. It’s been a chance to push the limits of the gear I have with me to see a bit more of our Solar System, completely and entirely on my own from my own backyard. And that made astronomy feel more real to me than it had for decades.
The stars, the moon, the planets and a huge space station that we humans built, are circling above our heads. All you need to do is look up…I’m really glad I took the time to do just that.
Whilst I applaud Apple’s “Create Your Style” watch and band selector, the fact you STILL can’t select a Nike band or a Hermes band with your new watch is disappointing. (I know right? No Hermes? I guess there’s always a Hermes store for that…the bands are next to the riding helmets I hear…)
Per Apple’s directions when ordering, I dutifully printed out the measuring-tape/paper-cutout measurement implement to find my wrist size was between 6 and 7 - exactly halfway. I opted for a 7 when I ordered (plain white), then attended the Chermside Apple Store to pick it up at a scheduled time through their door / COVID19 “window” for pickups.
Once in hand I opened it and hastily put it on the watch and my wrist, only to find it was too loose. Reasoning that it was probably going to stretch over time, I went back to the “window” to swap it for a Size 6, one size down. After attempting to return just the band (and failing), then trying multiple times to return the entire watch just to swap the band, after nearly 45 minutes I had the right-fitting band and was on my way.
I’m not sure I’m complaining exactly as everything is relative. There are other parts of the world where Apple Stores are still closed due to local COVID19 lockdown restrictions, so I had it good…for sure.
The gap at the edge is quite small and tight, which is how I like to wear my watches. (I hate loose watches)
The band to the untrained eye looks just like a traditional White Sport Band.
The giveaway is underneath, where there is no pin, which is ultimately the reason I like this band so much more than any of the existing sport bands. On standard two-piece sport bands, the pin isn’t so much the issue; it’s the slide-under segment through the hole that pulls out arm hairs on the way and places pressure on my carpal tunnels after many hours of wearing. (Sure, I could wear it more loosely, but refer above - I hate doing that.)
Feel and Comfort
The solo loop band is softer than my White Sport Band and is elastic but firm. The rubber-like texture is balanced with a smooth finish so it doesn’t grab your arm hairs too much like a rubber-band would when you take it off or put it on.
Beyond this I’ve found that like the other sport bands it’s the best option when you get it wet as it’s quick and easy to dry.
I Really Wanted A Nike Sport Loop Though
I’ve been a huge fan of my nearly two-year-old Blue Sport Loop band, so much so that I’ve worn it more than any other band during that time, and it’s frayed at the loop-back buckle and generally a bit worse for wear.
I had secretly hoped that when Apple released the Series 6 they would open up the selector to include Nike bands as options, alas they did not. So after wearing the Solo Loop for a week, I went back to the Apple Store and grabbed the band I actually wanted: the Spruce Aura Sport Loop.
Side by side the Pure White of the Solo Loop contrasts with the subtle Green weave of the Nike Sport Loop.
The Nike Loop is made from the same material and is just as comfortable as my previous favourite band, with the bonus of being a pleasant light colour that’s reflective in the dark.
Concerns with the Solo Loops
Much has been written about the Solo Loop being a bad customer experience, and certainly with so many Apple Stores not functioning as they used to due to COVID19 restrictions, finding the best fit is more difficult than it otherwise would be. That said, even were they open, the best way to get a feel for the band’s comfort isn’t wearing it in the store for two minutes - you really need serious time with it in general use, for a few days or weeks, to know for sure if it will work for you in that size.
Notwithstanding this the other issue is resale. Previously you could sell your Apple Watch or hand it down to other family members but now the variable of “will it fit their wrist” needs to be considered. If not, you’re up for another solo band that fits the recipient or one with flexible sizing that fits anyone.
If you can look past these issues, then the solo sport loop is comfortable, simple and I think better than the other Sport Bands on offer. That said…I’ll be sticking with my recommendation for the Sport Loops as the best all-round band for the Apple Watch.
I started out loving zoom lenses, with my 55-200mm Nikon doing most of the work for outdoor sports, but with two of the key kinds of sports photography I was being called on for happening at night or indoors in poor lighting (netball and basketball), I had to invest in a better zoom, with the Tamron 24-70mm f/2.8 being my choice.
It does a fine job and did double-duty for large group shots where I didn’t have space to move back and needed to work in close, and using a DX camera (Nikon D5500 and then D500) the 24mm short end wasn’t quite short enough. I invested in a kit lens second-hand on a lark, thinking it could do fine at the short end (18mm) for those tight situations. Unfortunately I kept having trouble with the sharpness of both the 24-70mm Tamron and the 18-55mm Nikon kit lenses.
A thought occurred to me that I’d become spoilt by my growing prime collection (35mm f/1.8, 50mm f/1.8, 85mm f/1.8) which are sharp as a tack at pretty much every aperture. Then I read many, many semi-professional and professional lens reviews to try and decide if I was imagining things.
Then I thought, “Hey, I could just do my own test…”
…and here it is…
I decided to test their sharpness indoors with controlled lighting, setting the D500 on a tripod with a timer, adjusting the shutter speed and keeping a constant ISO 160. I set the back of some packaging, with a mixture of text and symbols, as the target, with the tripod at the same distance for each. The only variable I think I could have controlled better was the distance from the lens element to the target, which was slightly different owing to the different lens designs and the resulting imprecision of the 50mm mark on each zoom when ensuring the exact same image scale in the frame, but it’s close enough to make the point.
Tamron 24-70mm f/2.8 (Left) | Nikon 18-55mm f/3.5-f/5.6 (Middle) | Nikon 50mm f/1.8 (Right)
Finally, to match the apertures, I took photos across the range at two equivalence points that were possible on all three lenses: f/5.6, which is as wide open as the 18-55mm lens could go, and f/8 because…“f/8 and be there”, or something like that. Additionally I tried out f/2.8 to provide another point of comparison between the 50mm and the Tamron.
Firstly the f/5.6 Shoot-out…
18-55mm Nikon at f/5.6
24-70mm Tamron at f/5.6
50mm Nikon at f/5.6
Secondly the f/8 Shoot-out…
18-55mm Nikon at f/8
24-70mm Tamron at f/8
50mm Nikon at f/8
There’s no question that the 18-55mm kit lens is the worst of the three by an obvious margin. That shouldn’t be a revelation to anyone; it’s the cheapest lens I tried and honestly…it shows.
What’s more interesting is the colour reproduction and the sharpness between the Tamron and the Prime. At f/8 I think the Tamron has better colour and is marginally sharper, but at f/5.6 it’s almost a wash. It’s easy to take the darker lines on the Tamron as the better representation but the Prime picked up the dust and imperfections in the printed lines and text slightly better, leading to a slightly lighter colour.
Finally the f/2.8 Shoot-out…
24-70mm Tamron at f/2.8
50mm Nikon at f/2.8
In the end the Tamron on balance seems slightly sharper than the 50mm Prime at 50mm, but the amount of light and the colour on the Prime are better. So what’s the conclusion? Clearly the Tamron is a fantastic lens, but the 50mm is probably good enough at 50mm, so the question is: why do I need both?
For me, personally, what is each lens really for? If I have a 50mm and an 85mm Prime, then I don’t really need the Tamron beyond 24mm. What’s clear to me is that I’m well covered between 35mm and 85mm with some great lenses, but where I’m lacking is a decent ultra-wide. The poor quality of the 18-55mm kit lens disqualifies it as a contender.
Hence I definitely don’t need or want the kit lens anymore. It’s just not up to the standard I’m looking for in terms of sharpness. Also, as hard as it is for me to part with it, the Tamron doesn’t fit a need I have any more. The gap I need to fill is in the ultra-wide category, which is difficult to achieve with a crop-sensor, and 24mm isn’t enough. My intention therefore is to replace them both with a sharper ultra-wide lens.
Which lens that is, I’m still uncertain, though the 10-20mm Nikon looks nice.
Having looked at the type and volume of traffic it makes sense to consolidate them now rather than let them continue on for another year at their current homes.
The intention is to keep Podcasting over on The Engineered Network and everything else here at TechDistortion.
Upon putting my name down at an agency I wasn’t sure what to expect, but then I landed an audition, and then the narration job for an audiobook! I was ecstatic. Once that wore off I signed the contract and realised I was now on the hook to record, edit and supply a complete audiobook that someone else had poured their time, energy and effort into creating the written version of. It was my job to narrate that book and make it sing!
Easy huh?
Oh boy.
I think it’s fair to say that I underestimated how much work it would be and looking back, just how much I learned in making it.
Some of the key lessons I learned from this experience that weren’t obvious to me when I signed up:
Of course this is the first audiobook I’ve ever recorded for a client. Realistically though it wasn’t what my friends and family expected. Firstly it wasn’t fiction, I didn’t do any voices, and I spoke in my normal accent. In some audiobooks I’m aware of, narrators tweak sentences and ad-lib to an extent, lending their own personality to the reading. That isn’t always the case and wasn’t for this book.
Am I Planning Another AudioBook?
Absolutely yes, I am. I’ve done another audition and I’m working on my own series of AudioBooks as an Author-Narrator. The next time I’ll have a much better idea what to expect, and I intend to do an even better job with each subsequent book I narrate.
So How Long Was This Book?
The book runs for just a touch under 3 hours, which is quite short for an audiobook but I speak pretty fast. A “normal” narrator should take about 3.5hrs for the same word count. That said my client loved the pacing and that’s what matters to me.
In terms of raw audio, the unedited recordings, including all re-records, came to 5.3hrs. The entire book took approximately 28 hours to record, edit, re-record, normalise, remove noises, review and organise ready for release.
That’s a lot. I suspect I’ll get better next book but it’s no walk in the park. I lost about 10 hours where I had to re-record effectively a third of the book so that didn’t help…
Conclusion
The book is “The Knack Of Selling” by Mat Harrington. In reading the book I have to admit, I learned a lot of little things I had long suspected were salesperson “tricks” and a few things I hadn’t picked up on too. So to be completely fair, not only did I record this book for Mat, I learned a lot about sales while I was at it!
It’s currently available on iTunes and the Google Play audiobook stores.
I’m planning my own audiobooks in future and I’m going to record some of my accents as well on my profile page at Findaway Voices.
If you’d like me to record your audiobook, reach out and let me know. I’d love to help bring your work to life too!
Of course Marco has toyed with spending time developing a macOS port of Overcast but until that happens I needed a work-around. The requirements for my use case:
I tried Undercast and a few other web-wrappers but to be honest, they were all terrible. The Web player is a bare-minimum passable option that gets you by in a pinch but that’s all. Then I remembered you can turn your Mac into an AirPlay receiver by using an app from Rogue Amoeba. AirFoil Satellite can be trialled free but a licence costs $29 USD (plus applicable taxes). I had a copy laying around from years ago and I always just install it (just in case) on every new machine.
Open AirFoil Satellite and set a Play/Pause shortcut that makes sense for you (I chose Command-Shift-P), then write an AppleScript that activates the app and sends that keyboard shortcut, and give the script its own global shortcut via FastScripts. I chose F17 (I love my extended keyboard).
on run
	-- Only act if Airfoil Satellite is already running
	if application "Airfoil Satellite" is running then
		tell application "Airfoil Satellite" to activate
		-- Send Command-Shift-P, the Play/Pause shortcut configured in Airfoil Satellite
		tell application "System Events" to tell process "Airfoil Satellite" to keystroke "p" using {command down, shift down}
	end if
end run
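As written, the script deliberately does nothing if Airfoil Satellite isn’t already running, so an accidental press of F17 won’t launch the app by itself.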
It’s not perfect but it meets my criteria. There are other applications out there that do similar things, and I’ve had trouble with Automator since the Catalina update restricting what can be executed as a global shortcut from ANYWHERE, which is why I’ve switched to FastScripts.
Hopefully that’s useful to someone until a native macOS app is released in the future. You can just load up your playlist, pipe it through your desktop speakers, sync position is kept, smart speed is your best friend, and away you go :)
Not so much at home.
As an electrical engineer with a background in radio I’m well aware of the issues with wireless connectivity. Particularly low power wireless; even broadband or spread-spectrum technologies can be thwarted by enough radio interference. So when I purchased a brand new Apple Magic Mouse 2 a few weeks ago, I could no longer avoid what had been nagging at me for over a year: there seemed to be something wrong with my Macbook Pro’s wireless connectivity. (Spoiler: so I thought)
Symptoms
I’ve had a Bluetooth Apple Magic Keyboard and Magic Trackpad 2 for over a year and they would occasionally disconnect from the Macbook Pro, and on the keyboard my keystrokes would occasionally lag behind what was shown on the screen. For the longest time I shrugged it off, it was passing and temporary.
Starting to use the Magic Mouse 2, I was irritated within the first minute by a cursor stuttering across the screen. As a part of working from home I’ve been on Skype for Business, Microsoft Teams, even (shudder) Zoom audio and video conferences, on some days for 9 hours straight. The obvious thing to reach for is my AirPods. They’re only six months old and the audio in my ears sounded perfectly clear, however I was getting consistent complaints from others on the conference call that my audio was breaking up, yet I was connected by hardwired Ethernet to my router and my upload/download connection speeds were first rate.
Diagnosis
Being a semi-professional podcaster (some say) I had plenty of audio gear to test my microphone, and quickly connecting my MixPre3 and Heil PR-40 to the Macbook Pro, now using the MixPre3 as the microphone and my AirPods as the receiver, there were no issues with audio any more. I noted that when connected to my iPad or iPhone the AirPods had no microphone drop-outs. At this point it was clear the problem was proximity to the Macbook Pro, or that the Macbook Pro had some issue with wireless connectivity, specifically with these Bluetooth devices. To further confirm the mouse stutter wasn’t the mouse itself I borrowed my son’s wired USB mouse and noted that it did not stutter when connected via the USB hub or via the Thunderbolt dock.
Next I cabled my Magic Keyboard 2 to my USB Hub, hence disconnecting its Bluetooth connection. The mouse stuttering continued, though it appeared to be marginally better. Turning off the trackpad and AirPods entirely, the stuttering seemed ever so marginally less pronounced, though it was still visible and jarring.
Then, to isolate further, I disconnected the Macbook Pro from power with no change. I then disconnected the USB Hub, and that produced the most marked improvement in the stutter. Then I turned my attention to the only other item connected: the StarTech.com Thunderbolt hub. With that disconnected, the stuttering was gone.
The StarTech.com with my attempts to shield and repair the cable
Not Very Useful
I tried wrapping the StarTech.com cable with an RF choke and extra shielding, but whatever noise it was producing would not be silenced. I needed to connect the Macbook Pro to multiple screens, I needed hardwired Ethernet, and I only had 4 USB-C ports (mind you that’s better than some of Apple’s laptop machines).
I’d been eyeing one of these off for what seems like years (more like 18 months) so I finally ordered the CalDigit TS3+ Thunderbolt dock. I ordered it via Apple and it arrived only two business days later.
Devices I currently have plugged into the TS3+:
I’ve tested the SD Card reader (I can pack away my old multi-card USB 2.0 reader now) and all of the other USB-A ports plus the USB-C front port, though they’re currently vacant. With this dock I packed away my USB-C 61W charger and Apple’s Macbook Pro USB-C cable as well. My Magic Keyboard 2 is back in Bluetooth mode, as are the Magic Trackpad, the Magic Mouse and the AirPods, and guess what?
No Mouse Stutter
No Audio Dropouts of the Microphone from the AirPods
Okay so was this a case of throwing money at a problem to make it go away? Kinda sorta, but truth be told it was more an expensive process of elimination.
All BlueTooth Devices now Happily Working Simultaneously
Interference
The problem lies in one of three places, as it always does with anything wireless. For communication between two places you need A) a transmitter, B) a receiver and C) the transmission medium joining the two. In this case, the transmitter probably wasn’t a factor - everything was within tens of centimetres of everything else so signal strength wasn’t a problem, though interference could still be a factor for a receiver. A broad spectrum interferer would impact the devices no matter where you were in the house, no matter what you disconnected or didn’t - which eliminated a common interferer.
So it comes back to the transmitter or the receiver, and the perspective of each. The Mouse and AirPods (acting as transmitters, sending data to the Macbook Pro) have only a relatively small battery to drive their BlueTooth transmissions back to the Macbook Pro. The mouse isn’t a receiver (well it is, but not one we can test independently), while the AirPods as a receiver for audio playback (from the Macbook Pro to the AirPods) have the more powerful transmitter in the Macbook Pro to listen to.
If you have a localised interferer it will tend to drown out the nearest radio receiver. In this case whatever is trying to communicate with the Macbook Pro via BlueTooth is going to struggle to pick out the desired signal over the top of the noisy interferer. How this manifests in this situation is lost data from the weaker transmitter (the battery powered device) to the receiver in the Macbook Pro. In the case of the:
Hopefully that all makes sense but what was causing the interference?
First About Bluetooth
BlueTooth operates between 2.400 and 2.485 GHz, which is a narrow(-ish) 85 MHz of spectrum. Notwithstanding the guard bands at the top and bottom of that spectrum, it operates using 79 channels each of 1 MHz bandwidth using Frequency-Hopping Spread-Spectrum technology. FHSS allows narrow band interference to be avoided by constantly hopping between channels across the band. Of course that’s fine if you only have narrow band interference. Broadband interferers that spew noise across vast segments of a band will cause enough data loss to drop packets.
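To make the channel layout concrete, here’s a minimal Python sketch. The channel-to-frequency mapping (channel k centred at 2402 + k MHz) is the standard classic BlueTooth layout, but the hop sequence below is just a uniform random pick for illustration - real BlueTooth derives a pseudo-random sequence from the master’s address and clock, and adaptive hopping masks out known-bad channels:

import random

# Classic BlueTooth (BR/EDR): 79 channels of 1 MHz between 2.4 and 2.4835 GHz.
# Channel k is centred at (2402 + k) MHz, for k = 0..78.
NUM_CHANNELS = 79

def channel_to_mhz(k):
    """Centre frequency (in MHz) of BlueTooth channel k."""
    if not 0 <= k < NUM_CHANNELS:
        raise ValueError("channel out of range")
    return 2402 + k

def hop_sequence(n_hops, blocked=frozenset()):
    """Illustrative hop sequence only: a uniform random pick over usable channels."""
    usable = [k for k in range(NUM_CHANNELS) if k not in blocked]
    return [random.choice(usable) for _ in range(n_hops)]

# A narrow-band interferer parked on channels 20-24 is simply hopped around...
hops = hop_sequence(10, blocked=set(range(20, 25)))
print([f"{channel_to_mhz(k)} MHz" for k in hops])
# ...but a broadband interferer raising the noise floor across all 79 channels
# can't be hopped around, which is exactly the USB 3.0 problem below.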
USB 3.0
‘Superspeed’ USB (aka USB3) has delivered significantly faster data rates for several years, but as clock speeds increase, the frequency of the interference increases to the point where the EMI (Electro-Magnetic Interference) emitted is centered around the base clock frequency and multiples thereof, making it difficult to obtain compliance with EMI standards in some frequency bands. To avoid multiple narrow-band EMI peaks across the frequency band, and in an attempt to reduce EMI, the concept of spread-spectrum was applied to data clocking (in a manner of speaking). There’s an excellent article by Microsemi that explains: “Spread spectrum clocking is a technique used in electronics design to intentionally modulate the ideal position of the clock edge such that the resulting signal’s spectrum is ‘spread’, around the ideal frequency of the clock…”. This has the effect of spreading the noise across a very wide frequency range, significantly reducing the narrow-band peaks, but at the cost of raising the broadband noise floor.
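You can see the shape of that trade-off with a small NumPy sketch; the figures here are made up for illustration, not taken from the USB 3.0 spec. A 100 MHz square-wave “clock” with its frequency dithered by +/-0.5% has a much lower strongest spectral line than a fixed clock, with the same energy smeared across a wider band:

import numpy as np

# Illustrative numbers only (not from the USB 3.0 spec): a 100 MHz "clock"
# sampled at 2 GHz, with spread-spectrum modulation dithering its frequency
# by +/-0.5% using a 30 kHz triangular waveform.
fs = 2e9                                  # sample rate (Hz)
f0 = 100e6                                # nominal clock frequency (Hz)
t = np.arange(int(200e-6 * fs)) / fs      # 200 microseconds of signal

# Triangular modulation waveform swinging between -1 and +1 at 30 kHz.
mod = 2 * np.abs(2 * ((30e3 * t) % 1) - 1) - 1

def clock(spread):
    """Square clock whose instantaneous frequency is f0 * (1 + spread * mod)."""
    inst_freq = f0 * (1 + spread * mod)
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs
    return np.sign(np.sin(phase))

window = np.hanning(len(t))
spec_fixed = np.abs(np.fft.rfft(clock(0.0) * window))
spec_spread = np.abs(np.fft.rfft(clock(0.005) * window))
drop_db = 20 * np.log10(spec_fixed.max() / spec_spread.max())
print(f"Spreading the clock lowers its strongest spectral line by ~{drop_db:.0f} dB,")
print("with the same energy smeared across roughly a 1 MHz-wide band around 100 MHz.")

The numbers are arbitrary; the point is simply the trade-off Intel describes below - lower narrow-band peaks in exchange for a higher broadband noise floor.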
Intel released a White Paper in 2012 that looked at the practical implementation of USB 3.0 and how the technology had an impact specifically on low powered wireless devices operating in the 2.4GHz band. Specifically WiFi and BlueTooth. The following table is extracted from that White Paper and shows the noise increase due to an externally connected USB 3.0 Hard Disk Drive.
Intel’s commentary: “…With the (external USB 3.0) HDD connected, the noise floor in the 2.4 GHz band is raised by nearly 20 dB. This could impact wireless device sensitivity significantly…”
The Root Cause
In years past when I had access to an RF Spectrum Analyser I could have connected some probes to stray cables and known for certain, but based on a process of elimination it’s clear that there were two interferers most likely due to USB 3.0 components:
The StarTech.com dock started to cut out intermittently over 9 months ago. The cut-outs caused a HDD to disconnect multiple times leading to a lot of frustration with directory rebuilding, reindexing and backup re-uploading such that I couldn’t leave it connected to my Macbook Pro via the dock anymore. That drove me to seek out an independent USB hub, so I’d switched to a combination of CableCreation USB-C to DisplayPort adaptors and a cheap Unitek USB-3 Hub via a cheap Orico USB-C to USB-A adaptor. This solution worked for a while but it ultimately consumed too many ports and once I had shifted to working at home full time, wouldn’t work.
Through use and abuse, in the case of the StarTech.com dock I’ve come to appreciate that the shielding and cabling were damaged, and in the case of the Unitek USB 3 hub I doubt it was ever particularly well shielded to begin with; I essentially got what I paid for, as it was rather cheap.
Miscellaneous Adaptors I Used Along The Way
Well Shielded Cables Please
Poorly shielded cabling on high speed external data buses is far more often the culprit than you might think when you’re experiencing BlueTooth or WiFi issues. Whilst it’s true there are many layers to the comms stack (it could be purely a software issue, or even a faulty BlueTooth device), swapping out cables and docks may well solve your problems definitively.
I like to think about shielding as the bottle and RF Noise as the genie. Once that shielding is damaged or if it’s poorly designed or constructed, it lets the genie out of the bottle and once it’s out, it’s incredibly difficult to stop it interfering with other devices.
My advice: choose your USB hubs, devices and cables with care and treat them well, lest that EMI genie be let out of its bottle.
Hopefully this helps someone trying to understand why their BlueTooth devices are misbehaving, when said devices are in otherwise perfect condition.
The way I discovered it had this feature was when I was driving to Austin: on a gentle left hand bend I felt the steering wheel start to pull me off the road. Ever so slightly disconcerting at 70mph! What the heck was tugging on the steering wheel? I initially thought the car needed a wheel alignment or the tyre pressures were badly off.
Thinking back, I’d had warning alerts going off over the previous hour but didn’t know what they were for. I realised it had been complaining about my lane position. One of the challenges when you’re driving on the other side of the road is that the sight-line you’re used to, from the driving position to the center or outside lines of the road, is thrown out by sitting on the other side of the vehicle, and your sense of correct road position goes with it.
After a few days driving on the right hand side of the road I’d retrained my brain so that’s fine but the car was pointing this out to me for several hours before I realised what it was doing. (Please note: I wasn’t drifting OUT of my lane, but I was too far across to the right hand edge of my lane, not enough to cause an incident but enough to upset lane-keep).
Back to Auto-steer. I realised through observation that the green steering wheel icon would appear at speeds above 40mph when the car could “see” solid or regularly dashed lines on either side of the roadway ahead of it. If it did see them I could let go of the wheel for a period of time and the car would then keep itself in the lane. It worked well enough but there were a few little problems.
It’s not all bad news and limitations however:
I’m strongly considering a Tesla Model 3 or Model Y in a few years’ time when it’s time for my next car, and I’m now more excited than ever that this kind of technology is becoming cheaper and hence more accessible. Whilst the Kia implementation (according to other reviews I’ve read) isn’t as good as Tesla’s, it’s still good enough to be useful and I’m glad I had it.
Having said that, I was told that tipping through a drive-through isn’t generally the done thing, and whilst you are technically served by someone in Target, Best Buy or a goods purchasing store, tipping isn’t required in those instances as they have a higher hourly rate that factors the lack of tipping in.
The idea seems to be that the more personal, face-to-face “service industry” (which can be confusing, since someone telling me about a product in Best Buy is still ‘serving’ me) is where you’re expected to tip, proportional to the service offered by the staff.
Okay so far I’m wrapping my head back around it. Next problem: when I came to the USA previously there were two types of transactions in the majority: cash and credit card. Cash was easy - they give you the bill and you pay them that amount plus a bit extra for the tip. Then you can ask for a receipt if you like. Super simple.
With credit cards in a sit-down restaurant environment you’d be given a small folding wallet thing, with a bill in it and a slot for your card. You’d fill in the tip amount, insert your card in the wallet and hand it back. Then they would wander off with it and hopefully come back without skimming your card and you’d sign and you’re done. Although requiring some degree of trust that was also straight forward to me.
Where I got lost this time was the introduction of payment at the till using a credit card, either inserted (chip), swiped or tapped (pay-wave). In these cases they’d show me the amount, I’d usually insert my credit card, they’d print a receipt, then I’d sign it, add a tip, total it, then hand it back to them. At that point what happens? I’m assuming the original transaction is re-run or something? It’s not clear how that authorisation happens but they all seemed to accept this. Oh well, hope that worked. In those cases sometimes they’d give me a second receipt that included the tip amount; other times they wouldn’t, with some looking confused when I asked for a receipt since I was still holding the pre-tip copy.
The final conundrum was when it wasn’t a seated meal in a restaurant: when you’re just getting takeaway, but not via a drive-through, I was given conflicting advice on whether to tip. The most regular example of this was a barista. I defaulted to tipping them, however in the end I did it because I didn’t want to upset anyone, rather than as a reflection of the service.
The problem is that if you didn’t grow up in a tipping culture, there’s no accepted set of rules. A lot of it comes back to the potential to reward good service, but if you’re confused about whether tipping is the right thing to do, you can end up insulting someone that’s good at their job and probably deserved a tip, at least in their opinion or based on the rules they’re told apply.
I was once lectured by someone that grew up in that culture after they visited my country; they were horrified by a bad experience in a hotel, blaming it all on our country’s lack of tipping and the poor customer service it supposedly leads to. That was 20 years ago mind you, but I’m not entirely sure it’s that simple.
Either way towards the end of my trip I was so confused about the tipping grey areas I realised I was developing a ‘tipping anxiety’ where I was starting to avoid situations where it was unclear when I should or shouldn’t tip or how much to tip. Sigh.
Maybe I’ll do better next t(r)ip.
Given that the message was up the entire time I was there, I expect this was for January to October inclusive (about 300 days) which is 19 people killed every two days in Texas alone.
Okay, so Texas is a big state and has a big population, so what does that equate to in terms of people killed per head of population? There were 28.7M people living in Texas as of 2018, which isn’t that different from all of Australia (25M). The current statistics in Australia from January to September 2019: 914 people killed (1,015 corrected over 10 months), for an average of 6.6 every two days, which means that in Texas there are roughly 2.5 to 3 times as many people killed as in Australia, whether you compare raw numbers or adjust for population.
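For anyone wanting to check my arithmetic, it’s a simple per-capita calculation; the figures below are just the approximate ones quoted above (with the Texas total back-calculated from the roadside message), sketched in Python:

# Rough per-capita comparison using the approximate figures quoted above.
texas_deaths, texas_days, texas_pop = 19 / 2 * 300, 300, 28.7e6   # ~2,850 over ~300 days
aus_deaths, aus_days, aus_pop = 1015, 304, 25.0e6                 # Jan-Oct 2019, corrected

def per_million_per_year(deaths, days, population):
    """Annualised road deaths per million people."""
    return deaths / days * 365 / (population / 1e6)

tx = per_million_per_year(texas_deaths, texas_days, texas_pop)
au = per_million_per_year(aus_deaths, aus_days, aus_pop)
print(f"Texas: ~{tx:.0f} deaths per million people per year")
print(f"Australia: ~{au:.0f} per million per year, so Texas is ~{tx / au:.1f}x worse")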
In conjunction with this I’d like to point out a few other observations with comparisons to Australia:
It’s likely that the high-density traffic in major cities is a focal point for accidents, and it’s possible that the many freeways and the congestion in large Texas cities amplify impatience, which may go some way to explaining why the road toll is nearly triple that of my home country.
In the end there are probably a lot of complicated reasons why it’s so horrific, but either way you slice it that’s a massive amount of bloodshed on the roads. There are other places in the world where people drive their cars just as much or even further on average, at or above those speed limits, with significantly fewer fatalities. It can be better.
Anywhere you’re driving, drive safely. Please. Really, seriously please drive safely.
I arrived at 6:30pm exactly, met a fellow geek who recognised my geekiness from my shirt and mentioned it was his first time coming to a meetup, not knowing what anyone looked like. Initially we didn’t see anyone else obvious so I ordered a coffee and then we checked again.
I recognised Manton immediately and we found a table to fit us all - seven in total. After introductions we talked about web development, the differences between ActivityPub and WebMention, different projects and sites we’re hosting and how, podcasts we’re involved with and lots and lots more.
It’s odd, but for a group of mostly complete strangers it really felt quite comfortable, and as I look back while writing this I realise just how much I’ve missed out on by not living in or near hubs where like-minded software developers tend to live. Austin has become a focal point, and San Francisco has been for some time as well, whereas in Australia there aren’t really any I know of; perhaps Adelaide up to a point, certainly none near me.
As the evening was closing Manton walked through the upcoming IndieWebCamp which sounds really interesting so if you’re a developer in the area I’d check it out.
We talked for over 1.5 hours in total and I had a great time. If you’re in the Austin area and you’re interested in becoming, or already are, a web developer then I highly recommend dropping by to a meetup. The venue is usually Mozarts Coffee, which makes great coffee, has a wonderful setup and no issue with parking, though to be safe I’d follow Manton for announcements and updates.
Thanks to Manton Reece for organising it and to everyone else that attended and made me feel welcome.
To reiterate the following notes:
In summary I’m really glad I tried this fast food. I almost sensed a bit of bewilderment from some of my friends. I got the feeling they thought I should be eating “better” options rather than the most popular Fast Food chains. Some suggested restaurants with award winning dishes and their personal niche chains for example.
I considered their suggestions seriously and decided the way to think about it was this…
The Fast Food chains I tried are a mixture of good marketing, good pricing, good food and overall popularity amongst a significant number of Americans. If I truly want to have the most representative American food experience then I should start with those restaurants and fast food outlets that the majority of Americans prefer. If they didn’t prefer them, they wouldn’t have succeeded in their business. Hence most of my choices.
Both of our countries have brought different culinary options to the table and the world is a better place for it. I’m grateful for the advice from my friends and family on what to try, and I regret nothing that I tried this trip. It was fun but I’m ready to get back eating healthier meals again now. My body is quite frankly done with junk food for a few weeks. (At least)
I look forward to returning to the USA again next year to sample some more.
Thank you America :)
(…until next time…)
I digress…
An odd alert sound went off through the entire building. At first I thought it was a car alarm going off outside. It wasn’t. I looked around and nobody seemed to be reacting, flinching or panicking. In fact, most people looked as though there was nothing out of the ordinary and kept eating, talking and walking by. I was, frankly, puzzled. My iPhone is currently on an international roaming agreement with AT&T, and then I received an Amber Alert on my phone as well.
Those people that follow me know that I always turn the volume off, using my Apple Watch for haptic feedback for incoming calls, messages, everything, so I was shocked when my phone made noise and started vibrating! I had no idea what an Amber Alert was, so I Googled it (as you do) and realised that it was part of the US emergency warning system. I had heard of it, but never connected what it was until I read about how it worked.
I hope they find the child that was abducted - that’s a horrible thing and not unique to America. It happens the world over and it’s terrible.
The other disturbing part for me, upon reflection, was the lack of movement, lack of concern, lack of any real detectable reaction from the locals in the restaurant.
I study control systems and human interfaces in my job, and there’s a field of study that focuses on the desensitisation of people to repetitive alarm inputs. How often must people be getting these alerts to have that reaction - i.e. no reaction? I looked it up. In 2018 there were 200 Amber Alerts issued nationwide, averaging about one every two days. According to the Amber Alert website, as of April 2019, 957 children had been rescued specifically because of an Amber Alert since the program began in 1996. Of course that’s an amazing result but I can’t get past the reactions of the locals.
Systems like this will fade in effectiveness with the passage of time, it’s inevitable. In the meantime I just hope that people don’t treat them like a nuisance EMail alert, and pay attention.
PS: I looked for a vehicle matching the description in the car park and on the drive back to the hotel. I did not see it.
That said, I’ve been away from the USA for nearly two decades, and with TV being more international, listening to lots of podcasts by Americans, and between Twitter and Facebook, I’ve heard many references to Fast Food outlets, various restaurants and the like - most of which don’t exist outside of the US.
Hence when over here for a conference I made it my personal mission to try as many as reasonably possible. Without making myself feel ill or enduring searing stomach pains…
I’ll release a full update on the flight back, but for now here’s what I’ve tried and my thoughts on each. Please note: it’s not possible to try every single menu item, it’s a one-hit-one-outlet-one-meal kinda deal, so I’ve asked friends for recommendations and of course, done my best to pick…the following aren’t in order of anything:
Before moving on to part two I’d like to add the following notes:
Part Two soon…
The mix of Cars is very different
When I visited I recall vividly being dwarfed by large trucks, Dodge RAMs, Chevrolet Silverados and the like, with many Buicks, Chryslers and other American cars everywhere. Upon my return my rental car is a Kia, and on the road I see a roughly 50/50 split of US-made vs international (imported) vehicles. I realise that the US motor industry has been struggling in some dimensions, but buyers not buying its cars isn’t a good thing. In Australia our local car manufacturing industry died only a few years ago; it’s now not possible to purchase a vehicle built in Australia. I hope the US doesn’t follow suit, and whilst I’m sure it won’t entirely, it’s a striking change in two decades and a source of some concern.
There are Fast Food Restaurants Everywhere
Maybe I wasn’t paying as much attention last time, but I swear that on every city block on the main roads there’s at least one food outlet. It’s also possible my memory of Houston is fuzzy (bound to be after so long) but it’s uncanny to me looking around as I’m driving. There’s no shortage of places to eat, and my observation inside is that none of them individually is particularly busy. Is there an oversupply to the market? Hmm.
OMG The Simpsons (S09E19) Weren’t Kidding About Starbucks
In 2000 I wasn’t drinking coffee, but I knew who Starbucks were. Back then there were 3,500 stores worldwide (okay, I looked it up on their website) and today there are 27,340. I mean - holy F@cking Cr@p! In the Simpsons episode Bart is walking through the mall to get his ear pierced and is warned by the owner that in 5 minutes the store would become a Starbucks so he’d better hurry. As Bart departs, all of the stores are Starbucks, including the one he was just inside. So yes, obviously that’s an exaggeration, but the conference I’m attending is in Memorial City in Houston and in the Memorial Hospital there’s a Starbucks. There’s one in the Target, one in Macys and one in the dead-center of the mall itself. So that’s four stores in a radius of 750ft (230m)! That’s insane.
Having said all that there’s one thing I do remember about the busier parts of many US cities, of which Houston is no exception.
Concrete is everywhere
In other parts of the world, concrete for roadways is an affordance generally only spent on freeways due to its lower rolling resistance, high load capacity and longer lifetime; for highways, city streets and car parks it’s just too expensive to put it everywhere! Where I’ve been driving in Houston there’s concrete everywhere. It’s like everything is a shade of light-brownish-grey-concrete colour (I’m not an artist - it’s like concretey-colour), broken up mainly by trees and grass. Of course it’s not wrong exactly, it’s just a really expensive way to do business. Then again those car parks won’t need much maintenance for the next thirty years, and what’s a pot-hole? Not really an issue with concrete. The freeways are also an absolutely immaculate work of engineering art, with fly-overs, fly-unders and people speeding like heck! (Not me though, but the speed limit clearly isn’t fast enough for most other people, I’m finding.)
Anyhow it’s all good really. I do love the USA and I feel pretty comfortable here. The only shame is that I’m only here for a week. I am planning a proper holiday with the whole family late next year though, so hopefully I’ll have much longer to explore much further than I could this time around.