Upgrading the Mac Pro RAM
https://techdistortion.com/articles/upgrading-the-mac-pro-ram

I’ve been enjoying my 2013 Mac Pro immensely and wrote about it here seven months ago, and two months ago I upgraded the SSD, with the further thought that someday:

"…I can upgrade…and go to 64GB RAM for about $390 AUD…"

Today I did exactly that. I’m not normally drawn in by Boxing Day sales that start the week before Christmas Day and end on the last day of December (that’s not a “Day” kids…that’s two weeks…SIGH), but I fell for the deal and spent $321 AUD on the RAM I had intended to fit from Day 1, when I bought the secondhand Mac Pro.

I’d done my research (IKR…me?) and it turns out that because the Mac Pro 2013 was designed and released before 32GB SDRAM DIMMs were available, you can fit 32GB modules, but if you do, the memory bus clocks down from 1866MHz to 1333MHz due to bandwidth limits. The chance I’ll push beyond 64GB RAM in my use cases is zilch, so I had no intention of sacrificing speed for space and went with the maximum-speed 64GB.

Shutting down and powering off, unlocking the cover and lifting it off reveals the RAM waiting to be upgraded:

Cover off with Original Apple 16GB RAM Fitted

Depressing the release tab at the top of each bank of two RAM modules was quite easy and OWC’s suggested spudger wasn’t necessary. The new RAM is ready to be opened and installed:

The 64GB RAM in its Box

Taking out the RAM was a bit awkward but with a bit of wriggling it came loose okay. I put the old modules in the Top of the packaging, leaving the new modules still in the Bottom of the packaging:

The Old and the New side by side

Fitting the RAM felt a bit strange, as too much pressure on insertion starts to close the pivot/lever point of the RAM “sled”. It wasn’t too hard though, and here it is installed, ready to be powered on:

New RAM Installed

Once it had booted back up I confirmed that the system recognised the new RAM:

Original 16GB RAM 16GB RAM Installed

Upgraded to 64GB RAM 64GB RAM Installed
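
If you prefer to check from the command line rather than the GUI, macOS’s built-in system_profiler reports the same information (just a quick sanity check):

# List each DIMM slot with its size, type and speed
system_profiler SPMemoryDataType

# Or just the total installed RAM in bytes
sysctl hw.memsize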

The improvement in performance was quite obvious. I run several Alpine VMs headless for various tasks, have multiple Email clients open and five different messaging applications (that’s getting out of hand, world at large…please stop…) between home and work, as well as browsers with lots of tabs open. Unlike with 16GB RAM, once the applications were launched they stayed in RAM and nothing slowed down at all! It’s just as quick with 20 windows open as it is with 1. The RAM Profiler demonstrates the huge difference:

RAM 24hr Profile RAM Profile shows the huge difference

The purple shaded area is compressed memory, with red being active and blue wired. The vertical scale is a percentage of maximum, so the system was previously peaking at 80% of 16GB (about 13GB used), but now peaks at maybe 60% of 64GB (about 38GB used) with no compression at all. Swap was peaking at 15GB during 4K video encoding, which was insane, but now it has yet to pass 0GB.
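
If you’d rather watch the same thing from Terminal than the graphical profiler, macOS has a few built-in tools that give a rough equivalent view (a quick sketch only):

# Current swap usage - it should stay near 0 bytes with 64GB fitted
sysctl vm.swapusage

# Live memory pressure and compression statistics
memory_pressure

# Raw VM counters, including compressed, wired and active pages
vm_stat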

In summary it’s an upgrade I’ve long wanted to do as I was aware I was pushing this Mac Pro too far with concurrent applications and performance was taking the rather obvious hit as a result. Now I have lots of memory and the Mac Pro is performing better than it ever has. All that’s left is to upgrade the CPU…

Seriously? Well…we’ll see…

Computer 2022-01-17T19:00:00+10:00 #TechDistortion
Buying A Tesla
https://techdistortion.com/articles/buying-a-tesla

I’ve been closely watching Tesla since their original collaboration with Lotus in the 2010 Tesla Roadster. In Australia, Tesla didn’t open a store until December, 2014 and the Model S started at about $100k at the time with only 2 Superchargers, both in the Sydney area and of no use to me in Brisbane.

Mind you, money stopped me anyway and so I drove my 2012 Honda Jazz somewhat grumpily and waited. (Sorry Jazz…you’re a pretty solid car and have served me well…) My company was increasingly supportive of Electric Vehicles and had obtained a Nissan Leaf for each of their major office locations, and Brisbane was one of them. So in June 2016 I booked the company Leaf and took it for a drive home and back to the office again as a test run - could a Leaf be in my future?

Nissan Leaf in Garage

Certainly it was much more affordable and it was fun to drive but when the round trip between the Brisbane CBD and my home took me down to only 12km of range remaining, I decided that it was too short on range to meet my needs once the battery would ultimately degrade with long term use.

Nissan Leaf 12k’s Left

Again, I grumbled and watched Tesla from afar. The Model S was a hit, and was followed by the Model X with its technical problems (those Falcon Wing doors…really?) and finally the Model 3. Tesla had opened their first store in Brisbane in mid-2017 but it wasn’t until the end of 2018 that the Model 3 was available to be viewed - and even then it was the Left-Hand Drive model and wasn’t allowed to be driven in Australia at that point, so no test drives were permitted.

Despite that I couldn’t contain myself any longer and took my sons with me to the Tesla dealership and test “sat” in the S, the X and the 3 - even if the steering wheel of the Model 3 was on the other side! It cemented in my mind that whilst the X was the family favourite, its price put it perpetually out of my reach, and the 3 was the far nicer vehicle inside. Cleaner, simpler and helpfully also the cheapest!

And so my heart was set on the Model 3, but with two other car loans still in play, I had to wait just a little bit longer. A few of my friends had received their Model 3 locally in 2019 and Tesla now had test drives available, but on advice from the only Model S owner I knew I refrained and didn’t test drive a Model 3 out of fear that it would only make me want one even more…

He wasn’t wrong…

Fast-forward to 2021: with my eldest child now having a driver’s licence we needed a third car, and with both existing cars paid out I was finally able to seriously look at the Model 3. I test drove a Standard Range Plus with FSD installed on the 1st of September, 2021. I was allowed to drive it for 45 minutes and fell in love with it on the drive. The budget couldn’t stretch to the Performance or Long Range models, and FSD was out of the question too, but it might stretch far enough for the White Interior (I loved the look and feel of it) as well as the lovely Red paint. I’d been wanting a Red car for 20 years. (That’s another story)

Important Tangent

My daughter was in her final year of High School and, like many of her friends, was starting to organise her Grade 12 formal…dresses, make-up, hair and of course, the “car” that would drop them off. My wife and daughter were very excited about the possibility of dropping her at the formal in a shiny new, Red Tesla Model 3. After my test drive we saw the website reporting 1-4 weeks expected delivery, and given how well it drove and that it would easily be delivered within the 11 weeks I needed to make the formal, I placed my order the night of the test drive. Two birds with one stone…as they say.

Back to the Tesla Bit

Finance was approved within a day and the Tesla app and website changed from 1-4 weeks expected delivery to showing “We’re preparing your Invoice.” On the 17th the app changed from its blank entries to listing the 8 instructional videos. There was no doubt I’d entered the infamous Tesla Reservation Black Hole. I’d read about it, but when you’ve been excited about owning an EV, specifically a Tesla and most recently the Model 3, it amounted to nearly 10 years of mounting anticipation. I thought dealing with this sort of thing was supposed to get easier as you got older, but apparently it hasn’t. So began what I thought would only be a few weeks’ wait. How wrong I was…

Tesla First Estimate

The Tesla representative I was assigned was not the best communicator. He didn’t return several of my calls and I originally had called once each week to see if there was an update, but on week three his tone made it clear that so far as checking on updates for where my car was, in his words…he “wouldn’t be doing that.” Realising that I was becoming “that guy” I decided there was no point in pressing and instead returned to habitually reloading the website and the app in the hopes of a change of status.

3 weeks. Still nothing.

4 weeks. Still nothing.

5 weeks. Still nothing, although my electrician had mounted the wall charger and completed the 3-phase power upgrade, but the charger still wasn’t wired in. Didn’t matter - no car to plug it in to…yet!

6 weeks, still nothing, though my electrician finished the wiring for the HPWC so that was some kind of progress, but still no car to plug it in to.

Time Out for a Second

It’s worth noting that the website claimed a 1-4 week wait when I ordered, and a 1-3 week wait by the 2nd week of September.

Tesla Second Estimate

It wasn’t a performance model or a long range either. Then I came across a growing list of videos from other Australian recent Model 3 buyers reporting that the website time estimates were essentially complete fiction. It was never up to date…even when the notification came through on their phone saying their delivery was ready, payment had been received and their delivery appointment was set, it didn’t always show up on the website.

Additionally I learned that even once a Tesla hits the shores in Oz, it still takes 2-3 weeks before you’re able to pick it up. So when the site indicates 1-4 weeks, it means 1-4 weeks before you get the chance to book a pick-up time - not before you actually pick it up. Realistically, even once I got a message saying I could book a pick-up time, it would be another 2-3 weeks before I could actually collect the car. (Yay) At this point it was looking more likely that I’d get the car in late October, or the first week of November, which would be just in time for the formal.

You might forgive me (or not) for ranting like an impatient child - I can see that side of it. Then again I was also feeling the pressure of living up to a promise I’d thought was safe to make to my daughter, based on conversations with the Tesla representative and the Tesla website. I also knew, even then, that there were people who reserved a Model 3 multiple YEARS before theirs was delivered - although that was for a vehicle that wasn’t shipping to anyone, anywhere, when they placed their reservation.

I suspect (and likely will never know) that the problem I created for myself inadvertently was choosing an entry level Model 3 with a White Interior and Red Paint. Truth is that if you are REALLY strapped for cash, you’re likely to order the fully entry level, White paint, Black Interior, stock-standard Model 3 Standard Range Plus - for which I believe that the order time might even have genuinely been 1-4 weeks. Even if you ordered a Long Range or Performance model, with standard colours, you’d probably get one sooner as these are higher margin and Tesla have been known for prioritising higher margin vehicles.

Designing The Website

I think about how I would have developed the website: if it were possible to separate the delivery estimates by ordered combination of exterior and interior colours, I would have. To test my theory I tweaked the colours, both interior and exterior, and sure enough the delivery estimates NEVER changed. Tesla don’t generally make your car to order in a manner of speaking - they seem to batch them in every combination based in part on the prior quarter’s order demand - so it’s clear that I just didn’t pick a popular combination and that Tesla don’t break down their supply/demand by combination. Hence their website delivery estimates aren’t based on anything other than the base model and don’t account for options and any delays they might therefore incur.

Back to waiting I guess, though by the 5th week I’d just given up on the website now knowing it was effectively full of sh!t.

Tracking “Ship"ments Literally

Running low on whatever patience I had left, I was interested to find some articles linked on Reddit and a Twitter account called @vedaprime that claimed to track Teslas as they moved around the world, including to Australia. Unfortunately his “service” relied on the VIN, which in the past was usually associated with an order when shipments came from the USA; for cars shipped from China, however, he indicated the VIN couldn’t be extracted from the website as reliably as it once was. I did learn a few interesting things though.

As of the time of publishing this article, Tesla ship all Model 3 Standard Range Plus models from Shanghai to Australia on a limited number of vehicle carrier ships most commonly on the primary route: Shanghai–>Brisbane–>Port Kembla (near Sydney). Despite Tesla having sales and delivery centers in Queensland (in Brisbane too) they do ALL of their inbound Quality Assurance (QA) in Port Kembla.

Once off the ship at Port Kembla, each car is inspected, and once it passes quarantine, customs and QA inspections, it waits its turn at AutoNexus for a car transporter (a semi-trailer connected to a prime mover, aka a big truck) bound for Brisbane. The ships dock in several ports but only unload Teslas in southern NSW for the East Coast and Tasmania, and Brisbane isn’t as large a market as Sydney or Melbourne, so it gets fewer transporter trucks as a result.

VIN matching then works down the list of Reservation Numbers (RNs): the first RN to match the configuration has the VIN attached to it and is assigned to that buyer. The whole process can take 1-2 weeks to QA all of the cars coming in from the docks, with shipment sizes varying from a few hundred to well over a thousand - that’s a lot of cars to QA! Once the VIN is matched, then if Mr VedaPrime can find it, he can track the vehicle, but by then it should be imminently on its way to the buyer.

So I began searching for ships that fit the criteria. Using VedaPrime’s last 12 months of public shipping notifications on Twitter, I narrowed the search down to ships that had left Shanghai bound for Brisbane and eventually Port Kembla, and finally came across one that fit - the Morning Crystal. It departed Shanghai on the 26th of September, was due to arrive in Brisbane on the 7th of October, and then in Port Kembla most likely on the 9th of October. Assuming a 1-2 week QA delay and then a 1 week transport back to Brisbane, the most likely delivery date would be the last week of October - about 9 weeks after placing the order, but still within my 11 week limit.

Well then…I guess that means I’ll keep waiting then. Of course that assumed that my vehicle was on that ship. If it wasn’t, there were no current alternate candidates for at least another 2 weeks, possibly more.

Back to the story

7 weeks…still nothing. The first ship that I had my hopes pinned on (Morning Crystal) came and went without a word and the site now reported a 2–5 week delivery delay. The next candidate ship was the Passama, on the 19th of October in Port Kembla, but it also came and went without any Teslas aboard. I did however receive a call from Tesla, but from a different salesperson, informing me that my previous salesperson was no longer working for Tesla and he was taking over from him. Okay then. Great.

8 weeks and finally something changed on the website - there was now an estimated delivery date range of 17th November to 1st December, 2021 and the VIN was embedded in the website HTML. A few days later my final invoice notification arrived by EMail at 7am, though it didn’t appear on the website until later that day. As I was financing my car I was advised I would get forms to sign shortly, and I did mid-morning. I submitted them and…went back to waiting again.

VedaPrime’s Patreon

At this point I chose to join VedaPrime’s Patreon and Discord group, as I now had a VIN and he claimed he could track it, or would do his best to do so. I’d reached the limit of what I could discern easily with my own knowledge and investigation on the public internet, and Lord Veda (a nickname given to him by a popular Tesla YouTuber) clearly knew much more than I did about Tesla order tracking.

Now I’d seen suggestions about Tokyo Car and Morning Clara as potential candidate ships that could be carrying my Model 3. Tokyo Car was docked in Noumea, bound for Auckland then to Port Kembla (due 6th November) and Morning Clara was still in Shanghai, due to arrive on the 19th of November. So…back to waiting some more.

9 weeks…and my app and website began showing an estimated delivery window of between the 7th and 21st of November. There was mounting evidence that my car was in fact on the Tokyo Car ship. With the 7th of November coming and going, I called my new Tesla representative to see if he had more information, and he didn’t. Of course. That was when I gave up calling Tesla about anything - by this stage I’d called them five times in total since placing the order, and they were generally unwilling or unable to help, so there was no point in bothering them. I was learning far more from the VedaPrime Discord than from Tesla themselves.

10 weeks…and my app narrowed the dates down to between the 11th and 20th of November. Okay. We were cutting this really close. The morning of Friday the 19th of November was my latest possible chance if I was going to make the formal.

Then on the 11th of November, the text I had been waiting for arrived: I was offered the choice of a delivery appointment at either 10:00am, 1:00pm or 3:00pm on Monday the 22nd or Tuesday the 23rd of November. My youngest son had a school awards ceremony I would not miss, which wrote off Monday almost entirely (not to mention an afternoon of meetings I couldn’t skip), leaving Tuesday morning as my sole option - so I booked 10am Tuesday as my pick-up date.

Tesla Delivery Confirmed

My car had in fact been on the ship Tokyo Car and had now landed in Australia. Hooray…of a sort, because unfortunately…

It was over

So much for having the car for the formal. I reached out to Tesla one last time and left a message; they texted back that they would let me know if it could be delivered sooner, but it was no use. They wouldn’t.

During the 11th week the finance finally cleared, the funds went through, and on the Friday of that week, as I was picking up my kids from school, the call came in from the Tesla delivery center - we were good to go for Tuesday. I also received an Email and replied to it asking if Tesla could pick me up from the nearest train station, but never got a response.

Why did I ask that?

I knew that on Tuesday morning at 10am, I had no convenient way of getting to the delivery center as my eldest daughter was away at schoolies all week, my wife was working, my mother no longer drives, my sister was working as were my friends in various locations, all too far away. So it was either public transport or a Taxi/Uber. Unfortunately for me I chose to live in the middle of nowhere, meaning the cheapest Uber would cost me $120 AUD one-way. The cheapest Taxi would be closer to $190 AUD. The Tesla home delivery option requires you to live over 250km from the nearest dealership so I didn’t qualify for that either. The closest a train got me was still a 45 minute walk and the bus connections to the trains were terrible. So it was going to be a combination of Train + Taxi in the end. Oh well…what can you do?

Pick-up Day…at LONG last

Tesla’s showroom in Brisbane is in the classy end of Fortitude Valley (yes, there is a classy end you Valley-haters…) near other dealerships like Porsche, Ferrari, Lamborghini and many others. Space is at a premium though and as such they will show you the cars at the showroom, you pick up test drives from there and they do have some limited servicing facilities, but their delivery center is far from the CBD of Brisbane.

It’s located in a somewhat run-down steel warehouse with a chipped concrete floor and a corrugated iron roof held up by exposed girders. A quintessential warehouse. The only way you know it’s a Tesla delivery center from the outside is a lone rectangular black sign partly obscured by trees along the roadside. The bigger issue was that it was in Hendra and the nearest train station was a decent walk away.

Warehouse Sign A Lone Sign lets you know it’s the Delivery Center

That morning it was raining and so I decided to suck it up, take the train as close as I could and then get a Taxi from there - I don’t like using Uber on principle. (That’s another story)

Waiting for the Train in the rain

When I got to Northgate the rain had stopped and looking at the rain pattern on the radar I estimated I had about 60 minutes before the next wave of rain hit so I decided to save my Taxi money (about $45 from there) and walked to Hendra instead.

Walking to Hendra

To be honest, it really was quite a pleasant walk in the end. (Maybe I was too excited about the destination to care at that point?)

I arrived 45 minutes before the scheduled time and apologised for being early. Lord Veda had highly recommended getting there early so I think that was good advice.

Warehouse In Front of the Delivery Center

I was greeted by a very pleasant ex-Apple employee who now worked for Tesla and said: “You must be John! Yours is the only Red one going out today and it’s lucky because I literally just finished setting up your car.” I’d already spotted it, as I figured out the number plate from the Qld Transport site the previous day by searching the VIN.

Spotted Mine! Mine was the Red one in the far back right of this photo

Some photos, some set-up and some giggling to myself later, and I was off - though not before she insisted on taking a photo of me with the car and waving me off. As I was leaving it was 9:45am and still no other owners had shown up. I had quite literally…beaten the rush.

United with my car at last! United with my car at last!

I drove to Scarborough and my old favourite spot on Queens Beach where I once took photos of my car 20 years earlier and took some photos in about the same spot…then went home. Later in the day I picked up my kids from swimming and that’s pretty much it.

I finally had my dream EV.

Tesla at Queens Beach

The Minor Details

There are a few things I wasn’t 100% clear on until the delivery day. Firstly, you do get the UMC Gen-2 Mobile Charger with the two tails (10A and 15A), which is a single-phase unit delivering a maximum of 3.5kW. The Model 3 also came with cloth floor mats, and a 128GB USB thumb drive in the glovebox for Sentry Mode and other things. It did NOT come with a Mennekes Type 2 cable for connecting to BYO-cable charging points, which was disappointing, and it didn’t come with a tyre repair kit. I was aware that Teslas don’t have a spare tyre so had pre-purchased a repair kit when I bought my HPWC.

In 2019 in Australia, all new Teslas came with a HPWC as well, but that was long since un-bundled. The car also comes with a free month of Premium Connectivity after which it’s $9.95 AUD/month, which I’ll be keeping after the free month ends.

My original sales assistant incorrectly informed me it didn’t come with car mats, so I ordered some. Now I needed to return them. Oh well. I’ll also need a Type 2 cable - there are too many of those chargers around to NOT have one of those in the boot, just in case.

Unicorns

Tesla are constantly tweaking their cars - from the motor to the battery to software and even the occasional luggage hook or seal. They don’t wait for model years most of the time and so it becomes an interesting lottery of sorts and they get themselves caught in knots a bit when they advertise something on the website but then they change it after you order it and it’s built to a different standard. In the Tesla fan-lingo they call those the “Unicorns”.

When I ordered mine the website stated: 508km Range WLTP, 225kph Top Speed, 5.6s 0-100kph time. The current website however now says: 491km Range WLTP, 225kph Top Speed, 6.1s 0-100kph time and to add more confusion the compliance sticker adds: 556km Range.

What had happened is that Tesla increased the size of the LFP battery pack mid-cycle from 55kWh to 60kWh (usable). At the same time Tesla changed the motor to one that was slightly less powerful, though it’s unclear why…it was likely for either efficiency or cost reduction reasons. We may never know. The motor change didn’t happen until late October though, which approximately coincided with the website specification change. This meant that there were three builds that had the more powerful motor but also the larger battery.

The VIN ranges where this happened were those within my build range hence my vehicle is one of a few hundred Unicorn SR+ models. Lucky me?

Conclusion: Order to Delivery Day

The final time from test drive and order to pick-up was 11 weeks and 6 days - 8 days shy of three months. Others received their cars a week before me despite having ordered in early October, waiting only 6 weeks from order to pick-up. In one of those “there’s no way I could have known at the time” situations, I’d ordered at the beginning of a build cycle for Q4 2021 and I’d ordered a low-demand combination as well, so I had to wait the longest of almost everyone in my production batch of cars. Oh well…I have it now…so these three months can become a fading memory…

My obsessing over a new car like this is something I’ve never done before, and I’ve been trying to figure out why this was so different to my other experiences. Options include: I’m getting less patient and/or more entitled in my old age; the ordering process was more akin to ordering a tech product from Apple’s online store than any car purchase I’d ever experienced; or the information provided by the manufacturer was in fact worse than having no information at all.

I honestly don’t think I’m getting less patient with age…more entitled though? Maybe. I think the difference is the contrast with a traditional car purchase. Traditionally, sales people from Toyota, Mitsubishi, Honda and Subaru were well versed in delivery times and standard delays, and they set realistic expectations up front - or at least they certainly presented the situation more honestly than Tesla did.

Tesla appeared to be up-front with the estimates on their website, but they were fundamentally misleading, and their sales people were generally unhelpful. Perhaps the Tesla inventory system just isn’t built to provide accurate information by specific build sub-type and production batch, which is what would let sales staff and customers set realistic expectations. Either way it was exceedingly frustrating, and had Tesla indicated up front that I mightn’t have the car until late November, I would have made other arrangements for my daughter’s formal and let it be.

Conclusion: Delivery Day

The delivery experience was, quite frankly, the worst of any car purchase I’ve ever had in most respects…but it’s a subtle thing.

I’ve spoken with other owners that had a basic 10 minute run-down of pairing their phone and being shown the basics and shoved “out the door” so to speak. For me, I’d arrived early and they were busy getting everything ready for everyone else, so that’s on me, but if not for that it would have been 10 minutes, got it, great, now out you get, on to the next customer.

Also, when you put down a significant amount of money for your dream car, show up to a dodgy-looking warehouse that’s hard to get to, get treated a bit like a number and deal with four different people from start to finish, it feels unprofessional and you feel like you don’t matter very much - you’re an imposition, not a customer.

Tesla have a LOT to learn from existing car purchasing experiences from pretty much every other manufacturer.

Warehouses There’s some nice Electric Vehicles in this bunch of warehouses…seriously!

Warehouse Laneway The front door is down a dodgy laneway and isn’t signed anywhere

I’ve bought Honda, Subaru, Toyota and Mitsubishi vehicles across multiple countries and having a common point of contact from start to finish was consistent throughout. They all spent significant time with me or my wife walking us through every feature of the vehicle, and they were all in nicely presented showrooms when we picked them up. And yes, they even had a tacky red bow on the bonnet, because, why not? It’s not every day you buy a car. So why shouldn’t you make that a special experience for the buyer?

Maybe the problem is the model of existing dealers and the profit they need to make on top of the car’s actual price, requiring more salespeople, service departments and larger parcels of real estate to house it all. If you believe the Tesla approach - leaner, minimal up-sells, fewer salespeople and smaller showrooms - then I should be getting more car for my dollar. Maybe I am? It’s hard to be sure. Or maybe Tesla have pushed their leaner sales model too far and the best experience lies somewhere in between.

Tesla are finally making a lot of money after nearly going bankrupt in 2018. Maybe Tesla should reinvest some of that into customer service.

Conclusion: Lord Veda’s Patreon

I witnessed VedaPrime’s Patreon start at $170 AUD/month and then rocket to $1,700 AUD/month over the two months I was a Patron. Unlike TEN though, once people have their cars they tend to drop off, so it varies significantly from month to month. In the end he was unable to find my VIN at any stage in the process - my car was transported on a smaller carrier and slipped past his radar. Either way, the value for me wasn’t the VIN tracking - it was the Discord.

In the Discord I met a lot of people that were hopefuls like me. We shared our frustrations, our knowledge of charging, 3rd party apps, tips and tricks and of course, talked about Charlie the Unicorn in relation to naming our new cars…when they actually arrived. It was a blast actually, and without people sharing hundreds of tidbits of information - from different Tesla salespeople, known VINs and such - I suspect Veda wouldn’t be able to paint as meaningful a picture for the broader group. In essence, the group’s collective knowledge is a huge part of the VedaPrime service’s value.

That said I now have to bow out of the group at this point and am grateful for the friendships and discussions we had during our long wait for our vehicles to arrive.

Final Thoughts

My advice for anyone buying a Tesla:

  1. Don’t trust the website about delivery times
  2. Don’t believe a word the salespeople tell you about when it’s arriving until you’ve had a booking text message
  3. Teach yourself how to use the car through the videos because Tesla don’t want to spend their time teaching you on delivery day.

Despite these things, there’s one thing Tesla have going for them that might make you forgive all of that.

They make some of the best cars in the world.

And I love mine already.

Afterword

This post was written as I went and has taken three months to finish. I know it’s a bit long, but it captures all the threads I pulled, all the investigations I did as well as the final result. If nothing else it’s a point of reference for anyone interested in what ordering a Model 3 in Australia was like in 2021.

My daughter went to the formal in a Mitsubishi Eclipse Cross Aspire PHEV. It was also Red. She was happy with that and returned safely from Schoolies having had a great time.

I did NOT call my Tesla “Charlie”. Sorry Discord gang. I just couldn’t…

Cars 2021-11-28T15:00:00+10:00 #TechDistortion
Upgrading the Mac Pro SSD
https://techdistortion.com/articles/upgrading-the-mac-pro-ssd

I’ve been enjoying my 2013 Mac Pro immensely and wrote about it here five months ago, with the thought that someday:

"…I can upgrade the SSD with a Sintech adaptor and a 2TB NVMe stick for $340 AUD…"

Last week I did exactly that. Using the amazing SuperDuper! I cloned my existing 256GB Apple SSD (SM0256F) to a spare 500GB USB 3.0 external SSD I had left over from my recent Raspiblitz SSD upgrade. With that done, I acquired a Crucial P1 M.2 2280 NVMe SSD for a good price of $269 from UMart, plus $28 for the Sintech adaptor, for a total upgrade cost of $297 AUD.

Shutting down and powering off, unlocking the cover and lifting it off reveals the SSD waiting to be upgraded:

Cover off with Original Apple 256GB SSD Fitted

Then, using a Torx T8 bit, remove the holding screw at the top of the SSD, pull the SSD ever so slightly towards you, then wriggle it side to side while holding it at the top and it should come away. Be warned: the heatsink makes it heavier than you think, so don’t drop it! The Mac Pro now looks very bare down there:

No SSD Fitted Looks Wrong

Next we take the Sintech adaptor and gently slide that into the Apple Custom SSD socket, converting the socket to a standard M.2 NVMe slot. Make sure you push it down until it’s fully inserted - the hole should be clearly visible in the top notch. It should fit perfectly flush with that holding screw.

Sintech Adaptor sitting in place

The M.2 NVMe then slots into the Sintech adaptor but it sticks out at an odd angle you can see below. This is normal:

NVMe SSD sits in at an angle initially

Finally we re-secure the 2TB SSD and Sintech adaptor with the Torx screw, and it’s fitted and ready for the lid to go back on.

2TB SSD Fitted

Once booted back up, I started from my SuperDuper clone (holding the Option key on boot), then did a fresh install of Monterey. With some basic apps loaded it was time to test, and the results are striking to say the least - beyond the fact that I now have 2TB of SSD, look at the speeds:

Drive Size   Read Speed (MB/s)   Write Speed (MB/s)
256GB        1,019               454
2TB          1,317               1,186
Diff         +1.3x               +2.6x

Original SSD Top View of the Mac Pro

New SSD Top View of the Mac Pro

You do notice the improvement in performance in day-to-day tasks, although I suspect that when I ran this test five months ago, my 5GB file test was up against only about 20GB of spare space on the 256GB SSD, which hurt the write result as the drive worked around the available blocks.
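
For anyone wanting a rough-and-ready sequential speed check from Terminal - not necessarily the same tool used for the table above, just a sketch - dd will do the job:

# Write a 5GB test file (BSD dd on macOS accepts lowercase size suffixes like 1m)
dd if=/dev/zero of=/tmp/speedtest.bin bs=1m count=5120

# Flush the filesystem cache first, otherwise the read figure will be inflated
sudo purge
dd if=/tmp/speedtest.bin of=/dev/null bs=1m
rm /tmp/speedtest.bin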

A final note about the SSD heatsink. The Apple SSD heatsink is heavily bonded to the drive and is extremely difficult to remove. There’s no question the new SSD would benefit from a heatsink, however the amount of heat dissipated by the NVMe drive is small compared to the GPUs and CPU. In my testing I couldn’t see a significant temperature change under heavy load, with it rising less than 10 degrees Celsius from idle to maximum.

In summary it’s an upgrade I’ve long wanted to do as I was getting sick of swapping out larger files to the NAS and a USB drive. Now I have lots of high speed access storage space for editing photos and videos. Now…how’s my memory pressure going…

Sport 2021-11-08T06:00:00+10:00 #TechDistortion
RaspiBlitz SSD Upgrade
https://techdistortion.com/articles/raspiblitz-ssd-upgrade

I’ve been running my own node now for nearly 9 months and when it was built, the build documentation recommended a 512GB SSD. At the time I had one laying around so I used it, but honestly I knew this day was coming as I watched the free space get eaten up by the blockchain growth over time. I’m not alone in this either, with forums filled with comments about needing to upgrade storage as well.

The blockchain will only get bigger, not smaller. Fortunately the cost of storage keeps dropping: the 500GB drive cost about $300 AUD six years ago, and a 1TB drive of the same brand and similar model today cost only $184 AUD. Upgrading to a 2TB SSD in another five years or so will probably cost $100 or less.

This update is going to take a few hours, so during that time obviously your node will be offline. It can’t be helped.

My goals:

  • If possible, use nothing but the RaspiBlitz hardware and Pi 4 USB ports (SPOILER: Not so good it seems…)
  • Minimal Risk to the existing SSD to allow an easy rollback if I needed it
  • Document the process to help others

ATTEMPT 1

  1. Shut down all services currently running on the RaspiBlitz

Extracted from the XXshutdown.sh script in the Admin Root Folder:

sudo systemctl stop electrs 2>/dev/null
sudo systemctl stop lnd 2>/dev/null
sudo -u bitcoin bitcoin-cli stop 2>/dev/null
sleep 10
sudo systemctl stop bitcoind 2>/dev/null
sleep 3
# Only run this if you're using BTRFS:
sudo btrfs scrub start /mnt/hdd/
sync
  2. Connect and confirm your shiny new drive
sudo blkid

The following is a list of all of the mounted drives and partitions: (not in listed order)

  • sda1: BLOCKCHAIN Is the existing in-use SSD for storing the configuration and blockchain data. That’s the one we want to clone.
  • sdb1: BLITZBACKUP Is my trusty mini-USB channel backup drive. Some people won’t have this, but really should!
  • sdc1: Samsung_T5 Is my new SSD with the default drive label.
  • mmcblk0: mmc = MultiMediaCard - aka the MicroSD card that the RaspiBlitz software image is installed on. It has two partitions, P1 and P2.
  • mmcblk0p1: Partition 1 of the MicroSD card - used for the boot partition. Better leave this alone.
  • mmcblk0p2: Partition 2 of the MicroSD card - used for the root filesystem. We’ll also leave this alone…

If you want more verbose information you can also try:

sudo fdisk --list
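
lsblk also ships with the Raspberry Pi OS base that RaspiBlitz runs on and gives a tidier tree view if blkid’s output gets hard to read (a sketch; pick whichever columns you prefer):

lsblk -o NAME,SIZE,TYPE,LABEL,MOUNTPOINT
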
  3. Clone the existing drive to the new drive:

There’s a few ways to do this, but I think using the dd utility is the best option as it will copy absolutely everything from one drive to the other. Make sure you specify a bigger blocksize - the default of 512bytes is horrifically slow, so I used 64k for mine.

sudo dd if=/dev/sda1 of=/dev/sdc1 bs=64k status=progress

In my case, I had a nearly full 500GB SSD to clone, so even though USB3.0 is quick and SSDs are quick, this was always going to take a while. For me it took about three hours but I finally got this error:

dd: writing to '/dev/sdc': Input/output error
416398841+0 records in
416398840+0 records out
213196206080 bytes (213 GB, 199 GiB) copied, 10896.5 s, 19.6 MB/s

Thinking about it, the most likely cause was a dip in power on the Raspiblitz. The tiny little device was trying to drive three USB drives and most likely there was a momentary power dip driving them all, and that was all it took to fail.

ATTEMPT 2

Research online suggested it would be much more reliable to use a Linux distro to do this properly. I had no machines with a host-installed Linux OS, so instead I needed to spin up my VirtualBox Ubuntu 19.04 VM.

It was safe enough to power off the RaspiBlitz at this point, so I did that, then disconnected both drives from the Pi and connected them to the PC.

To get VirtualBox to identify the drives I needed to enable USB 3.0 and add the two drives to the USB filter, reboot the VM, and then run the same dd command as above, now under VirtualBox.

499975847936 bytes (500 GB, 466 GiB) copied, 4783 s, 105 MB/s
7630219+1 records in
7630219+1 records out
500054069760 bytes (500 GB, 466 GiB) copied, 4784.58 s, 105 MB/s

This time it completed with the above output after about 1 hour and 20 minutes. Much better!

If you want to confirm all went well:

sudo diff -rq /dev/sda1 /dev/sdc1
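
If you want a more thorough check than diff, comparing checksums over the same number of bytes works too - a sketch only, with the device paths assumed to match whatever you used in the dd command above:

# Read exactly the size of the source partition from both devices
SRC=/dev/sda1
DST=/dev/sdc1
BYTES=$(sudo blockdev --getsize64 "$SRC")

# The two hashes should be identical if the clone is good
sudo head -c "$BYTES" "$SRC" | sha256sum
sudo head -c "$BYTES" "$DST" | sha256sum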

An FDISK check now yields this error:

GPT PMBR size mismatch (976773167 != 1953525167) will be corrected by write.
The backup GPT table is not on the end of the device. This problem will be corrected by write.
  4. Resizing the new drive, Step 1

In my case I started with a 500GB drive and I moved to a 1TB drive. Obviously you can use whatever size drive you like (presumably bigger) but to utilise that additional space, you’ll need to resize it after you clone it.

sudo gdisk /dev/sdb
x (Expert Menus)
e (Move GPT Table to end of the disk)
m (Main Menus)
w (Write and Exit)
Y (Yes - do this)

All this does is shift the GPT table away from the current position in the middle of the disk to the end - without doing this you can’t resize it.

  5. Resizing the new drive, Step 2

There’s a few ways to do this step, but in Ubuntu there’s a nice GUI tool that makes it really simple. Starting from the Ubuntu desktop install GParted from the Ubuntu Software library, then open it.

GParted Before GParted After

Noting the maximum size and leaving the preceding space alone, I adjusted the New size entry to 953,838 leaving 0 free space following. Select Resize/Move then Apply all operations (Green Tick in the top corner) and we’re done.
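
If you’d rather stay on the command line than install GParted, parted and resize2fs can do the same job - a sketch only, assuming the cloned data partition is ext4 (not BTRFS) and substituting your own device name for /dev/sdX:

# Grow partition 1 to use all remaining space on the new disk
sudo parted /dev/sdX resizepart 1 100%

# Check the filesystem, then grow it to fill the enlarged partition
sudo e2fsck -f /dev/sdX1
sudo resize2fs /dev/sdX1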

  6. Move the new drive back to the RaspiBlitz and power it on.

Hopefully it starts up and works fine. :)

Conclusion

I left this far too long - much later than I should have. My volume was reporting only 3GB free space and 100% utilisation, which is obviously not a healthy place to be. I’d suggest thinking about doing this when you hit 10% space remaining, and not much later than that.

Bitcoin/Lightning also hammers your SSD, shortening its life, so swapping out for an identically sized drive - following all the steps above except 4 and 5 - should work fine as well.

Whilst this whole exercise had my node offline for 36 hours end to end, there were life distractions, sleep and a learning curve in between. It should really only take about 2-3 hours for a similarly sized drive.

Good luck!

Podcasting 2021-10-16T16:00:00+10:00 #TechDistortion
Managing Lightning Nodes
https://techdistortion.com/articles/managing-lightning-nodes

Previously I’ve written about my Bitcoin/Lightning Node and more recently about setting up my RaspiBlitz.

It’s been five months since then. I’ve learned a lot, and frankly the websites that actually provide information on how to manage your Lightning Node assume a lot of prior knowledge. So I’d like to share how I manage my node these days, along with a few things I learned along the way that will hopefully help others avoid the mistakes I made.

The latest version of RaspiBlitz includes the lovely Lightning Terminal, which brings Loop, Pool and Faraday together into a simple web interface, so we’ll need that installed before we go further. Log into your RaspiBlitz via Terminal and, from the SERVICES menu, enable both of the below if you haven’t already:

  • (SERVICES) LIT (loop, pool, faraday)
  • (SERVICES) RTL Web interface

List of Services Install LIT from the Additional Services Menu

Updated Interface You should see LIT in the User Interface Main Menu Now

LIT Lightning Terminal note the port and your IP Address to Log in

Initial Funding

When you start adding funds to your node, if you don’t live in the USA you’re short on options. In the USA you can use Strike, but otherwise I haven’t found any direct Fiat–>Lightning services to date. That’s okay, but to set up your node you’ll need to buy BitCoin and wear the on-chain transaction fee.

The best option I have found is MoonPay and you simply select BTC, (you can change the Default Currency through the Hamburger menu on the top-right if you like), select the value in your Fiat Currency of choice or BitCoin amount, then after you continue, enter your BitCoin/Lightning Node’s BitCoin address (NOT the Lightning Address please…) and then your EMail. Following the verification EMail, enter your payment details and it will process the transaction and your BitCoin shows up.

Previously I’ve used apps with MoonPay integration like BlueWallet and Breez, but that’s a problem because if you buy BitCoin that way, it ends up in your mobile device’s BitCoin wallet and it’s stuck there. You then need another on-chain transaction to move it, which costs you more in fees. By using MoonPay directly with your own node’s BitCoin address, you only have to deal with that once.

FYI: a $50 AUD transaction cost me $8.12 AUD in fees, though the fee is essentially flat - doubling the purchase to $100 AUD only takes it to $8.14 AUD in fees. So if you’re setting up a node for the first time, be aware it makes sense to add as much as you can manage to get started. More about that later.
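
To put that flat fee into perspective, here’s the rough percentage maths using the numbers above (bc is just a convenient calculator here):

echo "scale=1; 8.12*100/50" | bc     # about 16.2% in fees on a $50 AUD purchase
echo "scale=1; 8.14*100/100" | bc    # about 8.1% on a $100 AUD purchase - the bigger the buy, the smaller the bite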

Another FYI: MoonPay has a KYC (Know Your Customer) cut-off value - the equivalent of $118 USD (0.00271 BTC at time of publishing) - beyond which they require identification before they’ll process the transaction. If you’re concerned about this you can make multiple smaller transactions, but that’ll obviously cost more in fees. And about those fees: you don’t get the option to set the fee in sats/vB…more about that next.

Timing Is Everything

BitCoin isn’t like banking, where transaction fees are fixed in the sense that they don’t vary over time (mind you, Fiat transaction fees are often buried so deep you’ll never see them, but believe me they’re there… insert joke about Fiat bank fees always going up over time, but I digress…).

BitCoin is totally different. Simplistically, your fees are based on the transaction backlog for the current block against the current mining fee: the more demand, the bigger the backlog, the higher the fees. That is a simplification and the details are quite dry, but feel free to read up if you care.

Fees are typically referred to in sats/vB (Satoshis per virtual byte), which you can read about here, with the differences between bytes and virtual bytes here. It’s a SegWit thing. Anyhow, the lower the number, the lower the fee for your on-chain transaction.

The mechanism for setting your level of impatience for any on-chain transaction is the fee in sats/vB. If you’re impatient, set a really high number; if you’re in no hurry, set a low number. To get an idea of the historical and current fees, have a look at Mempool.space.

MemPool MemPool Shows Lots of Information About Block Transactions at a High Level

Fees are quite low at the moment so for transactions where you can set this, 1 sat/vB will see your transaction processed quite cheaply and very quickly - most likely even in the current block (10 minutes).
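
If you’d rather pull the current recommended fee rates from the command line than open the website, mempool.space exposes a small public API - the endpoint below is my understanding of it at the time of writing, so treat it as an assumption:

# Returns suggested sat/vB rates (fastest, half hour, hour, economy, minimum)
curl -s https://mempool.space/api/v1/fees/recommended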

So Now You Have BitCoin

How does it feel now you have BitCoin on your node? For me? Meh - maybe I’m just used to it now, but you are effectively your own bank at this point. If you want to avoid losing money in on-chain fees then you need to stick to Lightning transactions wherever you can, where fees are usually measured between 1 and 10 sats. BitCoin on-chain transactions all incur fees, and using Lightning requires a channel - multiple channels, actually. To open a channel you need an on-chain transaction. To close a channel, you need an on-chain transaction. While that channel is open though, there are no on-chain fees at all.

To review - there are five transaction types people get charged on-chain fees for:

  1. Converting from Fiat to BitCoin
  2. Converting from BitCoin to Fiat
  3. Opening a Lightning Channel
  4. Closing a Lightning Channel
  5. A BitCoin transaction (i.e. purchasing something with BitCoin)

To be clear, these are all technically just a BitCoin on-chain transaction - it’s just the end purpose that I’m referring to.

Choose The Node, Choose The Channel Limits

There are two factors to consider when opening a channel to a new node: how well connected is it; and can I afford the minimum channel size?

A good resource for finding well-connected nodes is 1ML, but there’s a huge amount of information there and finding the most relevant parts isn’t always easy. In short, the best place to start is to think about where you’re intending to send sats to or receive them from, simply because the more direct the connection to that node, the lower the fees and the more likely the transaction will succeed.

For incoming sats in the world of podcasting, the relevant nodes are LNPay, Breez and Sphinx.

For outgoing sats, I personally use BitRefill to buy gift cards as a method to convert to Fiat from time to time. Another example of this is Fold.

However there’s an issue: there’s no indication on 1ML, and no other easy way to determine the minimum channel size, unless you attempt to open a channel with that node first. You need enough sats on-chain to initiate an open channel request, and if that throws an error, the error will tell you the minimum channel size. So you can only really determine this by interrogating and poking the node. (Sigh)
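
As a rough illustration of that poking process using lnd’s command line on the RaspiBlitz (a sketch only - the pubkey and host are placeholders, and flag names can vary between lnd versions):

# Connect to the peer first (pubkey@host:port as listed on 1ML)
lncli connect 02abc...def@node.example.com:9735

# Attempt to open a channel; if the amount is below the peer's minimum,
# the error message tells you what their minimum channel size is
lncli openchannel --node_key 02abc...def --local_amt 300000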

For two I mentioned above, I’ve done the work for you:

  • BitRefill = 0.1 BTC (10M sats)
  • Fold = 0.05 BTC (5M sats)

COUGH

Well…I have 300k or so to play with, so I guess not.

The next best option is a node that’s connected to the one you want, which you can trace through 1ML if you have the patience.

Other factors to consider when choosing a node to open a channel with:

  • Age: The longer the node has been around, the more likely it is to be around in future
  • Availability: The more available it is the better. It’s annoying when sats are stuck in a channel with a node that’s offline and you can’t access them.
  • TOR: In the IP Address space if you see an Onion address, then TOR accessibility might be useful if you are privacy concerned.

If it’s the first channel you open, your best bet is to pick a big, well connected node as most of these are connected to one of the Lightning Loop Nodes (More on that later).

Channel Size

Since we want to minimise our on-chain fees and we want to try this “Lightning” thing everyone is raving about, we open a channel. Since we don’t want to be endlessly opening and closing channels, it’s best to open the biggest channel you can afford. In order to use Loop In and Loop Out you must have at least 250k sats (about $105 USD at time of writing), and if you want to quickly open channels and build a node with a lot of inbound liquidity I’d recommend starting with at least 300k or more, as we know we’ll lose some as we Loop Out and open new channels. (More on that later)

The other issue with smaller channels is that they get clogged easily. When you want to spend sats and all you have is a bunch of small channels, the amount you’re trying to spend may require a little bit from each channel, and then all it takes is for one channel to fail and the whole transaction fails. The routing logic continues to improve, but larger channels make spending and receiving sats so much easier, and keeping your node balance above 250k sats lets you use Loop.

I made the mistake early on of not investing enough when opening channels so I had lots of small channels. It was a huge pain when I was trying to move around even moderate amounts (100k sats).

Circular Rebalance

Circular rebalancing is a nice feature you can use when you have two or more channels. It allows you to move local sats from a selected channel into the local balance of a destination channel - or you can think of it as the destination channel receiving a sats balance increase from the other channel. The Ride The Lightning web interface is my favourite web UI for circular rebalancing.

Ride The Lightning Channels View Ride The Lightning Channels View

Rebalance Channel Step One Rebalance a Channel Step One

Rebalance Channel Step Two Rebalance a Channel Step Two

Behind the scenes it’s simply an invoice paid from one of your channels to another of your channels. It gets routed outside your node through other Lightning nodes, and in the example above there are 5 hops at a cost of 1011 milli-sats (round that down to 1 sat).

Using this method you can shuffle sats between your channels for very few sats which can be handy if you want to stack your sats in one channel, distribute your sats evenly to balance your node for routing and so on.

Balancing Your Node

There are three ways you can “balance” your node:

  1. Outbound Priority (Spending lots of sats)
  2. Inbound Priority (Receiving lots of sats)
  3. Routing

For the longest time I was confused by the expression “you can set up a routing node”, insofar as what the hell that meant. It’s not a special “type” of node - it just means you keep all of your channels as balanced as possible, meaning your Inbound and Outbound balances are roughly equal. To run a routing node you therefore need local funds equal to roughly 50% of your total channel capacity, otherwise your channels can’t all be perfectly balanced.

Keeping in mind that “balancing” a node actually refers to the channels on that node being predominantly balanced or biased for one of the above three options. I suppose there should be a fourth option that describes my node best: “confused”.

Loop In

In Lightning you can move on-chain BitCoin into a channel that you want to add Local balance to, converting it into Lightning sats you can spend via Lightning. Why would you do this?

Let’s say you’ve bought some new BitCoin and it’s appeared on your node - it’s not Lightning Sats yet so you can only spend it on-chain (high fees = no good). You already have a bunch of mostly empty channels and you don’t want to open a new channel: this is when you could use Loop In.

Loop In Loop In Interface in Lightning Terminal

Loop In only works for a single channel at a time, and with the 250k minimum, that channel must have at least that many sats of available capacity for Loop In to work.

Loop works by using a special looping node (a series of nodes, probably) maintained by Lightning Labs. At this time they enforce a 250k minimum and a 6.56M maximum per Loop In transaction. The concept is simple: reduce on-chain fees by grouping multiple loop transactions together. Your transaction attracts a significantly lower fee than if you were to open a new channel with your BitCoin balance, and you don’t disturb the channels you already have.
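
For those who prefer the terminal to Lightning Terminal’s web UI, the loop command-line client on the RaspiBlitz can do the same thing - a sketch only, and the exact subcommands and flags may differ between loop versions, so treat these as assumptions:

# Show the current swap limits the Lightning Labs server will accept
loop terms

# Swap 250k sats of on-chain funds into Lightning (local) channel balance
loop in --amt 250000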

Loop Out

Loop Out works the other way around to Loop In. In some ways it’s far more useful, as you can use Looping Out to build a series of channels cyclically (more on that shortly).

Looping Out carries the same 250k minimum and is limited by your available Local capacity, though it still cannot exceed the 6.56M maximum per Loop Out transaction.

Loop Out Loop Out Interface in Lightning Terminal

Loop Out Loop Out can Manually Select Specific Channels if there’s Liquidity

Loop Out Loop Out of 340k sats from two channels

Loop Out Loop Out showing a fee of 980 sats

Loop Out Processing the Loop Out

If the Loop Out fails, you can try rebalancing your channels to put your sats into a channel with a highly connected node before looping out, or you can lower the amount and try again until it succeeds. You can adjust the Confirmation Target and send the funds to a specific BitCoin destination if you want (if you leave that blank it defaults to the node you’re initiating the Loop Out from, which is normally what you’d do).

If you want to keep the fees as low as possible, set the number of block confirmations to a larger number. By default I believe it’s 9 blocks (I’m not completely sure), which cost me 980 sats in my example; setting it higher should drop the fees, though I didn’t test enough times to confirm that myself.
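
The equivalent from the loop command line looks something like this - again a sketch, with the flag names as I understand them for recent loop versions and the channel ID as a placeholder:

# Loop out 340k sats from a specific channel, accepting a slow (cheaper) confirmation target
loop out --amt 340000 --channel 123456789 --conf_target 60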

Once it completes your node will report those sats now against your on-chain balance, ready for BitCoin spending directly should you wish to.

If you stack your sats into a single channel, you can also use the RTL interface: under Channels, open the right-hand side drop-down and select “Loop Out”. Again, a minimum of 250k sats is required.

Loop Out RTL Looping Out via Ride The Lightning

Stack a Channel, Loop It Out, Open New Channel, Repeat

If you’re building your Node from scratch and you’ve started with a single channel that you opened with your initial BitCoin injection, then there’s a technique you can use to build your single channel node into a well connected node with many channels.

The process:

  1. Stack a Channel (Once you have 2 or more Channels)
  2. Loop It Out
  3. Open New Channel
  4. Repeat

The whole process could take multiple days to complete for multiple channels and it will consume some of your sats along the way, but you’re essentially shuffling the same sats around and re-using them to open more channels to improve your node’s connectivity.

Maintenance

Operating a node isn’t a full-time job, but it’s not a set-and-forget thing either. I had an issue with DuckDNS not updating my dynamic address after a power outage at home; I noticed there hadn’t been many streaming sats coming in for a week, checked, found the error and corrected it. Another time a large number of transactions had passed through my node, my channels were pegged and skewed, and no routing was occurring - so I rebalanced my channels.

Sometimes I’ve had people open channels and then every balance/re-balance I attempted failed. Others open a channel and their end is highly unreliable trapping a lot of sats in the channel. When I need/want to use them I have to wait until they’re online again.

My observation has been that there are many people tinkering with BitCoin Lightning, and they tend not to put much money into it. That’s fine - I can’t really judge that since that’s how I started out. However, these are also the sorts of people who aren’t tending to their node - ensuring it’s online and well funded - and hence are the most likely to have poor availability.

I originally allowed channels as small as 30k sats, but have since increased my minimum channel size to 250k sats. Since doing this I’ve had fewer nuisance channels opened and have had to prune far fewer channels. The message is: it’s not set-and-forget, in the same way your bank account isn’t either. If you care about your money, check your transactions.
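For reference, that minimum is enforced in lnd itself via the minchansize option. A minimal sketch follows - on the RaspiBlitz I believe lnd.conf lives on the external drive at /mnt/hdd/lnd/lnd.conf, but treat that path as an assumption and check your build; lnd needs a restart after editing:

sudo nano /mnt/hdd/lnd/lnd.conf

[Application Options]
; Reject inbound channels smaller than 250k sats
minchansize=250000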

That’s it

I think that’s it for now. Hopefully the things I’ve learned are helpful to somebody. Whilst a lot of the above is a simplification in some dimensions, I realise I still have a lot to learn and it’s a journey. Whether you think BitCoin and Lightning are the future or just a stepping stone along the way, one thing I believe for certain: it’s a fascinating system that’s truly disrupting the financial sector in a way that hasn’t previously been possible and it’s fun to learn how it works.

Many thanks to Podcasting 2.0, RaspiBlitz and both Adam Curry and Dave Jones for their inspiration.

]]>
Podcasting 2021-07-31T21:00:00+10:00 #TechDistortion
Retro Mac Pro Part 2 https://techdistortion.com/articles/retro-mac-pro-part-2 https://techdistortion.com/articles/retro-mac-pro-part-2 Retro Mac Pro Part 2 I wrote previously about why I invested in a Mac Pro and I realised I didn’t describe how I’d connected everything up, in case anyone cares. (They probably don’t but whatever…)

New Desk Configuration with 3 4K Displays

The Mac Pro 2013 has three Thunderbolt 2 buses, and due to the bandwidth restrictions for 4K UHD 60Hz displays, you can only fully drive one 60Hz display per Thunderbolt 2 bus. Hence I have the two 60Hz monitors connected via Mini-DisplayPort to DisplayPort cables, one to each bus.

The third monitor is an interesting quandary. I’d read that you can’t use a third monitor unless you connect it to the HDMI port, and since it’s only HDMI 1.4 it can only output 30Hz at 4K. However that’s not entirely true. Yes, it is HDMI 1.4, but that’s not the only way you can connect a third monitor: by using a spare Mini-DisplayPort to HDMI cable you can connect it directly to the third Thunderbolt bus and it lights up the display, also at 30Hz.

I suspect that Apple made a design choice with the third Thunderbolt 2 bus: it also carries the two Gigabit Ethernet ports and the HDMI output, so limiting video output to 30Hz at 4K leaves the other components the bandwidth they require. In my case it’s annoying but not the end of the world, given the next best option was about four times the price.

Top View of the Mac Pro

Seeing as I have a perfectly good TS3+ dock, and Apple still sell Thunderbolt 2 cables and a bi-directional Thunderbolt 2 to Thunderbolt 3 adaptor, I’ve connected those to that third Thunderbolt 2 bus and drive the third monitor using a DisplayPort to DisplayPort cable from the TS3+ instead. This also allows me to connect anything USB-C to the Mac Pro via the TS3+ and adds a much-needed SD Card slot as well.

In order to fit everything on my desk I’ve added a monitor arm on each side for the side monitors which overhang the desk, and placed the Mac Pro behind the gap between the middle and right-hand side monitors. If you need access to the Mac Pro or the TS3+ Dock, simply swing the right hand monitor out of the way.

Moving the Right-hand Side Monitor reveals Mac Pro and TS3+

Since I podcast sometimes, I’ve also attached my Boom Arm behind the Dock, and the Mix Pre3 is connected via the powered USB-C output on the TS3+ and works perfectly. Less interesting are the connections to the hardwired Ethernet, speakers and webcam, but that’s pretty much it.

]]>
Technology 2021-07-01T07:00:00+10:00 #TechDistortion
Retro Mac Pro https://techdistortion.com/articles/retro-mac-pro https://techdistortion.com/articles/retro-mac-pro Retro Mac Pro After an extended period of forced work-from-home mandated due to COVID19, I’ve had a lot of time to think about how best to optimise my work environment at home for efficiency. I started with a sit/stand desk and found that connecting my MacBook Pro 13" via a CalDigit TS3+ allowed me to drive two 4K UHD displays at 60Hz, giving me a huge amount of screen real-estate that was very useful for my job.

I retained the ability to disconnect and move into the office should I wish to, though in reality I only spent a total of 37 days physically in the office (not continuously, between various lockdown orders) in the past 12 months. When I was outside the office, I used my laptop occasionally but found the iPad Pro was good enough for most things I wanted to do and its battery life was better, plus I could sign documents - which is a common thing in my line of work.

It all wasn’t smooth sailing though. I found that the MBP was actually quite sluggish in the user interface when connected to the 4K screens, and that the internal fans would spin up to maximum all the time, many times without any obvious cause. I started to remove applications that were running in the background like iStat Menus, Dropbox, and a few others and that helped, but I still noticed that it was also spinning up now during Time Machine backups and Skype, Skype for Business, Microsoft Teams and Zoom.

This was a problem since I spent most of my workday on Teams calls and the microphone was picking up the annoying background grind of the cooling fans in the MBP. For this reason I started thinking about how to resolve the two issues - sluggish graphics and running the laptop hot all of the time - without sacrificing the HiDPI screen real-estate on which I’d become rather dependent.

So I got to thinking: why am I still using a laptop when I’m spending 90% of my time at my home office desk? I wanted to keep using a Mac, and whilst I missed my 2009 Nehalem Mac Pro, I didn’t miss how noisy it was, its power drain, or the fact it was an effective space-heater all year round and frankly wasn’t officially supported by Apple1 anymore anyway.

There are only a few currently supported Macs that can drive the amount of screen real-estate I wanted: the Mac Pros (2013, 2019), the iMac 5K (with discrete graphics) and the iMac Pro. There are, as yet, no M1 (Apple Silicon) Macs that can drive more than one external display. Buying a new Mac was out of the question with my budget strictly set at $1,400 AUD (about $1K USD at time of writing), so it was down to used Macs. The goal was to get a powerful Mac that I could extend and upgrade as funds permitted. The more recent iMacs weren’t as upgradable, even a used iMac Pro was out of my budget, and I wasn’t going to find a 2019 Mac Pro second-hand since they’re too new and would be too expensive even used.

So call me crazy if you like, but I invested in a used 2013 Mac Pro - a Retro-Mac Pro if you like. It had spent its life in an office environment and had lain unused in a corner for the past two years after its previous user left the company; they’d long since switched to Mac Minis. It had a damaged power cable, no box and no manuals, and apart from some dust was in excellent condition.

I’ve now had a it for just under a week and I’m loving it! It’s the original entry-level model with twin FirePro D300s, 3.7GHz Quad-core Intel Xeon E5 with 16GB DDR3 RAM and a basic 256GB SSD. I can upgrade the SSD with a Sintech adaptor and a 2TB NVMe stick for $340 AUD, and go to 64GB RAM for about $390 AUD, but I’m in no hurry for the moment.

Admittedly the Mac Pro can only drive two of the 4k UHD screens at 60Hz with the third only at 30Hz but that amount of high-DPI screen real-estate is exactly what I’m looking for. Dragging a window between the 60Hz and 30Hz screens is a bit weird, but I have my oldest, cheapest 4K monitor as my static/cross-reference/parking screen anyway so that’s a limitation I can live with.

Yes, I could have built a Hackintosh.

Yes, I could run Windows on any old PC.

I wanted a currently supported Mac.

For those thinking, “But John, there’s Apple Silicon Macs with multi-display support just around the corner” - well yes, that’s probably true. But I know Apple: they will leave multi-UHD-monitor support only for their highest-end products, which will still cost the Earth. So you might ALSO say, “But John, Intel Macs are about to die, melt, burn and become the neglected step-son that was once the golden-haired-child of the Apple family” and that’s true too, but I can still run Linux/Windows/ANYTHING on this thing for a decade to come, long after macOS ceases to be officially supported. That said, given you can still apply hacks to the 2009 Mac Pro and run Big Sur, it’s likely the 2013 Mac Pro will be running a slightly crippled but functional macOS for a long time yet, or at least until Apple gives up on Intel support for Apple Silicon features, but that’s another story.

And you might also think, “John, why the hell would you buy a Mac that’s had so many reliability problems?” Well, I did a lot of research given the Mac Pro 2013’s reputation, and based on what I found the original D300 model was relatively fine with very few issues. The D500 and D700 models had significantly worse reliability: they ran hotter (they were more powerful), and due to the thermal corner Apple had designed themselves into with the Mac Pro at that point, they ended up being unreliable with prolonged usage due to excessive heat.

I can report the Mac Pro runs the two primary screens buttery smooth, is effectively silent and doesn’t ever break a sweat. Being a geek, however, subjective measurements aren’t enough, so here are GeekBench 5 scores for comparison:

Metric Mac Pro Score MacBook Pro Score % Difference
CPU Single-Core 837 1,026 - 22.5%
CPU Multi-Core 3,374 3,862 - 14.4%
OpenCL 20,482 / 21,366 8,539 + 239%
Metal 23,165 / 23,758 7,883 + 293%
Disk Read (MB/s) 926 2,412 - 260%
Disk Write (MB/s) 775 2,039 - 263%

By all the measurements above my Macbook Pro should be the better machine, and you’d hope so, being five years newer than the Mac Pro 2013. My usage to date hasn’t shown that, however - almost the opposite. For my use case, where screen real-estate matters the most, the graphics power of a discrete FirePro is far more valuable than a significantly faster SSD. With the same amount of RAM you’d think the Macbook Pro would perform just as well, but it uses an integrated graphics chipset, so sharing that RAM while driving two 4K screens was killing its performance, whereas the Mac Pro doesn’t sacrifice any of its RAM and maintains full performance even when driving those screens.

I don’t often encode video in Handbrake anymore or audio but when I do the Mac Pro isn’t quite as fast but it’s pretty close to the Macbook Pro or certainly good enough for me. The interesting and surprising thing to note is that a 7 year old desktop machine was a better fit for my needs at the price than any current model on offer by Apple.

I’m looking forward to many years of use out of a stable desktop machine, noting that whilst my use-case was a bit niche, it’s been an effective choice for me.


  1. An officially supported Mac is one where Apple releases an Operating System version that will install without modification on that model of Mac. ↩︎

]]>
Technology 2021-07-01T06:00:00+10:00 #TechDistortion
Podcasting 2.0 Phase 3 Tags https://techdistortion.com/articles/podcasting-2-0-phase-3-tags https://techdistortion.com/articles/podcasting-2-0-phase-3-tags Podcasting 2.0 Phase 3 Tags I’ve been keeping a close eye on Podcasting 2.0 and a few weeks ago they finalised their Phase 3 tags. Since I last wrote about this in December 2020, I thought I’d quickly share my updated thoughts on each of the Phase 3 tags:

  • < podcast:trailer > is a compact and more flexible version of the existing iTunes < itunes:episodeType >trailer< /itunes:episodeType > tag. The Apple spec isn’t supported outside of Apple, but more importantly you can only have one trailer per podcast, whereas the PC2.0 tag allows multiple trailers, and trailers per season if desired. It’s also more economical than the Apple equivalent, as it acts like an enclosure tag rather than requiring an entire RSS Item as the Apple spec does.
  • < podcast:license > Used to specify the licence terms of the podcast content, either by show or by episode, relative to the SPDX definitions.
  • < podcast:alternateEnclosure > With this it’s possible to have more than one audio/video enclosure specified for each episode. You could use this for different audio encoding bitrates and video if you want to.
  • < podcast:guid > Rather than using the Apple GUID guideline, PC2.0 suggests a UUIDv5 generated using the RSS feed URL as the seed value.

In terms of TEN, I’m intending to add Trailer in future and I’m considering Licence as well, but beyond that probably not much else for the moment. I don’t see that GUID adds much for my use case over my existing setup (using the CDATA URL at time of publishing) and since my publicly available MP3s are already 64kbps Mono, Alternate Enclosure for low bitrate isn’t going to add any value to anyone in the world. I did consider linking to the YouTube videos of episodes where they exist however I don’t see this as beneficial in my use case either. In future I could explore an IPFS stored MP3 audio option for resiliency, however this would only make sense if this became more widely supported by client applications.
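As an aside, generating a UUIDv5 from a feed URL is a one-liner with uuidgen from util-linux. Note this is purely an illustration of the mechanics using the generic URL namespace and a placeholder feed address - the PC2.0 spec defines its own namespace UUID and normalisation rules, so check the spec before treating the output as a real < podcast:guid > value:

# UUIDv5 is a SHA-1 hash of a namespace plus a name, so the same feed URL always yields the same GUID
uuidgen --sha1 --namespace @url --name "https://example.com/feed.xml"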

It’s good to see things moving forward and whilst I’m aware that the Value tag is being enhanced iteratively, I’m hopeful that this can incorporate client-value and extend the current lightning keysend protocol options to include details where supporters can flag “who” the streamed sats came from (if they choose to). It’s true that customKey/Value exist however they’re intentionally generic for the moment.

Of course, it’s a work in progress and it’s amazing that it works so well already, but I’m also aware that KeySend as it exists today, might be deprecated by the AMP aka Atomic-Multipath Payment protocol, so there may be some potential tweaks yet to come.

It’s great to see the namespace incorporating more tags over time and I’m hopeful that more client applications can start supporting them as well in future.

]]>
Podcasting 2021-06-13T16:30:00+10:00 #TechDistortion
Pushover and PodPing from RSS https://techdistortion.com/articles/pushover-and-podping-from-rss https://techdistortion.com/articles/pushover-and-podping-from-rss Pushover and PodPing from RSS In my efforts to support the Podcasting 2.0 initiative, I thought I should see how easy it was to incorporate their new PodPing concept, which is effectively a distributed RSS notification system specifically tailored for Podcasts. The idea is that when a new episode goes live you notify the PodPing server, which adds that notification to the distributed Hive blockchain; podcast apps can then simply watch the blockchain and trigger the download of the new episode.

This came about predominantly from their attempts to leverage existing technology in WebSub; however, when I tried the WebSub angle a few months ago the results were very disappointing, with minutes or even hours passing before a notification was seen, and in some cases it wasn’t seen at all.

I leveraged parts of an existing Python script I’ve been using for years for my RSS social media poster, but stripped it down to the bare minimum. It consists of two files: checkfeeds.py (which just creates an instance of the RssChecker class) and rss.py, which contains the actual code.

The beauty of this approach is that it will work on ANY site’s RSS target. Ideally, if you have a dynamic system, you could trigger the GET request on an episode-posting event; however, since my sites are statically generated and posts are created ahead of time (and hence don’t appear until a site rebuild happens after the post is set to go live), it’s problematic to create a trigger from the static site generator.

Whilst I’m an Electrical Engineer, I consider myself a software developer of many different languages and platforms, but for Python I see myself more of a hacker and a slasher. Yes, there are better ways of doing this. Yes, I know already. Thanks in advance for keeping that to yourself.

Both are below for your interest/re-use or otherwise:

from rss import RssChecker

rssobject=RssChecker()

checkfeeds.py

CACHE_FILE = '<Cache File Here>'
CACHE_FILE_LENGTH = 10000
POPULATE_CACHE = 0
RSS_URLS = ["https://RSS FEED URL 1/index.xml", "https://RSS FEED URL 2/index.xml"]
TEST_MODE = 0
PUSHOVER_ENABLE = 0
PUSHOVER_USER_TOKEN = "<TOKEN HERE>"
PUSHOVER_API_TOKEN = "<TOKEN HERE>"
PODPING_ENABLE = 0
PODPING_AUTH_TOKEN = "<TOKEN HERE>"
PODPING_USER_AGENT = "<USER AGENT HERE>"

from collections import deque
import feedparser
import os
import os.path
import pycurl
import json
from io import BytesIO

class RssChecker():
    feedurl = ""

    def __init__(self):
        '''Initialise'''
        self.feedurl = RSS_URLS
        self.main()
        self.parse()
        self.close()

    def getdeque(self):
        '''return the deque'''
        return self.dbfeed

    def main(self):
        '''Load the cached RSS ids from the cache file into a deque'''
        if os.path.exists(CACHE_FILE):
            with open(CACHE_FILE) as dbdsc:
                dbfromfile = dbdsc.readlines()
            dblist = [i.strip() for i in dbfromfile]
            self.dbfeed = deque(dblist, CACHE_FILE_LENGTH)
        else:
            self.dbfeed = deque([], CACHE_FILE_LENGTH)

    def append(self, rssid):
        '''Append a rss id to the cache'''
        self.dbfeed.append(rssid)

    def clear(self):
        '''Clear the cache'''
        self.dbfeed.clear()

    def close(self):
        '''Close the cache'''
        with open(CACHE_FILE, 'w') as dbdsc:
            dbdsc.writelines((''.join([i, os.linesep]) for i in self.dbfeed))

    def parse(self):
        '''Parse the Feed(s)'''
        if POPULATE_CACHE:
            self.clear()
        for currentfeedurl in self.feedurl:
            currentfeed = feedparser.parse(currentfeedurl)

            if POPULATE_CACHE:
                for thefeedentry in currentfeed.entries:
                    self.append(thefeedentry.get("guid", ""))
            else:
                for thefeedentry in currentfeed.entries:
                    if thefeedentry.get("guid", "") not in self.getdeque():
#                        print("Not Found in Cache: " + thefeedentry.get("title", ""))
                        if PUSHOVER_ENABLE:
                            crl = pycurl.Curl()
                            crl.setopt(crl.URL, 'https://api.pushover.net/1/messages.json')
                            crl.setopt(pycurl.HTTPHEADER, ['Content-Type: application/json' , 'Accept: application/json'])
                            data = json.dumps({"token": PUSHOVER_API_TOKEN, "user": PUSHOVER_USER_TOKEN, "title": "RSS Notifier", "message": thefeedentry.get("title", "") + " Now Live"})
                            crl.setopt(pycurl.POST, 1)
                            crl.setopt(pycurl.POSTFIELDS, data)
                            crl.perform()
                            crl.close()

                        if PODPING_ENABLE:
                            crl2 = pycurl.Curl()
                            crl2.setopt(crl2.URL, 'https://podping.cloud/?url=' + currentfeedurl)
                            crl2.setopt(pycurl.HTTPHEADER, ['Authorization: ' + PODPING_AUTH_TOKEN, 'User-Agent: ' + PODPING_USER_AGENT])
                            crl2.perform()
                            crl2.close()

                        if not TEST_MODE:
                            self.append(thefeedentry.get("guid", ""))

rss.py

The basic idea is:

  1. Create a cache file that keeps a list of all of the RSS entries you already have and are already live
  2. Connect up PushOver (if you want push notifications, or you could add your own if you like)
  3. Connect up PodPing (ask @dave@podcastindex.social or @brianoflondon@podcastindex.social for a posting API TOKEN)
  4. Set it up as a repeating task on your device of choice (preferably a server, but should work on a Synology, a Raspberry Pi or a VPS)

VPS

I built this initially on my Macbook Pro using the Homebrew-installed Python 3 development environment, then installed the same on a CentOS7 VPS I have running as my Origin web server. Assuming you already have Python 3 installed, I added the following so I could use pycurl:

yum install -y openssl-devel
yum install python3-devel
yum group install "Development Tools"
yum install libcurl-devel
python3 -m pip install wheel
python3 -m pip install --compile --install-option="--with-openssl" pycurl

Whether you like “pycurl” or not, obviously there are other options, but I stick with what works; rather than refactor for a different library I just jumped through some extra hoops to get pycurl running. (You’ll also need the feedparser module the script imports, via python3 -m pip install feedparser, if you don’t already have it.)

Finally, I bridge to checkfeeds.py with a simple bash script wrapper and call it from a cron job every 10 minutes.
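The wrapper itself isn’t shown above, but a minimal sketch under my own assumptions looks something like the following - the paths and the log file are placeholders, so adjust them to suit your install:

#!/bin/sh
# checkfeeds-wrapper.sh: run a single pass of the RSS checker
cd /home/bobtheuser/rssnotifier || exit 1
/usr/bin/python3 checkfeeds.py >> /home/bobtheuser/rssnotifier/checkfeeds.log 2>&1

Then the cron entry (crontab -e) to call it every 10 minutes:

*/10 * * * * /home/bobtheuser/rssnotifier/checkfeeds-wrapper.sh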

Job done.

Enjoy.

]]>
Technology 2021-05-25T08:00:00+10:00 #TechDistortion
Fun With Apple Podcasts Connect https://techdistortion.com/articles/fun-with-apple-podcasts-connect https://techdistortion.com/articles/fun-with-apple-podcasts-connect Fun With Apple Podcasts Connect Apple Podcasts will shortly open to the public, but podcasters like me have been having fun with Apple’s first major update to their podcasting backend in several years, and it hasn’t really been that much fun. Before talking about why I’m putting so much time and effort into this at all, I’ll go through the highlights of my experiences to date.

Fun Times at the Podcasts Connect Mk2

Previously I’d used the Patreon/Breaker integration but that fell apart when Breaker was acquired by Twitter and the truth was that very, very few Patrons utilised the feature and the Breaker app was never big enough to attract any new subscribers. The Breaker audio integration and content has since been removed even though the company had the service taken over (to an extent) as it was one less thing for me to upload content to. In a way…this has been a bit déjà-vu and “here we go again…” 1

The back-catalogue of ad-free episodes as well as bonus content between Sleep, Pragmatic, Analytical and Causality adds up to 144 individual episodes.

For practically every one I had the original project files, which I restored and re-exported in WAV format, then uploaded via Apple Podcasts’ updated interface. (The format must be WAV or FLAC and Stereo - which is funny for a Mono podcast like mine - and it all added up to about 50GB of audio.) It’s straightforward enough, although there were a few annoying glitches that were still unresolved after using it for 10 days. The key issues I encountered were: (there were others, but some were resolved at the time of writing so I’ve excluded those)

  1. Ratings and Reviews made a brief appearance then disappeared and still haven’t come back (I’m sure they will at some point)
  2. Not all show analytics time spans work (Past 60 days still doesn’t work, everything is blank)
  3. Archived shows in the Podcast-drop-down list appear but don’t in the main overview even when displaying ‘All’
  4. The order in which you save and upload audio files changes the episode date, such that if you create the episode meta-data, set the date, then upload the audio, the episode date defaults to today’s date. It does this AFTER you leave the page though, so it’s not obvious, but if you upload the audio THEN set the date it’s fine.
  5. The audio upload hit/miss ratio for me was about 8 out of 10, meaning for every 10 episodes I uploaded, 2 got stuck. What do I mean? The episode WAV file uploads, completes and then the page shows the following:

Initial WAV Upload Attempt

…and the “Processing Audio” never actually finishes. Hoping this was just a back-log issue from high end-user demand, I uploaded everything and came back minutes, hours, then days later; finally, after waiting five days, I set about trying to unstick it.

Can’t Publish! Five Days of Waiting and seeing this I gave up waiting for it to resolve itself…

The obvious thing to try: select “Edit”, delete and then re-upload the audio. Simple enough, and it keeps the meta-data intact (except the date, which I had to re-save after every audio re-upload); then I waited another few days. Same result. Okay, so that didn’t work at all.

Next thing to try, re-create the entire episode again from scratch! So I did that for the 30 episodes that were stuck. Finally I see this (in some cases up to an hour later):

Blitz

And sure enough…

Blitz

Of course, that only worked for 25 of the 30 episodes I uploaded a second time. I then had to wash-rinse-repeat for the 5 that failed a second time, until they all worked. I’d hate to think about doing this on a low-bandwidth connection like I had a decade ago; even at 40Mbps up it took a long time for the 2GB+ episodes of Pragmatic. The entire exercise has probably taken me 4 work-days of effort end to end, or about 32 hours of my life. There’s no way to delete the stuck episodes either, so I now have a small collection of “Archived” non-episodes. Oh well…

Why John…Why?

I’ve read a lot of differing opinions from podcasters about Apples latest move and frankly I think the people most dismissive are those with significant existing revenue streams for their shows, or those that have already made their money and don’t need/want income for their show(s). Saying that you can reduce fees by using Stripe and your own website integration, by using Memberful, Patreon, or more recently by streaming Satoshis (very cool BTW), all have barriers to entry for the Podcast creator that can not be ignored.

For me, I’m a geek and I love that stuff so sure, I’ll have a crack at that (looks over at the Raspberry Pi Lightning Node on desk with a nod) but not everyone is like me (probably a good thing on balance).

So far as I can tell, Apple Podcasts is currently the most fee-expensive way for podcasters to get support from listeners. It’s also a walled garden2, but then so are Patreon, Spotify/Anchor (if you’re eligible, and I’m not…for now) and Breaker, while building your own system with Memberful or Stripe website integration requires developer chops most don’t have, so for many it isn’t an option. By far the easiest (once you figure out BitCoin/Lightning and set up your own Node) is actually streaming Sats, but that knowledge ramp is tough and lots of people HATE BitCoin. (That’s another, more controversial story.)

Apple Podcasts has one thing going for it: it’s going to be the quickest, easiest way for someone to support your show, coupled with the biggest audience in a single podcasting ecosystem. You can’t and shouldn’t ignore that, and that’s why I’m giving this a chance. The same risks apply to Apple as to all the other walled gardens (Patreon, Breaker, Spotify/Anchor etc): you could be kicked off the platform, they could slowly stop supporting it, sell it off or shut it down entirely, and if any of that happens your supporters will mostly disappear with it. That’s why no-one should rely on it as the sole pathway for support.

It’s about being present and assessing after 6-12 months. If you’re not in it, then you might miss out on supporters that love your work and want to support it and this is the only way they’re comfortable doing that. So I’m giving this a shot and when it launches for Beta testing will be looking for any fans that want to give it a try so I can tweak anything that needs tweaking, and will post publicly when it goes live for all. Hopefully all of my efforts (and Apples) are worth it for all concerned.

Time will tell. (It always does)


  1. Realistically if every Podcasting-walled-garden offers something like this (as Breaker did and Spotify is about to) then at some point Podcasters have to draw a line of effort vs reward. Right now I’m uploading files to two places, and with Apple that will be a third. If I add Spotify, Facebook, Breaker then I’m up to triple my current effort to support 5 walled gardens. Eventually if the platform isn’t popular then it’s not going to be worth that effort. Apple is worth considering because its platform is significant. The same won’t always be true for the “next walled garden” whatever that may be. ↩︎

  2. To be crystal clear, I love walled gardens as in actual GARDENS, but I don’t mean those ones, I mean closed ecosystems aka ‘walled gardens’, before you say that. Actually no geek thought that, that’s just my sense of humour. Alas. ↩︎

]]>
Technology 2021-04-30T20:00:00+10:00 #TechDistortion
Causality Transcriptions https://techdistortion.com/articles/causality-transcriptions https://techdistortion.com/articles/causality-transcriptions Causality Transcriptions Spurred on by Podcasting 2.0 and reflecting on my previous attempt at transcriptions, I thought it was time to have another crack at this. The initial attempts were basic TXT files that weren’t time-synced or proofed, and used a very old version of Dragon Dictate I had lying around.

This time around my focus is on making Causality as good as it possibly can be. From the PC2.0 guidelines:

SRT: The SRT format was designed for video captions but provides a suitable solution for podcast transcripts. The SRT format contains medium-fidelity timestamps and are a popular export option from transcription services. SRT transcripts used for podcasts should adhere to the following specifications.

Properties:

  • Max number of lines: 2
  • Max characters per line: 32
  • Speaker names (optional): Start a new card when the speaker changes. Include the speaker’s name, followed by a colon.

This is closely related to the defaults I found using Otter.ai, but that’s not free if you want time-synced SRT files. So my workflow uses YouTube (for something useful)…

STEPS:

  1. Upload the episode to YouTube as a video converted directly from the original public audio file (I use Ferrite to create a video export; a command-line alternative is sketched after the note below). Previously I was using LibSyn’s YouTube destination, which also works.
  2. Wait a while. It can take anywhere from a few minutes to a few hours, then go to your YouTube Studio, pick an episode, Video Details, under the section: “Language, subtitles, and closed captions”, select “English by YouTube (automatic)” three vertical dots, “Download” (NOTE BELOW). Alternatively select Subtitles, and next to DUPLICATE AND EDIT, select the three dots and Download, then .srt
  3. If you can only get the SBV File: Open this file, untitled.sbv in a raw text editor, then select all, copy and paste it into: DCMP’s website, click Convert, select all, then create a new blank file: untitled.srt and paste in the converted format.
  4. If you have the SRT now, and don’t have the source video (eg if it was created by LibSyn automatically, I didn’t have a copy locally) download the converted YouTube video using the embed link for the episode to: SaveFrom or use a YouTube downloader if you prefer.
  5. Download the Video in low-res and put all into a single directory.
  6. I’m using Subtitle Studio and it’s not free but it was the easiest for me to get my head around and it works for me. Open the SRT file just created/downloaded then drag the video for the episode in question onto the new window.
  7. Visually skim and fix obvious errors before you press play (Title Case, ends of Sentences, words for numbers, MY NAME!)
  8. Export the SRT file and add to the website and RSS Feed!

NOTE: In 1 case out of 46 uploads it thought I was speaking Russian for some reason. The Russian auto-transcription was funny but not useful; for all the others it correctly detected English automatically, and the quality of the transcription is quite good.
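If you don’t use Ferrite or LibSyn for the video conversion in step 1, a rough alternative (my own suggestion, not part of the original workflow) is to pad the audio with a static cover image using ffmpeg; the file names are placeholders:

# Static artwork + episode audio -> a video suitable for YouTube upload
ffmpeg -loop 1 -i artwork.png -i episode.mp3 \
  -c:v libx264 -tune stillimage -pix_fmt yuv420p \
  -c:a aac -b:a 128k -shortest episode.mp4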

I’ve also flattened the SRT into a fixed Text file, which is useful for full text search. The process for that takes me two steps:

  1. Upload the file to Happy Scribe and select “Text File” as the output format.
  2. Open the downloaded file in a text editor, select all the text and then go to Tool Slick’s line merge tool, pasting the text into the Input Text box, then “Join Lines” and select all of the Output Joined Lines box and paste over what you had in your local text file.
  3. Rename the file and add to the website and RSS Feed!
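If you’d rather skip the web tools, a cruder shell equivalent (an assumption on my part - it doesn’t handle speaker names or punctuation cleanup) strips the SRT index and timestamp lines and joins whatever remains:

# Drop the numeric index lines and the "-->" timestamp lines, then join the rest into one flowing block of text
grep -vE '^[0-9]+[[:space:]]*$|-->' episode.srt | tr -s '\r\n' ' ' > episode.txt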

As of publishing I’ve only done the subtitles in SRT and TXT formats for two episodes, but I will continue to churn my way through them as time permits until they’re all done.

Of course you could save yourself a bit of effort and use Otter, and save yourself even more effort by not proof-reading the automatically converted text. If I wasn’t so much of a stickler for detail I’d probably do that myself, but it’s that refusal to just accept it that makes me the Engineer I am, I suppose.

Enjoy!

]]>
Podcasting 2021-03-30T06:00:00+10:00 #TechDistortion
Building A Synology Hugo Builder https://techdistortion.com/articles/building-a-synology-hugo-builder https://techdistortion.com/articles/building-a-synology-hugo-builder Building A Synology Hugo Builder I’ve been using GoHugo (Hugo) as a static site generator on all of my sites for about three years now and I love its speed and its flexibility. That said, a recent policy change at a VPS host had me reassessing my options, and now that I have my own Synology with Docker capability I was looking for a way to go ultra-slim and run my own builder, using a lightweight (read: VERY low spec) OpenVZ VPS as the Nginx front-end web server behind a CDN like CloudFlare. Previously I’d used Netlify, but their rebuild limitations on the free tier were getting a touch much.

I regularly create content that I want to release automatically in the future at a set time and date. To accomplish this, Hugo needs to rebuild the site periodically in the background so that when new pages are ready to go live they are automatically built and available to the world to see. When I’m debugging or writing articles I’ll run the local environment on my Macbook Pro, and only when I’m happy with the final result will I push to the Git repo. Hence I need a set-and-forget automatic build environment. I’ve done this on spare machines (of which I currently have none), on a beefier VPS using cron jobs and scripts, and on my Synology as a virtual machine using the same (which wasn’t reliable), before settling on this design.

Requirements

The VPS needed to be capable of serving Nginx from folders that are RSync’d from the DropBox. I searched through LowEnd Stock looking for deals with 256MB of RAM and SSD storage for a cheap annual rate, and at the time got the “Special Mini Sailor OpenVZ SSD” for $6 USD/yr, which was that amount of RAM and 10GB of SSD space, running CentOS7. (Note: these have sold out, but there’s plenty of others around that price range at time of writing.)

Setting up the RSync, NGinx, SSH etc is beyond the scope of this article however it is relatively straight-forward. Some guides here might be helpful if you’re interested.

My sites are controlled via a Git workflow, which is quite common for website management of static sites and in my case I’ve used GitHub, GitLab and most recently settled on the lightweight and solid Gitea which I also self-host now on my Synology. Any of the above would work fine but having them on the same device makes the Git Clone very fast but you can adjust that step if you’re using an external hosting platform.

I also had three sites I wanted to build from the same platform. The requirements roughly were:

  • Must stay within Synology DSM Docker environment (no hacking, no portainer which means DroneCI is out)
  • Must use all self-hosted, owned docker/system environment
  • A single docker image to build multiple websites
  • Support error logging and notifications on build errors
  • Must be lightweight
  • Must be an updated/recent/current docker image of Hugo

The Docker Image And Folders

I struggled for a while with different images because I needed one that included RSync, Git and Hugo, and allowed me to modify the startup script. Some of the Hugo build Docker images out there were actually quite restricted to a set workflow, like running up the local server to serve from memory, or assumed you had a single website. The XdevBase / HugoBuilder was perfect for what I needed. Preinstalled it has:

  • rsync
  • git
  • Hugo (Obviously)

Search for “xdevbase” in the Docker Registry and you should find it. Select it and Download the latest - at time of writing it’s very lightweight only taking up 84MB.

XDevBase

After this, open “File Station” and start building the supporting folder structure you’ll need. I had three websites - TechDistortion, The Engineered Network and SlipApps - hence I created three folders. Firstly, under the Docker folder (which you should already have if you’ve played with Synology Docker before), create a sub-folder for Hugo - I imaginatively called mine “gohugo” - then under that I created a sub-folder for each site plus one for my logs.

Folders

Under each website folder I also created two more folders: “src” for the website source I’ll be checking out of Gitea, and “output” for the final publicly generated Hugo website output from the generator.

Scripts

I spent a fair amount of time perfecting the scripts below. The idea was to have an over-arching script that builds each site one after the other in a never-ending loop, with a mandatory wait time between loops. If you attempt to run independent containers each on a timer and any other task runs on the Synology, the two or three independently running containers will overlap, leading to an overload condition the Synology will not recover from. The only viable option is to serialise the builds, and synchronising those builds is easiest using a single docker container like I have.

Using the “Text Editor” on the Synology or using your text editor of choice and copying the files across to the correct folder, create a main build.sh file and as many build-xyz.sh files as you have sites you want to build.

#!/bin/sh
# Main build.sh

# Stash the current time and date in the log file and note the start of the docker
current_time=$(date)
echo "$current_time :: GoHugo Docker Startup" >> /root/logs/main-build-log.txt

while :
do
	current_time=$(date)
	echo "$current_time :: TEN Build Called" >> /root/logs/main-build-log.txt
	/root/build-ten.sh
	current_time=$(date)
	echo "$current_time :: TEN Build Complete, Sleeping" >> /root/logs/main-build-log.txt
	sleep 5m

	current_time=$(date)
	echo "$current_time :: TD Build Called" >> /root/logs/main-build-log.txt
	/root/build-td.sh
	current_time=$(date)
	echo "$current_time :: TD Build Complete, Sleeping" >> /root/logs/main-build-log.txt
	sleep 5m

	current_time=$(date)
	echo "$current_time :: SLIP Build Called" >> /root/logs/main-build-log.txt
	/root/build-slip.sh
	current_time=$(date)
	echo "$current_time :: SLIP Build Complete, Sleeping" >> /root/logs/main-build-log.txt
	sleep 5m
done

current_time=$(date)
echo "$current_time :: GoHugo Docker Build Loop Ungraceful Exit" >> /root/logs/main-build-log.txt
curl -s -F "token=xxxthisisatokenxxx" -F "user=xxxthisisauserxxx1" -F "title=Hugo Site Builds" -F "message=\"Ungraceful Exit from Build Loop\"" https://api.pushover.net/1/messages.json

# When debugging is handy to jump out into the Shell, but once it's working okay, comment this out:
#sh

This creates a main build log file and calls each sub-script in sequence. If it ever jumps out of the loop, I’ve set up a Pushover API notification to let me know.

Since all three sub-scripts are effectively identical except for the directories and repositories for each, The Engineered Network script follows:

#!/bin/sh

# BUILD The Engineered Network website: build-ten.sh
# Set Time Stamp of this build
current_time=$(date)
echo "$current_time :: TEN Build Started" >> /root/logs/ten-build-log.txt

rm -rf /ten/src/* /ten/src/.* 2> /dev/null
current_time=$(date)
if [[ -z "$(ls -A /ten/src)" ]];
then
	echo "$current_time :: Repository (TEN) successfully cleared." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Repository (TEN) not cleared." >> /root/logs/ten-build-log.txt
fi

# The following is easy since my Gitea repos are on the same device. You could also set this up to Clone from an external repo.
git --git-dir /ten/src/ clone /repos/engineered.git /ten/src/ --quiet
success=$?
current_time=$(date)
if [[ $success -eq 0 ]];
then
	echo "$current_time :: Repository (TEN) successfully cloned." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Repository (TEN) not cloned." >> /root/logs/ten-build-log.txt
fi

rm -rf /ten/output/* /ten/output/.* 2> /dev/null
current_time=$(date)
if [[ -z "$(ls -A /ten/output)" ]];
then
	echo "$current_time :: Site (TEN) successfully cleared." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Site (TEN) not cleared." >> /root/logs/ten-build-log.txt
fi

hugo -s /ten/src/ -d /ten/output/ -b "https://engineered.network" --quiet
success=$?
current_time=$(date)
if [[ $success -eq 0 ]];
then
	echo "$current_time :: Site (TEN) successfully generated." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Site (TEN) not generated." >> /root/logs/ten-build-log.txt
fi

rsync -arvz --quiet -e 'ssh -p 22' --delete /ten/output/ bobtheuser@myhostsailorvps:/var/www/html/engineered
success=$?
current_time=$(date)
if [[ $success -eq 0 ]];
then
	echo "$current_time :: Site (TEN) successfully synchronised." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Site (TEN) not synchronised." >> /root/logs/ten-build-log.txt
fi

current_time=$(date)
echo "$current_time :: TEN Build Ended" >> /root/logs/ten-build-log.txt

The above script can be broken down into several steps as follows:

  1. Clear the Hugo Source directory
  2. Pull the current released Source code from the Git repo
  3. Clear the Hugo Output directory
  4. Hugo generate the Output of the website
  5. RSync the output to the remote VPS

Each step has a pass/fail check and logs the result either way.

Your SSH Key

For this to work you need to confirm that RSync works and that you can push to the remote VPS securely. To do that, extract the id_rsa private key (preferably generate a fresh key-pair) and place it in the /docker/gohugo/ folder on the Synology ready for the next step. As they say, it should “just work”, but you can test whether it does once your docker is running. Open the GoHugo docker, go to the Terminal tab and Create–>Launch with command “sh”, then select the “sh” terminal window. In there enter:

ssh bobtheuser@myhostsailorvps -p22

That should log you in without a password, securely via SSH. Once it’s working you can exit that terminal and smile. If not, you’ll need to dig into your SSH keys, which is beyond the scope of this article.
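If you’d rather generate a fresh key-pair than re-use an existing id_rsa, a minimal sketch follows (run on any machine with OpenSSH; the file names, port and VPS details are placeholders matching the example above):

# Generate a new key-pair with no passphrase, since the docker has to use it unattended
ssh-keygen -t ed25519 -N "" -f ./id_rsa
# Install the public half on the VPS so key-based logins are accepted
ssh-copy-id -i ./id_rsa.pub -p 22 bobtheuser@myhostsailorvps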

Gitea Repo

This is now specific to my use case. You could clone your repo from any other location, but for me it was quicker, easier and simpler to map my repo from the Gitea Docker folder location. If you’re like me and running your own Gitea on the Synology, you’ll find that repo directory under the /docker/gitea sub-directories at …data/git/repositories/ and that’s it. Of course many will not be doing that, but setting up external Git cloning isn’t too difficult - just beyond the scope of this article.

Configuring The Docker Container

Under the Docker –> Image section, select the downloaded image then “Launch” it, set the Container Name to “gohugo” (or whatever name you want…doesn’t matter) then configure the Advanced Settings as follows:

  • Enable auto-restart: Checked
  • Volume: (See below)
  • Network: Leave it as bridge is fine
  • Port Settings: Since I’m using this as a builder I don’t care about web-server functionality so I left this at Auto and never use that feature
  • Links: Leave this empty
  • Environment –> Command: /root/build.sh (Really important to set this start-up command here and now, since thanks to Synology’s DSM Docker implementation, you can’t change this after the Docker container has been created without destroying and recreating the entire docker container!)

There’s a lot of little things to add here to make this work for all the sites. In future if you want to add more sites then stopping the Docker, adding Folders and modifying the scripts is straight-forward.

Add the following Files: (Where xxx, yyy, zzz are the script names representing your sites we created above, aaa is your local repo folder name)

  • docker/gohugo/build-xxx.sh map to /root/build-xxx.sh (Read-Only)
  • docker/gohugo/build-yyy.sh map to /root/build-yyy.sh (Read-Only)
  • docker/gohugo/build-zzz.sh map to /root/build-zzz.sh (Read-Only)
  • docker/gohugo/build.sh map to /root/build.sh
  • docker/gohugo/id_rsa map to /root/.ssh/id_rsa (Read-Only)
  • docker/gitea/data/git/repositories/aaa map to /repos (Read-Only) Only for a locally hosted Gitea repo

Add the following Folders:

  • docker/gohugo/xxx/output map to /xxx/output
  • docker/gohugo/xxx/src map to /xxx/src
  • docker/gohugo/yyy/output map to /yyy/output
  • docker/gohugo/yyy/src map to /yyy/src
  • docker/gohugo/zzz/output map to /zzz/output
  • docker/gohugo/zzz/src map to /zzz/src
  • docker/gohugo/logs map to /root/logs

When finished and fully built the Volumes will look something like this:

Volumes

Apply the Advanced Settings then Next and select “Run this container after the wizard is finished” then Apply and away we go.

Of course, you can put whatever folder structure and naming you like, but I like keeping my abbreviations consistent and brief for easier coding and fault-finding. Feel free to use artistic licence as you please…

Away We Go!

At this point the Docker should now be periodically regenerating your Hugo websites like clockwork. I’ve had this setup running now for many weeks without a single hiccup and on rebooting it comes back to life and just picks up and runs without any issues.

As a final bonus you can also configure the Synology Web Server to point at each Output directory and double-check what’s being posted live if you want to.

Enjoy your automated Hugo build environment that you completely control :)

]]>
Hugo 2021-02-22T06:00:00+10:00 #TechDistortion
Building Your Own Bitcoin Lightning Node Part Two https://techdistortion.com/articles/build-your-own-bitcoin-lightning-node-part-two https://techdistortion.com/articles/build-your-own-bitcoin-lightning-node-part-two Building Your Own Bitcoin Lightning Node Part Two Previously I’ve written about my Synology BitCoin Node Failure and more recently about my RaspiBlitz build that was actually successful. Now I’d like to share how I set it up, with a few things I learned along the way that will hopefully help others avoid the mistakes I made.

Previously I suggested the following:

  • Set up the node to download a fresh copy of the BlockChain
  • Use an External IP, as it’s more compatible than TOR (unless you’re a privacy nut)

Beyond that here’s some more suggestions:

  • If you’re on a home network behind a standard Internet Modem/Router: change the Raspberry Pi to a fixed IP address and set up port forwarding for the services you need (TCP 9735 at a minimum for Lightning)
  • Don’t change the IP from DHCP to Fixed IP until you’ve first enabled and set up your Wireless connection as a backup
  • Sign up for DuckDNS before you add ANY Services (I tried FreeDNS but DuckDNS was the only one I found that supports Let’s Encrypt)

Let’s get started then…

WiFi First

Of course this is optional, but I think it’s worth having even if you’re not intending to pull the physical cable and shove the Pi in a drawer somewhere (please don’t though - it will probably overheat if you do that). Go to the Terminal on the Pi and enter the following:

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

Then add the following to the bottom of the file:

network={
    ssid="My WiFi SSID Here"
    psk="My WiFi Password Here"
}

This is the short-summary version of the Pi instructions.

Once this is done you can reboot or enter this to restart the WiFi connection:

sudo wpa_cli -i wlan0 reconfigure

You can confirm it’s connected with:

iwgetid

You should now see:

wlan0     ESSID:"My WiFi SSID Here"

Fixed IP

The Raspberry Pi docs walk through what to change, but I’ll summarise it here. Firstly, if you have a router to connect to the internet, it’s likely using one of the standard subnets with something like 192.168.1.1 as your gateway; to be sure, from the Raspberry Pi terminal (after you’ve SSH’d in) type:

route -ne

It should come back with a routing table showing Destination 0.0.0.0 via a Gateway, most likely something like 192.168.1.1, with Iface (Interface) eth0 for hardwired Ethernet and wlan0 for WiFi. Next type:

cat /etc/resolv.conf

This should list the nameservers you’re using - make a note of these in a text-editor if you like. Then edit your dhcpcd.conf. I use nano but you can use vi or any other linux editor of your choice:

sudo nano /etc/dhcpcd.conf

Add the following (or your equivalent) to the end of the conf: (Where xxx is your Fixed IP)

interface eth0
static ip_address=192.168.1.xxx
static routers=192.168.1.1
static domain_name_servers=192.168.1.1  fe80::9fe9:ecdf:fc7e:ad1f%eth0

Of course, when picking your fixed IP on the local network, make sure it sits outside the range your DHCP server allocates from. On my network I only allow DHCP between .20 and .254 of my subnet, but you can reserve addresses any way you prefer.

Once this is done reboot your Raspberry Pi and confirm you can connect via SSH at the Fixed IP. If you can’t, try the WiFi IP address and check your settings. If you still can’t, oh dear you’ll need to reflash your SD card and start over. (If that happens don’t worry, your Blockchain on the SSD will not be lost)

Dynamic DNS

If you’re like me you’re running this on your home network and you have a “normal” internet plan behind an ISP that charges more for a Fixed IP on the Internet and hence you’ve got to deal with a Dynamic IP address that’s public-facing. #Alas

There are many Dynamic DNS sites out there, but finding one that will work reliably, automatically, with Let’s Encrypt isn’t easy. Of course if you’re not intending to use public-facing utilities that need a TLS certificate like I am (Sphinx) then you probably don’t need to worry about this step or at least any Dynamic DNS provider would be fine. For me, I had to do this to get Sphinx to work properly.

DuckDNS allows you to sign in with credentials ranging from Persona, to Twitter, GitHub, Reddit and Google: pick whichever you have or whichever you prefer. Once logged in you can create a subdomain and add up to 5 in total. Take note of your Token and your subdomain.

In the RaspiBlitz menu go to SUBSCRIBE and select NEW2 (LetsEncrypt HTTPS Domain [free] not under Settings!) then enter the above information as requested. When it comes to the Update URL leave this blank. The Blitz will reboot and hopefully everything should just work. When you’re done the Domain will then appear on the LCD of your Blitz at the top.

You won’t know if your certificates are correctly issued until later or if you want you can dive into the terminal again and manually check, but that’s your call.

Port Forwarding Warning

Personally I only Port Forward the following that I believe is the minimum required to get the Node and Sphinx Relay working properly:

  • TCP 9735 (Lightning)
  • TCP 3300 & 3301 (Sphinx Relay)
  • TCP 8080 (Let’s Encrypt)

I think there’s an incremental risk in forwarding a lot of other services - particularly those that allow administration of your Node and Wallet. I also use an Open VPN to my household network with a different endpoint and I use the Web UIs and Zap application on my iPhone for interacting with my Node. Even with a TLS certificate and password per application I don’t think opening things wide open is a good idea. You may see that convenience differently, so make your own decisions in this regard.

Okay…now what?

As a podcaster and casual user of a Lightning Node, not everything in the Settings and Services is of interest to me. I’ve enabled the following, which are important for my use and monitoring:

  • (SETTINGS) LND Auto-Unlock
  • (SERVICES) Accept KeySend
  • (SERVICES) RTL Web interface
  • (SERVICES) ThunderHub
  • (SERVICES) BTC-RPC-Explorer
  • (SERVICES) Lightning Loop
  • (SERVICES) Sphinx-Relay

Each in turn…

LND Auto-Unlock

In lightning’s LND implementation, the Wallet with your coinage in it is automatically locked when you restart your system. If you’re comfortable with auto-unlocking your wallet on reboot without you explicitly entering your Wallet password then this feature means a recovery from a reboot/power failure etc will be that little bit quicker and easier. That said, storing your wallet password on your device for privacy nuts is probably not the best idea. I’ll let you balance convenience against security for yourself.

Accept KeySend

One of the more recent additions to the Lightning standard, in mid-2020, was KeySend. This feature allows anyone to send a spontaneous payment - no invoice required - to any Node that supports it, from any Node that supports it. With the Podcasting 2.0 model, the key is using KeySend to stream Sats to your nominated Node, either per minute listened or as one-off Boost payments showing appreciation on behalf of the listener. For me this was the whole point, but some might not be comfortable accepting payments from random people at random times of the day. Who can say?

RTL Web interface

The Ride The Lightning web interface is a basic but handy web UI for looking at your Wallet and your channels, and for creating and receiving Invoices. I enabled it because it was more lightweight than ThunderHub, but as I’ve learned more about BitCoin and Lightning I must confess I rarely use it now and prefer ThunderHub. It’s a great place to start though, and handy to have.

ThunderHub

By far the most detailed and extensive UI I’ve found yet for the LND implementation, ThunderHub allows everything that RTL’s UI does, plus channel rebalancing, statistics, swaps and reporting. It’s become my go-to UI for interacting with my Node.

BTC-RPC-Explorer

I only recently added this because I was sick of going to internet-based web pages to look at information about BitCoin - things like the current leading block, pending transactions, fee targets, block times and lots and lots more. Having said all of that, it took about 9 hours to crunch through the blockchain and derive this information on my Pi, and it took up about 8% of my remaining storage for the privilege. You could probably live without it though, but if you’re really wanting to learn about the state of the BitCoin blockchain then this is very useful.

Lightning Loop

Looping payments in and out is handy to have and a welcome addition to the LND implementation. At a high level Looping allows you to send funds to/from users/services that aren’t Lightning enabled and reduces transaction fees by reusing Lightning channels. That said, maybe that’s another topic for another post.

Sphinx-Relay

The one I really wanted. The truth is that at the time of writing, the best implementation of streaming podcasts with Lightning integration is Sphinx.

Sphinx started out as a Chat application, but one that uses the distributed Lightning network to pass messages. The idea seems bizarre to start with but if you have a channel between two people you can send them a message attached to a Sat payment. The recipient can then send that same Sat back to you with their own message in response.

Of course you can add fees for peer-to-peer messages if you want to, but that’s optional. If you want to chat with someone else on Sphinx, so long as they have a Wallet on a Node that has a Sphinx-Relay on it, you can participate. Things get more interesting if you create a group chat, which Sphinx calls a “Tribe”, at which point posting on the channel requires “Staking” an amount for a “Time to Stake”, both set by the Tribe owner. If the poster posts something good, the time to stake elapses and the staked amount returns to them. If the poster posts something inflammatory, the Tribe owner can delete that post and claim those funds.

This effectively puts a price on poor behaviour and conversely poor-acting owners that delete all posts will find themselves with an empty Tribe very quickly. It’s an interesting system for sure but has led to some well moderated conversations in my experiences thus far even in controversial Tribes.

In mid/late 2020 Sphinx integrated Podcasts into Tribe functionality. Hence I can create a Tribe, link a single Podcast RSS Feed to that Tribe and then anyone listening to an episode in the Sphinx app and Tribe will automatically stream Sats to the RSS Feed’s nominated Lightning Node. The “Value Slider” defaults to the Streaming Sats suggested in the RSS Feed, however this can be adjusted by the listener on a sliding bar all the way down to 0 if they wish - it’s Opt in. The player itself is basic but works well enough with Skip Forwards and Backwards as well as speed adjustment.

Additionally Sphinx has apps available for iOS (TestFlight Beta), Android (sideload, Android 7.0 and higher) and desktop OSs including Windows, Linux and MacOS. Most functions exist in all of the apps, however I sometimes found myself going back to the iOS app to send/receive Sats to my Wallet/Node, which isn't currently implemented in the MacOS version (though not since I started my own Node). You can of course host a Node on Sphinx for a monthly fee if you prefer, but this article is about owning your own Node.

One Last Thing: Inbound Liquidity

The only part of this equation that’s a bit odd (or was for me at the beginning) is understanding liquidity. I mentioned it briefly here, but in short when you open a channel with someone the funds are on your own side, meaning you have outbound liquidity. Hence I can spend Lightning/BitCoin on things in the Network. That’s fine. No issue. The problem is when you’re a Podcaster you want to receive payments in streaming Sats, but without Inbound Liquidity you can’t do that.

The simplest way to build it is to ask, really, really nicely for an existing Lightning user to open a channel with you. Fortunately my Podcasting 2.0 acquaintance Dave Jones was kind enough to open a channel for 100k Sats to my node, thus allowing inbound liquidity for testing and setting up.

In current terms 100k Sats isn't a huge channel, but it's more than enough to get anyone started. There are other approaches I've seen, including pushing tokens to the channel partner when the channel is created (at a cost), but that's something I need to learn more about before venturing further thoughts on it. A rough sketch of what a channel open looks like from the other side follows below.
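For reference, this is roughly what the other party runs against their own LND node to open an inbound channel to you. The pubkey, address and amounts are placeholders, and the optional push amount is the "pushing tokens to the partner" idea mentioned above:

   # run by the peer opening the channel towards you - placeholders throughout
   lncli connect YOUR_NODE_PUBKEY@your.node.address:9735
   lncli openchannel --node_key=YOUR_NODE_PUBKEY --local_amt=100000
   # optionally push some Sats to your side on open (costs them that amount):
   # lncli openchannel --node_key=YOUR_NODE_PUBKEY --local_amt=100000 --push_amt=20000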

That’s it

That’s pretty much it. If you’re a podcaster and you’ve made it this far you now have your own Node, you’ve added your Value tag to your RSS feed with your new Node ID, you’ve set up Sphinx Relay and your own Tribe and with Inbound Liquidity you’re now having Sats streamed to you by your fans and loyal listeners!

Many thanks to Podcasting 2.0, Sphinx, RaspiBlitz, DuckDNS and both Adam Curry and Dave Jones for inspiration and guidance.

Please consider supporting each of these projects and groups as they are working in the open to provide a better podcasting future for everyone.

]]>
Podcasting 2021-02-16T06:00:00+10:00 #TechDistortion
Building Your Own Bitcoin Lightning Node https://techdistortion.com/articles/build-your-own-bitcoin-lightning-node https://techdistortion.com/articles/build-your-own-bitcoin-lightning-node Building Your Own Bitcoin Lightning Node After my previous attempts to build my own node to take control of my slowly growing podcast streaming income didn't go so well, I decided to bite the bullet and build my own Lightning Node with new hardware. The criteria were:

  1. Minimise expenditure and transaction fees (host my own node)
  2. Must be always connected (via home internet is fine)
  3. Use low-cost hardware and open-source software with minimal command-line work

Because of the above, I couldn’t use my Macbook Pro since that comes with me regularly when I leave the house. I tried to use my Synology, but that didn’t work out. The next best option was a Raspberry Pi, and two of the most popular options out there are the RaspiBolt and RaspiBlitz. Note: Umbrel is coming along but not quite as far as the other two.

The Blitz was my choice as it seems to be more popular and I could build it easily enough myself. The GitHub Repo is very detailed and extremely helpful. This article is not intended to just repeat those instructions, but rather describe my own experiences in building my own Blitz.

Parts

The GitHub instructions suggest Amazon links, but in Australia Amazon isn't what it is in the States or even Europe, so instead I sourced the parts from a local importer of Raspberry Pi parts. I picked from the "Standard" list:

Core Electronics

  • $92.50 / Raspberry Pi 4 Model B 4GB
  • $16.45 / Raspberry Pi 4 Power Supply (Official) - USB-C 5.1V 15.3W (White)
  • $23.50 / Aluminium Heatsink Case for Raspberry Pi 4 Black (Passive Cooling, Silent)
  • $34.65 / Waveshare 3.5inch LCD 480x320 (The LCD referred to was a 3.5" RPi Display, GPIO connection, XPT2046 Touch Controller but they had either no stock on Amazon or wouldn’t ship to Australia)

Blitz All the parts from Core Electronics

UMart

  • $14 / Samsung 32GB Micro SDHC Evo Plus W90MB Class 10 with SD Adapter

On Hand

Admittedly a 1TB SSD and case would've cost an additional $160 AUD had I not already had them on hand. In future I'll probably extend to a fully future-proof 2TB SSD, as at this point the BitCoin blockchain uses about 82% of the 1TB, so a bigger SSD is on the cards for me in the next 6-9 months for sure.

Total cost: $181.10 AUD (about $139 USD or 300k Sats at time of writing)

Blitz The WaveShare LCD Front View

Blitz The WaveShare LCD Rear View

Assembly

The power supply is simple: unwrap, plug in to the USB-C Power Port and done. The Heatsink comes with some different-sized thermal pads to sandwich between the heatsink and the key components on the Pi motherboard, and four screws to clamp the two pieces together around the motherboard. Finally, line up the screen with the outer-most pins on the I/O Header and gently press them together. They won't sit flat against the HeatSink/case, but they don't have to in order to connect well.

Blitz The Power Supply

Blitz The HeatSink

Blitz The Raspberry Pi 4B Motherboard

Burning the Image

I downloaded the boot image from the GitHub repo and used Balena Etcher on my MacBook Pro to write it to the MicroSD card. Afterwards, insert the card into the Raspberry Pi, connect the SSD to the motherboard-side USB 3.0 port, connect an Ethernet cable and power it up!
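If you'd rather flash the card from the Terminal than use Etcher, something like this works on MacOS; the disk identifier is a placeholder, so double-check it with diskutil list before writing anything:

   diskutil list                                     # find the MicroSD card, e.g. /dev/disk4 (placeholder)
   diskutil unmountDisk /dev/disk4
   sudo dd if=raspiblitz.img of=/dev/rdisk4 bs=1m    # rdisk is the faster raw device on MacOS
   diskutil eject /dev/disk4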

Installing the System

If everything is hooked up correctly (and you have a router/DHCP server on the hardwired Ethernet you just connected it to), the screen should light up with the DHCP-allocated IP Address you can reach it on, along with instructions on how to SSH in via the terminal, like "ssh admin@192.168.1.121" or similar. Open up Terminal, enter that, and you'll get a nice neat blue screen with the same information on it. From here everything is done via the menu installer.

If you get kicked out of that interface just enter ‘raspiblitz’ and it will restart the menu.

Getting the Order Right

  1. Pick Your Poison For me I chose BitCoin and Lightning, which is the default, though there are other crypto-currencies if that's your preference. Then set your passwords: please use a password manager and at least 32 characters - make it as secure as you can from Day One!
  2. TOR vs Public IP Some privacy nuts run behind TOR to obscure their identity and location. I've done both and can tell you that TOR takes a lot longer to sync and access, rules out a number of apps, and makes opening channels to some other nodes and services difficult or impossible. For me, I just wanted a working node that was as interoperable as possible, so I chose Public IP.
  3. Let the BlockChain Sync Once your SSD is formatted, if you have the patience then I recommend syncing the Blockchain from scratch. I already had a copy of it that I SCP'd across from my Synology (roughly as sketched after this list), and it saved me about 36 hours, but it also caused my installer to ungracefully exit and it took me another day of messing with the command line to get it to start again and complete the installation. In retrospect, not a time saver end to end, but your mileage may vary.
  4. Set up a New Node Or in my case, I recovered my old node at this point by copying the channel.backup over, but for most others it's a New Node and a new Wallet - and for goodness sake, when you make a new wallet: KEEP A COPY OF YOUR SEED WORDS!!!
  5. Let Lightning “Sync” It’s actually validating blocks technically but this also takes a while. For me it took nearly 6 hours for both Lightning and Bitcoin blocks to sync.
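As mentioned in step 3, the pre-sync copy was just a straight copy of bitcoind's data directories over SSH; a sketch only, with hostnames and paths as placeholders (the RaspiBlitz data directory in particular varies between versions):

   # run with bitcoind stopped on both ends; hosts and paths are placeholders
   scp -r admin@synology.local:/volume1/docker/bitcoin/blocks admin@raspiblitz.local:/mnt/hdd/bitcoin/
   scp -r admin@synology.local:/volume1/docker/bitcoin/chainstate admin@raspiblitz.local:/mnt/hdd/bitcoin/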

Blitz The Final Assembled Node up and Running

My Money from Attempt 2 on the Synology Recovered!

I was able to copy the channel.backup and wallet.dat files from the Synology and successfully recover my $60 AUD investment from my previous attempts, so that's good! (And it worked pretty easily, actually.)

In order to prevent any loss of wallet, I’ve also added a USB3.0 Thumb Drive to the other USB3.0 port and set up “Static Channel Backup on USB Drive” which required a brief format to EXT4 but worked without any real drama.

Conclusion

Building the node using a salvaged SSD cost under $200 AUD and took about 2 days to sync and set up. Installing the software and setting up all the services is another story for another post, but it’s working great!

]]>
Podcasting 2021-02-12T06:00:00+10:00 #TechDistortion
BitCoin, Lightning and Patience https://techdistortion.com/articles/bitcoin-lightning-and-patience https://techdistortion.com/articles/bitcoin-lightning-and-patience BitCoin, Lightning and Patience I’ve been vaguely aware of BitCoin for a decade but never really dug into it until recently, as a direct result of my interest in the Podcasting 2.0 team.

My goals were:

  1. Minimise expenditure and transaction fees
  2. Use existing hardware and open-source software
  3. Setup a functional lightning node to both make and accept payments

I’m the proud owner of a Synology, and it can run docker, and you can run BitCoin and Lightning in Docker containers? Okay then…this should be easy enough, right?

BitCoin Node Take 1

I set up the kylemanna/bitcoind docker on my Synology and started it syncing the Mainnet blockchain. About a week later I was sitting at 18% complete, averaging 1.5% per day and dropping. Reading up on this, the problem was two-fold: validating the blockchain is a CPU- and HDD/SSD-intensive task, and my Synology was weak on both counts. I threw more RAM at it (3GB out of the 4GB it had) with no difference in performance, set the CPU restrictions to give the Docker the most performance possible with no difference, and basically ran out of levers to pull.
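For context, spinning up the container was the easy part - something along these lines, though the mount point is from memory of the kylemanna/bitcoind docs and the Synology-side path is a placeholder, so check the image's README before relying on it:

   # sketch only - volume path and mount point may differ from the image's current docs
   docker run -d --name bitcoind \
     -v /volume1/docker/bitcoind:/bitcoin \
     -p 8333:8333 -p 127.0.0.1:8332:8332 \
     kylemanna/bitcoind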

I then learned it's possible to copy a blockchain from one device to another, and that the Raspberry Pis sold as ready-made private nodes come with the blockchain pre-synced (up to the point they're shipped) so they don't take too long to catch up to the front of the chain. So I downloaded BitCoin Core for MacOS and set it running. After two days it had finished (much better) and I copied the directories to the Synology, only to find that BitCoin Core had been set to "prune" the blockchain after validation, meaning the entire blockchain was no longer stored on my Mac and the docker container would need to start over.

Ugh.

So I disabled pruning on the Mac, and started again. The blockchain was about 300GB (so I was told) and with my 512GB SSD on my MBP I thought that would be enough, but alas no, as the amount of free space diminished at a rapid rate of knots, I madly off-loaded and deleted what I could finishing with about 2GB to spare and the entire blockchain and associated files weighed in at 367GB.

Transferring them to the Synology and firing up the Docker…it worked! Although it had to revalidate the 6 most recent blocks (taking about 26 minutes EVERY time you restarted the BitCoin docker) it sprang to life nicely. I had a BitCoin node up and running!

Lightning Node Take 1

There are several docker containers to choose from, the two most popular seemed to be LND and c-Lightning. Without understanding the differences I went with the container that was said to be more lightweight and work better on a Synology: c-Lightning.

Later I was to discover that many plugins, applications, GUIs and relays (Sphinx for example) only work with LND and require LND Macaroons, which c-Lightning doesn't support. Not only that, the c-Lightning developers' design decision to only permit a single channel between two nodes makes building liquidity problematic when you're starting out. (More on that in another post someday…)

After messing around with RPC to get the c-Lightning docker to communicate with the KyleManna Bitcoind docker, I realised that I needed ZMQ support, since RPC username and password authentication was being phased out in favour of token (cookie) authentication through a shared folder.
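The bitcoind side of that ends up being only a couple of lines of configuration; a sketch, with the ports being the conventional ones from most guides rather than anything mandatory, and noting that it's LND in particular that consumes the ZMQ endpoints:

   # bitcoin.conf - sketch only
   server=1
   # ZMQ notification endpoints (LND subscribes to these)
   zmqpubrawblock=tcp://0.0.0.0:28332
   zmqpubrawtx=tcp://0.0.0.0:28333
   # with cookie auth, bitcoind writes a .cookie file into its data directory,
   # which the Lightning container reads via the shared docker folder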

UGH

I was so frustrated at losing 26 minutes every time I had to change a single setting in the Bitcoin docker, and in an incident overnight both dockers crashed, didn’t restart and then took over a day to catch up to the blockchain again. I had decided more or less at this point to give up on it.

SSD or don’t bother

Interestingly my oldest son pointed out that all of the kits for sale used SSDs for the Bitcoin data storage - even the cheapest versions. A bit more research and it turns out that crunching through the blockchain is less of a CPU intensive exercise and more of a data store read/write intensive exercise. I had a 512GB Samsung USB 3.0 SSD laying around and in a fit of insanity decided to try connecting it to the Synology’s rear port, shift the entire contents of the docker shared folders (that contained all of the blocks and indexes) to that SSD and try it again.

Oh My God it was like night and day.

Both docker containers started, synced and were running in minutes. Suddenly I was interested again!

Bitcoin Node Take 2

With renewed interest I returned to my previous headache - linking the docker containers properly. The LNCM/Bitcoind docker had precompiled support for ZMQ and it was surprisingly easy to set up the docker shared file to expose the token I needed for authentication with the cLightning docker image. It started up referencing the same docker folder (now mounted on the SSD) and honestly, seemed to “just work” straight up. So far so good.

Lightning Node Take 2

This time I went for the more widely supported LND, picked an image by Guggero that was quite popular, and also spun it up rather quickly. My funds on my old c-Lightning node would simply have to remain trapped until I could figure out how to recover them in future.

Host-Network

The instructions I had read all related to TestNet and advised not to use money you weren't prepared to lose. I set myself a starting budget of $40 AUD and tried to make this work. Using the Breez app on iOS and its integration with MoonPay I managed to convert that into about 110k Sats. The next problem was getting them from Breez to my own Node: my attempts via Lightning failed with "no route" (I learned later I needed channels…d'uh), so sending via BitCoin - "on-chain" they call it - was the only option. This cost me a lot of Sats, but I finally had some Sats on my Node.

Satoshi’s

BitCoin has a few quirky little problems. One interesting one is that a single BitCoin is worth a LOT of money - currently 1 BTC = $62,000.00 AUD. So it's not a practical measure, and hence BitCoin is more commonly referred to in Satoshis, which are 1/100,000,000th of a BitCoin (at that price one Satoshi is worth about 0.062 of a cent, and 100k Sats is about $62 AUD). BitCoin is a crypto-currency which is transacted on the BitCoin blockchain, via the BitCoin network. Lightning is a Layer 2 network that also deals in BitCoin but in smaller amounts, peer-to-peer connected via channels, and because the values are much smaller it is regularly transacted in values of Satoshis.

Everything you do requires Satoshis (Sats). It costs Sats to fund a channel. It costs Sats to close a channel. I couldn't find out how to determine the minimum amount of Sats needed to open a channel without first attempting to open one via the command line. I only had a limited number of Sats to play with so I had to choose carefully. Most channels wanted 10,000 or 20,000, but I managed to find a few that only required 1,000. The initial thought was to open as many channels as you could, then make some transactions, and your inbound liquidity would improve as others in the network transact.

Services exist to help build that inbound liquidity, without which, you can’t accept payments from anyone else. Another story for a future post.

Anything On-Chain Is Slow and Expensive

For a technology that's supposed to be reducing fees overall, Lightning seems to cost you a bit up-front to get into it, and any time you want to shuffle things around it costs Sats. I initially bought in wishing to fund my own node and try for that oft-touted "self-sovereignty" of BitCoin, but to achieve that you have to invest some money to get started. In the end, however, I hadn't invested enough, because the channels I opened didn't allow inbound payments.

I asked some people to open some channels to me and give me some inbound liquidity however not a single one of them successfully opened. My BitCoin and Lightning experiment had ground to a halt, once again.

At first I experimented with TOR, then with publishing on an external IP address, port-forwarding to expose the Lightning external access port 9735 to allow incoming connections. Research into why the opens failed suggested that I needed to recreate my dockers, connect them to a custom Docker network, and then resync the containers, otherwise the open channel attempts would continue to fail.
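The custom network part looks something like this; a sketch only, with the subnet, addresses and the Lightning image name as placeholders and all the volume/config flags omitted:

   # sketch only - subnet, IPs and <lnd-image> are placeholders, other flags omitted
   docker network create --driver bridge --subnet 172.20.0.0/16 btcnet
   docker run -d --name bitcoind --network btcnet --ip 172.20.0.2 kylemanna/bitcoind
   docker run -d --name lnd --network btcnet --ip 172.20.0.3 -p 9735:9735 <lnd-image>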

I did that and it still didn’t work.

Then I stumbled across the next idea: you needed to modify the Synology Docker DSM implementation to allow direct mounting of the Docker images without them being forced through a Double-NAT. Doing so was likely to impact my other, otherwise perfectly happily running Dockers.

UGH

That’s it.

I’m out.

Playing with BitCoin today feels like programming COBOL for a bank in the 80s

Did you know that, as of 2017, COBOL was reportedly behind nearly half of all financial transactions? Yes, and the world is gradually ripping it out (thankfully).

IDENTIFICATION DIVISION.
   PROGRAM-ID. CONDITIONALS.
   DATA DIVISION.
     WORKING-STORAGE SECTION.
     *> I'm not joking, Lightning-cli and Bitcoin-cli make me think I'm programming for a bank
     01 SATS-JOHN-HAS PIC 9(5) VALUE ZERO.
   PROCEDURE DIVISION.
     MOVE 20000 TO SATS-JOHN-HAS.
     IF SATS-JOHN-HAS > 0 THEN
       DISPLAY 'YAY I HAZ 20000 SATS!'
     END-IF
     *> I'd like to make all of my transactions using the command line, just like when I do normal banking...oh wait...
     EVALUATE TRUE
       WHEN SATS-JOHN-HAS = 0
         DISPLAY 'NO MORE SATS NOW :('
     END-EVALUATE.
   STOP RUN.

There's no doubt there's a bit of geek-elitism amongst many of the people involved with BitCoin. Comments like "Don't use a GUI, to understand it you MUST use the command line…" remind me of those that whined about the Macintosh having a GUI in 1984. A "real" computer used DOS. OMFG, seriously?

A real financial system should be as painless for the user as possible. Unbeknownst to me, I'd chosen perhaps the least advisable method: the wrong hardware, running the wrong software, running a less-compatible set of dockers. My conclusion is that setting up your own Node that you control is not easy.

It's not intuitive either, and it will make you think about things like inbound liquidity that you never thought you'd need to know about, since you're a geek - not an investment banker. I suppose the point is that owning your own bank means you have to learn a bit about how a bank needs to work, and that takes time and effort.

If you're happy to just pay someone else to build and operate a node for you then that's fine - that's just what you're doing today with any bank account. I spent weeks learning just how much I don't want to be my own bank, thank you very much - or at least, not using the equipment I had laying about, while living in the Terminal.

Synology as a Node Verdict

Docker was not reliable enough either. In some instances I would modify a single docker's configuration file and restart the container, only to get "Docker API failed". Sometimes I could recover by picking the Docker container I thought had caused the failure (most likely the one I'd modified, but not always), clearing the container and restarting it.

Other times I had to completely reboot the Synology to recover it, and sometimes I had to do both for Docker to restart. With every restart of the BitCoin container, there would go another half an hour of restarting, then the container would "go backwards" to 250 blocks behind, taking a further 24-48 hours of resynchronising with the blockchain before the Lightning container could then resynchronise with it. All the while the node was offline.

Unless your Synology is running SSDs, has at least 8GB of RAM, is relatively new and you don’t mind hacking your DSM Docker installation, you could probably make it work, but it’s horses for courses in the end. If you have an old PC laying about then use that. If you have RAM and SSD on your NAS then build a VM rather than use Docker, maybe. Or better yet, get a Raspberry Pi and have a dedicated little PC that can do the work.

Don’t Do What I Did

Money is Gone

The truth is that in an attempt to get incoming channel opens working, I flicked between Bridge and Host networking and back again, opening different ports and chasing SOCKS-failed errors, and I finally gave up when, after many hours, the LND docker just wouldn't connect via ZMQ any more.

And with that my $100 AUD investment is now stuck between two virtual wallets.

I will keep trying and report back but at this point my intention is to invest in a Raspberry Pi to run my own Node. I’ll let you know how that goes in due course.

]]>
Podcasting 2021-02-01T12:30:00+10:00 #TechDistortion
Podcasting 2.0 Addendum https://techdistortion.com/articles/podcasting-2-0-addendum https://techdistortion.com/articles/podcasting-2-0-addendum Podcasting 2.0 Addendum I recently wrote about Podcasting 2.0 and thought I should add a further amendment regarding their goals. I previously wrote:

To solve the problems above there are a few key angles being tackled: Search, Discoverability and Monetisation.

I'd like to add a fourth key angle to that, one which at the time I didn't think should be listed on its own. However, having listened more to Episodes 16 and 17 and their intention to add XML tags for IRC/Chat Room integration, I think I should add the fourth key angle: Interactivity.

Interactivity

The problem with broadcast historically is that audience participation is difficult given the tools and effort required. Pick up the phone, make a call - you need a big incentive (think cash prizes, competitions, discounts, something!) or audiences just don't participate. It's less personal, and with less of a personal connection the desire for listeners to connect is much lower.

Podcasting, as an internet-first application and one that's far more personal, sets the bar differently, and we can think of real-time feedback methods as verbal (via a dial-in/patch-through to the live show) or written (via messaging, like a chat room). There are also non-real-time methods, predominantly webforms and EMail. With contact EMails already in the RSS XML specification, adding a webform submission entry might be of some use (perhaps < podcast:contactform > with a url="https://contact.me/form"), but real-time is far more interesting.

Real Time Interactivity

In podcasting initially (like so many internet-first technology applications) geeks that understood how it works, led the way. That is to say with podcasts originally there was a way for a percentage of the listeners to use IRC as a Chat Room (Pragmatic did this for the better part of a year in 2014, as well as other far more popular shows like ATP, Back To Work etc.) to get real-time listener interaction during a podcast recording, albeit with a slight delay between audio out and listener response in the chat room.

YouTube introduced live streaming and live chat with playback that integrated the chat room with the video content to lower the barrier of entry for their platform. For equivalent podcast functionality to go beyond the geek-% of the podcast listeners, podcast clients will need to do the same. In order for podcast clients to be pressured to support it, standardisation of the XML tags and backend infrastructure is a must.

The problem with interactivity is that whilst it starts with the tag, it must end with the client applications otherwise only the geek-% of listeners will use it as they do now.

From my own experiences with live chat rooms during my own and other podcasts, people that are able to tune in to a live show and be present (lots of people just “sit” in a channel and aren’t actually present) is about 1-2% of your overall downloads and that’s for a technical podcast with a high geek-%. I’ve also found there are timezone-effects such that if you podcast live during different times of the day or night directly impacts those percentages even further (it’s 1am somewhere in the world right now, so if your listeners live in that timezone chances are they won’t be listening live).

The final concern is that chat rooms only work for a certain kind of podcast. For me, it could only potentially work with Pragmatic and in my experience I wanted Pragmatic to be focussed and chat rooms turned out to be a huge distraction. Over and again my listeners reiterated that one of the main attractions of podcasts was their ability to time-shift and listen to them when they wanted to listen to them. Being live to them was a minus not a plus.

For these reasons I don’t see that this kind of interactivity will uplift the podcasting ecosystem for the vast majority of podcasters, though it’s certainly nice to have and attempt to standardise.

Crowd-sourced Chapters

Previously I wrote:

The interesting opportunity that Adam puts forward with chapters is he wants the audience to be able to participate with crowd-sourced chapters as a new vector of audience participation and interaction with podcast creators.

Whilst I looked at this last time from the practical standpoint of "how would I as a podcaster use this?", concluding that I wouldn't since I'm a self-confessed control-freak, I didn't fully appreciate the angle of audience interaction. I think for podcasts that have a truly significant audience, with listeners that really want to help out (but can't help financially), this feature provides a potential avenue to assist in a non-financial way, which is a great idea.

Crowd-source Everything?

(Except recording the show!)

From pre-production to post-production, any task in the podcast creation chain could be outsourced to an extent. The pre-production dilemma could be addressed with a feed-level XML tag like < podcast:proposedtopics > pointing to a planned topic list (popular podcasts currently use Twitter #Tags like #AskTheBobbyMcBobShow), to cut centralised platforms like Twitter out of the creation chain in the long term. Again, this is only useful for certain kinds of shows, but it could also include a URL link to a shared document (probably a JSON file, sketched below) and an episode index reference (i.e. the currently released episode is 85, the proposed topics are for Episode 86; it could also be an array covering multiple episodes).
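Purely as an illustration of the idea, the shared document behind such a hypothetical tag might look something like this - none of these field names exist in any spec:

   {
     "podcast": "The Bobby McBob Show",
     "currentEpisode": 85,
     "proposedTopics": [
       { "episode": 86, "topics": ["Listener questions", "Interactivity tags"] },
       { "episode": 87, "topics": ["Crowd-sourced show notes"] }
     ]
   }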

The post-production dilemma generally consists of show notes, chapters (solution in progress) and audio editing. Perhaps a similar system to crowd-sourced chapters could be used for show notes that could include useful/relevant links for the current episode that aren’t/can’t be easily embedded as Chapter markers.

In either case there's no reason why it couldn't work the same way as crowd-sourced chapter markers. The podcaster would have (with sufficient privileges) the administrative access to add, modify or remove content from either of these, with guests also having read/write access. With an appropriate client tool this would eliminate the plethora of different methods in use today: shared Google documents, quite popular with many podcasters today, will not be around indefinitely.

All In One App?

Of course, the more features we pile into the podcast client app, the more difficult it becomes to write and maintain. Previously an excellent programmer, podcaster and audiophile like Marco Arment could create Overcast. With Lightning network integration, plus crowd-sourced chapters, shared document support (notes etc.) and a text chat client (IRC), the application quickly becomes much heavier and more complex, with fewer developers having the knowledge in every dimension to create an all-in-one client app.

The need for better frameworks to make feature integration easier for developers is obvious. There may well be the need to two classes of app or at least two views: the listener view and the podcaster view, or simply multiple apps for different purposes. Either way it’s interesting to see where the Tag + Use Case + Tool-chain can lead us.

]]>
Podcasting 2021-01-01T12:15:00+10:00 #TechDistortion
Podcasting 2.0 https://techdistortion.com/articles/podcast-2-0 https://techdistortion.com/articles/podcasting-2-0 Podcasting 2.0 I've been podcasting for close to a decade, and whilst I'm not what some might refer to as the "Old Guard", I've come across someone that definitely qualifies as such: Adam Curry.

Interestingly when I visited Houston in late 2019 pre-COVID19 my long-time podfriend Vic Hudson suggested I catch up with Adam as he lived nearby and referred to him as the “Podfather.” I had no idea who Adam was at that point and thought nothing of it at the time and although I caught up with Manton Reece at the IndieWeb Meetup in Austin I ran out of time for much else. Since then a lot has happened and I’ve come across Podcasting 2.0 and thus began my somewhat belated self-education of my pre-podcast-involvement podcasting history of which I had clearly been ignorant until recently.

In the first episode of Podcasting 2.0, “Episode 1: We are upgrading podcasting” on the 29th of August, 2020 at about 17 minutes in, Adam regales the story of when Apple and Steve Jobs wooed him with regards to podcasting as he handed over his own Podcast Index as it stood at the time to Apple as the new custodians. He refers to Steve Jobs' appearance at D3 and at 17:45, Steve defined podcasting as being iPod + Broadcasting = Podcasting, further describing it as “Wayne’s World for Podcasting” and even plays a clip of Adam Curry complaining about the unreliability of his Mac.

The approximate turn of events thereafter: Adam hands over the podcast index to Apple; Apple builds podcasting into iTunes and their iPod line-up and becomes the largest podcast index; many other services launch, but indies and small networks dominate podcasting for the most part; and for the longest time Apple doesn't do much at all to extend podcasting. Apple added a few RSS Feed namespace tags here and there, but did not attempt to monetise podcasting even as many others came into the podcasting space, bringing big names from conventional media and, with them, many companies starting or attempting to convert podcast content into something that wasn't as open as it had been, with "exclusive" pay-for content.

What Do I Mean About Open?

To be a podcast by its original definition it must have an RSS Feed, hostable on any machine serving pages to the internet and readable by any other machine on the internet, with an audio enclosure referring to an audio file that can be streamed or downloaded by anyone. A subscription podcast requires login credentials of some kind, usually associated with a payment scheme, in order to listen to the audio of those episodes. Some people draw the line at free = open (and nothing else); others are happy with the occasional authenticated feed that's still available on any platform/player, as that still presents an 'open' choice; but much further beyond that (won't play in any player, not everyone can find/get the audio) and things start becoming a bit more closed.

Due to their open nature, tracking of podcast listeners, demographics and such is difficult. Whilst advertisers see this as a minus, most privacy conscious listeners see this as a plus.

Back To The History Lesson

With big money and big names a new kind of podcast emerged: one behind a paywall, with features and functionality that other podcast platforms didn't or couldn't have with a traditional open podcast using current namespace tags. With platforms scaling and big money flowing into podcasting, it effectively brought down the average ad-revenue across the board and introduced more self-censorship and forced censorship of content that had previously been freely open.

With Spotify and Amazon gaining traction, more multi-million dollar deals and a lack of action from Apple, it’s become quite clear to me that podcasting as I’ve known it in the past decade is in a battle with more traditional, radio-type production companies with money from their traditional radio, movie and music businesses behind them. The larger the more closed podcast eco-systems become, the harder it then becomes for those that aren’t anointed by those companies as being worthy, to be heard amongst them.

Advertisers, instead of spending time and energy on highly targeted advertising by carefully selecting shows (and podcasters) individually to attract their target demographic, start dealing only with the bigger companies in the space, since they want the demographics that come from user tracking. With the bigger companies claiming a large slice of the audience, they then over-sell their ad-inventory, leading to lower-value DAI and less-personal advertising, further driving down ad-revenues.

(Is this starting to sound like radio yet? I thought podcasting was supposed to get us away from that…)

Finally another issue emerged: that of controversial content. What one person finds controversial another person finds acceptable. With many countries around the world, each with different laws regarding freedom of speech and with people of many different belief systems, having a way to censor content with a fundamentally open ecosystem (albeit with partly centralised search) was a lever that would inevitably be pulled at some point.

As such many podcasts have been removed from different indexes/directories for different reasons, some more valid than others perhaps, however that is a subjective measure and one I don’t wish to debate here. If podcasts are no longer open then their corporate controller can even more easily remove them in part or in whole as they control both the search and the feed.

To solve the problems above there are a few key angles being tackled: Search, Discoverability and Monetisation.

Search

Quick and easy: the Podcast Index is a complete list of every podcast currently available that's been submitted. It isn't censored and is operated and maintained by the support of its users. As it's independent, there is no hierarchy to pressure the removal of content from it.

Monetisation

The concept here is ingenious but requires a leap of faith (of a sort): Bitcoin, or rather Lightning, which is a micro-transaction layer that sits alongside Bitcoin. If you are already au fait with having a Bitcoin Node, Lightning Node and Wallet then there's nothing for me to add, but the interesting concept is this: by including your Node address in your podcast RSS feed (using the podcast:value tag), a compliant podcast player can then optionally use the KeySend Lightning command to send a periodic payment "as you listen." It's voluntary but it's seamless.

The podcaster sets a suggested rate in Sats (Satoshis) per minute of podcast played (recorded minutes, not played minutes if you're listening at 2x, and the rate is adjustable by the listener) to directly compensate the podcast creator for their work. You can also "Boost" and provide one-off payments via a similar mechanism to support your podcast creator.
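In the feed itself this ends up as a small block of XML. The attribute names below are per the namespace spec as I understood it at the time of writing - check the current spec before copying - and the node address is a placeholder:

   <podcast:value type="lightning" method="keysend" suggested="0.00000005000">
     <podcast:valueRecipient name="Podcaster" type="node" address="YOUR-66-CHARACTER-NODE-PUBKEY" split="100" />
   </podcast:value>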

The transactions are so small and carry such minimal transaction fees that effectively the entire amount is transferred from listener to podcaster without any significant middle-person skimming off the top in a manner that both reflects the value in time listened vs created and without relying on a single piece of centralised infrastructure.

Beyond this the podcaster can choose additional splits for the listener streaming Sats to go to their co-hosts, to the podcast player app-developer and more. Imagine being able to directly compensate audio editors, artwork contributors, hosting providers all directly and fairly based on listeners actually consuming the content in real time.

This allows a more balanced value distribution and protects against the fragility of the current non-advertising podcast-funding model: a support platform like Patreon, or Patreon (oh, I mean Memberful, but that's actually Patreon). When Patreon goes out of business, all of those supportive audiences will be partly crippled as their creators scramble to migrate their supporters to an alternative. The question is: will it be another centralised platform or service, or a decentralised system like this?

That’s what’s so appealing about the Podcasting 2.0 proposition: it’s future proof, balanced and sensible and it avoids the centralisation problems that have stifled creativity in the music and radio industries in the past. There’s only one problem and it’s a rather big one: the lack of adoption of Lightning and Bitcoin. Currently only Sphinx supports podcast KeySend at the time of publishing and adding more client applications to that list of one is an easier problem to solve than listener mass adoption of BitCoin/Lightning.

Adam is betting that Podcasting might be the gateway to mass adoption of BitCoin and Lightning and if he’s going to have a chance of self-realising that bet, he will need the word spread far and wide to drive that outcome.

As of time of writing I have created a Causality Sphinx Tribe for those that wish to contribute by listening or via Boosting. It’s already had a good response and I’m grateful to those that are supporting Causality via that means or any other for that matter.

Discoverability

This is by far the biggest problem to solve, and if we don't improve it dramatically the only people and content that will be 'findable' will be the big names with big budgets/networks behind them, leaving the better creators without such backing left lacking. It should be just as easy to find an independent podcast with amazing content made by one person as it is to find a multi-million dollar podcast made by an entire production company. (And if the independent show has better content, then the Sats should flow to them…)

Current efforts are focussed on the addition of better tags in the Podcasting NameSpace to allow automated and manual searches for relevant content, and to add levers to improve promotability of podcasts.

They are sensibly splitting the namespace into Phases, each Phase containing a small group of tags, and progressively agreeing several tags at a time with the primary focus of closing out one Phase of tags before embarking on too much detail for the next. The first phase (now released) included the following (a short feed snippet follows the list):

  • < podcast:locked > (Technically not discoverability) If 'yes', the podcast is NOT permitted to be imported into another platform. This needs to be implemented by all platforms (or as many as possible) to be effective in preventing podcast theft, which is rampant on platforms like Anchor aka Spotify
  • < podcast:transcript > A link to an episode transcript file
  • < podcast:funding > (Technically not discoverability) Link to the approved funding page/method (in my case Patreon)
  • < podcast:chapters > A server-side JSON format for chapters that can be static or collaborative (more below)
  • < podcast:soundbite > Link to one or more excerpts from the episode for a prospective listener to check out the episode before downloading or streaming the whole episode from the beginning
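Roughly what those first-phase tags look like in a feed; attribute details are per the namespace docs as they stood at the time (check the current spec), and all URLs are placeholders:

   <podcast:locked owner="you@example.com">yes</podcast:locked>
   <podcast:funding url="https://www.patreon.com/yourshow">Support the show</podcast:funding>
   <podcast:transcript url="https://example.com/ep42/transcript.html" type="text/html" />
   <podcast:chapters url="https://example.com/ep42/chapters.json" type="application/json+chapters" />
   <podcast:soundbite startTime="1234.5" duration="42.0">A choice quote</podcast:soundbite>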

I've implemented those that I see as having a benefit for me - which is all of them (soundbite is a WIP for Causality) - with the exception of Chapters. The interesting opportunity Adam puts forward with chapters is that he wants the audience to be able to participate via crowd-sourced chapters, as a new vector of audience participation and interaction with podcast creators. They're working with HyperCatcher's developer to get this working smoothly, but for now at least I'll watch from a safe distance. I think I'm just too much of a control freak to hand chapter suggestions on Causality over to others. That said, it could be a small time saver for me on Pragmatic…maybe.

The second phase (currently a work in progress) is tackling six more:

  • < podcast:person > List of people that are on an episode or the show as a whole, along with a canonical reference URL to identify them
  • < podcast:location > The location of the focus of the podcast or episode's specific content (for TEN, this only makes sense for Causality)
  • < podcast:season > Extension of the iTunes season tag that allows a text string name in addition to the season number integer
  • < podcast:episode > Modification of the iTunes episode tag that allows non-integer values including decimal and alpha-numeric
  • < podcast:id > Platforms, directories, hosts, apps and services this podcast is listed on
  • < podcast:social > List of social media platform/accounts for the podcast/episode

Whilst there are many more in Phase 3 which is still open, the most interesting is the aforementioned < podcast:value > where the podcaster can provide a Lightning Node ID for payment using the KeySend protocol.

TEN Makes It Easy

This is my "that's fine for John" moment, where I point out that incorporating these into the fabric of The Engineered Network website hasn't taken too much effort. TEN runs on GoHugo as a static site generator, and whilst it was based on a very old fork of Castanet, I've re-written and extended so much of it that it's no longer recognisable.

I already had people name tagging, people name files, funding, subscribe-to links for other platforms, social media tags and transcripts (for some episodes) in the MarkDown YAML front-matter and templates, so adding them to the RSS XML template was extremely quick and easy and required very little additional work.
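As an illustration only, emitting one of these tags from YAML front-matter in a GoHugo RSS template is a line or two; the front-matter key names here are made up for the example rather than the ones TEN actually uses:

   {{/* hypothetical front-matter keys - illustration only */}}
   {{ with .Params.transcript }}<podcast:transcript url="{{ . | absURL }}" type="text/html" />{{ end }}
   {{ with .Params.episodelocation }}<podcast:location geo="{{ .geo }}" osm="{{ .osm }}">{{ .name }}</podcast:location>{{ end }}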

The most intensive tags are those that require additional meta-data to make them work. Namely, Location only makes sense to implement on Causality, but it took me about four hours of Open Street Map searching to compile about 40 episodes' worth of location information. The other one is soundbite (WIP), where searching for one or more choice quotes retrospectively is time-consuming.

Not everyone out there is a developer (part- or full-time), and many therefore rely on their hosting service to support these tags. There's a relatively well-maintained list at Podcast Index and at the time of writing Castopod, BuzzSprout, Fireside, Podserve and Transistor support one or more tags, with Fireside (thank you Dan!) supporting an impressive six of them: Transcript, Locked, Funding, Chapters, Soundbite and Person.

Moving Forward

I've occasionally chatted with the lovely Dave Jones on the Fediverse (Adam's co-host and the developer working on many aspects of 2.0), I listen to the 2.0 show via Sphinx when I can (unfortunately not on my mobile/iPad, as the app has been banned by my company's remote device management policy), and I've implemented the majority of their proposed tags thus far on my shows. I'm also in the process of setting up my own private BitCoin/Lightning Node.

For the entire time I’ve been involved in the podcasting space, I’ve never seen a concerted effort like this take place. It’s both heartening and exciting and feels a bit like the early days of Twitter (before Jack Dorsey went public, bought some of the apps and effectively killed the rest and pushed the algorithmic timeline thus ruining Twitter to an extent). It’s a coalition of concerned creators, collaborating to create a better outcome for future podcast creators.

They've seen where podcasting has come from and where it's going, and if we get involved we can help deliver our own destiny rather than leave it in the hands of corporations with questionable agendas to dictate.

]]>
Podcasting 2020-12-29T15:25:00+10:00 #TechDistortion
Oh My NAS https://techdistortion.com/articles/oh-my-nas https://techdistortion.com/articles/oh-my-nas Oh My NAS I've been on the receiving end of failing hard drives in the past and lost many of my original podcast source audio files and, more importantly, a year's worth of home videos, gone forever.

Not wishing for a repeat of this I purchased an 8TB external USB HardDrive and installed BackBlaze. The problem for me though was that BackBlaze was an ongoing expense, could only be used for a single machine and couldn’t really do anything other than be an offsite backup. I’d been considering a Network Attached Storage for years now and the thinking was, if I had a NAS then I could have backup redundancy1 plus a bunch of other really useful features and functionality.

The trigger was actually a series of crashes and disconnects of the 8TB USB HDD. Given the OS's limited ability to troubleshoot HDD hardware-specific issues via USB, and my experience from a previous set of HDD failures many years ago, I knew this is how it all starts. So I gathered together a bunch of smaller HDDs and copied across all the data to them while I still could, and resolved to get a better solution: hence the NAS.

Looking at both QNAP and Synology and my desire to have as broad a compatibility as possible, I settled on an Intel-based Synology, which in Synology-speak, means a “Plus” model. Specifically the DS918+ presented the best value for money with 4 Bays and the ability to extend with a 5 Bay external enclosure if I really felt the need in future. I waited until the DS920+ was released and noted that the benchmarks on the 920 weren’t particularly impressive and hence I stuck with the DS918+ and got a great deal as it had just become a clearance product to make way for the new model.

My series of external drives I had been using to hold an interim copy of my data were: a 4TB 3.5", a 4TB 2.5" (at that time I thought it was a drive in an enclosure you could extract), and a 2TB 3.5" drive as well as, of course, my 8TB drive which I wasn’t sure was toast yet. The goal was to reuse as many of my existing drives as possible and not spend even more money on more, new HDDs. I’d also given a disused but otherwise healthy 3.5" 4TB drive to my son for his PC earlier in the year and he hadn’t actually used it, so I reclaimed it temporarily for this exercise.

Here’s how it went down:

STEP 1: Insert 8TB Drive and in Storage Manager, Drive Info, run an Extended SMART test…and hours later…hundreds of bad sectors. To be honest, that wasn’t too surprising since the 8TB drive was periodically disconnecting and reconnecting and rebuilding its file tables - but now I had the proof. The Synology refused to let me create a Storage Pool or a Volume or anything so I resigned myself to buying 1 new drive: I saw that SeaGate Barracudas were on sale so I grabbed one from UMart and tried it.

STEP 2: Insert new 4TB Barracuda and in Storage Manager, Drive Info, run an Extended SMART test…and hours later…it worked perfectly! (As you’d expect) Though the test took a VERY long time, I was happy so I created a Storage Pool, Synology Hybrid RAID. Created a Volume, BTRFS because it came highly recommended, and then began copying over the first 4TB’s worth of data to the new Volume. So far, so good.

STEP 3: Insert my son's 4TB drive and extend the SHR Storage Pool to include it. The Synology allowed me to do this, and for some reason I did so without running an Extended SMART test on the drive first - it let me, so that should be fine, right? Turns out this was a terrible idea.

STEP 4: Once all data was copied off the 4TB data drive and to the Synology Volume, wipe that drive, extract the 3.5" HDD and insert the reclaimed 4TB 3.5" into the Synology and in Storage Manager, Drive Info, run an Extended SMART test…and hours later…hundreds of bad sectors. Um, okay. That’s annoying. So I might be up for yet another HDD since I have 9TB to store.

OH DEAR MOMENT: As I was re-running the drive check, the Synology began reporting that the Volume was Bad and the Storage Pool was unhealthy. I looked into the HDD manager and saw that my son's reclaimed 3.5" drive was also full of bad sectors, as the Synology had run a periodic test while data was still copying. I also attempted to extract the 2.5" drive from its external enclosure, only to discover that it was a fully integrated controller/drive/enclosure and couldn't be extracted without breaking it. (So much for that.) Since I still had a copy of my 4TB of data in BackBlaze, at this point I wasn't worried about losing data, but the penny dropped: stop trying to save money and just buy the right drives. So I went to Computer Alliance and bought three shiny new 4TB SeaGate IronWolf drives.

STEP 5: Insert all three new 4TB IronWolfs and in Storage Manager, Drive Info, run an Extended SMART test…and hours later…the first drive: perfect! The second and third drives however…had bad sectors. Bad sectors. On new drives? And not only that, on NAS-specific, high-reliability drives? John = not impressed. I extended the Storage Pool (Barracuda + 1 IronWolf) and after running a Data Scrub it still threw up errors, despite the fact both drives appeared to be fine and were brand new.

IronWolf Fail This is not what you want to see on a brand new drive…

TROUBLESHOOTING:

So I did what all good geeks do and got out of the DSM GUI and hit SSH and the Terminal. I ran "btrfs check --repair", along with recover, super-recover and chunk-recover, and ultimately the chunk tree recovery failed. I read that I had to stop everything running and accessing the Pool, so I painstakingly killed every process and re-ran the recovery, but it still failed after a 24-hour-long attempt. There was nothing for it - it was time to start copying the data that was on there (what I could read) back onto a 4TB external drive, blow it all away and start over.
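For the record, those commands were along these lines; the device path is a placeholder, since the actual volume device on a Synology (typically something under /dev/mapper or an md device) will differ:

   # placeholders only - identify the real device for your volume first
   sudo btrfs check --repair /dev/mapper/vol1
   sudo btrfs rescue super-recover /dev/mapper/vol1
   sudo btrfs rescue chunk-recover /dev/mapper/vol1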

Chunk Fail

STEP 6: In the midst of a delusion that I could still recover the data without having to recopy the lot of it off the NAS (a two-day exercise), I submitted a return request for the first failed IronWolf while I re-ran the SMART test on the other potentially broken drive. The return policy stated that they needed to test the HDD, which could take a day or two, and Computer Alliance is a two-hour round trip from my house. Fortunately I met a wonderfully helpful and accommodating support person at CA that day: he took the Synology screenshot of the bad sector count and serial number, confirming I wasn't pulling a switch on him, and simply handed me a replacement IronWolf on the spot! (Such a great guy - give him a raise.) I returned home, this time treating the HDD like a delicate egg the whole trip, inserted it and, in Storage Manager, Drive Info, ran an Extended SMART test…and hours later…perfect!

STEP 7: By this time I’d given up all hope of recovering the data and with three shiny new drives in the NAS, my 4TB of original data restored to my external drive (I had to pluck 5 files that failed to copy back from my BackBlaze backup) I wiped all the NAS drives…and started over. Not taking ANY chances I re-ran the SMART tests on all three and when they were clean (again) recreated the Pool, new Volume, and started copying my precious data back on to the NAS all over again.

STEP 8: I went back to Computer Alliance to return the second drive and this time I met a different support person, someone who was far more “by the book” and accepted the drive and asked me to come back another day once they’d tested it. I’d returned home and hours later they called and said “yeah it’s got bad sectors…” (you don’t say?) and unfortunately due to personal commitments I couldn’t return until the following day. I grabbed the replacement drive, drove on eggshells, added it to the last free bay and in Storage Manager, Drive Info, run an Extended SMART test…and hours later…perfect! (FINALLY)

STEP 9: I copied all of the data across from all of my external drives onto the Synology. The Volume was an SHR with 10.9TB of usable space spread across four 4TB drives (three IronWolf and one Barracuda). The Data Scrub passed, the SMART tests passed, and the IronWolf-specific Health Management tests all passed with flying colours (all green, oh yes!). It was time to repurpose the 4TB 2.5" external drive as my offline backup for the fireproof safe. I reformatted it to ExFAT, set up HyperBackup for my critical files (home videos, videos of the family, my entire photo library), backed them up and put the drive in the safe.

CONCLUSION:

Looking back, the mistake was that I never should have extended the storage pool before the Synology had run a SMART test and flagged the bad sectors. In doing so it wrote data to those bad sectors, and there were just too many for BTRFS to recover in the end. In addition, I never should have tried to do this on the cheap: I should have bought new drives from the get-go, and NAS-specific drives at that. Despite the bad sectors and the bad luck of getting two out of three bad IronWolf drives, in the end they have performed very well and completed their SMART tests faster, with online forums suggesting a desktop-class HDD (the Barracuda) is a bad choice for a NAS. I now have my own test case to see whether the Barracuda is actually suitable as a long-term NAS drive, since I ended up with both types in the same NAS - same age, same everything else - so I'll report back in a few years to see which starts failing first.

Ultimately I also stopped using BackBlaze. It was slowing down my MacBook Pro, I found the video compression applied on data recovery frustrating, and even with my 512GB SSD on the MBP with everything on it, I would often get errors about a lack of space for backups to BackBlaze. Whilst financially the total lifecycle cost of the NAS and the drives is far more than BackBlaze (or an equivalent backup service) would cost me, the NAS can also do so many more things than just back up my data via TimeMachine.

But that’s another story for another article. In the end the NAS plus drives cost me $1.5k AUD, 6 trips to two different computer stores and 6 weeks from start to finish, but it’s been running now since August 2020 and hasn’t skipped a beat. Oh…my…NAS.


  1. Redundancy against the failure of an individual HDD ↩︎

]]>
Technology 2020-11-29T09:00:00+10:00 #TechDistortion
200-500mm Zoom Lens Test https://techdistortion.com/articles/200-500-zoom-lens-test https://techdistortion.com/articles/200-500-zoom-lens-test 200-500mm Zoom Lens Test I’ve been exploring my new 200-500mm Nikon f/5.6 Zoom Lens on my D500 and pushing the limits of what it can do. I’ve used it for several weeks taking photos of Soccer and Cricket and I thought I should run a few of my own lens sharpness tests to see how it’s performing in a controlled environment.

As in my previous Lens Shootout I tested sharpness indoors under controlled lighting conditions, setting the D500 on a tripod with a timer, keeping a constant shutter speed of 1/200th of a second with Auto ISO, adjusting the aperture between shots, and tweaking the Exposure in post to try and equalise the light level between exposures.

I set the back of some packaging, with its mixture of text and symbols, as the target, with the tripod at the same physical distance for each test photo.

Nikon 200-500mm Zoom Lens

I took photos across the aperture range at f/5.6, f/8 and f/11, cropped to 1,000 x 1,000 pixels in both the dead-center of the frame and the bottom-right edge of the frame.


200mm

200mm Edge f/5.6 200mm Center Crop f/5.6

200mm Edge f/8 200mm Center Crop f/8

200mm Edge f/11 200mm Center Crop f/11

200mm Edge f/5.6 200mm Edge Crop f/5.6

200mm Edge f/8 200mm Edge Crop f/8

200mm Edge f/11 200mm Edge Crop f/11


300mm

300mm Edge f/5.6 300mm Center Crop f/5.6

300mm Edge f/8 300mm Center Crop f/8

300mm Edge f/11 300mm Center Crop f/11

300mm Edge f/5.6 300mm Edge Crop f/5.6

300mm Edge f/8 300mm Edge Crop f/8

300mm Edge f/11 300mm Edge Crop f/11


400mm

400mm Center Crop f/5.6

400mm Center Crop f/8

400mm Center Crop f/11

400mm Edge Crop f/5.6

400mm Edge Crop f/8

400mm Edge Crop f/11


500mm

500mm Center Crop f/5.6

500mm Center Crop f/8

500mm Center Crop f/11

500mm Edge Crop f/5.6

500mm Edge Crop f/8

500mm Edge Crop f/11


What I wanted to test the most was the difference between edge and centre sharpness, as well as the effect of different apertures. For me, I think the sensor is starting to battle ISO grain at f/11 and this is impacting the apparent sharpness. In the field I’ve tried stopping down the aperture to try and get a wider zone of focus, but it’s tough the further out you zoom, and the images above support this observation.

As for the questions I was seeking answers to: firstly, there’s no noticeable change in sharpness from the centre to the edge at the shortest focal length, irrespective of aperture. The edge starts to soften only slightly as you zoom in towards 500mm, and that too is independent of aperture.

The thing I didn’t expect was how consistent the sharpness at f/5.6 is throughout the zoom range. If you’re isolating a subject at the long end of the zoom, it’s probably not worth stopping down the aperture, so in future I’ll just keep the aperture as wide open as I can unless I’m at the 200mm end of the range.

It’s a truly amazing lens for the money and whilst I realise there are many other factors to consider, I at least answered my own questions.

]]>
Photography 2020-10-25T06:00:00+10:00 #TechDistortion
Astronomy With Zoom Lenses https://techdistortion.com/articles/astronomy-with-zoom-lenses https://techdistortion.com/articles/astronomy-with-zoom-lenses Astronomy With Zoom Lenses About a month ago I started renting a used Nikon 200-500mm Zoom Lens that was in excellent condition. Initially my intention was to use it for photographing the kids playing outdoor sports, namely Soccer, Netball and Cricket. Having said that the thought occurred to me that it would be excellent for some Wildlife photography, here, here and here, and also…Astrophotography.

Nikon 200-500mm Zoom Lens

I was curious just how much I could see with my D500 (a 1.5x crop factor, as it’s a DX crop-sensor body) using the lens at its 500mm maximum (750mm effective). The first step was to mount my kit on my trusty 20-year-old, ultra-cheap, aluminium tripod. Guess what happened?

The bracket that holds the camera to the tripod base snapped under the weight of the lens and DSLR and, surprising even myself in the pitch dark, I miraculously caught them mere inches before they hit the tiles. Lucky me, in one sense; not so lucky in another: my tripod was now broken.

Not to be defeated, I applied my many years of engineering experience to strap it together with electrical tape…because…why not?

D500 and 200-500 Zoom on Tripod

Using this combination I attempted several shots of the heavens and discovered a few interesting things. My PixelPro wireless shutter release did not engage the Image Stabilisation in the zoom lens. I suppose they figured that if you’re using the remote, you’ve probably got a tripod anyhow so who needs IS? Well John does, because his Tripod was a piece of broken junk that was swaying in the breeze - no matter how gentle that breeze was…

Hence I ended up ditching the Tripod and opted instead for handheld, using the IS in the Zoom Lens. The results were (to me at least) pretty amazing!

Earth’s Moon

I photographed the Moon through all of its phases, culminating in the Full Moon image above. It’s by far the easiest thing to take a photo of, and in 1.3x crop mode on the D500 it practically filled the frame. Excellent detail and an amazing photograph.

Of course, I didn’t stop there. It was time to turn my attention to the planets and luckily for me several planets are at or near opposition at the moment. (Opposition is one of those astronomy terms I learned recently, where the planet appears at its largest and brightest, and is above the horizon for most of the night)

Planet Jupiter

Jupiter and its moons; the cloud band stripes are just visible in this photo. It’s a stack of two images: one exposure for the moons and one for Jupiter itself. No colour correction applied.

Planet Saturn

Saturn’s rings are just visible in this image.

Planet Mars

Mars is reddish and not as interesting unfortunately.

International Space Station

The ISS image above clearly shows the two large solar arrays on the space station.

What’s the problem?

Simple. It’s not a telescope…is the problem. Zoom lenses are simply designed for a different purpose than squeezing maximum reach out of photos of planets. I’ve learned through research that the better option is to use a T-Ring adaptor and connect your DSLR to a telescope. If you’re REALLY serious you shouldn’t use a DSLR either, since most have an infrared-cut filter over the sensor that also blocks much of the deep-red hydrogen-alpha light and changes the appearance of emission nebulae; you need a digital camera that’s specifically designed for Astrophotography (or, if you’re crazy enough, hack your DSLR to remove the filter, which is possible on some models).

If you’re REALLY, REALLY interested in the best photos you can take, you need an AltAz (Altitude-Azimuth) mount that automatically moves the camera counter to Earth’s rotation, keeping it pointed at the same spot in the night sky for longer exposures. And if you’re REALLY, REALLY, REALLY serious you want to connect that to a guide scope that further ensures the auto-guided mount is tracking as precisely as possible. And if you’re REALLY, REALLY, REALLY, REALLY serious you’ll take many, many exposures, including Bias Frames, Light Frames, Dark Frames and Flat Frames, and stack them to reduce noise in the photo.
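To give a feel for why stacking helps: random sensor noise averages out across frames while the signal stays put, so the stack comes out cleaner than any single exposure. Here’s a minimal NumPy sketch of that idea, assuming the frames are already aligned and skipping the bias/dark/flat calibration a real stacking tool would apply; the file names are placeholders.

```python
# Minimal illustration of mean-stacking aligned frames to reduce random noise.
# Assumes the frames are already registered (aligned); real stacking software
# also applies bias, dark and flat calibration before averaging.
import numpy as np
from PIL import Image

def stack_frames(paths):
    frames = [np.asarray(Image.open(p), dtype=np.float64) for p in paths]
    stacked = np.mean(frames, axis=0)  # random noise falls roughly with sqrt(N)
    return Image.fromarray(np.clip(stacked, 0, 255).astype(np.uint8))

# Placeholder file names for illustration only.
stack_frames([f"jupiter_{i:03d}.jpg" for i in range(20)]).save("jupiter_stacked.jpg")
```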

How Much Further Can You Go With a DSLR and Lenses?

Not much further, that’s for sure. I looked at adding teleconverters, particularly the TC-14E (1.4x) and then the TC-20E (2x), which would give me effective focal lengths of 1,050mm and 1,500mm respectively. The problem is that you lose a lot of light in the process, and whilst you could get a passable photo at 1,050mm, at 1,500mm on this lens you’re down to an aperture of f/11, which is, frankly, terrible. Not only that, but reports seem to indicate that chromatic aberration is pretty bad with the 2x teleconverter coupled with this lens. The truth is that teleconverters are meant for fast primes (f/4 or better), not an f/5.6 zoom.
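The effective numbers above follow straight from the crop factor and the teleconverter multipliers; here’s a quick sketch of the arithmetic, using only the 500mm and f/5.6 figures already mentioned:

```python
# Effective focal length and aperture with teleconverters on the 200-500mm f/5.6
# at the 500mm end of a DX (1.5x crop) body. A teleconverter multiplies the
# focal length and costs the same factor in aperture (1 stop for 1.4x, 2 stops for 2x).
FOCAL, F_NUMBER, CROP = 500, 5.6, 1.5

for name, tc in (("bare", 1.0), ("TC-14E", 1.4), ("TC-20E", 2.0)):
    eff_focal = FOCAL * CROP * tc   # field-of-view equivalent focal length
    eff_aperture = F_NUMBER * tc    # light loss from the converter
    print(f"{name}: ~{eff_focal:.0f}mm equivalent at f/{eff_aperture:.1f}")
# bare:   ~750mm  at f/5.6
# TC-14E: ~1050mm at f/7.8 (rounds to the usual f/8)
# TC-20E: ~1500mm at f/11.2 (the f/11 mentioned above)
```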

Going to an FX camera body wouldn’t help either, since you’d lose the 1.5x effective reach of the DX sensor. Although you might pick up a few extra pixels, the sensor in my D500 is pretty good in low light, so you’re not going to get a much better low-light sensor for this sort of imaging. (Interestingly, comparing the pixel density of the D500 DX and D850 FX sensors leaves my camera with 6.7% more pixels per square centimetre, so it’s still the better choice.)
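As a rough cross-check of that pixel density claim, here’s a back-of-the-envelope calculation. The pixel counts and sensor dimensions are the commonly published approximate figures rather than anything from this article, so the exact percentage will shift a little depending on which dimensions you plug in.

```python
# Back-of-the-envelope pixel density comparison between the D500 (DX) and
# D850 (FX). Sensor sizes and pixel counts are approximate published figures
# (assumptions), so this lands in the same ballpark as, rather than exactly on,
# the figure quoted above.
def pixels_per_cm2(px_w, px_h, mm_w, mm_h):
    return (px_w * px_h) / ((mm_w / 10) * (mm_h / 10))

d500 = pixels_per_cm2(5568, 3712, 23.5, 15.7)   # ~20.9 MP DX sensor
d850 = pixels_per_cm2(8256, 5504, 35.9, 23.9)   # ~45.7 MP FX sensor

print(f"D500: {d500/1e6:.2f} MP/cm^2, D850: {d850/1e6:.2f} MP/cm^2")
print(f"D500 advantage: {100 * (d500 / d850 - 1):.1f}%")  # a few percent denser
```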

How Many Pixels Can You See?

Because I’m me, I thought I’d count some pixels. Picking Jupiter because it’s big, bright and easy to photograph (as planets go), with my current combination it’s 45 pixels across. Adding the 1.4x teleconverter gets me to an estimated 63 pixels, and the 2.0x to 90 pixels in diameter. Certainly that would be nicer, but probably still not enough detail to make out the Great Red Spot with any real clarity.
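Those estimates are just the measured 45-pixel diameter scaled linearly with the teleconverter factor, since the planet’s size on the sensor grows in proportion to focal length:

```python
# Jupiter's diameter in pixels scales linearly with focal length, so the
# teleconverter estimates are the measured 45 px multiplied by the factor.
measured_px = 45
for name, tc in (("bare", 1.0), ("TC-14E (1.4x)", 1.4), ("TC-20E (2.0x)", 2.0)):
    print(f"{name}: ~{round(measured_px * tc)} px across")
# bare: 45 px, TC-14E: ~63 px, TC-20E: ~90 px
```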

Just a Bit of Fun

Ultimately I wanted to see if it was possible to use my existing camera equipment for Astronomy. The answer was: kinda, but don’t expect more than the Moon to look any good. If you want pictures somewhere between those above and what the Hubble can do, expect to spend somewhere between $10k and $30k AUD on a large-aperture, long focal length telescope, heavy-duty AltAz mount, tracking system and specialised camera, and add in a massive dose of patience waiting for the clearest possible night too.

If nothing else for me at least, it’s reawakened a fascination that I haven’t felt since I was a teenager about where we sit in the Universe. With inter-planetary probes and the Hubble Space Telescope capturing amazing images, and CGI making it harder to pick real from not-real planets, suns and solar systems, it’s easy to become disconnected from reality. Looking at images of the planets in ultra-high resolution almost doesn’t feel as real as when you use your own equipment and see them with your own eyes.

So I’ve enjoyed playing around with this but not because I was trying to get amazing photographs. It’s been a chance to push the limits of the gear I have with me to see a bit more of our Solar System, completely and entirely on my own from my own backyard. And that made astronomy feel more real to me than it had for decades.

The stars, the moon, the planets and a huge space station that we humans built, are circling above our heads. All you need to do is look up…I’m really glad I took the time to do just that.

]]>
Technology 2020-10-17T08:00:00+10:00 #TechDistortion