TechDistortion Articles — john@techdistortion.com — Copyright 2009-2022

Letter To Tesla Part One
https://techdistortion.com/articles/letter-to-tesla-part-one

Having been a very happy (for the most part) Tesla Model 3 owner for over 9 months now, with 22,000 km on the clock, I’ve finally decided it’s time to write down some areas for improvement for this amazing vehicle. These aren’t physical change suggestions or anything complicated, and I have four to suggest.

I could point out things like better water resistance for the electric window components (mine have developed a squeak on raise/lower, whereas my Honda Jazz has gone over 8 years without a single squeak from any of its electric windows), or a non-Bluetooth auto-unlock proximity sensor (long story). Instead, I’m going to focus initially on things that are simple software tweaks.

I’m certain that Tesla get a lot of these requests and I’m also sure that Elon Musk is a busy man (okay, I know he is), so these may not find their way to implementation in the short term, but I’ll settle for a feature that’s on a backlog list somewhere for a future update - even if it’s a few years down the road.

None of these are Rocket Science, Mr Musk sir - that’s the other company you also run…where it really, actually is…

FM Radio: Add a Seek Function

When you’re in range of cellular data (so far as I can tell), another option appears called “Stations”: a list of FM stations the Tesla believes it can find, or that should be there based on your location. If that’s not there, you can tune directly if you know the exact frequency you’re looking for. However, under “Direct Tune” there is no way to ‘Seek’ for an FM station, as the current screen shows:

Tesla Radio

In my experience on roads only a few hours inland from Brisbane, direct entry found listenable FM stations (minimal fade and chop) that didn’t show up in the Stations list. A very simple set of seek buttons, like every other digital FM radio I’ve ever used, would be very helpful.

REQUEST: Add a “Seek +” and a “Seek -” button on the Direct Tune FM Radio screen.

Air-Conditioning Slide-up View Without Starting the A/C

In the V11 user interface, when you want to adjust the details of the air-conditioning, you tap the temperature, which slides up the panel. If the A/C is currently off, this single tap automatically turns the A/C on, which more often than not is NOT what I want. I just want to check whether Recirculate is set to vent, for example, or adjust the rear seat heaters.

Tesla Air-Conditioning Slide-up View

Pressing and holding the temperature will toggle the A/C on or off without raising the panel. Tesla added some short-cuts to the menu bar to adjust the front seat heaters and the steering wheel heater but there are still plenty of other useful functions that are contained within the main slide-up view. If the A/C is currently off it’s because I turned it off and want it to stay off.

REQUEST: Change a tap to the temperature to slide up the panel only, irrespective of whether the A/C is running or not.

Mirror Dimming: Add Manual Enable/Disable

The (re)addition of dimmable mirrors in mid-2020 was welcomed by Tesla drivers; however, there’s only one option in the menu system: enabling “Auto Dim” mirrors. If enabled, the side mirrors as well as the interior rear-view mirror will dim, provided the car is in Drive, the sun is 5 degrees below the horizon and the headlights are turned on.

Tesla Auto-Dim

The problem with this is that many drivers turn their lights on before sunset, and with its low ride height the Model 3 cops plenty of larger 4WDs glaring into your eyes 30 minutes or more before the Tesla logic will permit the dimming. In every other car I’ve owned, if I needed to dim the mirror I’d flip the metal/plastic toggle on the rearview mirror and get instant dimming.

REQUEST: Add a toggle button to turn the Dimming On and Off that’s quickly accessible.

Predictive Speed Reduction in AutoPilot

When in full AutoPilot, the Tesla attempts to read speed signs; when the car passes in line with a sign, it automatically adjusts the set speed limit up (if it was set higher as its max set-speed earlier in the drive) or down accordingly.

Going up is fine, but going down isn’t, since the car only drops your speed once you’ve passed the sign. Until the cruise-control lag slows the car down, you’ve been speeding for a hundred metres or so after the speed limit dropped - when you’re dropping from 100 to 80 km/h, for example.

The fix is simple, using basic kinematics. A Tesla has depth perception (and GPS too), so it could calculate the distance between the current position and the upcoming speed-sign position, then figure out from the deceleration curve when to start slowing down so it reaches the new limit as it actually passes the sign.
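A minimal sketch of that calculation, using the constant-deceleration kinematic relation v² = u² − 2ad. The 1 m/s² deceleration figure is my assumption for a comfortable slow-down, not anything Tesla publishes:

```python
def braking_start_distance(v_now_kph: float, v_target_kph: float,
                           decel_mps2: float = 1.0) -> float:
    """Distance before the sign at which to begin slowing, from the
    constant-deceleration relation v^2 = u^2 - 2*a*d."""
    v_now = v_now_kph / 3.6      # km/h to m/s
    v_tgt = v_target_kph / 3.6
    if v_tgt >= v_now:
        return 0.0               # speeding up can wait until the sign itself
    return (v_now**2 - v_tgt**2) / (2.0 * decel_mps2)

# Dropping from 100 to 80 km/h at a gentle 1 m/s^2:
print(round(braking_start_distance(100, 80), 1))  # 138.9 (metres before the sign)
```

So the car would need to begin easing off roughly 140 m before the sign - comfortably within camera and GPS sight lines.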

In Australia the police will intentionally set up speed traps in those spots trying to catch drivers out…it’s a speeding ticket waiting to happen.

REQUEST: Add a toggle in Autopilot for “Predictive Speed Reduction”

Other Articles

For reference I’ve written a lot of articles about my Model 3 in the past 9 months.

I’m sure there’s more, but that’s enough for now.

Cars 2022-08-02T20:00:00+10:00 #TechDistortion
Leaving Fast Chargers Behind
https://techdistortion.com/articles/leaving-fast-chargers-behind

This weekend just gone, I was invited to visit an extended family member’s pub in Monto. For my wife and me it was a bit of an adventure - somewhere neither of us had been before, and for me, staying a few nights in a country pub was something I hadn’t done for over 25 years.

Monto Trip Map

Monto is located off the main highways in Queensland, and yet I wanted to take the Tesla and see how it handled the long road trip. Monto is about 5 hours’ drive from my home, about 400 km (250 miles) one way. The shortest route takes the Bruce Highway north to Gympie, where there are two DC fast-charging locations, then turns inland to Kilkivan via the Wide Bay Highway, takes a short-cut to Tansey, then on to Eidsvold and eventually Monto via the Burnett Highway. After Gympie there are no charging points at all. None. No DC chargers (CCS2 Combo or CHAdeMO), no three-phase AC outlets, no Type 2 or even Type 1 charging points. You’re down to a standard 10A household outlet or, if you’re lucky, a 15A outlet.

There are two short-cuts between the Burnett Highway and the Bruce Highway, where all the DC fast chargers are: via Kalpower, which is effectively a dirt road, and via Mount Perry. Of the two roads via Mount Perry, the one nearer to Monto is closed for roadworks until 2023 and the other joins so far south it’s effectively doubling back for no net gain. So you can forget a quick charge, or any charge - you’re on your own.

Destination Charging

The pub owners were happy to let me park and charge Tessie around the side near their 24-hour laundromat, and I initially plugged in a 10A lead using the Tesla UMC. A full charge from 12% was predicted to take 25 hours. Good thing I wasn’t in a hurry to leave!

The next day, after a reasonable night’s sleep, I realised that in my haste to start charging the previous night I’d missed that there were actually multiple 15A outlets installed to power the larger dryers in the laundromat. I swapped the 10A lead for the 15A extension lead I’d brought just in case, and cut my charging time by 30% using the 15A tail on the Tesla UMC. In the end I was fully charged and ready to go 7 hours earlier than I’d originally expected.
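Back-of-envelope, those charge times line up with simple wall-power arithmetic. A sketch, assuming a 230V Australian supply, a roughly 60 kWh usable pack and about 85% wall-to-pack efficiency - all my assumptions, not measured values or Tesla specs:

```python
def charge_hours(pack_kwh: float, start_pct: float, amps: float,
                 volts: float = 230.0, efficiency: float = 0.85) -> float:
    """Rough AC charging time: energy still needed divided by wall power,
    derated for onboard-charger and thermal losses (assumed figures)."""
    needed_kwh = pack_kwh * (100 - start_pct) / 100
    wall_kw = volts * amps / 1000
    return needed_kwh / (wall_kw * efficiency)

h10 = charge_hours(60, 12, 10)   # from 12% on a 10A outlet
h15 = charge_hours(60, 12, 15)   # same charge on a 15A outlet
print(round(h10), round(h15))            # roughly 27 vs 18 hours
print(round(1 - h15 / h10, 2))           # ~0.33: the 15A tail is ~a third quicker
```

That tracks the article’s 25-hour prediction and the roughly 30% time saving from moving to the 15A outlet.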

Oopsie at 10pm

The problem with a 24-hour laundromat as a charging point is that sometimes people want to use the machines. What I didn’t realise is that there were only a handful of 15A outlets, hence when I “borrowed” one for 16 hours there was always a chance someone would come along, need to use the dryer, and unplug my car.

My Tesla notified me of an interruption to the charging at about 10pm on the second night. I went downstairs and found the cable had been unplugged and the dryer was happily drying someone’s clothes. I quickly reconnected the car to the adjacent outlet and we were back charging again. No harm, no foul.

Around Town

We were in Monto. Population 1,200 people. There are ZERO electric cars in town, and most EVs never trek that highway since there are no charging points between Kingaroy and Biloela. So driving the Tesla turned a LOT of heads around town, and when I was pulled over outside Gayndah by the police for a roadside Random Breath Test (RBT), the officer commented he’d never pulled over a Tesla before.

When in Monto, one of the locals asked about the car, calling it a “City Electricity Car”, which sounded so weird to my ears, but I’ve since been calling the Tesla “My Electricity Car”. I mean, it kinda is…

Of course there were other travellers from afar who knew what a Tesla was, and some kids at the Cania Gorge car park did some poses and a dance for SentryCam that was hilarious, so that was cute. I digress…

Side Trip

There are some interesting things to see in and around Monto, but the most popular place to visit is a 20-minute drive north of town. Cania Gorge is a rugged but beautiful sandstone gorge carved out by Three Moon Creek over thousands of years, with lots of lookouts, small caves and hanging rocks, as well as an abandoned gold mine.

We decided to make the trek on Saturday morning and took Tessie up to the Cania Dam, then worked our way back to town stopping at the primary bushwalking hotspot. We were bushwalking for about an hour and a half and saw some inspiring scenery. Well worth the look!

Tesla Model 3 at Cania Gorge

The Drive Back

After a recommendation from the publicans, we decided to drive an extra 6 kilometres (though it took an extra 18 minutes) to go back via Goomeri, stopping at the Goomeri Bakery to grab brunch. There was a strong headwind in the morning, so as an extra insurance policy we drove the first 200 kilometres (125 mi) at 90 km/h (56 mph), then realised we’d reach Gympie with 20% charge, which was more than enough.

The rest of the trip was uneventful: after a quick 45% charge at the Supercharger we made it home with plenty of range to spare.

The Statistics

There are so many variables that go into range that I’m not going to cover them all here. To be honest, we’re looking at the ones we can control: starting charge level, running the heating/air-conditioning, using the aero wheel covers, tyres inflated to the correct pressure, driving with Chill Mode enabled, choosing to drive as smoothly as possible and, for one leg as mentioned, driving at 10 km/h below the speed limit.

What I can’t control are things like headwind, external temperature and elevation changes. Taking much the same road both ways means this trip provides leg-by-leg averages I can hopefully extrapolate to better predict future trips. The elevation difference is 219 m from home to Monto, so there’s a penalty on the outbound trip for sure, and more care is needed in that regard.

Outbound trip:

  • Home-ish –> Gympie 155Wh/km (107k) +33m Elevation
  • Gympie –> Mundubbera 168Wh/km (201k) +106m Elevation
  • Mundubbera –> Monto 169Wh/km (110k) +80m Elevation
  • Weighted average: 163Wh/km

Homebound trip:

  • Monto –> Goomeri 146Wh/km (248k/200k @90kph)
  • Goomeri –> Gympie 132Wh/km (76k)
  • Gympie –> Home 140Wh/km (119k)
  • Weighted average: 140Wh/km
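As a sanity check, the weighted averages can be reproduced from the per-leg figures above, weighting each leg’s consumption by its distance. The results land within a couple of Wh/km of the quoted values; the small differences presumably come down to rounding of the leg distances:

```python
def weighted_avg(legs):
    """Distance-weighted average consumption in Wh/km.
    legs is a list of (wh_per_km, distance_km) tuples."""
    total_energy = sum(wh * km for wh, km in legs)   # Wh consumed overall
    total_km = sum(km for _, km in legs)
    return total_energy / total_km

outbound = [(155, 107), (168, 201), (169, 110)]
homebound = [(146, 248), (132, 76), (140, 119)]
print(round(weighted_avg(outbound)))   # ~165 Wh/km (article quotes 163)
print(round(weighted_avg(homebound)))  # ~142 Wh/km (article quotes 140)
```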

The homebound trip was marred by headwinds on the long first leg to Goomeri; had I not travelled at 90 km/h, the energy consumption would have been much worse, though it probably still would have been fine.

Outback Irritations

The Australian “Outback” is difficult to define - where does it start and end, and does it really matter anyway? Suffice it to say that out on those highways with limited mobile phone service, many of the features city-dwellers take for granted just don’t work once you’re away from major civilisation. Here are some of the realities that hit you.

AM Radio

For those who don’t know, the problem with AM radio is that the variable-speed drives, inverters and power electronics that comprise the drive train in an electric vehicle create a lot of broad-spectrum EM noise. Unfortunately a large amount of this falls at lower frequencies, across the AM band, to the point where Tesla gave up trying to fit an AM radio in their cars. In Australia the outback is well covered by AM stations even to this day, with the ABC, Radio National and some commercial stations well catered for. Whether you like their content or not isn’t the point…Teslas need not apply.

FM Radio

Tesla have a “simple” and easy FM radio interface where you have a “Favourites” list of stations you build yourself, and a “Direct Tune” option. When you’re in range of cellular data (so far as I can tell), another option appears (it cannot be summoned, you just have to wait for it) called “Stations”: a list of FM stations the Tesla believes it can find, or that should be there based on your location.

Under Direct Tune there is no way to ‘Seek’ for an FM station. Knowing a handful of FM frequencies along the road (thanks mostly to roadside billboard signage), we were able to tune directly to some stations and Favourite them for quick access, even when they didn’t appear in the Tesla-generated “Stations” list (when it bothered to appear).

It would be better for the driver to decide what level of FM drop-out is acceptable, rather than an algorithm in the Tesla. Adding a simple seek button, like every other car FM radio I’ve ever used, would be easy to implement. You press and hold for 2 seconds; the radio seeks up or down the band until it finds a signal, then pauses and waits for a few seconds. You then select that station and it stops there, or you don’t and it keeps scanning until it finds something else. The FM carrier threshold should be set to a reasonably faint level, because I’d rather listen to choppy FM sometimes than silence.
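That seek behaviour is simple enough to sketch. A minimal Python version follows; `signal_strength()` is a hypothetical stand-in for whatever the tuner hardware reports, while the 87.5–108.0 MHz band and 0.1 MHz step match the standard FM broadcast grid:

```python
FM_MIN, FM_MAX, STEP = 87.5, 108.0, 0.1  # standard FM broadcast grid, MHz

def signal_strength(freq_mhz: float) -> float:
    """Hypothetical tuner query - a real implementation would read the
    received signal strength from the radio hardware."""
    raise NotImplementedError

def seek(current_mhz: float, direction: int, threshold: float,
         strength=signal_strength):
    """Step up (+1) or down (-1) the band from the current frequency,
    wrapping at the band edges, stopping at the first frequency whose
    signal exceeds the (driver-adjustable) carrier threshold."""
    span = round((FM_MAX - FM_MIN) / STEP)          # number of steps in band
    idx = round((current_mhz - FM_MIN) / STEP)
    for _ in range(span):
        idx = (idx + direction) % (span + 1)
        freq = round(FM_MIN + idx * STEP, 1)
        if strength(freq) >= threshold:
            return freq
    return None  # a full lap of the band found nothing above threshold
```

Setting `threshold` faint, as the article suggests, means the seek stops on choppy-but-listenable country stations rather than skipping straight past them.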

I found myself searching the internet for radio station frequencies that might be listenable along that road and pre-entered them before we left but honestly, that’s a horrible hack, and this is exceptionally easy for Tesla to fix with a software update.

Satellite Radio

SiriusXM is regrettably not offered in Australia, despite the fact that it does actually work here. Tesla don’t offer it on the basis that SiriusXM don’t offer it here, which makes sense, so I don’t blame Tesla for that. Honestly, if you’re doing lots of outback travel in Australia, a satellite radio service would be absolutely worth the money. With the progression of Starlink there’s probably a superior option in satellite internet, but that’s expensive, and whilst some people have already rigged up a Starlink antenna inside their Model 3 in Australia, it’s not exactly discreet or compact, and not really convenient - at least not yet. Maybe someday…

TeslaMate Telemetry

I was really interested in poring over the telemetry data from Tessie when I got home, as I’d set up my own private TeslaMate VM in late 2021 after I bought my Tesla. I was partly surprised, but partly not, when I found data gaps between locations on the road trip that loosely aligned with my recollection of cellular coverage gaps. The extract below clearly shows those gaps in the telemetry data.

TeslaMate BlackSpots

Hence it’s clear that the Tesla data-streaming API has a limited buffer; once you’ve been out of reception for a certain amount of time, the data falls off a cliff and dissolves into nothing, never making it to the cloud (and hence TeslaMate). Good to know.

Conclusion

Unsurprisingly, this is the first time since owning my Tesla that I’ve felt range concern. (Don’t call it anxiety - I know what anxiety feels like and this wasn’t that…)

That said, it’s also the first time I’ve pushed the limits of my car in terms of range, and it performed exceptionally well. I found that the Tesla trip graph wasn’t the best indicator, and that the much-liked A Better Route Planner was too pessimistic. It’s highly configurable though, and its power-consumption assumption can be adjusted: this trip let me tweak ABRP’s Model 3 default from 160 Wh/km down to 150 Wh/km, making future trip predictions more accurate.
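To see why a 10 Wh/km tweak matters, here’s the back-of-envelope effect on predicted range. The ~60 kWh usable pack figure is my assumption for illustration, not an ABRP or Tesla number:

```python
def predicted_range_km(usable_kwh: float, wh_per_km: float) -> float:
    """Range is simply usable energy divided by consumption."""
    return usable_kwh * 1000 / wh_per_km

# Same pack, the two consumption assumptions from the article:
print(round(predicted_range_km(60, 160)))  # 375 km
print(round(predicted_range_km(60, 150)))  # 400 km - 25 km of "found" range
```

On a trip with legs of 100 km or more between any possible top-up, that difference is enough to change which stops a planner recommends.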

Of course, the whole trip could have been much less stressful with a few DC fast chargers installed at mid-points along the way. There are plenty of reasonably sized towns on that road: Goomeri, Mundubbera and Eidsvold, for example. That’s simply a matter of time. The QESH (Queensland Electric Super Highway) Phase 3 is currently under construction, but none of its chargers are destined for the Burnett Highway.

Some councils are taking it into their own hands through partnerships with industry, like the new Kingaroy charger. More destination chargers will eventually start appearing, making trips on the Burnett easier for overnighters or even part-day trippers, but for me at least one thing is for sure: touring in the Model 3 was very comfortable and very relaxing, and now that I know I can make those legs without too much difficulty, I’m more confident about trips like that in the future.

Photography 2022-06-13T06:00:00+10:00 #TechDistortion
Tesla User Interface
https://techdistortion.com/articles/tesla-user-interface

I’ve written about my new Model 3 before, but wanted to pull together some thoughts about the Tesla user interface (if I can call knobs, buttons, switches and a screen in a car a user interface…pardon my engineering parlance…) now that I’ve driven my Tesla nearly 8,000 km (5,000 mi) over the past 4 months.

V10 to V11

The changes made between V10 and V11 I’ll perhaps cover in a future article. Most of them haven’t been improvements overall, and I’ll cover the specifically detrimental ones below. Suffice it to say, most users/drivers didn’t like the V11 upgrade very much. I’d only spent two months using V10, so re-learning where most things were wasn’t a big deal for me. Others who had been using the V10 interface for years…were much more annoyed, and I can understand why.

Easy Access Is Subjective

Having designed HMI screens for critical plant control systems for two decades, it’s fair to say my view on what information is critical versus nice-to-have is very different from most people’s. I’ve read so many people’s opinions about whether tap-tap-swipe to reach a function is just too hard, or whether glanceable information is missing. Frankly, it’s all opinion, and most people are too influenced by their own biases to be objective, or honest with themselves.

Before we tackle that, let’s think about Tesla’s perspective, because it makes a big difference in understanding some of their choices.

Tesla: We expect you to be using AutoPilot

Not all the time, but most of the time. Yes, it’s true that AutoPilot on city streets is a bad idea - too many variables, obstacles, disjointed lines and things to account for but it’s getting better every month. On the highway or freeway it’s pretty solid and reliable.

In a non-touchscreen vehicle people can use tactile feel and use spatial awareness to locate the buttons, knobs and switches without taking their eyes off the road. Of course in recent years many vehicles have introduced touchscreens with CarPlay/Android Auto or their own entertainment system, as well as voice controls on some mid/high-end models. Whilst Smartphones have taken most of the blame in recent years for driver inattention there’s no denying that touchscreen entertainment consoles in cars also play a role in driver distraction. One might wonder if it’s safe to have any screens in a vehicle without Auto-lane Keep Assist (or whatever name you like) in the vehicle as well, but I digress.

For this discussion we’ll exclude the entertainment-system-like functionality, as it’s clearly a heavier touch than other interactions. With that in mind, let’s define four categories of functionality:

  • Glanceable Information: the driver averts their eyes from the road briefly to quickly locate a key piece of information, then returns to watching the road, within 1 to 2 seconds
  • Glanceless Controls: the driver can activate a control or function without taking their eyes off the road
  • Light Touch Controls: the driver averts their eyes from the road briefly to locate and activate a control or function, then returns to watching the road, within 2 to 5 seconds
  • Heavy Touch Controls: the driver averts their eyes from the road to locate and activate a complex control or function, then returns to watching the road, within 5 to 15 seconds
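The taxonomy above is essentially a classification by eyes-off-road time, which can be sketched directly (the boundary values are taken straight from the definitions above):

```python
def classify_interaction(eyes_off_road_s: float) -> str:
    """Bucket a control interaction by how long the driver's eyes
    leave the road, per the four categories defined above."""
    if eyes_off_road_s <= 0:
        return "Glanceless Control"
    if eyes_off_road_s <= 2:
        return "Glanceable Information"
    if eyes_off_road_s <= 5:
        return "Light Touch Control"
    return "Heavy Touch Control"

print(classify_interaction(0))    # Glanceless Control
print(classify_interaction(1.5))  # Glanceable Information
print(classify_interaction(4))    # Light Touch Control
print(classify_interaction(10))   # Heavy Touch Control
```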

Let’s look at examples of each below, in no particular order of priority or preference.

Glanceable Information

  • Speed
  • Indicator (Left/Right/Hazard) Status
  • Headlight State
  • Fuel/Charge Level

I’m excluding non-automatic and non-electric-vehicle items like oil temperature, oil pressure, engine warnings, the tachometer and so on. Also worth noting that some items, like windscreen-wiper state, are directly observable without any dashboard indication, hence they’re excluded as well.

About Binnacles

In non-Tesla vehicles this information is almost always located in a Binnacle, or Instrument Cluster if you prefer. It’s directly behind the steering wheel, requiring the driver only to drop their gaze and look through a gap in the steering wheel. This isn’t perfect either, since we can’t glance through the wheel at certain points of wheel rotation (when turning), and not all cars have good tilt/telescope and/or seat height/depth adjustment to suit every person. Due to my height, in some cars I’ve driven there’s simply no way to adjust the steering column and seat so I can see all of the Binnacle. The top is sometimes obscured, and I’ve had to duck my head forward and down slightly to read the speedometer, particularly at higher speeds.

An alternative to a Binnacle is a heads-up display, which superimposes information only the driver can see on the windscreen directly in front of them. I drove a Prius-V for a few years that had one, and I really loved it; however, having something in your direct line of sight is not always ideal. Given the cost of implementing HUDs has dropped, they remain an attractive alternative to a Binnacle, serving much the same function.

In a Model S and X there is a Binnacle, but on the 3 and Y there isn’t, and no Teslas have HUDs. I have yet to find a seating position or steering wheel angle where the screen is obscured from view, but this clear sightline is offset by requiring not just a dip of vision but also a glance to the side, towards the centre of the vehicle. Ultimately this takes slightly longer, but in normal use it isn’t much of an additional safety concern, and with AutoPilot running it’s no concern at all.

Glanceless Controls

  • Indicate Left/Right
  • Headlights High/Low Beam
  • Windscreen Wipers (Once/On/Speed Controls)
  • Drive Mode (Forward/Reverse/Park)
  • Cruise Control (Activate/Set/Resume/Increase/Decrease)
  • Sound System Volume (Skip/Pause/Stop Playback)
  • Steering
  • Accelerating/Braking
  • Honking the horn…

Light Touch Controls

  • Air-conditioning settings (Temperature, Fan, Recycled Air, Demisting)
  • Door Locks (Individual/Group)
  • Seat Heating
  • Radio / Music / Stereo basic changes
  • Report Issue / Dashcam Record clip
  • Navigating to a Favourite / Calendar location

Heavy Touch Controls

  • Sound system setting adjustments
  • Navigation to a specific location requiring search
  • Pretty much everything that isn’t required explicitly for driving

To be fair, there’s very little good reason to mess with heavy touch controls when you’re driving, even with AutoPilot - the risk is just too great. Some items require the car to be stopped to adjust, and others parked with the park brake applied, which feels like the right call. I don’t believe there are any truly concerning heavy-touch control items in the current Tesla UI.

Why AutoSteer is so Important

#RantBegins

Auto Lane-Keep Assist or AutoPilot/AutoSteer functionality is very important due to gaze-affected steering. There have been studies into the correlation between gaze and eye position and induced movement of the steering wheel. In other words, we tend to steer in the direction we’re looking. The longer we look away from the road ahead, the more we drift in the direction we’re looking.

Before LKA/AP-AS was possible, the only option was to ensure controls were tactile and binnacles clear and concise, to minimise this. With LKA/AP-AS technology it’s less of an issue, provided it can actually be used and is in use. The alternative to think about, though, is that if AutoPilot isn’t working (poor conditions, dirty cameras/sensors) or is disabled by the driver, the only means of accessing information becomes standard tactile controls and a binnacle.

If we suppose AutoPilot isn’t always in use, the lack of a Binnacle and minimal tactile controls should result in a higher incidence of accidents in Tesla vehicles, particularly the Model 3 and Y, and yet it doesn’t. Beyond the possibility that these are non-issues (could be…), one possible reason is that the higher cost of entry for these vehicles precludes ownership by younger drivers. It has been shown conclusively that younger drivers do not handle distraction as well as more experienced drivers, so the marginally higher risk in non-AutoPilot situations is likely offset by the older driver demographic.

Projecting forward in time from here requires a lot of faith and assumption. If AutoPilot improves to the point where it can drive as well as or better than a human in all situations, then the interface deficiencies become a non-issue. Maybe even get rid of the steering wheel and the stalks? Hmm. The problem is that in that extreme scenario AutoPilot becomes a requirement to use the vehicle: it’s forced always-on, and the car cannot move without it. Many people will never choose that option, so some balance needs to be found between full autonomy and catering for human attention to the road under non-AutoPilot operation.

#RantOver

Where Tesla’s Interface Falls Short

Much of the complaining from users centres on Heavy Touch controls, or items that don’t directly affect safety, driveability or common usability of the vehicle. That said, there are a few exceptions under Glanceless Controls and Light Touch Controls that aren’t often covered and are currently not ideal:

Headlights: In every other vehicle I’ve driven, I can turn headlights on and off, set low and high beam, and auto-highbeam (where fitted) using the stalk. In the Model 3 you can only turn the lights off (if the Tesla decided they should be on in the first place) via the touchscreen. Pulling the left stalk towards you flashes high beam, whereas pushing it away from you and releasing toggles between low and high beam.

Headlight Suggestions: Pulling the left stalk towards you to flash high beam is common in many cars and should be retained, but add Pull-and-Hold for 2 seconds to toggle between high and low beam. Pushing forward on the left stalk could toggle auto-highbeam on and off, and Push-forward-and-Hold for 2 seconds could toggle the headlights on and off. Note: normal cars don’t try to overload all of this functionality into two switch actions; they employ turn-dial positions and dedicated switches for each independent control for a reason.
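The proposed stalk scheme boils down to a four-gesture lookup, which also makes the overloading concern concrete. The gesture names and mappings below are the article’s suggestion, not Tesla’s actual firmware behaviour:

```python
# Proposed left-stalk gestures mapped to headlight actions.
STALK_ACTIONS = {
    ("pull", "tap"):  "flash high beam",           # existing behaviour, retained
    ("pull", "hold"): "toggle high/low beam",      # proposed
    ("push", "tap"):  "toggle auto-highbeam",      # proposed
    ("push", "hold"): "toggle headlights on/off",  # proposed
}

def handle_stalk(direction: str, held_seconds: float) -> str:
    """Resolve a stalk movement to an action; >= 2s counts as a hold."""
    press = "hold" if held_seconds >= 2.0 else "tap"
    return STALK_ACTIONS[(direction, press)]

print(handle_stalk("pull", 0.3))  # flash high beam
print(handle_stalk("push", 2.5))  # toggle headlights on/off
```

Four gestures on one stalk is about the limit before the mapping stops being memorable, which is exactly the note’s point about dedicated switches.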

Wipers: Currently the end button on the left stalk has two press depths: shallow performs a single wipe, full depth performs a cleaning spray-and-wipe. There is no way of changing the speed or cycle via the stalk; pressing the button at all brings up a pop-up on the touchscreen, making this a Light Touch control for something you don’t want to be taking your eyes off the road for in heavy rain. It might be fine if the auto-wipers worked reliably, but they don’t, and given this impacts driver visibility, it’s very important.

Wipers Suggestions: A single shallow press-and-release could cycle between OFF -> Intermittent Slow -> Intermittent Fast -> Continuous Slow -> Continuous Fast -> OFF. A full-depth press could perform a single wipe, and a full-depth press-and-hold for 2 seconds could trigger a cleaning wipe.
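That shallow-press cycle is just a ring over five modes. A minimal sketch of the proposal, with mode names taken from the suggestion above:

```python
# Wiper modes in the order a shallow press would cycle through them.
WIPER_MODES = ["OFF", "Intermittent Slow", "Intermittent Fast",
               "Continuous Slow", "Continuous Fast"]

def next_mode(current: str) -> str:
    """Advance one step around the ring, wrapping back to OFF."""
    idx = WIPER_MODES.index(current)
    return WIPER_MODES[(idx + 1) % len(WIPER_MODES)]

mode = "OFF"
for _ in range(5):   # five presses walk the full ring...
    mode = next_mode(mode)
print(mode)          # ...and land back on OFF
```

The appeal of a ring like this is that it’s fully predictable by feel: the driver only needs to count presses, never look at the screen.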

Demisting: In V10, demisting controls weren’t associated with a stalk but sat on the bottom edge of the touchscreen, easily available if needed. Considering the urgency of employing these when they affect driver visibility, it is very odd they were removed from the bottom bar in V11.

Demisting Suggestion: Put the controls permanently on the bottom bar for quick access.

Non-retentive Assumptions

If I could put my finger on one gripe about Tesla’s UI, it’s that many functions and settings are non-retentive. When you stop the car and get out, then get back in and drive off again later, they reset to the factory defaults. There is no retention of my previous settings. Headlights, for example: stop assuming that the light level outside requires headlights. If I want to turn them off, they should stay off until I turn them on again.

Not-So-Auto

Nothing is perfect, but when things are designed to work in an automatic mode, that mode needs to behave in a way that is discernible, consistent and works relatively well, otherwise people can’t or won’t use it.

  • Auto-Highbeam: It does not detect on-coming vehicles very well, particularly on bends. Hence when a human would pre-dip their headlights seeing an on-coming vehicle in the distance, the Tesla will leave them on high beam until it’s effectively head-on. Not good.
  • Auto-Wipers: We had a wet summer and I had many, many opportunities to observe the behaviour of the wipers. I liken it to game of drip-chicken. How many drips need to accumulate on the windscreen before the automatic wipe wipes them away…and will it do so before it becomes a safety hazard for bad forward visibility for the driver? It’s almost fun, if it wasn’t dangerous. I often lose this game of chicken as I have to tap with single wipe several times to avoid an accident through poor visibility before the auto-wipe wakes up and does something autonomously.
  • Auto-Steer Speed Limiting: This one might sound needy but it’s simple: When in full AutoPilot, the Tesla attempts to read the speed sign and when the Tesla passes in-line with that sign, it will automatically adjust the speed limit up (if it was set higher as it’s Max set-speed previously in the drive) or down accordingly. Going up is fine, but going down isn’t, since it only drops your speed once you’ve passed the sign, meaning that until the cruise-control lag slows the car down you’ve been speeding for a hundred meters or so when you’re dropping from 100 to 80 kph for example. The fix is simple - it’s basic physics. You have depth perception (and a GPS) in a Tesla so you could calculate the distance between the current position and the upcoming speed sign change position, so figure out based on the deceleration curve when to start slowing down so you reach the speed limit when you actually pass the sign. In Australia the police will intentionally set up speed traps in those spots trying to catch drivers out…it’s a speeding ticket waiting to happen.
  • Emergency Lane Departure Avoidance: I have had this try to auto-steer me back into my current lane on a three-lane divided freeway when I was just changing lanes. Clearly the system thought I was drifting out of my lane because I wasn’t changing lanes quickly enough, but wrestling with the wheel at 100 kph is pretty scary - literally fighting with the car. Goodness knows what the other drivers behind me thought was going on!
  • Mirror Auto-Dim: I’m used to being able to dim my rear-view mirror manually, using the tried-and-true prism toggle lever. These days though automatic is all the rage, so we apply an electrochromic coating which turns darker when a voltage is applied to it. Teslas can automatically dim the side and rear-view mirrors when the car determines that it’s dark enough outside. How about a manual switch? Can I please dim my mirrors when I want to? I’ve found myself avoiding looking in the mirrors at twilight because of this and relying on the cameras instead. Not safe or ideal.
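The Auto-Steer point above really is just basic kinematics. A minimal sketch of the idea, assuming a constant-deceleration model (the 1.5 m/s² "comfortable deceleration" figure is my own illustrative assumption, not a Tesla parameter):

```python
# Sketch: how far BEFORE the sign must braking begin so the car reaches
# the new limit at the sign, rather than a hundred metres past it?
# Constant-deceleration model: v_limit^2 = v_now^2 - 2*a*d

def braking_start_distance(v_now_kph: float, v_limit_kph: float,
                           decel_ms2: float = 1.5) -> float:
    """Distance (metres) before the sign at which deceleration must start."""
    v_now = v_now_kph / 3.6       # km/h -> m/s
    v_lim = v_limit_kph / 3.6
    return max(0.0, (v_now ** 2 - v_lim ** 2) / (2 * decel_ms2))

# Dropping from 100 to 80 kph at a gentle 1.5 m/s^2:
print(round(braking_start_distance(100, 80), 1))  # ~92.6 m before the sign
```

If the car’s depth perception and GPS give the distance to the sign, the cruise control simply needs to begin its ramp-down once that distance falls below this figure.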

Conclusion

If we accept that the addition of AutoPilot features easily offsets the marginal increase in delay for glanceable information, then this is a tradeoff worth making for an otherwise unobstructed view of that information. The controls on the steering column (left- and right-hand stalks and thumb control wheels/joysticks) satisfy most of the key requirements for glanceless controls. They fall short, however, on wiper speed control and headlight control, and I suspect this is due to over-confidence on Tesla’s part in their automatic headlight and automatic wiper controls.

Why change, Tesla, why? I’ve been thinking about what would or could motivate Tesla to modify their user interface, specifically regarding the demisting features. One temptation would be to use end-user real-world data to draw conclusions about how regularly certain features are used. If so, I can absolutely understand how something like the demister button would be moved: in the real world, most people would rarely use it, as the conditions need to be right for it to be required. The problem with that, though, is the thinking behind a big red emergency-stop button. Sure, we don’t push the big red button often, but it’s there in case you need it in a hurry - which is when you’ll REALLY, REALLY want it! Quick access therefore can’t be reserved only for frequently used items; it also needs to cover items that reduce the risk of high-consequence outcomes.

Of course I have no evidence that this is what Tesla have done or are doing, but if not, I struggle with the thinking behind some of the V10 to V11 changes. Beyond this I think there’s a hard limit to how many more functions could or should be pushed off physical controls and on to a touch screen, provided the intention is to have a human driver in control of the vehicle AT ANY POINT. One user interface can serve both purposes, but a few tactile controls make it safer in both use cases.

All of that said, the only good thing is that unlike most other car manufacturers, a fix to many of the above is only a software update away.

]]>
Cars 2022-02-28T06:00:00+10:00 #TechDistortion
My Cars Through Time https://techdistortion.com/articles/my-cars-through-time https://techdistortion.com/articles/my-cars-through-time My Cars Through Time For something completely different I felt like doing a bit of a personal retrospective of different cars I’ve owned over the last 28 years for anyone that’s interested (probably not many).

1 Ford Laser 1996 - Oct 1999 Ford Laser

My first car was a hand-me-down from my mother. My grandparents had passed away and there was an excess of vehicles, leaving me with an Automatic Red 1988 Ford Laser Hatchback. This car did the round-trip between Rockhampton and Sydney three times during 1999 when I worked at COM10. This is one of only a small number of photos I have of the Laser.

To drive it was sluggish and handled very imprecisely; it was the entry-level model with no air-conditioning and manual windows. I did fit a few things in my teenage years though: a decent stereo, an aftermarket cruise control kit, a CB radio and later my IC-706 Amateur Radio (2M band whip shown in the photo). It was a good, reliable car that I serviced myself where I could, changing the oil several times. Ultimately though I took a permanent job with Nortel in Calgary and returned the Laser to my mother, and it then went to my other sister, Alana, for her to use.

2 Subaru Impreza Oct 2000 - Feb 2001 Subaru Impreza

With my Canadian Permanent Residency and Alberta driver’s licence in hand, I was finally able to buy a car in September, 2000. Heavily influenced by my environment I chose a car with All-Wheel Drive, offering excellent grip in the snowy conditions that last a quarter of the year in Calgary. The WRX model that I wanted wasn’t available in Canada at that point, so I settled on a Manual 2000 Subaru Impreza 2.5RS Coupe in Blue, since the Red that Subaru offered at the time wasn’t a very nice Red.

It had the usual Canadian fittings, like a block heater (note the cable under the number plate), but I supplied my own decorative front number plate since in Alberta only the rear plate is required by law. The photo was taken in the car park at Lake Louise. It was a joy to drive this car, though I owned it only for a very short period (about four months). It had a sunroof and excellent heating; as I owned it through a Calgarian winter, I only drove with the windows down and sunroof open a handful of times, rugged up to the nines.

When Nortel sacked half of its global employees (including me) I returned home, and since the car wouldn’t fit in my pocket and was built for a left-hand drive country, I had to sell it at a mind-blowing loss. It wasn’t as upsetting as selling my Kawai though. I left the country with effectively nothing, having lost my entire redundancy payout in the debt gap from selling those two items.

3 MGF Aug 2001 - Aug 2003 MGF

Returning to Australia was both a relief and traumatic, with the job hunt in Brisbane taking four months before I landed a job at Boeing Australia. With a few months to lick my wounds I set about investing in a very different car. My father had rebuilt a White MGB which I had fond memories of riding in, sitting on the battery box, and MG had re-opened (again) and in 1997 released the MG(F). I test drove one on the Gold Coast but couldn’t afford a new model, so picked a used Manual 1997 1.8L VT in Amaranth (call it Purple) Pearlescent finish, and grabbed a Hard Top to go with it for longer road trips (so much quieter, and the air-conditioning worked better with the hardtop on).

The MGF was a mid-engine car and whilst it wasn’t the fastest, it was so low to the ground and had such near-perfect weight distribution that it cornered almost like it was on rails. So much fun to drive, but not without its quirks. The soft-top leaked (a common problem with convertibles), the bonnet-boot (Frunk if you’re American) was spatially challenged, the boot (Trunk) was poorly insulated behind the engine and got REALLY hot (cold shopping items need not apply), and the transmission was shifted via a wire-pulley system rather than a direct linkage, which made gear changes feel sloppy and imprecise. Top it off with needing 98RON fuel, which was sometimes hard to find, and it wasn’t cheap to insure, maintain or run.

Despite this, I took the MG on a road trip from Brisbane to Rockhampton, and when I met the woman I would marry we fitted a luggage rack and drove it to Tasmania and back for our Honeymoon. It was truly the most fun car I’ve ever driven, despite its flaws, and I still miss it today. That said, the Hydragas suspension and tyres cost $2,250AUD to replace in early 2003 and with my first child on the way, it was a wholly family-incompatible car. The repayments were fine when I was single…but I had to make the grown-up (but soul-crushing) decision to sell the MGF and get a car that was cheaper to own, operate and maintain.

4 Suzuki Sierra Aug 2003 - 2007 Suzuki Sierra

The Suzi was not the car I had intended. My budget was strict and tight, and whilst I’d wanted a 4WD, hard-top, automatic, air-conditioned car which would have been more appropriate for my new job at MPA Engineering (which required a lot more driving), that’s not what happened. My new father-in-law got a great deal on a Blue 1995 Manual Suzuki Sierra soft-top. It needed a new engine, had no air-conditioning and the soft-top was ripped and leaked in many places, however it was within budget and he helpfully took care of everything for me through a private sale.

The car barely held 110kph on the freeway and was so light it got blown all over the road at speed. The gearbox felt like a truck’s: clunky, rigid, and not at all what I was used to from my last two cars. That said, take this thing on the sand, off-road, and it was amazing! Once I’d mastered the art of running it at high revs in low gears in the soft sand, there was nothing it couldn’t handle. The only time it got bogged in soft sand, we all jumped out and a simple push got it going again - it was so light.

During its life I invested in a new canvas soft-top that didn’t leak, and it excelled in fuel economy. The company was required to reimburse people for kilometres travelled in their own vehicle, and the economy was so good I was actually making a decent amount of money from driving my own vehicle. As I rose through the ranks to Senior, got my RPEQ and was sent on still more jobs, I was given a company car, which would spell the end for the Suzi. The garage could only fit two cars - the family car and the work car - and as our driveway was still unsealed at that point, we parked the Suzi on the grass.

It sat there for too long, undriven and unloved and eventually was sold to someone to try and get it to run again. It left on a car carrier. Despite its flaws, it was a unique car to drive and took me places on Moreton Island and Bribie Island I’ve never been able to get back to.

5 Daewoo Matiz Jan 2010 - May 2013 Daewoo Matiz

From approximately 2006 to 2009 MPA Engineering provided me with a company vehicle, so I didn’t need my own. With the Suzi long gone and my career changing I had to once again buy a car. Having taken a pay cut to get into KBR, I had even less money than when I bought the Suzi. That led me to a Manual 2005 Daewoo Matiz in Silver. In time I came to refer to it as the “Silver Bullet”, a name laced with no small measure of cold hatred; it travelled somewhat slower than a bullet, for example. Daewoo were already out of business and support locally was scarce, but I had no budget for much else.

The air-conditioning barely worked and it couldn’t maintain 110kph on the freeway; with less power than even the Suzi I was lucky to get 105kph without a headwind. The car was tiny inside and out, cramped, and the gears were sloppy and inconsistent. It had manual windows, and when a passenger sat next to me our shoulders would touch, meaning three people in the back row was not possible despite the number of seat belts fitted.

I hated this car so much I never took a single direct photo of it, hence the photo above isn’t of my Silver Bullet but rather some other unfortunate person’s. The role at KBR put me on the NPI Stage 2 project and with it came use of a project vehicle, which I gladly accepted and used for nearly 18 months, leaving the Silver Bullet in the driveway - though by now we’d managed enough money to seal the driveway with asphalt at least. Its end came in April of 2013 when I was driving on the highway at 100kph and the steering stopped being fully responsive.

The steering column has an upper and lower collet bearing, which Daewoo thoughtfully made out of plastic. When they both broke within a few weeks of each other, the steering arm flexed and bent in the column leading to an effective deadspot of about +/-25 degrees from center. Practical upshot: to drive in a straight line you needed to turn the wheel left to beyond 25 degrees then back right to beyond 25 degrees to try and keep it straight. Needless to say the car was no longer safe to drive (if it ever was) and I was lucky to get $500 trade-in for it off my next car, which I insisted would be new.

6 Honda Jazz May 2013 - CURRENT Honda Jazz

With a much healthier budget this time, I researched a lot and test drove multiple cars but fell in love with Hondas - specifically the Jazz. The dealership had floor stock of the previous year’s model, a Manual Black 2012 Honda Jazz Vibe Hatchback. I really wanted Red (still) but couldn’t afford to wait or to stretch the budget any further. What struck me about the car was how responsive its steering was compared to every other car I’d driven (no, the Silver Bullet doesn’t count). It had good acceleration for its class and handled nicely in the corners. It was nicely finished inside and had a surprising amount of space given its small stature.

Originally intended as a run-about car between home and the train station, I’d spent my budget on something that was nicer to drive rather than an Automatic but worse-handling car. In time I came to regret the decision to get a manual: after several incidents on public transport I switched to driving into the city in 2016, and did so until the COVID19 pandemic, 5 days a week, every week. Commuting in traffic in a manual was unbelievably punishing.

That said…I still have it! It’s running exceptionally well and my oldest son is learning to drive a manual and it will be his car once he gets his Provisional licence, hopefully in a few months time. It’s fair to say that the Jazz has been the most reliable and consistent car I’ve ever owned, but with our family holiday dreams being shot to hell by the pandemic, my wife and I decided to focus on home rather than travel and after driving the Jazz for 8-1/2 years it was time for a car I really, really wanted.

7 Tesla Model 3 Nov 2021 - CURRENT Tesla Model 3

Despite the frustration of dealing with Tesla, after three months wait I finally received my 2021 Red Tesla Model 3 SR+ with White Interior. Being somewhat of a nostalgic person (in some instances) I took the Tesla to the same car park where I’d taken the photo above of the MGF some 20 years prior.

I’d been obsessed with electric cars since the EV1 and more recently the Nissan Leaf, however the Leaf’s range wasn’t quite enough for me, with a 120km round-trip commute to work each day. I originally christened the Model 3 the Tessalecta, but the family refers to her as Tessie, which has now stuck. Its 0-100kph acceleration of 5.6 seconds makes it the fastest-accelerating car I’ve ever owned, pipping the Impreza by 1.5 seconds, but in the end that’s not why I bought it. Its low centre of gravity and weight distribution make it corner like it’s glued to the ground, and the Auto-pilot functionality is incredible, though not without its issues (don’t try to over-use it on city streets).

The car is the most comfortable I’ve ever driven or owned and it’s changed my perception of driving from what had become a monotonous, sometimes stressful experience (especially to/from the office) back into something enjoyable. I remember taking weekend drives in the MGF to Rainbow Beach just to enjoy the drive (and the beach); I recently did that trip again with my wife and daughter, and it felt very much the same in the Model 3 today as it did in the MGF 20 years ago.

Conclusion

I’m 45 years old now, so I know I’ll probably have another car or two before I hand my licence in; it’s probably not the end of the story. For now at least, the cars I’ve had the pleasure (and otherwise…Silver Bullet, I’m looking at you…) of driving have been a reflection of the many different stages of my life, what fit my needs and what my budget could afford at the time.

It’s been a crazy three decades that’s for sure…

]]>
Personal 2022-02-20T20:00:00+10:00 #TechDistortion
Upgrading the Mac Pro RAM https://techdistortion.com/articles/upgrading-the-mac-pro-ram https://techdistortion.com/articles/upgrading-the-mac-pro-ram Upgrading the Mac Pro RAM I’ve been enjoying my 2013 Mac Pro immensely and wrote about it here seven months ago, and two months ago I upgraded the SSD, with the further thought that someday:

"…I can upgrade…and go to 64GB RAM for about $390 AUD…"

Today I did exactly that. I’m not normally drawn in by Boxing Day sales that start the week before Christmas Day and end on the last day of December (that’s not a “Day” kids…that’s two weeks…SIGH), but I fell for the deal and spent $321AUD on the RAM I had intended from Day 1 when I bought the secondhand Mac Pro.

I’d done my research (IKR…me?) and it turns out that because the Mac Pro 2013 was designed and released before 32GB SDRAM DIMMs were available, whilst you can fit 32GB modules in the Mac Pro, if you do, the memory bus clocks down from 1866MHz to 1333MHz due to bandwidth limits. The chance I’ll push beyond the 64GB mark in my use cases is zilch, hence I had no intention of sacrificing speed for space and went with the maximum-speed 64GB.

Shutting down and powering off, unlocking the cover and lifting it off reveals the RAM waiting to be upgraded:

Cover off with Original Apple 16GB RAM Fitted

Depressing the release tab at the top of each bank of two RAM modules was quite easy and OWC’s suggested spudger wasn’t necessary. The new RAM is ready to be opened and installed:

The 64GB RAM in its Box

Taking out the RAM was a bit awkward but with a bit of wriggling it came loose okay. I put the old modules in the Top of the packaging, leaving the new modules still in the Bottom of the packaging:

The Old and the New side by side

Fitting the RAM felt a bit strange as too much pressure on insertion starts to close the pivot/lever point of the RAM “sled”. Wasn’t too hard though and here it is now installed ready to be powered on:

New RAM Installed

Once booted back up, I confirmed that the system now recognised the new RAM:

Original 16GB RAM 16GB RAM Installed

Upgraded to 64GB RAM 64GB RAM Installed

The improvement in performance was quite obvious. I run several Alpine VMs headless for various tasks as well as have multiple Email clients open, five different messaging applications (that’s getting out of hand world at large…please stop…) between home and work needs as well as browsers with lots of tabs open. Unlike with 16GB RAM, once the applications were launched, they stayed in RAM and nothing slowed down at all! It’s just as quick with 20 windows open as it is with 1 open. The RAM Profiler demonstrates the huge difference:

RAM 24hr Profile RAM Profile shows the huge difference

The Purple shaded area is compressed memory, with Red Active and Blue Wired. The vertical scale is a percentage of maximum, so the system was previously peaking at 80% of 16GB (about 13GB used), but now peaks at maybe 60% of 64GB (about 38GB used), with no compression at all. Swap was peaking during 4K video encoding at 15GB, which was insane, but now it has yet to pass 0GB.
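For the record, the arithmetic behind those figures is trivial; a quick sketch (the 80% and 60% peaks are my own eyeball readings off the profile):

```python
# Convert a memory-profile peak percentage into gigabytes used.
def used_gb(peak_percent: float, total_gb: int) -> float:
    return round(peak_percent / 100 * total_gb, 1)

print(used_gb(80, 16))  # 12.8 -> "about 13GB used" before the upgrade
print(used_gb(60, 64))  # 38.4 -> "about 38GB used" after the upgrade
```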

In summary it’s an upgrade I’ve long wanted to do as I was aware I was pushing this Mac Pro too far with concurrent applications and performance was taking the rather obvious hit as a result. Now I have lots of memory and the Mac Pro is performing better than it ever has. All that’s left is to upgrade the CPU…

Seriously? Well…we’ll see…

]]>
Computer 2022-01-17T19:00:00+10:00 #TechDistortion
Buying A Tesla https://techdistortion.com/articles/buying-a-tesla https://techdistortion.com/articles/buying-a-tesla Buying A Tesla I’ve been closely watching Tesla since their original collaboration with Lotus in the 2010 Tesla Roadster. In Australia, Tesla didn’t open a store until December, 2014 and the Model S started at about $100k at the time with only 2 Superchargers, both in the Sydney area and of no use to me in Brisbane.

Mind you, money stopped me anyway and so I drove my 2012 Honda Jazz somewhat grumpily and waited. (Sorry Jazz…you’re a pretty solid car and have served me well…) My company was increasingly supportive of Electric Vehicles and had obtained a Nissan Leaf for each of their major office locations, and Brisbane was one of them. So in June 2016 I booked the company Leaf and took it for a drive home and back to the office again as a test run - could a Leaf be in my future?

Nissan Leaf in Garage

Certainly it was much more affordable and it was fun to drive, but when the round trip between the Brisbane CBD and my home took me down to only 12km of range remaining, I decided its range was too short to meet my needs once the battery eventually degraded with long-term use.

Nissan Leaf 12k’s Left

Again, I grumbled and watched Tesla from afar. The Model S was a hit, and was followed by the Model X with its technical problems (those Falcon Wing doors…really?) and finally the Model 3. Tesla had opened their first store in Brisbane in mid-2017, but it wasn’t until the end of 2018 that the Model 3 was available to be viewed - and then it was the Left-Hand Drive model which wasn’t allowed to be driven in Australia at that point, so no test drives were permitted.

Despite that I couldn’t contain myself any longer and took my sons with me to the Tesla dealership and test “sat” in the S, the X and the 3 - even if the steering wheel of the Model 3 was on the other side! It cemented in my mind that whilst the X was the family favourite, its price put it perpetually out of my reach, and the 3 was the far nicer vehicle inside anyway. Cleaner, simpler and helpfully it was also the cheapest!

And so my heart was set on the Model 3, but with two other car loans still in play, I had to wait just a little bit longer. A few of my friends had received their Model 3 locally in 2019 and Tesla now had test drives available, but on advice from the only Model S owner I knew I refrained and didn’t test drive a Model 3 out of fear that it would only make me want one even more…

He wasn’t wrong…

Fast-forward to 2021: now with my eldest child holding a driver’s licence, we needed a third car, and with both existing cars now paid out I was finally able to seriously look at the Model 3. I test drove a Standard Range Plus with FSD installed on the 1st of September, 2021. I was allowed to drive it for 45 minutes and fell in love with it on the drive. The budget couldn’t stretch to the Performance or Long Range models, and FSD was out of the question too, but it might just stretch to the White Interior (I loved the look and feel of it) as well as the lovely Red paint. I’d been wanting a Red car for 20 years. (That’s another story)

Important Tangent

My daughter was in her final year of high school and, like many of her friends, was starting to organise her Grade 12 formal…dresses, make-up, hair and of course the “car” that would drop them off. My wife and daughter were very excited about the possibility of dropping her at the formal in a shiny new Red Tesla Model 3. After my test drive we saw the website reporting 1-4 weeks expected delivery, and given how well it drove and that it would easily be delivered within the 11 weeks I needed to make the formal, I placed my order the night of the test drive. Two birds with one stone…as they say.

Back to the Tesla Bit

Finance was approved within a day and the Tesla app and website changed from 1-4 weeks expected delivery to showing “We’re preparing your Invoice.” On the 17th the app changed from its blank entries to listing the 8 instructional videos. There was no doubt I’d entered the infamous Tesla Reservation Black Hole. I’d read about it, but when you’ve been excited about owning an EV, specifically a Tesla and most recently the Model 3, it represented approaching 10 years of mounting anticipation. I thought this sort of thing was supposed to get easier to deal with as you got older, but apparently it really hasn’t. So began what I thought would only be a few weeks’ wait. How wrong I was…

Tesla First Estimate

The Tesla representative I was assigned was not the best communicator. He didn’t return several of my calls and I originally had called once each week to see if there was an update, but on week three his tone made it clear that so far as checking on updates for where my car was, in his words…he “wouldn’t be doing that.” Realising that I was becoming “that guy” I decided there was no point in pressing and instead returned to habitually reloading the website and the app in the hopes of a change of status.

3 weeks. Still nothing.

4 weeks. Still nothing.

5 weeks. Still nothing, although my electrician had mounted the wall charger and completed the 3-phase power upgrade, but the charger still wasn’t wired in. Didn’t matter - no car to plug it in to…yet!

6 weeks, still nothing, though my electrician finished the wiring for the HPWC so that was some kind of progress, but still no car to plug it in to.

Time Out for a Second

It’s worth noting that the website claimed a 1-4 week wait when I ordered, and a 1-3 week wait in the 2nd week of September.

Tesla Second Estimate

It wasn’t a performance model or a long range either. Then I came across a growing list of videos from other Australian recent Model 3 buyers reporting that the website time estimates were essentially complete fiction. It was never up to date…even when the notification came through on their phone saying their delivery was ready, payment had been received and their delivery appointment was set, it didn’t always show up on the website.

Additionally I learned that even once a Tesla hits the shores in Oz, it still takes 2-3 weeks before you’re able to even pick it up hence when the site indicates 1-4 weeks, it means it will be 1-4 weeks before you actually get the chance to book a time to pick it up - not actually pick it up. So realistically even if I got a message saying I can book a pick up time, it will be another 2-3 weeks before I can actually pick it up. (Yay) So at this point it’s looking more likely that I’ll get the car late October, or the first week of November which would be just in time for the formal.

You might forgive me (or not) for ranting like an impatient child; to an extent, I can see that side of it. Then again I was also feeling the pressure of living up to the promise I thought was safe to make to my daughter, based on conversations with the Tesla representative and the Tesla website. I also knew, even then, that there were those that reserved a Model 3 multiple YEARS before theirs was even delivered - although that was for a vehicle that wasn’t shipping to anyone, anywhere, when they placed a reservation.

I suspect (and likely will never know) that the problem I created for myself inadvertently was choosing an entry level Model 3 with a White Interior and Red Paint. Truth is that if you are REALLY strapped for cash, you’re likely to order the fully entry level, White paint, Black Interior, stock-standard Model 3 Standard Range Plus - for which I believe that the order time might even have genuinely been 1-4 weeks. Even if you ordered a Long Range or Performance model, with standard colours, you’d probably get one sooner as these are higher margin and Tesla have been known for prioritising higher margin vehicles.

Designing The Website

I think about how I would have developed the website: if it were possible to separate the quantity of ordered combinations by exterior and interior colour, I would have. To test my theory I tweaked the colours, both interior and exterior, and sure enough the delivery estimates NEVER changed. Tesla don’t generally make your car to order in a manner of speaking; they seem to batch them in every combination based in part on the prior quarter’s order demand. It’s clear that I just didn’t pick a popular combination and that Tesla don’t break down their supply/demand by every combination. Hence their website delivery estimates aren’t based on anything other than the base model, and don’t account for options and any delays those options might incur.

Back to waiting I guess, though by the 5th week I’d just given up on the website now knowing it was effectively full of sh!t.

Tracking “Ship”ments Literally

Running low on whatever patience I had left, I was interested to find some articles linked on Reddit and a Twitter account called @vedaprime that claimed to track Teslas as they moved around the world, including to Australia. Unfortunately his “service” used the VIN, which was often associated with orders in the past when shipments came from the USA; for shipments from China, however, he indicated the VIN wasn’t as reliably extracted from the website as it had been in the past. I did learn a few interesting things though.

As of the time of publishing this article, Tesla ship all Model 3 Standard Range Plus models from Shanghai to Australia on a limited number of vehicle carrier ships most commonly on the primary route: Shanghai–>Brisbane–>Port Kembla (near Sydney). Despite Tesla having sales and delivery centers in Queensland (in Brisbane too) they do ALL of their inbound Quality Assurance (QA) in Port Kembla.

Once off the ship at Port Kembla, each car needs to be inspected, and once it passes quarantine, customs and QA inspections, it waits its turn at AutoNexus for a car transporter (a semi-trailer connected to a prime mover, aka a big truck) bound for Brisbane. The ships dock at several ports but only unload Teslas in southern NSW for the East Coast and Tasmania, and since Brisbane isn’t as large a market as Sydney or Melbourne it gets fewer transporter trucks as a result.

VIN matching then proceeds down the list of Reservation Numbers (RNs): the first RN to match the configuration has the VIN attached to it and assigned to the buyer. The whole process can take 1-2 weeks to QA all of the cars coming in from the docks, with shipment sizes varying from a few hundred to well over a thousand - that’s a lot of cars to QA! Once the VIN is matched, then if Mr VedaPrime can find it, he can track the vehicle, but by then it should be imminently on its way to the buyer.

So I began searching for ships that fit the criteria. Using VedaPrime’s last 12 months of public shipping notifications on Twitter, I narrowed the search to ships that had left Shanghai bound for Brisbane and eventually Port Kembla, and finally came across one that fit - the Morning Crystal. It departed Shanghai on the 26th of September, due to arrive in Brisbane on the 7th of October, then in Port Kembla most likely on the 9th of October. Assuming a 1-2 week QA delay and then a 1 week delivery to Brisbane, the most likely date for delivery would be the last week of October - about 9 weeks after placing the order, but still within my 11 week limit.
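That back-of-the-envelope estimate can be sanity-checked in a few lines; the dates and lag ranges are the ones quoted above, and treating QA and transport as whole weeks is of course just my simplifying assumption:

```python
from datetime import date, timedelta

port_kembla = date(2021, 10, 9)   # likely unload date at Port Kembla
qa_weeks = (1, 2)                 # quoted QA delay range after docking
transport_weeks = 1               # truck transport up to Brisbane

earliest = port_kembla + timedelta(weeks=qa_weeks[0] + transport_weeks)
latest = port_kembla + timedelta(weeks=qa_weeks[1] + transport_weeks)
print(earliest, latest)  # 2021-10-23 to 2021-10-30: the last week of October
```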

Well then…I guess that means I’ll keep waiting then. Of course that assumed that my vehicle was on that ship. If it wasn’t, there were no current alternate candidates for at least another 2 weeks, possibly more.

Back to the story

7 weeks…still nothing. The first ship that I had my hopes pinned on (Morning Crystal) came and went without a word and the site now reported a 2–5 week delivery delay. The next candidate ship was the Passama, on the 19th of October in Port Kembla, but it also came and went without any Teslas aboard. I did however receive a call from Tesla, but from a different salesperson, informing me that my previous salesperson was no longer working for Tesla and he was taking over from him. Okay then. Great.

8 weeks, and finally something changed on the website - there was now an estimated delivery date range of between the 17th of November and the 1st of December, 2021, and the VIN was now embedded in the website HTML. A few days later my final invoice notification arrived by email at 7am, though it didn’t appear on the website until later that day. As I was financing the car I was advised I would get forms to sign shortly, and I did mid-morning. Submitted them and…back to waiting again.

VedaPrime’s Patreon

At this point I chose to join VedaPrime’s Patreon and Discord group, as I had a VIN now and he claimed he could track it, or would do his best to do so. I’d reached the limit of what I could easily discern with my own knowledge and investigation on the public internet, and Lord Veda (a nickname given to him by a popular Tesla YouTuber) clearly knew much more than I did about Tesla order tracking.

Now I’d seen suggestions about Tokyo Car and Morning Clara as potential candidate ships that could be carrying my Model 3. Tokyo Car was docked in Noumea, bound for Auckland then to Port Kembla (due 6th November) and Morning Clara was still in Shanghai, due to arrive on the 19th of November. So…back to waiting some more.

9 weeks…and my app and website began showing an estimated delivery schedule of between the 7th and 21st of November. There was mounting evidence that my car was in fact on the Tokyo Car ship. With the 7th of November coming and going, I called my new Tesla representative to see if he had more information, and he didn’t. Of course. I’d given up calling Tesla about anything at this point. At this stage I’d called them five times in total following the order. They were generally unwilling or unable to help anyway, so there was no point in bothering them. I was learning far more from the VedaPrime Discord than from Tesla themselves.

10 weeks…and my app narrowed the dates down to between the 11th and 20th of November. Okay. We were cutting this really close. The morning of Friday the 19th of November was my latest possible chance if I was going to make the formal.

Then on the 11th of November, the text I had been waiting for arrived: and I was offered the choice of a Delivery appointment at either 10:00am, 1:00pm or 3:00pm on Monday the 22nd or Tuesday the 23rd of November. My youngest son had a school awards ceremony I would not miss which wrote off Monday almost entirely (not to mention an afternoon of meetings I couldn’t skip) leaving Tuesday morning as my sole option - so I booked 10am Tuesday as my pick up date.

Tesla Delivery Confirmed

My car had in fact, been on the ship Tokyo Car and was now landed in Australia. Hooray…of a sort because unfortunately…

It was over

So much for having that car for the formal. I reached out to Tesla one last time and left a message; they texted back that they would let me know if it could be delivered sooner, but it was no use. They wouldn’t.

During the 11th week the finance finally cleared, and on the Friday that week as I was picking up my kids from school the call came in from the Tesla delivery center - we were good to go for Tuesday. I also received an EMail and replied to it, asking if Tesla could pick me up from the nearest train station, but never got a response.

Why did I ask that?

I knew that on Tuesday morning at 10am, I had no convenient way of getting to the delivery center as my eldest daughter was away at schoolies all week, my wife was working, my mother no longer drives, my sister was working as were my friends in various locations, all too far away. So it was either public transport or a Taxi/Uber. Unfortunately for me I chose to live in the middle of nowhere, meaning the cheapest Uber would cost me $120 AUD one-way. The cheapest Taxi would be closer to $190 AUD. The Tesla home delivery option requires you to live over 250km from the nearest dealership so I didn’t qualify for that either. The closest a train got me was still a 45 minute walk and the bus connections to the trains were terrible. So it was going to be a combination of Train + Taxi in the end. Oh well…what can you do?

Pick-up Day…at LONG last

Tesla’s showroom in Brisbane is in the classy end of Fortitude Valley (yes, there is a classy end you Valley-haters…) near other dealerships like Porsche, Ferrari, Lamborghini and many others. Space is at a premium though and as such they will show you the cars at the showroom, you pick up test drives from there and they do have some limited servicing facilities, but their delivery center is far from the CBD of Brisbane.

It’s located in a somewhat run-down steel warehouse with a chipped concrete floor and a corrugated iron roof held up by exposed girders. A quintessential warehouse. The only way you know it’s a Tesla delivery center from the outside is a lone rectangular black sign partly obscured by trees along the roadside. The bigger issue was it was in Hendra and the nearest train station was a decent walk away.

Warehouse Sign A Lone Sign lets you know it’s the Delivery Center

That morning it was raining and so I decided to suck it up, take the train as close as I could and then get a Taxi from there - I don’t like using Uber on principle. (That’s another story)

Waiting for the Train in the rain

When I got to Northgate the rain had stopped and looking at the rain pattern on the radar I estimated I had about 60 minutes before the next wave of rain hit so I decided to save my Taxi money (about $45 from there) and walked to Hendra instead.

Walking to Hendra

To be honest, it really was quite a pleasant walk in the end. (Maybe I was too excited about the destination to care at that point?)

I arrived 45 minutes before the scheduled time and apologised for being early. Lord Veda had highly recommended getting there early so I think that was good advice.

Warehouse In Front of the Delivery Center

I was greeted by a very pleasant ex-Apple employee who now worked for Tesla and said: “You must be John! Yours is the only Red one going out today and it’s lucky because I literally just finished setting up your car.” I’d already spotted it, as I figured out the number plate from the Qld Transport site the previous day by searching the VIN.

Spotted Mine! Mine was the Red one in the far back right of this photo

After some photos, set-up and giggling to myself, I was off - though not before she insisted on taking a photo of me with the car and waving me off. As I was leaving at 9:45am, still no other owners had shown up. I had quite literally…beaten the rush.

United with my car at last! United with my car at last!

I drove to Scarborough and my old favourite spot on Queens Beach, where I’d photographed my car 20 years earlier, and took some photos in about the same spot…then went home. Later in the day I picked up my kids from swimming and that’s pretty much it.

I finally had my dream EV.

Tesla at Queens Beach

The Minor Details

There are a few things I wasn’t 100% clear on until the delivery day. Firstly, you do get the UMC Gen-2 Mobile Charger with the two tails (10A and 15A) which is a single phase unit, delivering a maximum of 3.5kW. The Model 3 also came with cloth floor mats, and a 128GB USB Thumb Drive in the glovebox for sentry mode and other things. It did NOT come with a Mennekes Type 2 cable for connecting to BYO cable charging points, which was disappointing, and it didn’t come with a Tyre Repair kit. I was aware that Teslas don’t have a spare tyre so had pre-purchased a repair kit when I bought my HPWC.

In 2019 in Australia, all new Teslas came with a HPWC as well, but that was long since un-bundled. The car also comes with a free month of Premium Connectivity after which it’s $9.95 AUD/month, which I’ll be keeping after the free month ends.

My original sales assistant incorrectly informed me it didn’t come with car mats, so I ordered some. Now I needed to return them. Oh well. I’ll also need a Type 2 cable - there are too many of those chargers around to NOT have one of those in the boot, just in case.

Unicorns

Tesla are constantly tweaking their cars - from the motor to the battery to software and even the occasional luggage hook or seal. They don’t wait for model years most of the time, so it becomes an interesting lottery of sorts, and they get themselves caught in knots a bit: they advertise something on the website, then change it after you order, and your car is built to a different standard. In the Tesla fan-lingo these cars are called the “Unicorns”.

When I ordered mine the website stated: 508km Range WLTP, 225kph Top Speed, 5.6s 0-100kph time. The current website however now says: 491km Range WLTP, 225kph Top Speed, 6.1s 0-100kph time, and to add more confusion the compliance sticker adds: 556km Range.

What had happened is Tesla increased the size of the LFP battery pack mid-cycle from 55kWh to 60kWh (usable). At the same time Tesla changed the motor to one that was slightly less powerful, though it’s unclear why - likely for efficiency or cost-reduction reasons. We may never know. The motor change didn’t happen until late October, which approximately coincided with the website specification change. This meant there were three builds that had the more powerful motor as well as the larger battery.

The VIN ranges where this happened were those within my build range hence my vehicle is one of a few hundred Unicorn SR+ models. Lucky me?

Conclusion: Order to Delivery Day

The final time from test drive and order to pick-up was 11 weeks and 6 days - 8 days shy of three months. Some who received their cars a week before me had ordered in early October and waited only 6 weeks from order to pick-up. In one of those “there’s no way I could have known at the time” situations, I’d ordered right at the beginning of a build cycle for Q4 2021, and I’d ordered a low-demand combination as well, so I had to wait the longest of almost everyone in my production batch of cars. Oh well…I have it now…so these three months can now be a fading memory…

My obsessing over a new car like this is something I’ve never done before. I’ve been trying to figure out why this was so different to my other experiences. Options include: I’m getting less patient and/or more entitled in my old age; the ordering process was more akin to ordering a tech product from Apple’s online store than any car purchase I’d ever experienced; or the information provided by the manufacturer was in fact worse than having no information at all.

I honestly don’t think I’m getting less patient with age…more entitled though? Maybe. I think the difference is the contrast with a traditional car purchase. Traditionally, sales people from Toyota, Mitsubishi, Honda and Subaru were well versed in delivery times and standard delays, and set realistic expectations up front - or at least they presented the situation more honestly than Tesla did.

Tesla appeared to be up-front in their estimates via their website, but it was fundamentally misleading and their sales people were generally unhelpful. Perhaps it was because the Tesla inventory system was not optimised to provide accurate information by specific build sub-types, production batches and such, to enable sales staff and customers to set realistic expectations. Either way it was exceedingly frustrating, and had Tesla indicated up front I mightn’t have the car until late November, I would have made other arrangements for my daughter’s formal and let it be.

Conclusion: Delivery Day

The delivery experience was, quite frankly, the worst of any car purchase I’ve ever had in most respects…but it’s a subtle thing.

I’ve spoken with other owners that had a basic 10 minute run-down of pairing their phone and being shown the basics and shoved “out the door” so to speak. For me, I’d arrived early and they were busy getting everything ready for everyone else, so that’s on me, but if not for that it would have been 10 minutes, got it, great, now out you get, on to the next customer.

Also, when you put down a significant amount of money for your dream car, show up to a dodgy-looking warehouse that’s hard to get to, and are treated a bit like a number - dealing with four different people from start to finish - it feels unprofessional and you feel like you don’t matter very much: you’re an imposition, not a customer.

Tesla have a LOT to learn from existing car purchasing experiences from pretty much every other manufacturer.

Warehouses There’s some nice Electric Vehicles in this bunch of warehouses…seriously!

Warehouse Laneway The front door is down a dodgy laneway and isn’t signed anywhere

I’ve bought Hondas, Subarus, Toyotas and Mitsubishis across multiple countries, and having a common point of contact from start to finish was consistent throughout. They all spent significant time with me or my wife walking us through every feature of the vehicle, and all were in nicely presented showrooms when we picked them up. And yes, they even had a tacky red bow on the bonnet, because, why not? It’s not every day you buy a car. So why shouldn’t you make that a special experience for the buyer?

Maybe the problem is the model of existing dealers and the profit they need to make over the car’s actual price, requiring more salespeople, service departments and larger parcels of real-estate to house it all. If you are to believe the Tesla approach of being leaner, with minimal up-sells, fewer salespeople and smaller showrooms, well then I should be getting more car for my dollar. Maybe I am? It’s hard to be sure. Or maybe it’s that Tesla have pushed their own leaner sales-model too far and the best experience lies somewhere in between.

Tesla are finally making a lot of money after nearly going bankrupt in 2018. Maybe Tesla should reinvest some of that into customer service.

Conclusion: Lord Veda’s Patreon

I witnessed VedaPrime’s Patreon start at $170 AUD/month and then rocket to $1,700 AUD/month over the two months I was a Patron. Unlike TEN though, once people have their cars they tend to drop off, so it varies significantly from month to month. In the end he was unable to find my VIN at any stage in the process. My car was transported on a smaller carrier and slipped past his radar. Either way though, the value for me wasn’t the VIN tracking - it was the Discord.

In the Discord I met a lot of people that were hopefuls like me. We shared our frustrations, our knowledge of charging, 3rd party apps, tips and tricks and of course, talked about Charlie the Unicorn in relation to naming our new cars…when they actually arrived. It was a blast actually, and without people sharing the hundreds of tidbits of information, from different Tesla salespeople, known VINs and such, I suspect Veda wouldn’t be able to paint as meaningful a picture for the broader group. In essence, the group’s collective knowledge is a huge part of the VedaPrime services’ value.

That said, I now have to bow out of the group, and I am grateful for the friendships and discussions we had during our long wait for our vehicles to arrive.

Final Thoughts

My advice for anyone buying a Tesla:

  1. Don’t trust the website about delivery times
  2. Don’t believe a word the salespeople tell you about when it’s arriving until you’ve had a booking text message
  3. Teach yourself how to use the car through the videos because Tesla don’t want to spend their time teaching you on delivery day.

Despite these things, there’s one thing Tesla have going for them that might make you forgive all of that.

They make some of the best cars in the world.

And I love mine already.

Afterword

This post was written as I went and has taken three months to finish. I know it’s a bit long, but it captures all the threads I pulled, all the investigations I did as well as the final result. If nothing else it’s a point of reference for anyone interested in what ordering a Model 3 in Australia was like in 2021.

My daughter went to the formal in a Mitsubishi Eclipse Cross Aspire PHEV. It was also Red. She was happy with that and returned safely from Schoolies having had a great time.

I did NOT call my Tesla “Charlie”. Sorry Discord gang. I just couldn’t…

]]>
Cars 2021-11-28T15:00:00+10:00 #TechDistortion
Upgrading the Mac Pro SSD https://techdistortion.com/articles/upgrading-the-mac-pro-ssd https://techdistortion.com/articles/upgrading-the-mac-pro-ssd Upgrading the Mac Pro SSD I’ve been enjoying my 2013 Mac Pro immensely and wrote about it here five months ago, with the thought that someday:

"…I can upgrade the SSD with a Sintech adaptor and a 2TB NVMe stick for $340 AUD…"

Last week I did exactly that. Using the amazing SuperDuper! I cloned my existing 256GB Apple SSD (SM0256F) to a spare 500GB USB 3.0 external SSD I had left over from my recent Raspiblitz SSD upgrade. With that done I acquired a Crucial P1 M.2 2280 NVMe SSD for a good price from UMart at $269, plus $28 for the Sintech adaptor, for a total upgrade cost of $297 AUD.

Shutting down and powering off, unlocking the cover and lifting it off reveals the SSD waiting to be upgraded:

Cover off with Original Apple 256GB SSD Fitted

Then, using a Torx T8 bit, remove the holding screw at the top of the SSD, pull the SSD ever so slightly towards you, then wriggle it side to side while holding it at the top and it should come away. Be warned: the heatsink makes it heavier than you think, so don’t drop it! The Mac Pro now appears very bare down there:

No SSD Fitted Looks Wrong

Next we take the Sintech adaptor and gently slide that into the Apple Custom SSD socket, converting the socket to a standard M.2 NVMe slot. Make sure you push it down until it’s fully inserted - the hole should be clearly visible in the top notch. It should fit perfectly flush with that holding screw.

Sintech Adaptor sitting in place

The M.2 NVMe drive then slots into the Sintech adaptor, but it sticks out at an odd angle as you can see below. This is normal:

NVMe SSD sits in at an angle initially

Finally we re-secure the 2TB SSD and Sintech adaptor with the Torx screw, and we’re ready to replace the lid.

2TB SSD Fitted

Powering back up, I booted to my SuperDuper! clone (holding the Option key on boot), then did a fresh install of Monterey. With some basic apps loaded it was time to test, and the results are striking to say the least - beyond the fact I now have 2TB of SSD, the speeds:

Drive Size   Read Speed (MB/s)   Write Speed (MB/s)
256GB        1,019               454
2TB          1,317               1,186
Diff         +1.3x               +2.6x
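If you want a quick sanity check of numbers like these yourself, a rough sequential test can be done from the terminal with dd. A sketch only - the 256 MiB size and temp path are just illustrative, and it’s no substitute for a proper benchmark:

```shell
# Rough sequential speed check against a scratch file.
TESTFILE="${TMPDIR:-/tmp}/ssdtest.bin"

# Write: 256 MiB of zeros in 1 MiB blocks; dd reports throughput when done.
dd if=/dev/zero of="$TESTFILE" bs=1048576 count=256 2>&1 | tail -n 1

# Read it back, discarding the data. NOTE: the OS page cache will inflate
# this figure unless it's purged first (e.g. `sudo purge` on macOS).
dd if="$TESTFILE" of=/dev/null bs=1048576 2>&1 | tail -n 1

rm -f "$TESTFILE"
```

Bigger test files (the 5GB file used in my earlier testing, for example) give more stable numbers, since small files can be served almost entirely from cache.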

Original SSD Top View of the Mac Pro

New SSD Top View of the Mac Pro

You do notice the improvement in performance in day-to-day tasks, although when I tested this five months ago, my 5GB file test was up against only about 20GB of spare space on the 256GB SSD, which impacted the write results as the drive worked around its remaining available blocks.

A final note about the SSD regarding the heatsink. The Apple SSD heatsink is heavily bonded to the drive and is extremely difficult to remove. There’s no question that the SSD would benefit from fitting a heatsink, however the heat dissipated by the NVMe drive is small relative to the GPUs and CPU. In my testing I couldn’t see a significant temperature change under heavy load, with it rising less than 10 degrees Celsius from idle to maximum.

In summary it’s an upgrade I’ve long wanted to do, as I was getting sick of swapping out larger files to the NAS and a USB drive. Now I have plenty of high-speed storage for editing photos and videos. Now…how’s my memory pressure going…

]]>
Sport 2021-11-08T06:00:00+10:00 #TechDistortion
RaspiBlitz SSD Upgrade https://techdistortion.com/articles/raspiblitz-ssd-upgrade https://techdistortion.com/articles/raspiblitz-ssd-upgrade RaspiBlitz SSD Upgrade I’ve been running my own node now for nearly 9 months and when it was built, the build documentation recommended a 512GB SSD. At the time I had one lying around so I used it, but honestly I knew this day was coming as I watched the free space get eaten up by the blockchain growth over time. I’m not alone in this either, with forums filled with comments from others needing to upgrade their storage as well.

The blockchain will only get bigger, not smaller, and fortunately the cost of storage is also dropping: the 500GB drive cost about $300 AUD six years ago, and the 1TB same brand similar model today cost only $184 AUD. Upgrading to a 2TB SSD will probably cost $100 or less in another five years or so.

This update is going to take a few hours, so during that time obviously your node will be offline. It can’t be helped.

My goals:

  • If possible, use nothing but the RaspiBlitz hardware and Pi 4 USB ports (SPOILER: Not so good it seems…)
  • Minimal Risk to the existing SSD to allow an easy rollback if I needed it
  • Document the process to help others

ATTEMPT 1

  1. Shut down all services currently running on the RaspiBlitz

Extracted from the XXshutdown.sh script in the Admin Root Folder:

sudo systemctl stop electrs 2>/dev/null
sudo systemctl stop lnd 2>/dev/null
sudo -u bitcoin bitcoin-cli stop 2>/dev/null
sleep 10
sudo systemctl stop bitcoind 2>/dev/null
sleep 3
# Only if you're using BTRFS: sudo btrfs scrub start /mnt/hdd/
sync
  2. Connect and confirm your shiny new drive
sudo blkid

The following is a list of all of the mounted drives and partitions: (not in listed order)

  • sda1: BLOCKCHAIN Is the existing in-use SSD for storing the configuration and blockchain data. That’s the one we want to clone.
  • sdb1: BLITZBACKUP Is my trusty mini-USB channel backup drive. Some people won’t have this, but really should!
  • sdc1: Samsung_T5 Is my new SSD with the default drive label.
  • mmcblk0: mmc = MultiMediaCard - aka the MicroSD card that the RaspiBlitz software image is installed on. It has two partitions, P1 and P2.
  • mmcblk0p1: Partition 1 of the MicroSD card - used for the boot partition. Better leave this alone.
  • mmcblk0p2: Partition 2 of the MicroSD card - used for the root filesystem. We’ll also leave this alone…

If you want more verbose information you can also try:

sudo fdisk --list
  3. Clone the existing drive to the new drive:

There are a few ways to do this, but I think using the dd utility is the best option as it will copy absolutely everything from one drive to the other. Make sure you specify a bigger blocksize - the default of 512 bytes is horrifically slow, so I used 64k for mine.

sudo dd if=/dev/sda1 of=/dev/sdc1 bs=64k status=progress

In my case, I had a nearly full 500GB SSD to clone, so even though USB 3.0 is quick and SSDs are quick, this was always going to take a while. After about three hours it failed with this error:

dd: writing to '/dev/sdc': Input/output error
416398841+0 records in
416398840+0 records out
213196206080 bytes (213 GB, 199 GiB) copied, 10896.5 s, 19.6 MB/s

Thinking about it, the most likely cause was a dip in power on the RaspiBlitz. The tiny device was trying to drive three USB drives at once, and a momentary power dip was all it took to fail.

ATTEMPT 2

Research online suggested it would be much more reliable to use a Linux distro to do this properly. I had no machines with a host-installed Linux OS, so instead I needed to spin up my VirtualBox Ubuntu 19.04 VM.

It was safe enough to power off the RaspiBlitz at this point, so I did that, disconnected both drives from the Pi, then connected them to the PC.

To get VirtualBox to identify the drives I needed to enable USB 3.0, add the two drives to the USB filter, reboot the VM and then run the same dd command as before, now under VirtualBox.

499975847936 bytes (500 GB, 466 GiB) copied, 4783 s, 105 MB/s
7630219+1 records in
7630219+1 records out
500054069760 bytes (500 GB, 466 GiB) copied, 4784.58 s, 105 MB/s

This time it completed with the above output after about 1 hour and 20 minutes. Much better!

If you want to confirm all went well, compare the raw partitions byte-for-byte, limited to the length of the source (the clone target is larger):

sudo cmp -n $(sudo blockdev --getsize64 /dev/sda1) /dev/sda1 /dev/sdc1
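Here’s that verification idea demonstrated on scratch files, using cmp -n to compare only up to the source’s length (since the clone destination is bigger than the source). A minimal sketch - the temp paths stand in for the real devices:

```shell
# Simulate a clone where the destination is larger than the source,
# then verify only the cloned region. Scratch files stand in for
# the source and destination partitions.
SRC=/tmp/clone-src.bin
DST=/tmp/clone-dst.bin

dd if=/dev/urandom of="$SRC" bs=1024 count=64 2>/dev/null
cp "$SRC" "$DST"
printf 'trailing junk on the bigger drive' >> "$DST"

# Compare only the first $(wc -c < "$SRC") bytes; the extra bytes on the
# destination are ignored, exactly as on the larger cloned partition.
cmp -n "$(wc -c < "$SRC")" "$SRC" "$DST" && echo "clone verified"

rm -f "$SRC" "$DST"
```

If cmp finds a mismatch within that range it reports the first differing byte, which is your cue to re-run the clone.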

An FDISK check now yields this error:

GPT PMBR size mismatch (976773167 != 1953525167) will be corrected by write.
The backup GPT table is not on the end of the device. This problem will be corrected by write.
  4. Resizing the new drive, Step 1

In my case I started with a 500GB drive and I moved to a 1TB drive. Obviously you can use whatever size drive you like (presumably bigger) but to utilise that additional space, you’ll need to resize it after you clone it.

sudo gdisk /dev/sdb
x (Expert Menus)
e (Move GPT Table to end of the disk)
m (Main Menus)
w (Write and Exit)
Y (Yes - do this)

All this does is shift the GPT table away from the current position in the middle of the disk to the end - without doing this you can’t resize it.

  5. Resizing the new drive, Step 2

There are a few ways to do this step, but in Ubuntu there’s a nice GUI tool that makes it really simple. From the Ubuntu desktop, install GParted from the Ubuntu Software library, then open it.

GParted Before GParted After

Noting the maximum size and leaving the preceding space alone, I adjusted the New size entry to 953,838 leaving 0 free space following. Select Resize/Move then Apply all operations (Green Tick in the top corner) and we’re done.

  6. Move the new drive back to the RaspiBlitz and power it on.

Hopefully it starts up and works fine. :)

Conclusion

I left this far too long - much later than I should have. My volume was reporting only 3GB of free space and 100% utilisation, which is obviously not where you want to be. I’d suggest thinking about doing this when you hit 10% remaining, and not much later than that.

The Bitcoin/Lightning software also hammers your SSD, shortening its life, so swapping out for an identically sized drive - following all steps except 4 & 5 - should work fine as well.

Whilst this whole exercise had my node offline for 36 hours end to end, there were life distractions, sleep and a learning curve in between. It should really only take about 2-3 hours for a similar sized drive.

Good luck!

]]>
Podcasting 2021-10-16T16:00:00+10:00 #TechDistortion
Managing Lightning Nodes https://techdistortion.com/articles/managing-lightning-nodes https://techdistortion.com/articles/managing-lightning-nodes Managing Lightning Nodes Previously I’ve written about my Bitcoin/Lightning Node and more recently about setting up my RaspiBlitz.

It’s been five months since then. I’ve learned a lot, and frankly the websites that provide information on how to manage your Lightning Node assume a lot of prior knowledge. So I’d like to share how I manage my node lately, with a few things I learned along the way that will hopefully help others avoid the mistakes I made.

The latest version of RaspiBlitz incorporates the lovely Lightning Terminal which incorporates Loop, Pool and Faraday into a simple web interface. So we’ll need that installed before we go further. Log into your Raspiblitz via Terminal and when you’re in the web interface, enable both of the below if you haven’t already:

  • (SERVICES) LIT (loop, pool, faraday)
  • (SERVICES) RTL Web interface

List of Services Install LIT from the Additional Services Menu

Updated Interface You should see LIT in the User Interface Main Menu Now

LIT Lightning Terminal note the port and your IP Address to Log in

Initial Funding

When you start adding funds to your node, if you don’t live in the USA, you don’t have many options. In the USA you can use Strike, but otherwise there aren’t any direct Fiat→Lightning services I’ve found to date. That’s okay, but to set up your node you’ll need to buy BitCoin and face the on-chain transaction fee.

The best option I have found is MoonPay and you simply select BTC, (you can change the Default Currency through the Hamburger menu on the top-right if you like), select the value in your Fiat Currency of choice or BitCoin amount, then after you continue, enter your BitCoin/Lightning Node’s BitCoin address (NOT the Lightning Address please…) and then your EMail. Following the verification EMail, enter your payment details and it will process the transaction and your BitCoin shows up.

Previously I’ve used apps that use MoonPay integration like BlueWallet and Breez, but that’s a problem because if you do buy BitCoin, it ends up on your mobile device’s BitCoin Wallet and it’s stuck. You need to then do another on-chain transaction which will cost you more in fees. By using MoonPay directly to your own node’s BitCoin address, you only have to deal with that once.

FYI: A $50 AUD transaction cost me $8.12 AUD in fees, though this fee is essentially flat - doubling to $100 AUD only raises it to $8.14 AUD. If you’re setting up a node for the first time, it therefore makes sense to add as much as you can manage to get started. More about that later.
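The flatness is easiest to see as an effective percentage. A quick arithmetic sketch using the two figures quoted above:

```shell
# Effective MoonPay fee at the two AUD amounts quoted above -
# the flat fee means bigger top-ups cost proportionally less.
awk 'BEGIN {
  printf "AUD 50  -> fee 8.12 = %.1f%%\n", 8.12 / 50  * 100
  printf "AUD 100 -> fee 8.14 = %.1f%%\n", 8.14 / 100 * 100
}'
```

So a $50 top-up loses over 16% to fees while $100 loses about 8%, and the percentage keeps halving as the amount doubles.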

Another FYI: MoonPay has a KYC (Know Your Customer) cut-off value and this is the equivalent of $118USD (0.00271BTC at time of publishing) which requires Identification before they’ll process the transaction. If you’re concerned about this, then you can make multiple transactions but that’ll obviously cost more in fees. And about those fees, you don’t get the option to set the fee in sats/vB…more about that next.

Timing Is Everything

BitCoin isn’t like banking, where transaction fees are fixed - as in, they don’t vary over time. (Mind you, Fiat transaction fees are often buried so deep you’ll never see them, but believe me they’re there…insert joke about Fiat bank fees always going up over time, but I digress…)

BitCoin is totally different. Simplistically, your fees are based on the transaction backlog for the current block against the current mining fee: the more demand, the bigger the backlog, the higher the fees. The details are quite dry, but feel free to read up if you care.

Fees are typically referred to in sats/vB (Satoshis per virtual-Byte) which you can read about here and the differences between bytes and virtual bytes here. It’s a SegWit thing. Anyhow, the lower the number, the less your fees will be for your on-chain transaction.

The mechanism for setting your level of impatience for any on-chain transaction is the Fee in sats/vB. If you’re impatient then set a really high number, if you’re in no hurry then set a low number. To get an idea of the historical and current view of the fees, have a look at Mempool.space.

MemPool MemPool Shows Lots of Information About Block Transactions at a High Level

Fees are quite low at the moment so for transactions where you can set this, 1 sat/vB will see your transaction processed quite cheaply and very quickly - most likely even in the current block (10 minutes).
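As a worked example of what a fee rate means in absolute terms - note the transaction size here is an assumption for illustration (a simple one-input, two-output native SegWit spend is roughly 141 vB; your actual transaction may differ):

```shell
# total fee (sats) = virtual size (vB) x fee rate (sats/vB)
VSIZE=141    # assumed vB for a simple SegWit transaction (illustration only)
for RATE in 1 5 10 50; do
  echo "${RATE} sat/vB -> $((VSIZE * RATE)) sats"
done
```

So at 1 sat/vB a simple spend costs around 141 sats, while an impatient 50 sats/vB costs around 7,050 sats for the same transaction.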

So Now You Have BitCoin

How does it feel now you have BitCoin on your Node? For me? Me’h - maybe I’m just used to it now - but you are effectively your own bank at this point. If you want to avoid losing money in on-chain fees then you need to stick to Lightning transactions wherever you can, where fees are usually between 1 and 10 sats. BitCoin on-chain transactions all incur fees, and using Lightning requires a channel - multiple, actually. To open a channel you need an on-chain transaction. To close a channel, you need an on-chain transaction. While that channel is open though, there are no on-chain fees at all.

To review - there are five transaction types people get charged on-chain fees for:

  1. Converting from Fiat to BitCoin
  2. Converting from BitCoin to Fiat
  3. Opening a Lightning Channel
  4. Closing a Lightning Channel
  5. A BitCoin transaction (i.e. purchasing something with BitCoin)

To be clear, these are all technically just a BitCoin on-chain transaction - it’s just the end purpose that I’m referring to.

Choose The Node, Choose The Channel Limits

There are two factors to consider when opening a channel to a new node: how well connected is it; and can I afford the minimum channel size?

A good resource to find the best connected node is 1ML, but there’s a huge amount of data there, so finding the most relevant information isn’t always easy. In short, the best place to start is to think about where you’re intending to send sats to, or receive them from, simply because the more direct the connection to the node, the lower the fees and the more likely the transaction will succeed.

For incoming sats in the world of podcasting, the key nodes are LNPay, Breez and Sphinx.

For outgoing sats, I personally use BitRefill to buy gift cards as a method to convert to Fiat from time to time. Another example of this is Fold.

However there’s an issue: there’s no indication on 1ML, and no other easy way to determine the minimum channel size, unless you attempt to open a channel with that node first. You first need enough sats on-chain to initiate an open-channel request, and if that throws an error it will tell you the minimum channel size. Thus you can only really determine this by interrogating and poking the node. (Sigh)

For the two I mentioned above, I've done the work for you:

  • BitRefill = 0.1 BTC (10M sats)
  • Fold = 0.05 BTC (5M sats)

COUGH

Well…I have 300k or so to play with, so I guess not.

The next best option is a node that’s connected to the one you want, which you can trace through 1ML if you have the patience.

Other factors to consider when choosing a node to open a channel with:

  • Age: The longer the node has been around, the more likely it is to be around in future
  • Availability: The more available it is the better. It’s annoying when sats are stuck in a channel with a node that’s offline and you can’t access them.
  • TOR: In the IP Address space if you see an Onion address, then TOR accessibility might be useful if you are privacy concerned.

If it’s the first channel you open, your best bet is to pick a big, well connected node as most of these are connected to one of the Lightning Loop Nodes (More on that later).

Channel Size

Since we want to minimise our on-chain fees, and we want to try this "Lightning" thing everyone is raving about, we open a channel. Since we don't want to be endlessly opening and closing channels, it's best to open the biggest channel that you can afford. In order to use Loop In and Loop Out, you must have at least 250k sats (about $105 USD at the time of writing), and if you want to quickly open channels and build a node with a lot of inbound liquidity I'd recommend starting with at least 300k, as we know we'll lose some as we Loop Out and open new channels. (More on that later)
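
To put those numbers in context, here's the arithmetic behind the dollar figure, assuming the roughly $42,000 USD per BTC price implied by the article's $105 for 250k sats:

```python
SATS_PER_BTC = 100_000_000
ASSUMED_BTC_USD = 42_000  # implied by "$105 USD" for 250k sats; an assumption

def sats_to_usd(sats, btc_usd=ASSUMED_BTC_USD):
    """Convert satoshis to USD at the given BTC price."""
    return sats / SATS_PER_BTC * btc_usd

print(sats_to_usd(250_000))    # ~105 USD: the Loop minimum
print(sats_to_usd(6_560_000))  # ~2755 USD: the Loop maximum
```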

The other issue with smaller channels is that they get clogged easily. When you want to spend sats and all you have are a bunch of small channels, and the amount you're trying to spend requires a little bit from each channel, then all it takes is for one channel to fail and the transaction fails overall. The routing logic continues to improve, but larger channels make spending and receiving sats much easier, and keeping your node balance above 250k sats lets you use Loop.

I made the mistake early on of not investing enough when opening channels so I had lots of small channels. It was a huge pain when I was trying to move around even moderate amounts (100k sats).
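
A toy model illustrates why lots of small channels hurt: if a payment must be split across several channels and each part succeeds independently with some probability, the overall success rate drops quickly with the number of parts. The 0.95 per-channel figure below is an assumption for illustration only:

```python
def multi_part_success(p_single: float, parts: int) -> float:
    """Chance a payment split across `parts` channels fully succeeds,
    assuming each part succeeds independently with probability p_single."""
    return p_single ** parts

print(round(multi_part_success(0.95, 1), 3))  # 0.95  - one big channel
print(round(multi_part_success(0.95, 6), 3))  # 0.735 - six small channels
```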

Circular Rebalance

Circular rebalancing is a nice feature you can use when you have two or more channels. It allows you to move local sats from the selected channel into the local sats of the destination channel - or you can think of it as receiving a sats balance increase from the other channel. The Ride The Lightning web interface is my favourite web UI for circular rebalancing.

Ride The Lightning Channels View

Rebalance a Channel Step One

Rebalance a Channel Step Two

Behind the scenes it’s simply an Invoice from one channel to another channel. It gets routed outside through other Lightning Nodes and in the example above, there are 5 hops at a cost of 1011 milli-Sats (round that down to 1 sat).

Using this method you can shuffle sats between your channels for very few sats which can be handy if you want to stack your sats in one channel, distribute your sats evenly to balance your node for routing and so on.
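
For the curious, per-hop routing fees follow a simple formula: a fixed base fee plus a proportional fee quoted in parts-per-million of the forwarded amount. The defaults below (1000 msat base, 1 ppm) are common but node-specific, so treat them as assumptions:

```python
def hop_fee_msat(amount_msat: int, base_fee_msat: int = 1000,
                 fee_rate_ppm: int = 1) -> int:
    """Fee one hop charges to forward a payment: a fixed base fee plus a
    proportional fee in parts-per-million of the forwarded amount."""
    return base_fee_msat + amount_msat * fee_rate_ppm // 1_000_000

# Routing 100k sats over 5 hops at the assumed default fee policy:
amount_msat = 100_000 * 1_000
total_fee = sum(hop_fee_msat(amount_msat) for _ in range(5))
print(total_fee)  # 5500 msat, i.e. about 5.5 sats for the whole route
```

This is why the 5-hop rebalance above cost around 1 sat: at small amounts the proportional part is negligible and the total is dominated by each hop's base fee.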

Balancing Your Node

There are three ways you can “balance” your node:

  1. Outbound Priority (Spending lots of sats)
  2. Inbound Priority (Receiving lots of sats)
  3. Routing

For the longest time I was confused by the expression "you can set up a routing node", insofar as what the hell that meant. It's not a special "type" of node; it just means you keep all of your channels as balanced as possible - meaning your inbound and outbound balances are equal. To achieve a perfectly balanced routing node, you'd need a local balance equal to 50% of your total channel capacity.

Keep in mind that "balancing" a node actually refers to the channels on that node being predominantly balanced, or biased towards one of the above three options. I suppose there should be a fourth option that describes my node best: "confused".
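
The three balance profiles above can be expressed as a simple ratio of local balance to total channel capacity. This classifier is just my own rough heuristic with an arbitrary 10% tolerance, not anything formal:

```python
def classify_node(local_sats: int, capacity_sats: int,
                  tolerance: float = 0.1) -> str:
    """Label a node by its local-balance-to-capacity ratio (rough heuristic)."""
    ratio = local_sats / capacity_sats
    if abs(ratio - 0.5) <= tolerance:
        return "routing (balanced)"
    return "outbound priority" if ratio > 0.5 else "inbound priority"

print(classify_node(900_000, 1_000_000))  # outbound priority: mostly local sats
print(classify_node(100_000, 1_000_000))  # inbound priority: mostly remote sats
print(classify_node(500_000, 1_000_000))  # routing (balanced)
```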

Loop In

In Lightning you can move on-chain BitCoin into a channel you want to add local balance to, converting it into Lightning sats you can spend via Lightning. Why would you do this?

Let’s say you’ve bought some new BitCoin and it’s appeared on your node - it’s not Lightning Sats yet so you can only spend it on-chain (high fees = no good). You already have a bunch of mostly empty channels and you don’t want to open a new channel: this is when you could use Loop In.

Loop In Interface in Lightning Terminal

Loop In only works for a single channel at a time, and with the 250k minimum, that channel must have at least that many sats of available capacity for Loop In to work.

Loop works by using a special looping node (a series of nodes, probably) maintained by Lightning Labs. At this time they enforce a 250k sat minimum and a 6.56M sat maximum per Loop In transaction. The concept is simple: reduce on-chain fees by grouping multiple loop transactions together. Your transaction attracts a significantly lower fee than if you were to open a new channel with your BitCoin balance, and you don't disturb the channels you already have.

Loop Out

Looping Out works the other way around to Looping In. In some ways it's far more useful, as you can use Looping Out to build a series of channels cyclically (more on that shortly).

Like Loop In, Loop Out carries the same 250k minimum, but it is limited to your available local capacity, and still cannot exceed the 6.56M maximum per Loop Out transaction.

Loop Out Interface in Lightning Terminal

Loop Out can Manually Select Specific Channels if there's Liquidity

Loop Out of 340k sats from two channels

Loop Out showing a fee of 980 sats

Processing the Loop Out

If the Loop Out fails, you can try rebalancing your channels to put your sats into a channel with a highly connected node prior to the Loop Out, or you can lower the amount and try again until it succeeds. You can adjust the Confirmation Target and send the funds to a specific BitCoin destination if you want (if you leave that blank, it defaults to the node you're initiating the Loop Out from, which is normally what you'd want).

If you want to keep the fees as low as possible, set the Confirmation Target to a larger number of blocks. By default I believe it's 9 blocks (I'm not completely sure), which cost me 980 sats in my example; setting it higher should drop the fees, however I didn't test enough times to confirm this myself.

Once it completes, your node will report those sats against your on-chain balance, ready for direct on-chain spending should you wish.

If you stack your sats into a single channel, you can also use the RTL interface: under Channels, select the right-hand side drop-down and select "Loop Out". Again, a minimum of 250k sats is required.

Looping Out via Ride The Lightning

Stack a Channel, Loop It Out, Open New Channel, Repeat

If you're building your node from scratch and you've started with a single channel opened with your initial BitCoin injection, there's a technique you can use to build your single-channel node into a well-connected node with many channels.

The process:

  1. Stack a Channel (Once you have 2 or more Channels)
  2. Loop It Out
  3. Open New Channel
  4. Repeat

The whole process can take multiple days for multiple channels, and it will consume some of your sats along the way, but you're essentially shuffling the same sats around and re-using them to open more channels, improving your node's connectivity.
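
The cycle can be sketched in code. Everything here is a toy simulation: MockNode and its methods are invented stand-ins for whatever your node software actually exposes, and the flat 1000-sat Loop Out fee is an arbitrary placeholder.

```python
LOOP_MIN_SATS = 250_000

class MockNode:
    """Toy stand-in for a real node: tracks balances only, no real RPC."""
    def __init__(self, onchain_sats):
        self.onchain = onchain_sats
        self.channels = {}  # peer -> local balance in sats

    def open_channel(self, peer, sats):
        self.onchain -= sats        # the funding transaction moves sats in
        self.channels[peer] = sats

    def loop_out(self, peer, sats, fee=1_000):  # flat fee is a placeholder
        assert sats >= LOOP_MIN_SATS, "below the Loop Out minimum"
        self.channels[peer] -= sats  # local balance leaves the channel...
        self.onchain += sats - fee   # ...and lands on-chain, minus the fee

def grow_node(node, peers, size):
    """Stack -> Loop Out -> open a new channel -> repeat."""
    node.open_channel(peers[0], size)      # the initial channel
    for prev, nxt in zip(peers, peers[1:]):
        node.loop_out(prev, size)          # recover the sats on-chain
        node.open_channel(nxt, size)       # re-use them for the next peer

node = MockNode(onchain_sats=310_000)
grow_node(node, ["peerA", "peerB", "peerC"], 300_000)
print(len(node.channels))  # 3 channels funded from one initial stake
print(node.onchain)        # 8000: the leftover, two Loop Out fees consumed
```

The point of the simulation is that the same 300k sats fund all three channel opens; only the loop and channel fees are actually spent.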

Maintenance

Operating a node isn't a full-time job, but it's not a set-and-forget thing either. I had an issue with DuckDNS not updating my dynamic address after a power outage at home: I noticed there hadn't been many streaming sats coming in for a week, checked, found the error and corrected it. Another time I noticed a large number of transactions had passed through my node, my channels were pegged and skewed, and no routing was occurring. So I rebalanced my channels.

Sometimes people have opened channels to me and then every balance/rebalance I attempted failed. Others open a channel and their end is highly unreliable, trapping a lot of sats in the channel: when I need or want to use them, I have to wait until they're online again.

My observation has been that many people are tinkering with BitCoin Lightning, and they tend not to put much money into it. That's fine - I can't really judge, since that's how I started out. However these are the sorts of people that tend not to look after their node, ensuring it's online and well funded, and hence are most likely to have poor availability.

I originally allowed channels of only 30k sats, but have since increased my minimum channel size to 250k sats. Since doing this I've had fewer nuisance channels opened and have had to prune far fewer channels. The message is: it's not set-and-forget, in the same way your bank account isn't either. If you care about your money, check your transactions.

That’s it

I think that’s it for now. Hopefully the things I’ve learned are helpful to somebody. Whilst a lot of the above is a simplification in some dimensions, I realise I still have a lot to learn and it’s a journey. Whether you think BitCoin and Lightning are the future or just a stepping stone along the way, one thing I believe for certain: it’s a fascinating system that’s truly disrupting the financial sector in a way that hasn’t previously been possible and it’s fun to learn how it works.

Many thanks to Podcasting 2.0, RaspiBlitz and both Adam Curry and Dave Jones for their inspiration.

]]>
Podcasting 2021-07-31T21:00:00+10:00 #TechDistortion
Retro Mac Pro Part 2 https://techdistortion.com/articles/retro-mac-pro-part-2 https://techdistortion.com/articles/retro-mac-pro-part-2 Retro Mac Pro Part 2 I wrote previously about why I invested in a Mac Pro and I realised I didn’t describe how I’d connected everything up, in case anyone cares. (They probably don’t but whatever…)

New Desk Configuration with 3 4K Displays

The Mac Pro 2013 has three Thunderbolt 2 buses and due to the bandwidth restrictions for 4K UHD 60Hz displays, you can only fully drive one 60Hz display per Thunderbolt 2 bus. Hence I have the two 60Hz monitors connected via Mini-DisplayPort to DisplayPort cables, one to each Bus.

The third monitor is an interesting quandary. I'd read that you can't use a third monitor unless you connect it to the HDMI port, and since that's only HDMI 1.4 it can only output 30Hz at 4K. However that's not entirely true. Yes, it is HDMI 1.4, but that's not the only way you can connect a third monitor: by using a spare Mini-DisplayPort to HDMI cable you can connect a monitor directly to the third Thunderbolt bus and it lights up the display, also at 30Hz.

I suspect Apple made a design choice with the third Thunderbolt 2 bus, such that it's also connected to the two Gigabit Ethernet ports and the HDMI output: limiting video output to 30Hz at 4K leaves the other components the bandwidth they require. In my case it's annoying but not the end of the world, given the next best option was about four times the price.
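
Some back-of-envelope arithmetic supports this. At 24 bits per pixel and ignoring blanking and protocol overhead (so real DisplayPort figures run somewhat higher), the raw pixel rates look like this:

```python
def display_gbps(width, height, hz, bits_per_pixel=24):
    """Raw (uncompressed, no blanking) video bandwidth in Gbit/s."""
    return width * height * hz * bits_per_pixel / 1e9

uhd60 = display_gbps(3840, 2160, 60)
uhd30 = display_gbps(3840, 2160, 30)
print(round(uhd60, 1))      # 11.9 - one 4K60 stream fits in TB2's 20 Gbit/s
print(round(uhd60 * 2, 1))  # 23.9 - two do not
print(round(uhd30, 1))      # 6.0 - 4K30 leaves room for Ethernet and HDMI
```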

Top View of the Mac Pro

Seeing as how I have a perfectly good TS3+ dock, and that Apple still sell Thunderbolt 2 cables and a bi-directional Thunderbolt 2 to Thunderbolt 3 adaptor, I’ve connected those to that third Thunderbolt 2 bus, then I drive the third monitor using a DisplayPort to DisplayPort cable from the TS3+ instead. This then allows me to connect anything to the TS3+ that’s USB-C to the Mac Pro and adds a much needed SD Card slot as well.

In order to fit everything on my desk I’ve added a monitor arm on each side for the side monitors which overhang the desk, and placed the Mac Pro behind the gap between the middle and right-hand side monitors. If you need access to the Mac Pro or the TS3+ Dock, simply swing the right hand monitor out of the way.

Moving the Right-hand Side Monitor reveals Mac Pro and TS3+

Since I podcast sometimes, I’ve also attached my Boom Arm behind the Dock and the Mix Pre3 is connected via the powered USB-C output on the TS3+ and it works perfectly. Less interesting are the connections to the hardwired Ethernet, speakers and webcam but that’s pretty much it.

]]>
Technology 2021-07-01T07:00:00+10:00 #TechDistortion
Retro Mac Pro https://techdistortion.com/articles/retro-mac-pro https://techdistortion.com/articles/retro-mac-pro Retro Mac Pro After an extended period of forced work-from-home mandated due to COVID19, I've had a lot of time to think about how best to optimise my home work environment for efficiency. I started with a sit/stand desk and found that connecting my MacBook Pro 13" via a CalDigit TS3+ allowed me to drive two 4K UHD displays at 60Hz, giving me a huge amount of screen real-estate that was very useful for my job.

I retained the ability to disconnect and move into the office should I wish to, though in reality I only spent a total of 37 days physically in the office (not continuously, between various lockdown orders) in the past 12 months. When I was outside the office, I used my laptop occasionally but found the iPad Pro was good enough for most things I wanted to do and its battery life was better, plus I could sign documents - which is a common thing in my line of work.

It wasn't all smooth sailing though. I found that the MBP was actually quite sluggish in the user interface when connected to the 4K screens, and that the internal fans would spin up to maximum all the time, many times without any obvious cause. I started to remove applications that were running in the background like iStat Menus, Dropbox, and a few others, and that helped, but I still noticed it spinning up during Time Machine backups and on Skype, Skype for Business, Microsoft Teams and Zoom calls.

This was a problem since I spent most of my workday on Teams calls and the microphone was picking up the annoying background grind of the cooling fans in the MBP. For this reason I started thinking about how to resolve the two issues: sluggish graphics and running the laptop hot all of the time, without sacrificing screen real-estate in HiDPi (of which I’d become rather dependent).

So I got to thinking: why am I still using a laptop when I'm spending 90% of my time at my home office desk? I wanted to keep using a Mac, and whilst I missed my 2009 Nehalem Mac Pro, I didn't miss how noisy it was, its power drain, the fact it was an effective space-heater all year round, or the fact that it frankly wasn't officially supported by Apple1 anymore anyway.

There are only a few currently supported Macs that can drive the amount of screen real-estate I wanted: the Mac Pros (2013, 2019), the iMac 5K (with discrete graphics) and the iMac Pro. There are, as yet, no M1 (Apple Silicon) Macs that can drive more than one external display. Buying a new Mac was out of the question with my budget strictly set at $1,400 AUD (about $1K USD at time of writing), so it was down to used Macs. The goal was to get a powerful Mac that I could extend and upgrade as funds permitted. The more recent iMacs weren't as upgradable, even a used iMac Pro was out of my budget, and I wouldn't find a used 2019 Mac Pro since they're too new and would be too expensive anyway.

So call me crazy if you like, but I invested in a used 2013 Mac Pro - a Retro Mac Pro, if you like. It had spent its life in an office environment, and for the past two years lay unused in a corner after its previous user left the company; they'd long since switched to Mac Minis. It had a damaged power cable, no box and no manuals, and apart from some dust it was in excellent condition.

I've now had it for just under a week and I'm loving it! It's the original entry-level model with twin FirePro D300s, a 3.7GHz Quad-core Intel Xeon E5 with 16GB DDR3 RAM and a basic 256GB SSD. I can upgrade the SSD with a Sintech adaptor and a 2TB NVMe stick for $340 AUD, and go to 64GB RAM for about $390 AUD, but I'm in no hurry for the moment.

Admittedly the Mac Pro can only drive two of the 4k UHD screens at 60Hz with the third only at 30Hz but that amount of high-DPI screen real-estate is exactly what I’m looking for. Dragging a window between the 60Hz and 30Hz screens is a bit weird, but I have my oldest, cheapest 4K monitor as my static/cross-reference/parking screen anyway so that’s a limitation I can live with.

Yes, I could have built a Hackintosh.

Yes, I could run Windows on any old PC.

I wanted a currently supported Mac.

For those thinking, "But John, there's Apple Silicon Macs with multi-display support just around the corner" - well yes, that's probably true. But I know Apple: they will leave multi-UHD monitor support for their highest-end products, which will still cost the Earth. You might ALSO say, "But John, Intel Macs are about to die, melt, burn and become the neglected step-son that was once the golden-haired child of the Apple family" and that's true too, but I can still run Linux/Windows/ANYTHING on this thing for a decade to come, long after macOS ceases to be officially supported. That said, given that you can still apply hacks to the 2009 Mac Pro and run Big Sur, it's likely the 2013 Mac Pro will be running a slightly crippled but functional macOS for a long time yet, or at least until Apple drops Intel support from Apple Silicon-era features, but that's another story.

And you might also think, “John, why the hell would you buy a Mac that’s had so many reliability problems?” Well I did a lot of research given the Mac Pro 2013’s reputation, and based on what I found the original D300 model was relatively fine with very few issues. The D500 and D700 models had significantly worse reliability as they ran hotter (they were more powerful) and due to the thermal corner Apple built themselves into with the Mac Pro design at that point, ended up being unreliable with prolonged usage, due to excessive heat.

I can report the Mac Pro runs the two primary screens buttery smooth, is effectively silent and doesn't ever break a sweat. Being a geek, however, subjective measurements aren't enough. The following GeekBench 5 scores are provided for comparison:

Metric Mac Pro Score MacBook Pro Score % Difference
CPU Single-Core 837 1,026 - 22.5%
CPU Multi-Core 3,374 3,862 - 14.4%
OpenCL 20,482 / 21,366 8,539 + 239%
Metal 23,165 / 23,758 7,883 + 293%
Disk Read (MB/s) 926 2,412 - 260%
Disk Write (MB/s) 775 2,039 - 263%

By all the measurements above my MacBook Pro should be the better machine, and you'd hope so, being 5 years newer than the Mac Pro 2013. My usage to date hasn't shown that - almost the opposite. For my use case, where screen real-estate matters most, the graphics power of a discrete FirePro is far more valuable than a significantly faster SSD. Not only that, but with the same amount of RAM you'd think the MacBook Pro would perform as well; however it uses an integrated graphics chipset, so sharing that RAM while driving two 4K screens was killing its performance, whereas the Mac Pro doesn't sacrifice any of its RAM and maintains full performance even when driving those screens.

I don't often encode video in Handbrake (or audio) anymore, but when I do, the Mac Pro isn't quite as fast as the MacBook Pro - though it's pretty close, and certainly good enough for me. The interesting and surprising thing to note is that a 7 year old desktop machine was a better fit for my needs at the price than any current model on offer from Apple.

I’m looking forward to many years of use out of a stable desktop machine, noting that whilst my use-case was a bit niche, it’s been an effective choice for me.


  1. An officially supported Mac is one where Apple releases an Operating System version that will install without modification on that model of Mac. ↩︎

]]>
Technology 2021-07-01T06:00:00+10:00 #TechDistortion
Podcasting 2.0 Phase 3 Tags https://techdistortion.com/articles/podcasting-2-0-phase-3-tags https://techdistortion.com/articles/podcasting-2-0-phase-3-tags Podcasting 2.0 Phase 3 Tags I’ve been keeping a close eye on Podcasting 2.0 and a few weeks ago they finalised their Phase 3 tags. As I last wrote about this in December 2020, I thought I’d quickly update on thoughts on each of the Phase 3 tags:

  • < podcast:trailer > A compact and more flexible version of the existing iTunes < itunes:episodeType >trailer< /itunes:episodeType > tag. The Apple spec isn't supported outside of Apple, and more importantly it allows only one trailer per podcast, whereas the PC2.0 tag allows multiple trailers, and trailers per season if desired. It's also more economical than the Apple equivalent, as it acts as an enclosure tag rather than requiring an entire RSS Item as the Apple spec does.
  • < podcast:license > Used to specify the licence terms of the podcast content, either by show or by episode, relative to the SPDX definitions.
  • < podcast:alternateEnclosure > With this it’s possible to have more than one audio/video enclosure specified for each episode. You could use this for different audio encoding bitrates and video if you want to.
  • < podcast:guid > Rather than using the Apple GUID guideline, PC2.0 suggests using a UUIDv5 with the RSS feed URL as the seed value.
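
As a sketch of what that looks like in practice, Python's uuid module can generate the UUIDv5 directly. The namespace UUID and the strip-the-scheme normalisation below reflect my reading of the PC2.0 guid proposal at the time of writing, so verify them against the namespace documentation before relying on this:

```python
import uuid

# Namespace and normalisation are my reading of the PC2.0 proposal at the
# time of writing - verify against the namespace docs before relying on them.
PODCAST_NAMESPACE = uuid.UUID("ead4c236-bf58-58c6-a2c6-a6b28d128cb6")

def podcast_guid(feed_url: str) -> str:
    """UUIDv5 of the feed URL with the scheme and trailing slashes stripped."""
    normalised = feed_url.split("://", 1)[-1].rstrip("/")
    return str(uuid.uuid5(PODCAST_NAMESPACE, normalised))

# Deterministic: the same feed always yields the same GUID,
# regardless of scheme or trailing slash.
print(podcast_guid("https://techdistortion.com/feeds/causality.xml"))
```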

In terms of TEN, I’m intending to add Trailer in future and I’m considering Licence as well, but beyond that probably not much else for the moment. I don’t see that GUID adds much for my use case over my existing setup (using the CDATA URL at time of publishing) and since my publicly available MP3s are already 64kbps Mono, Alternate Enclosure for low bitrate isn’t going to add any value to anyone in the world. I did consider linking to the YouTube videos of episodes where they exist however I don’t see this as beneficial in my use case either. In future I could explore an IPFS stored MP3 audio option for resiliency, however this would only make sense if this became more widely supported by client applications.

It’s good to see things moving forward and whilst I’m aware that the Value tag is being enhanced iteratively, I’m hopeful that this can incorporate client-value and extend the current lightning keysend protocol options to include details where supporters can flag “who” the streamed sats came from (if they choose to). It’s true that customKey/Value exist however they’re intentionally generic for the moment.

Of course, it’s a work in progress and it’s amazing that it works so well already, but I’m also aware that KeySend as it exists today, might be deprecated by the AMP aka Atomic-Multipath Payment protocol, so there may be some potential tweaks yet to come.

It’s great to see the namespace incorporating more tags over time and I’m hopeful that more client applications can start supporting them as well in future.

]]>
Podcasting 2021-06-13T16:30:00+10:00 #TechDistortion
Pushover and PodPing from RSS https://techdistortion.com/articles/pushover-and-podping-from-rss https://techdistortion.com/articles/pushover-and-podping-from-rss Pushover and PodPing from RSS In my efforts to support the Podcasting 2.0 initiative, I thought I should see how easy it was to incorporate their new PodPing concept, which is effectively a distributed RSS notification system specifically tailored for podcasts. The idea is that when a new episode goes live you notify the PodPing server, which adds that notification to the distributed Hive blockchain; any app watching the blockchain can then trigger the download of the new episode in the podcast client.

This has come predominantly from their attempts to leverage existing technology in WebSub; however when I tried the WebSub angle a few months ago the results were very disappointing, with many minutes or even hours passing before a notification was seen, and in some cases it wasn't seen at all.

I leveraged parts of an existing Python script I’ve been using for years for my RSS social media poster, but stripped it down to the bare minimum. It consists of two files, checkfeeds.py (which just creates an instance of the RssChecker class) and then the actual code is in rss.py.

The beauty of this approach is that it will work on ANY site's RSS feed. Ideally, if you have a dynamic system, you could trigger the GET request on an episode-posting event; however since my sites are statically generated and posts are created ahead of time (and hence don't appear until a site rebuild occurs after a post is set to go live), it's problematic to create a trigger from the static site generator.

Whilst I’m an Electrical Engineer, I consider myself a software developer of many different languages and platforms, but for Python I see myself more of a hacker and a slasher. Yes, there are better ways of doing this. Yes, I know already. Thanks in advance for keeping that to yourself.

Both are below for your interest/re-use or otherwise:

from rss import RssChecker

rssobject=RssChecker()

checkfeeds.py

CACHE_FILE = '<Cache File Here>'
CACHE_FILE_LENGTH = 10000
POPULATE_CACHE = 0
RSS_URLS = ["https://RSS FEED URL 1/index.xml", "https://RSS FEED URL 2/index.xml"]
TEST_MODE = 0
PUSHOVER_ENABLE = 0
PUSHOVER_USER_TOKEN = "<TOKEN HERE>"
PUSHOVER_API_TOKEN = "<TOKEN HERE>"
PODPING_ENABLE = 0
PODPING_AUTH_TOKEN = "<TOKEN HERE>"
PODPING_USER_AGENT = "<USER AGENT HERE>"

from collections import deque
import feedparser
import os
import os.path
import pycurl
import json
from io import BytesIO

class RssChecker():
    feedurl = ""

    def __init__(self):
        '''Initialise'''
        self.feedurl = RSS_URLS
        self.main()
        self.parse()
        self.close()

    def getdeque(self):
        '''return the deque'''
        return self.dbfeed

    def main(self):
        '''Main of the FeedCache class'''
        if os.path.exists(CACHE_FILE):
            with open(CACHE_FILE) as dbdsc:
                dbfromfile = dbdsc.readlines()
            dblist = [i.strip() for i in dbfromfile]
            self.dbfeed = deque(dblist, CACHE_FILE_LENGTH)
        else:
            self.dbfeed = deque([], CACHE_FILE_LENGTH)

    def append(self, rssid):
        '''Append a rss id to the cache'''
        self.dbfeed.append(rssid)

    def clear(self):
        '''Clear the cache'''
        self.dbfeed.clear()

    def close(self):
        '''Close the cache'''
        with open(CACHE_FILE, 'w') as dbdsc:
            dbdsc.writelines((''.join([i, os.linesep]) for i in self.dbfeed))

    def parse(self):
        '''Parse the Feed(s)'''
        if POPULATE_CACHE:
            self.clear()
        for currentfeedurl in self.feedurl:
            currentfeed = feedparser.parse(currentfeedurl)

            if POPULATE_CACHE:
                for thefeedentry in currentfeed.entries:
                    self.append(thefeedentry.get("guid", ""))
            else:
                for thefeedentry in currentfeed.entries:
                    if thefeedentry.get("guid", "") not in self.getdeque():
#                        print("Not Found in Cache: " + thefeedentry.get("title", ""))
                        if PUSHOVER_ENABLE:
                            crl = pycurl.Curl()
                            crl.setopt(crl.URL, 'https://api.pushover.net/1/messages.json')
                            crl.setopt(pycurl.HTTPHEADER, ['Content-Type: application/json' , 'Accept: application/json'])
                            data = json.dumps({"token": PUSHOVER_API_TOKEN, "user": PUSHOVER_USER_TOKEN, "title": "RSS Notifier", "message": thefeedentry.get("title", "") + " Now Live"})
                            crl.setopt(pycurl.POST, 1)
                            crl.setopt(pycurl.POSTFIELDS, data)
                            crl.perform()
                            crl.close()

                        if PODPING_ENABLE:
                            crl2 = pycurl.Curl()
                            crl2.setopt(crl2.URL, 'https://podping.cloud/?url=' + currentfeedurl)
                            crl2.setopt(pycurl.HTTPHEADER, ['Authorization: ' + PODPING_AUTH_TOKEN, 'User-Agent: ' + PODPING_USER_AGENT])
                            crl2.perform()
                            crl2.close()

                        if not TEST_MODE:
                            self.append(thefeedentry.get("guid", ""))

rss.py

The basic idea is:

  1. Create a cache file that keeps a list of all of the RSS entries you already have and are already live
  2. Connect up PushOver (if you want push notifications, or you could add your own if you like)
  3. Connect up PodPing (ask @dave@podcastindex.social or @brianoflondon@podcastindex.social for a posting API TOKEN)
  4. Set it up as a repeating task on your device of choice (preferably a server, but should work on a Synology, a Raspberry Pi or a VPS)

VPS

I built this initially on my Macbook Pro using the Homebrew installed Python 3 development environment, then installed the same on a CentOS7 VPS I have running as my Origin web server. Assuming you already have Python 3 installed, I added the following so I could use pycurl:

yum install -y openssl-devel
yum install python3-devel
yum group install "Development Tools"
yum install libcurl-devel
python3 -m pip install wheel
python3 -m pip install --compile --install-option="--with-openssl" pycurl

Whether you like pycurl or not, there are obviously other options, but I stick with what works. Rather than refactor for a different library, I just jumped through some extra hoops to get pycurl running.

Finally I bridge checkfeeds.py with a simple bash script wrapper and call it from a CRON job every 10 minutes.

Job done.

Enjoy.

]]>
Technology 2021-05-25T08:00:00+10:00 #TechDistortion
Fun With Apple Podcasts Connect https://techdistortion.com/articles/fun-with-apple-podcasts-connect https://techdistortion.com/articles/fun-with-apple-podcasts-connect Fun With Apple Podcasts Connect Apple Podcasts will shortly open to the public but for podcasters like me, we’ve been having fun with Apple’s first major update to their podcasting backend in several years, and it hasn’t really been that much fun. Before talking about why I’m putting so much time and effort into this at all, I’ll go through the highlights of my experiences to date.

Fun Times at the Podcasts Connect Mk2

Previously I'd used the Patreon/Breaker integration, but that fell apart when Breaker was acquired by Twitter; the truth was that very, very few Patrons utilised the feature, and the Breaker app was never big enough to attract any new subscribers. The Breaker audio integration and content has since been removed - even though the service was taken over (to an extent) - as it was one less thing for me to upload content to. In a way… this has been a bit déjà-vu and "here we go again…" 1

The back-catalogue of ad-free episodes as well as bonus content between Sleep, Pragmatic, Analytical and Causality adds up to 144 individual episodes.

For practically every one I had the original project files, which I restored and re-exported in WAV format, then uploaded via Apple Podcasts' updated interface. (The format must be WAV or FLAC, and Stereo - which is funny for a Mono podcast like mine - and it added up to about 50GB of audio.) It's straightforward enough, although there were a few annoying glitches that were still unresolved after 10 days of use. These were the key issues I encountered (there were others, but some were resolved at the time of writing so I've excluded them):

  1. Ratings and Reviews made a brief appearance then disappeared and still haven’t come back (I’m sure they will at some point)
  2. Not all show analytics time spans work (Past 60 days still doesn’t work, everything is blank)
  3. Archived shows in the Podcast-drop-down list appear but don’t in the main overview even when displaying ‘All’
  4. The order in which you save meta-data and upload audio files changes the episode date: if you create the episode meta-data, set the date, then upload the audio, the episode date defaults to today’s date. It only does this AFTER you leave the page, so it’s not obvious; if you upload the audio THEN set the date, it’s fine.
  5. The audio upload hit/miss ratio for me was about 8 out of 10: for every 10 episodes I uploaded, 2 got stuck. What do I mean? The episode WAV file uploads, completes, and then the page shows the following:

Initial WAV Upload Attempt

…and the “Processing Audio” never actually finishes. Hoping this was just a backlog issue from high user demand, I uploaded everything and came back minutes, hours, then days later; finally, after waiting five days, I set about trying to unstick it.

Can’t Publish! After five days of waiting and seeing this, I gave up hoping it would resolve itself…

The obvious thing to try: select “Edit”, delete, then re-upload the audio. Simple enough, and it keeps the meta-data intact (except the date, which I had to re-save after every audio re-upload). Then I waited another few days. Same result. Okay, so that didn’t work at all.

The next thing to try: re-create the entire episode from scratch! So I did that for the 30 episodes that were stuck. Finally I saw this (in some cases up to an hour later):

Blitz

And sure enough…

Blitz

Of course, that only worked for 25 of the 30 episodes I uploaded a second time. I then had to wash-rinse-repeat for the 5 that failed a second time, repeating until they all worked. I’d hate to think about doing this on a low-bandwidth connection like I had a decade ago; even at 40Mbps up, the 2GB+ episodes of Pragmatic took a long time. The entire exercise has probably taken me 4 work-days of effort end to end, or about 32 hours of my life. There’s no way to delete the stuck episodes either, so I now have a small collection of “Archived” non-episodes. Oh well…

Why John…Why?

I’ve read a lot of differing opinions from podcasters about Apple’s latest move, and frankly I think the people most dismissive are those with significant existing revenue streams for their shows, or those that have already made their money and don’t need or want income from their show(s). Yes, you can reduce fees by using Stripe and your own website integration, by using Memberful or Patreon, or more recently by streaming Satoshis (very cool BTW), but all of these have barriers to entry for the podcast creator that can not be ignored.

For me, I’m a geek and I love that stuff so sure, I’ll have a crack at that (looks over at the Raspberry Pi Lightning Node on desk with a nod) but not everyone is like me (probably a good thing on balance).

So far as I can tell, Apple Podcasts is currently the most fee-expensive way for podcasters to get support from listeners. It’s also a walled garden2, but then so are Patreon, Spotify/Anchor (if you’re eligible, and I’m not…for now) and Breaker, while building your own system with Memberful or Stripe website integration requires developer chops most don’t have, so it isn’t an option for many. By far the cheapest (once you figure out BitCoin/Lightning and set up your own Node) is actually streaming Sats, but that knowledge ramp is tough and lots of people HATE BitCoin. (That’s another, more controversial story.)

Apple Podcasts has one thing going for it: It’s going to be the quickest, easiest way for someone to support your show coupled with the biggest audience in a single Podcasting ecosystem. You can’t and shouldn’t ignore that, and that’s why I’m giving this a chance. The same risks apply to Apple as to all the other walled gardens (Patreon, Breaker, Spotify/Anchor etc): you could be kicked-off the platform, they could stop supporting their platform slowly, sell it off or shut it down entirely and if any of that happens, your supporters will mostly disappear with it. That’s why no-one should rely on it as the sole pathway for support.

It’s about being present and reassessing after 6-12 months. If you’re not in it, you might miss out on supporters who love your work and for whom this is the only way they’re comfortable supporting it. So I’m giving this a shot: when it launches for Beta testing I’ll be looking for any fans that want to give it a try so I can tweak anything that needs tweaking, and I’ll post publicly when it goes live for all. Hopefully all of my efforts (and Apple’s) are worth it for all concerned.

Time will tell. (It always does)


  1. Realistically, if every podcasting walled garden offers something like this (as Breaker did and Spotify is about to), then at some point podcasters have to draw a line of effort vs reward. Right now I’m uploading files to two places, and with Apple that will be a third. If I add Spotify, Facebook and Breaker then I’m at triple my current effort, supporting 5 walled gardens. If a platform isn’t popular, it’s not going to be worth that effort. Apple is worth considering because its platform is significant. The same won’t always be true for the “next walled garden”, whatever that may be. ↩︎

  2. To be crystal clear, I love walled gardens as in actual GARDENS, but I don’t mean those ones, I mean closed ecosystems aka ‘walled gardens’, before you say that. Actually no geek thought that, that’s just my sense of humour. Alas. ↩︎

]]>
Technology 2021-04-30T20:00:00+10:00 #TechDistortion
Causality Transcriptions https://techdistortion.com/articles/causality-transcriptions https://techdistortion.com/articles/causality-transcriptions Causality Transcriptions Spurred on by Podcasting 2.0 and reflecting on my previous attempt at transcriptions, I thought it was time to have another crack at this. The initial attempts were basic TXT files that weren’t time-synced or proofed, made using a very old version of Dragon Dictate I had lying around.

This time around my focus is on making Causality as good as it possibly can be. From the PC2.0 guidelines:

SRT: The SRT format was designed for video captions but provides a suitable solution for podcast transcripts. The SRT format contains medium-fidelity timestamps and is a popular export option from transcription services. SRT transcripts used for podcasts should adhere to the following specifications.

Properties:

  • Max number of lines: 2
  • Max characters per line: 32
  • Speaker names (optional): Start a new card when the speaker changes. Include the speaker’s name, followed by a colon.

This closely matches the defaults I found using Otter.ai, but Otter isn’t free if you want time-synced SRT files. So my workflow uses YouTube (for something useful)…

STEPS:

  1. Upload episode directly converted from the original public audio file to YouTube as a Video (I use Ferrite to create a video export). Previously I was using LibSyn as part of their YouTube destination which also works.
  2. Wait a while - it can take anywhere from a few minutes to a few hours. Then go to your YouTube Studio, pick an episode, open Video Details, and under the section “Language, subtitles, and closed captions” select “English by YouTube (automatic)”, then the three vertical dots, then “Download” (SEE NOTE BELOW). Alternatively select Subtitles and, next to DUPLICATE AND EDIT, select the three dots, then Download, then .srt
  3. If you can only get the SBV file: open untitled.sbv in a raw text editor, select all, copy and paste it into DCMP’s website, click Convert, select all of the result, then create a new blank file untitled.srt and paste in the converted text.
  4. If you now have the SRT but don’t have the source video (e.g. mine were created by LibSyn automatically, so I didn’t have local copies), download the converted YouTube video using the embed link for the episode via SaveFrom, or use a YouTube downloader if you prefer.
  5. Download the Video in low-res and put all into a single directory.
  6. I’m using Subtitle Studio and it’s not free but it was the easiest for me to get my head around and it works for me. Open the SRT file just created/downloaded then drag the video for the episode in question onto the new window.
  7. Visually skim and fix obvious errors before you press play (Title Case, ends of Sentences, words for numbers, MY NAME!)
  8. Export the SRT file and add to the website and RSS Feed!
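As an aside, the SBV to SRT conversion in step 3 can also be done offline. The following is a minimal sketch (the sbv2srt function name is mine; it assumes the standard SBV layout of a timestamp line followed by caption text and a blank line between cues):

```shell
# sbv2srt: convert YouTube .sbv captions (stdin) to .srt (stdout).
# A sketch, not a replacement for proofing the result afterwards.
sbv2srt() {
  awk '
  /^[0-9]+:[0-9][0-9]:[0-9][0-9]\.[0-9]+,/ {
    n++
    split($0, t, ",")
    printf "%d\n%s --> %s\n", n, fix(t[1]), fix(t[2])
    next
  }
  { print }                                 # caption text and blank separators pass through
  function fix(ts,  p) {
    split(ts, p, ":")
    sub(/\./, ",", p[3])                    # SRT uses a comma before milliseconds
    return sprintf("%02d:%s:%s", p[1], p[2], p[3])
  }
  '
}

# Usage: sbv2srt < untitled.sbv > untitled.srt
```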

NOTE: In 1 case out of 46 uploads it thought I was speaking in Russian for some reason? The automatic transcription into Russian was funny but not useful; for all the others it correctly transcribed the audio into English, and the quality of the conversion is quite good.

I’ve also flattened the SRT into a fixed Text file, which is useful for full text search. The process for that takes me three steps:

  1. Upload the file to Happy Scribe and select “Text File” as the output format.
  2. Open the downloaded file in a text editor, select all the text and then go to Tool Slick’s line merge tool, pasting the text into the Input Text box, then “Join Lines” and select all of the Output Joined Lines box and paste over what you had in your local text file.
  3. Rename the file and add to the website and RSS Feed!
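If you’d rather keep that flattening step local too, the same result can be approximated with a short script; a sketch (the srt2txt name is mine), assuming well-formed SRT input:

```shell
# srt2txt: flatten an .srt file (stdin) into a single block of searchable
# text (stdout) by dropping cue numbers, timestamps and blank separators.
srt2txt() {
  awk '
  /^[0-9]+$/ { next }                   # drop cue numbers
  /-->/      { next }                   # drop timestamp lines
  /^$/       { next }                   # drop blank separators
  { printf "%s%s", sep, $0; sep = " " } # join remaining caption lines with spaces
  END { print "" }
  '
}

# Usage: srt2txt < causality-episode.srt > causality-episode.txt
```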

As of publishing I’ve only done the sub-titles in SRT and TXT formats of two episodes, but I will continue to churn my way through them as time permits until they’re all done.

Of course you could save yourself a bit of effort and use Otter, and save yourself even more effort by not proof-reading the automatically converted text. If I wasn’t such a stickler for detail I’d probably do that myself, but it’s that refusal to just accept it that makes me the Engineer I am, I suppose.

Enjoy!

]]>
Podcasting 2021-03-30T06:00:00+10:00 #TechDistortion
Building A Synology Hugo Builder https://techdistortion.com/articles/building-a-synology-hugo-builder https://techdistortion.com/articles/building-a-synology-hugo-builder Building A Synology Hugo Builder I’ve been using GoHugo (Hugo) as a static site generator on all of my sites for about three years now and I love its speed and flexibility. That said, a recent policy change at a VPS host had me reassessing my options, and now that I have my own Synology with Docker capability I was looking for a way to go ultra-slim and run my own builder, using a lightweight (read: VERY low spec) OpenVZ VPS as the Nginx front-end web server behind a CDN like CloudFlare. Previously I’d used Netlify, but their rebuild limitations on the free tier were getting a touch much.

I regularly create content that I want to release automatically at a set time and date in the future. To accomplish this, Hugo needs to rebuild the site periodically in the background so that when new pages are ready to go live they are automatically built and available to the world. When I’m debugging or writing articles I’ll run the local environment on my MacBook Pro, and only when I’m happy with the final result will I push to the Git repo. Hence I need a set-and-forget automatic build environment. I’ve done this on spare machines (of which I currently have none), on a beefier VPS using CronJobs and scripts, and on my Synology as a virtual machine using the same (which wasn’t reliable), before settling on this design.

Requirements

The VPS needed to be capable of serving Nginx from folders that are RSync’d from the Synology. I searched through LowEnd Stock looking for deals with 256MB of RAM and SSD storage at a cheap annual rate, and at the time got the “Special Mini Sailor OpenVZ SSD” for $6 USD/yr, which had that amount of RAM and 10GB of SSD space, running CentOS7. (Note: these have sold out, but there are plenty of others around that price range at the time of writing.)

Setting up the RSync, NGinx, SSH etc is beyond the scope of this article however it is relatively straight-forward. Some guides here might be helpful if you’re interested.

My sites are controlled via a Git workflow, which is quite common for website management of static sites and in my case I’ve used GitHub, GitLab and most recently settled on the lightweight and solid Gitea which I also self-host now on my Synology. Any of the above would work fine but having them on the same device makes the Git Clone very fast but you can adjust that step if you’re using an external hosting platform.

I also had three sites I wanted to build from the same platform. The requirements roughly were:

  • Must stay within Synology DSM Docker environment (no hacking, no portainer which means DroneCI is out)
  • Must use all self-hosted, owned docker/system environment
  • A single docker image to build multiple websites
  • Support error logging and notifications on build errors
  • Must be lightweight
  • Must be an updated/recent/current docker image of Hugo

The Docker Image And Folders

I struggled for a while with different images because I needed one that included RSync, Git and Hugo, and that allowed me to modify the startup script. Some of the Hugo build Docker images out there are quite locked in to a set workflow, like running up a local server to serve from memory, or they assume you have a single website. The XdevBase / HugoBuilder was perfect for what I needed. Preinstalled it has:

  • rsync
  • git
  • Hugo (Obviously)

Search for “xdevbase” in the Docker Registry and you should find it. Select it and Download the latest - at time of writing it’s very lightweight only taking up 84MB.

XDevBase

After this, open “File Station” and start building the supporting folder structure you’ll need. I had three websites: TechDistortion, The Engineered Network and SlipApps, hence I created three folders. First, under the Docker folder (which you should already have if you’ve played with Synology Docker before), create a sub-folder for Hugo - I imaginatively called mine “gohugo” - then under that create a sub-folder for each site plus one for your logs.

Folders

Under each website folder I also created two more folders: “src” for the website source I’ll be checking out of Gitea, and “output” for the final publicly generated Hugo website output from the generator.

Scripts

I spent a fair amount of time perfecting the scripts below. The idea is to have an over-arching script that builds each site one after the other in a never-ending loop, with a mandatory wait time between loops. If you attempt to run independent Docker containers each on its own timer, sooner or later two or three of them will overlap with each other or with another task on the Synology, leading to an overload condition the Synology will not recover from. The only viable option is to serialise the builds, and synchronising them is easiest with a single Docker container like this one.

Using the “Text Editor” on the Synology or using your text editor of choice and copying the files across to the correct folder, create a main build.sh file and as many build-xyz.sh files as you have sites you want to build.

#!/bin/sh
# Main build.sh

# Stash the current time and date in the log file and note the start of the docker
current_time=$(date)
echo "$current_time :: GoHugo Docker Startup" >> /root/logs/main-build-log.txt

while :
do
	current_time=$(date)
	echo "$current_time :: TEN Build Called" >> /root/logs/main-build-log.txt
	/root/build-ten.sh
	current_time=$(date)
	echo "$current_time :: TEN Build Complete, Sleeping" >> /root/logs/main-build-log.txt
	sleep 5m

	current_time=$(date)
	echo "$current_time :: TD Build Called" >> /root/logs/main-build-log.txt
	/root/build-td.sh
	current_time=$(date)
	echo "$current_time :: TD Build Complete, Sleeping" >> /root/logs/main-build-log.txt
	sleep 5m

	current_time=$(date)
	echo "$current_time :: SLIP Build Called" >> /root/logs/main-build-log.txt
	/root/build-slip.sh
	current_time=$(date)
	echo "$current_time :: SLIP Build Complete, Sleeping" >> /root/logs/main-build-log.txt
	sleep 5m
done

current_time=$(date)
echo "$current_time :: GoHugo Docker Build Loop Ungraceful Exit" >> /root/logs/main-build-log.txt
curl -s -F "token=xxxthisisatokenxxx" -F "user=xxxthisisauserxxx1" -F "title=Hugo Site Builds" -F "message=\"Ungraceful Exit from Build Loop\"" https://api.pushover.net/1/messages.json

# When debugging is handy to jump out into the Shell, but once it's working okay, comment this out:
#sh

This will create a main build log file and calls each sub-script in sequence. If it ever jumps out of the loop, I’ve set up a Pushover API notification to let me know.

Since all three sub-scripts are effectively identical except for the directories and repositories for each, The Engineered Network script follows:

#!/bin/sh

# BUILD The Engineered Network website: build-ten.sh
# Set Time Stamp of this build
current_time=$(date)
echo "$current_time :: TEN Build Started" >> /root/logs/ten-build-log.txt

rm -rf /ten/src/* /ten/src/.* 2> /dev/null
current_time=$(date)
if [[ -z "$(ls -A /ten/src)" ]];
then
	echo "$current_time :: Repository (TEN) successfully cleared." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Repository (TEN) not cleared." >> /root/logs/ten-build-log.txt
fi

# The following is easy since my Gitea repos are on the same device. You could also set this up to Clone from an external repo.
git --git-dir /ten/src/ clone /repos/engineered.git /ten/src/ --quiet
success=$?
current_time=$(date)
if [[ $success -eq 0 ]];
then
	echo "$current_time :: Repository (TEN) successfully cloned." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Repository (TEN) not cloned." >> /root/logs/ten-build-log.txt
fi

rm -rf /ten/output/* /ten/output/.* 2> /dev/null
current_time=$(date)
if [[ -z "$(ls -A /ten/output)" ]];
then
	echo "$current_time :: Site (TEN) successfully cleared." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Site (TEN) not cleared." >> /root/logs/ten-build-log.txt
fi

hugo -s /ten/src/ -d /ten/output/ -b "https://engineered.network" --quiet
success=$?
current_time=$(date)
if [[ $success -eq 0 ]];
then
	echo "$current_time :: Site (TEN) successfully generated." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Site (TEN) not generated." >> /root/logs/ten-build-log.txt
fi

rsync -arvz --quiet -e 'ssh -p 22' --delete /ten/output/ bobtheuser@myhostsailorvps:/var/www/html/engineered
success=$?
current_time=$(date)
if [[ $success -eq 0 ]];
then
	echo "$current_time :: Site (TEN) successfully synchronised." >> /root/logs/ten-build-log.txt
else
	echo "$current_time :: Site (TEN) not synchronised." >> /root/logs/ten-build-log.txt
fi

current_time=$(date)
echo "$current_time :: TEN Build Ended" >> /root/logs/ten-build-log.txt

The above script can be broken down into several steps as follows:

  1. Clear the Hugo Source directory
  2. Pull the current released Source code from the Git repo
  3. Clear the Hugo Output directory
  4. Hugo generate the Output of the website
  5. RSync the output to the remote VPS

Each step has a pass/fail check and logs the result either way.
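Since each of those five steps repeats the same check-and-log pattern, the idea can be distilled into a small helper; this is just a sketch (log_result is my name for it, not something the scripts above actually define):

```shell
# log_result: one possible helper for the repeated pass/fail logging pattern.
# $1 = exit status of the step, $2 = success text, $3 = failure text, $4 = log file
log_result() {
  current_time=$(date)
  if [ "$1" -eq 0 ]; then
    echo "$current_time :: $2" >> "$4"
  else
    echo "$current_time :: $3" >> "$4"
  fi
}

# Example (hypothetical usage mirroring the TEN build script):
# hugo -s /ten/src/ -d /ten/output/ -b "https://engineered.network" --quiet
# log_result $? "Site (TEN) successfully generated." "Site (TEN) not generated." /root/logs/ten-build-log.txt
```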

Your SSH Key

For this to work you need to confirm that RSync works and that you can push to the remote VPS securely. Extract the id_rsa key (preferably generate a fresh key-pair) and place it in the /docker/gohugo/ folder on the Synology, ready for the next step. As they say it should “just work”, but you can test it once your Docker container is running: open the GoHugo docker, go to the Terminal tab, Create–>Launch with command “sh”, then select the “sh” terminal window. In there enter:

ssh bobtheuser@myhostsailorvps -p22

That should log you in without a password, securely via ssh. Once it’s working you can exit that terminal and smile. If not, you’ll need to dig into the SSH keys which is beyond the scope of this article.

Gitea Repo

This is now specific to my use case. You could clone your repo from any other location, but for me it was quicker, easier and simpler to map my repo from the Gitea Docker folder location. If you’re like me and running your own Gitea on the Synology, you’ll find that repo directory under the /docker/gitea sub-directories at …data/git/repositories/ and that’s it. Of course many will not be doing that, and setting up external Git cloning isn’t too difficult, but it’s beyond the scope of this article.

Configuring The Docker Container

Under the Docker –> Image section, select the downloaded image then “Launch” it, set the Container Name to “gohugo” (or whatever name you want…doesn’t matter) then configure the Advanced Settings as follows:

  • Enable auto-restart: Checked
  • Volume: (See below)
  • Network: Leave it as bridge is fine
  • Port Settings: Since I’m using this as a builder I don’t care about web-server functionality so I left this at Auto and never use that feature
  • Links: Leave this empty
  • Environment –> Command: /root/build.sh (Really important: set this start-up command here and now, since thanks to Synology’s DSM Docker implementation you can’t change it after the container has been created without destroying and recreating the entire container!)

There are a lot of little things to add here to make this work for all the sites. In future, if you want to add more sites, stopping the Docker container, adding the folders and modifying the scripts is straight-forward.

Add the following Files: (Where xxx, yyy, zzz are the script names representing your sites we created above, aaa is your local repo folder name)

  • docker/gohugo/build-xxx.sh map to /root/build-xxx.sh (Read-Only)
  • docker/gohugo/build-yyy.sh map to /root/build-yyy.sh (Read-Only)
  • docker/gohugo/build-zzz.sh map to /root/build-zzz.sh (Read-Only)
  • docker/gohugo/build.sh map to /root/build.sh
  • docker/gohugo/id_rsa map to /root/.ssh/id_rsa (Read-Only)
  • docker/gitea/data/git/repositories/aaa map to /repos (Read-Only) Only for a locally hosted Gitea repo

Add the following Folders:

  • docker/gohugo/xxx/output map to /xxx/output
  • docker/gohugo/xxx/src map to /xxx/src
  • docker/gohugo/yyy/output map to /yyy/output
  • docker/gohugo/yyy/src map to /yyy/src
  • docker/gohugo/zzz/output map to /zzz/output
  • docker/gohugo/zzz/src map to /zzz/src
  • docker/gohugo/logs map to /root/logs

When finished and fully built the Volumes will look something like this:

Volumes

Apply the Advanced Settings then Next and select “Run this container after the wizard is finished” then Apply and away we go.

Of course, you can put whatever folder structure and naming you like, but I like keeping my abbreviations consistent and brief for easier coding and fault-finding. Feel free to use artistic licence as you please…

Away We Go!

At this point the Docker container should be periodically regenerating your Hugo websites like clockwork. I’ve had this setup running for many weeks without a single hiccup, and on rebooting it comes back to life, picks up and runs without any issues.

As a final bonus you can also configure the Synology Web Server to point at each Output directory and double-check what’s being posted live if you want to.

Enjoy your automated Hugo build environment that you completely control :)

]]>
Hugo 2021-02-22T06:00:00+10:00 #TechDistortion
Building Your Own Bitcoin Lightning Node Part Two https://techdistortion.com/articles/build-your-own-bitcoin-lightning-node-part-two https://techdistortion.com/articles/build-your-own-bitcoin-lightning-node-part-two Building Your Own Bitcoin Lightning Node Part Two Previously I’ve written about my Synology BitCoin Node Failure, and more recently about my RaspiBlitz that was actually successful. Now I’d like to share how I set it up, with a few things I learned along the way that will hopefully help others avoid the mistakes I made.

Previously I suggested the following:

  • Set up the node to download a fresh copy of the BlockChain
  • Use an External IP, as it’s more compatible than TOR (unless you’re a privacy nut)

Beyond that here’s some more suggestions:

  • If you’re on a home network behind a standard Internet Modem/Router: change the Raspberry Pi to a fixed IP address and set up port forwarding for the services you need (TCP 9735 at a minimum for Lightning)
  • Don’t change the IP from DHCP to Fixed IP until you’ve first enabled and set up your Wireless connection as a backup
  • Sign up for DuckDNS before you add ANY Services (I tried FreeDNS but DuckDNS was the only one I found that supports Let’s Encrypt)

Let’s get started then…

WiFi First

Of course this is optional, but I think it’s worth having even if you’re not intending to pull the physical cable and shove the Pi in a drawer somewhere (please don’t though: it would probably overheat). Go to the Terminal on the Pi and enter the following:

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

Then add the following to the bottom of the file:

network={
    ssid="My WiFi SSID Here"
    psk="My WiFi Password Here"
}

This is the short-summary version of the Pi instructions.

Once this is done you can reboot or enter this to restart the WiFi connection:

sudo wpa_cli -i wlan0 reconfigure

You can confirm it’s connected with:

iwgetid

You should now see:

wlan0     ESSID:"My WiFi SSID Here"

Fixed IP

The Raspberry Pi docs walk through what to change, but I’ll summarise it here. Firstly, if you have a router connecting you to the internet, it’s likely on a standard subnet with a gateway like 192.168.1.1. To be sure, from the Raspberry Pi terminal (after you’ve SSH’d in) type:

route -ne

It should come back with a table showing Destination 0.0.0.0 via a Gateway, most likely something like 192.168.1.1, with Iface (Interface) eth0 for hardwired Ethernet or wlan0 for WiFi. Next type:

cat /etc/resolv.conf

This should list the nameservers you’re using - make a note of these in a text-editor if you like. Then edit your dhcpcd.conf. I use nano but you can use vi or any other linux editor of your choice:

sudo nano /etc/dhcpcd.conf

Add the following (or your equivalent) to the end of the conf: (Where xxx is your Fixed IP)

interface eth0
static ip_address=192.168.1.xxx
static routers=192.168.1.1
static domain_name_servers=192.168.1.1  fe80::9fe9:ecdf:fc7e:ad1f%eth0

Of course, when picking your Fixed IP on the local network, make sure your DHCP allocation leaves a free zone above or below it where static addresses are safe. On my network I only allow DHCP between .20 and .254 of my subnet, so anything below .20 is safe to assign statically, but you can reserve addresses any way you prefer.

Once this is done reboot your Raspberry Pi and confirm you can connect via SSH at the Fixed IP. If you can’t, try the WiFi IP address and check your settings. If you still can’t, oh dear you’ll need to reflash your SD card and start over. (If that happens don’t worry, your Blockchain on the SSD will not be lost)

Dynamic DNS

If you’re like me, you’re running this on your home network with a “normal” internet plan from an ISP that charges more for a Fixed IP on the Internet, and hence you’ve got to deal with a public-facing Dynamic IP address. #Alas

There are many Dynamic DNS providers out there, but finding one that works reliably and automatically with Let’s Encrypt isn’t easy. Of course, if you’re not intending to use public-facing utilities that need a TLS certificate like I am (for Sphinx), then you probably don’t need to worry about this step, or at least any Dynamic DNS provider would be fine. For me, this was necessary to get Sphinx working properly.

DuckDNS allows you to sign in with credentials ranging from Persona, to Twitter, GitHub, Reddit and Google: pick whichever you have or whichever you prefer. Once logged in you can create a subdomain and add up to 5 in total. Take note of your Token and your subdomain.

In the RaspiBlitz menu go to SUBSCRIBE and select NEW2 (LetsEncrypt HTTPS Domain [free] not under Settings!) then enter the above information as requested. When it comes to the Update URL leave this blank. The Blitz will reboot and hopefully everything should just work. When you’re done the Domain will then appear on the LCD of your Blitz at the top.

You won’t know if your certificates are correctly issued until later or if you want you can dive into the terminal again and manually check, but that’s your call.
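If you ever want to force a record update outside the RaspiBlitz menus, DuckDNS’s update API is a single HTTP GET. A sketch, with a placeholder subdomain and token (the curl line is left commented so you only run it deliberately):

```shell
# Manually refresh a DuckDNS record. Leaving ip= empty tells DuckDNS to use
# the public IP the request arrives from. DOMAIN and TOKEN are placeholders.
DOMAIN="mysubdomain"          # your DuckDNS subdomain (without .duckdns.org)
TOKEN="xxxthisisatokenxxx"    # the token shown on your DuckDNS account page
URL="https://www.duckdns.org/update?domains=${DOMAIN}&token=${TOKEN}&ip="
echo "$URL"
# curl -s "$URL"              # DuckDNS replies "OK" on success, "KO" on a bad token/domain
```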

Port Forwarding Warning

Personally I only Port Forward the following, which I believe is the minimum required to get the Node and Sphinx Relay working properly:

  • TCP 9735 (Lightning)
  • TCP 3300 & 3301 (Sphinx Relay)
  • TCP 8080 (Let’s Encrypt)

I think there’s an incremental risk in forwarding a lot of other services - particularly those that allow administration of your Node and Wallet. I also use an OpenVPN connection to my household network with a different endpoint, and I use the Web UIs and the Zap application on my iPhone for interacting with my Node. Even with a TLS certificate and a password per application, I don’t think opening things wide open is a good idea. You may weigh that convenience differently, so make your own decisions in this regard.

Okay…now what?

As a podcaster and casual user of your Lightning Node, not everything in the Settings and Services is of interest. For me I’ve enabled the following that are important for use and monitoring:

  • (SETTINGS) LND Auto-Unlock
  • (SERVICES) Accept KeySend
  • (SERVICES) RTL Web interface
  • (SERVICES) ThunderHub
  • (SERVICES) BTC-RPC-Explorer
  • (SERVICES) Lightning Loop
  • (SERVICES) Sphinx-Relay

Each in turn…

LND Auto-Unlock

In Lightning’s LND implementation, the Wallet with your coinage in it is automatically locked when you restart your system. If you’re comfortable auto-unlocking your wallet on reboot without explicitly entering your Wallet password, this feature means recovery from a reboot or power failure will be that little bit quicker and easier. That said, for the privacy nuts, storing your wallet password on the device is probably not the best idea. I’ll let you balance convenience against security for yourself.

Accept KeySend

One of the more recent additions to the Lightning standard in mid-2020 was KeySend. This feature allows anyone to send an open Invoice to any Node that supports it, from any Node that supports it. With the Podcasting 2.0 model, the key is using KeySend to stream Sats to your nominated Node either per minute listened or as one-off Boost payments showing appreciation on behalf of the listener. For me this was the whole point, but for some maybe they might not be comfortable accepting payments from random people at random times of the day. Who can say?

RTL Web interface

The Ride The Lightning web interface is a basic but handy web UI for looking at your Wallet, your channels and to create and receive Invoices. I enabled this because it was more light-weight than ThunderHub but as I’ve learned more about BitCoin and Lightning, I must confess I rarely use it now and prefer ThunderHub. It’s a great place to start though and handy to have.

ThunderHub

By far the most detailed and extensive UI I’ve found yet for the LND implementation, ThunderHub allows everything that RTL’s UI does plus channel rebalancing, Statistics, Swaps and Reporting. It’s become my go to UI for interacting with my Node.

BTC-RPC-Explorer

I only recently added this because I was sick of going to internet-based web pages to look at information about BitCoin - things like the current leading block, pending transactions, fee targets, block times and lots and lots more. Having said all of that, it took about 9 hours to crunch through the blockchain and derive this information on my Pi, and it took up about 8% of my remaining storage for the privilege. You could probably live without it though, but if you’re really wanting to learn about the state of the BitCoin blockchain then this is very useful.

Lightning Loop

Looping payments in and out is handy to have and a welcome addition to the LND implementation. At a high level Looping allows you to send funds to/from users/services that aren’t Lightning enabled and reduces transaction fees by reusing Lightning channels. That said, maybe that’s another topic for another post.

Sphinx-Relay

The one I really wanted. The truth is that at the time of writing, the best implementation of streaming podcasts with Lightning integration is Sphinx.

Sphinx started out as a Chat application, but one that uses the distributed Lightning network to pass messages. The idea seems bizarre at first, but if you have a channel open with another person you can send them a message attached to a Sat payment. The recipient can then send that same Sat back to you with their own message in response.

You can add peer-to-peer fees if you want, but that’s optional. If you want to chat with someone else on Sphinx, so long as they have a Wallet on a Node running a Sphinx-Relay, you can participate. Things get more interesting if you create a group chat, which Sphinx calls a “Tribe”. At that point the Tribe owner can set a “Stake” amount required to post on the channel, together with a “Time to Stake”. If a poster posts something good, the time to stake elapses and the staked amount returns to them. If they post something inflammatory, the Tribe owner can delete that post and claim the staked funds.
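Those staking rules can be sketched as a simple decision. This is a hypothetical model of the behaviour described above; the names and structure are mine, not Sphinx’s actual implementation:

```python
# Toy model of Sphinx Tribe staking: who ends up with the staked sats?

def settle_stake(elapsed_hours: int, time_to_stake_hours: int,
                 deleted_by_owner: bool) -> str:
    """Return who holds the stake once the post's fate is decided."""
    if deleted_by_owner:
        return "owner"   # inflammatory post deleted: owner claims the stake
    if elapsed_hours >= time_to_stake_hours:
        return "poster"  # stake window elapsed: sats return to the poster
    return "escrow"      # still within the staking window

print(settle_stake(24, 24, deleted_by_owner=False))  # poster
print(settle_stake(2, 24, deleted_by_owner=True))    # owner
```

The incentive is symmetrical: posters pay for bad behaviour, and owners who abuse deletion empty their own Tribe.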

This effectively puts a price on poor behaviour; conversely, poorly-acting owners who delete all posts will find themselves with an empty Tribe very quickly. It’s an interesting system, and in my experience thus far it has led to some well-moderated conversations, even in controversial Tribes.

In mid-to-late 2020 Sphinx integrated podcasts into Tribe functionality. Hence I can create a Tribe, link a single podcast RSS feed to it, and anyone listening to an episode in that Tribe via the Sphinx app will automatically stream Sats to the RSS feed’s nominated Lightning Node. The “Value Slider” defaults to the streaming Sats amount suggested in the RSS feed, however the listener can adjust it on a sliding bar all the way down to 0 if they wish: it’s opt-in. The player itself is basic but works well enough, with skip forwards and backwards as well as speed adjustment.

Additionally, Sphinx has apps for iOS (TestFlight beta), Android (sideload, Android 7.0 and higher) and desktop OSs including Windows, Linux and MacOS. Most functions exist in all of the apps, however I sometimes find myself going back to the iOS app to send/receive Sats to my Wallet/Node, which isn’t currently implemented in the MacOS version (not since I started my own Node, however). You can of course host a Node with Sphinx for a monthly fee if you prefer, but this article is about owning your own Node.

One Last Thing: Inbound Liquidity

The only part of this equation that’s a bit odd (or was for me at the beginning) is understanding liquidity. I mentioned it briefly here, but in short: when you open a channel with someone, the funds sit on your side, meaning you have outbound liquidity and can spend Lightning/BitCoin on things in the network. That’s fine, no issue. The problem is that as a podcaster you want to receive payments in streaming Sats, and without inbound liquidity you can’t.
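A toy model makes the outbound/inbound distinction concrete. This is my own simplification (real channels also involve reserves and fees), but the core mechanic is just balance shifting between the two sides:

```python
# Toy model of a single Lightning channel's liquidity.

class Channel:
    """Sats you can send (outbound) vs sats you can receive (inbound)."""
    def __init__(self, outbound: int, inbound: int = 0):
        self.outbound = outbound
        self.inbound = inbound

    def send(self, sats: int) -> bool:
        if sats > self.outbound:
            return False
        self.outbound -= sats
        self.inbound += sats   # spending creates room to receive
        return True

    def receive(self, sats: int) -> bool:
        if sats > self.inbound:
            return False
        self.inbound -= sats
        self.outbound += sats
        return True

ch = Channel(outbound=100_000)  # a channel you funded and opened yourself
print(ch.receive(500))          # False: all funds on your side, no inbound yet
ch.send(20_000)                 # spend some sats across the channel first...
print(ch.receive(500))          # True: now there's inbound liquidity
```

This is why a freshly opened channel of your own can’t accept streamed Sats: someone else has to open a channel towards you (or you have to spend) before inbound capacity exists.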

The simplest way to build it is to ask, really, really nicely for an existing Lightning user to open a channel with you. Fortunately my Podcasting 2.0 acquaintance Dave Jones was kind enough to open a channel for 100k Sats to my node, thus allowing inbound liquidity for testing and setting up.

In current terms 100k isn’t a huge channel, but it’s more than enough to get anyone started. There are other approaches I’ve seen, including pushing Sats to the channel partner when the channel is created (at a cost), but that’s something I need to learn more about before venturing further thoughts on it.

That’s it

That’s pretty much it. If you’re a podcaster and you’ve made it this far: you now have your own Node, you’ve added the Value tag to your RSS feed with your new Node ID, you’ve set up Sphinx-Relay and your own Tribe, and with inbound liquidity you’re now having Sats streamed to you by your fans and loyal listeners!

Many thanks to Podcasting 2.0, Sphinx, RaspiBlitz, DuckDNS and both Adam Curry and Dave Jones for inspiration and guidance.

Please consider supporting each of these projects and groups as they are working in the open to provide a better podcasting future for everyone.

]]>
Podcasting 2021-02-16T06:00:00+10:00 #TechDistortion
Building Your Own Bitcoin Lightning Node https://techdistortion.com/articles/build-your-own-bitcoin-lightning-node https://techdistortion.com/articles/build-your-own-bitcoin-lightning-node Building Your Own Bitcoin Lightning Node After my previous attempts to build my own node to take control of my slowly growing podcast streaming income didn’t go so well, I decided to bite the bullet and build my own Lightning Node with new hardware. The criteria were:

  1. Minimise expenditure and transaction fees (host my own node)
  2. Must be always connected (via home internet is fine)
  3. Use low-cost hardware and open-source software with minimal command-line work

Because of the above, I couldn’t use my Macbook Pro since that comes with me regularly when I leave the house. I tried to use my Synology, but that didn’t work out. The next best option was a Raspberry Pi, and two of the most popular options out there are the RaspiBolt and RaspiBlitz. Note: Umbrel is coming along but not quite as far as the other two.

The Blitz was my choice as it seems to be more popular and I could build it easily enough myself. The GitHub Repo is very detailed and extremely helpful. This article is not intended to just repeat those instructions, but rather describe my own experiences in building my own Blitz.

Parts

The GitHub instructions suggest Amazon links, but in Australia Amazon isn’t what it is in the States or even Europe, so I instead sourced the parts from a local importer of Raspberry Pi parts. I picked from the “Standard” list:

Core Electronics

  • $92.50 / Raspberry Pi 4 Model B 4GB
  • $16.45 / Raspberry Pi 4 Power Supply (Official) - USB-C 5.1V 15.3W (White)
  • $23.50 / Aluminium Heatsink Case for Raspberry Pi 4 Black (Passive Cooling, Silent)
  • $34.65 / Waveshare 3.5inch LCD 480x320 (The LCD referred to was a 3.5" RPi Display, GPIO connection, XPT2046 Touch Controller but they had either no stock on Amazon or wouldn’t ship to Australia)

Blitz All the parts from Core Electronics

UMart

  • $14 / Samsung 32GB Micro SDHC Evo Plus W90MB Class 10 with SD Adapter

On Hand

Admittedly, a 1TB SSD and case would’ve cost an additional $160 AUD had I not had one on hand. The Bitcoin blockchain already uses about 82% of it, so a bigger, more future-proof 2TB SSD is on the cards for me within the next 6-9 months for sure.

Total cost: $181.10 AUD (about $139 USD or 300k Sats at time of writing)

Blitz The WaveShare LCD Front View

Blitz The WaveShare LCD Rear View

Assembly

The power supply is simple: unwrap, plug into the USB-C power port and done. The heatsink comes with different-sized thermal pads to sandwich between the heatsink and the key components on the Pi motherboard, and four screws to clamp the two pieces together around it. Finally, line up the screen with the outermost pins on the I/O header and gently press them together. The screen won’t sit flat against the heatsink/case, but it doesn’t have to in order to connect well.

Blitz The Power Supply

Blitz The HeatSink

Blitz The Raspberry Pi 4B Motherboard

Burning the Image

I downloaded the boot image from the GitHub repo and used Balena Etcher on my Macbook Pro to write it to the SD card. Then insert the card into the Raspberry Pi, connect the SSD to the motherboard-side USB 3.0 port, connect an Ethernet cable and power it up!

Installing the System

If everything is hooked up correctly (and you have a router/DHCP server on the hardwired ethernet you just connected it to), the screen should light up with the DHCP-allocated IP address you can reach it on, along with instructions on how to SSH in via the terminal, like “ssh admin@192.168.1.121” or similar. Open up Terminal, enter that, and you’ll get a nice neat blue-screen menu with the same information on it. From here everything is done via the menu installer.

If you get kicked out of that interface just enter ‘raspiblitz’ and it will restart the menu.

Getting the Order Right

  1. Pick Your Poison I chose BitCoin and Lightning, which is the default; there are other crypto-currencies if that’s your preference. Then set your passwords, and please use a password manager and at least 32 characters: make it as secure as you can from Day One!
  2. TOR vs Public IP Some privacy nuts run behind TOR to obscure their identity and location. I’ve done both and can tell you that TOR takes a lot longer to sync and access, kills off a lot of apps, and makes opening channels to some other nodes and services difficult or impossible. I just wanted a working node that was as interoperable as possible, so I chose Public IP.
  3. Let the BlockChain Sync Once your SSD is formatted, if you have the patience then I recommend syncing the Blockchain from scratch. I already had a copy of it that I SCP’d across from my Synology and it saved me about 36 hours but it also caused my installer to ungracefully exit and it took me another day of messing with the command line to get it to start again and complete the installation. In retrospect, not a time saver end to end but your mileage may vary.
  4. Set up a New Node Or in my case, I recovered my old node at this point by copying channel.backup over, but for most people it’s a New Node and a new Wallet. And for goodness sake, when you make a new wallet: KEEP A COPY OF YOUR SEED WORDS!!!
  5. Let Lightning “Sync” Technically it’s validating blocks, and this also takes a while. For me it took nearly 6 hours for both the Lightning and Bitcoin blocks to sync.

Blitz The Final Assembled Node up and Running

My Money from Attempt 2 on the Synology Recovered!

I was able to copy the channel.backup and wallet.dat files from the Synology and successfully recover my $60 AUD investment from my previous attempts, so that’s good! (And it worked pretty easily, actually.)

To guard against any loss of the wallet, I’ve also added a USB 3.0 thumb drive to the other USB 3.0 port and set up “Static Channel Backup on USB Drive”, which required a quick format to EXT4 but worked without any real drama.

Conclusion

Building the node using a salvaged SSD cost under $200 AUD and took about 2 days to sync and set up. Installing the software and setting up all the services is another story for another post, but it’s working great!

]]>
Podcasting 2021-02-12T06:00:00+10:00 #TechDistortion
BitCoin, Lightning and Patience https://techdistortion.com/articles/bitcoin-lightning-and-patience https://techdistortion.com/articles/bitcoin-lightning-and-patience BitCoin, Lightning and Patience I’ve been vaguely aware of BitCoin for a decade but never really dug into it until recently, as a direct result of my interest in the Podcasting 2.0 team.

My goals were:

  1. Minimise expenditure and transaction fees
  2. Use existing hardware and open-source software
  3. Setup a functional lightning node to both make and accept payments

I’m the proud owner of a Synology, and it can run docker, and you can run BitCoin and Lightning in Docker containers? Okay then…this should be easy enough, right?

BitCoin Node Take 1

I set up the kylemanna/bitcoind docker on my Synology and started it syncing the Mainnet blockchain. About a week later I was sitting at 18% complete, averaging 1.5% per day and dropping. Reading up on this, the problem was two-fold: validating the blockchain is a CPU- and disk-intensive task, and my Synology was weak on both counts. I threw more RAM at it (3GB of the 4GB it had) with no difference in performance, set the CPU restrictions to give the Docker the most performance possible, again with no difference, and basically ran out of levers to pull.

I then learned it’s possible to copy a blockchain from one device to another, and that the Raspberry Pis sold as ready-made private nodes come with the blockchain pre-synced (up to the point they’re shipped) so they don’t take too long to catch up to the front of the chain. So I downloaded BitCoin Core for MacOS and set it running. After two days it had finished (much better) and I copied the directories to the Synology, only to find that BitCoin Core was set to “prune” the blockchain after validation, meaning the entire blockchain was no longer stored on my Mac, and the docker container would need to start over.

Ugh.

So I disabled pruning on the Mac and started again. The blockchain was about 300GB (so I was told), and with the 512GB SSD in my MBP I thought that would be enough. Alas, no: as the free space diminished at a rapid rate of knots, I madly off-loaded and deleted what I could, finishing with about 2GB to spare. The entire blockchain and associated files weighed in at 367GB.

Transferring them to the Synology and firing up the Docker…it worked! Although it had to revalidate the 6 most recent blocks (taking about 26 minutes EVERY time the BitCoin docker restarted), it sprang to life nicely. I had a BitCoin node up and running!

Lightning Node Take 1

There are several docker containers to choose from; the two most popular seemed to be LND and c-Lightning. Without understanding the differences, I went with the container said to be more lightweight and to work better on a Synology: c-Lightning.

Later I discovered that many plugins, applications, GUIs and relays (Sphinx, for example) only work with LND and require LND Macaroons, which c-Lightning doesn’t support. Not only that, the c-Lightning developers’ design decision to permit only a single connection between any two nodes makes building liquidity problematic when you’re starting out. (More on that in another post someday…)

After messing around with RPC for the c-Lightning docker to communicate with the KyleManna bitcoind docker, I realised I needed to install ZMQ support, since RPC username and password authentication was being phased out in favour of token authentication through a shared folder.

UGH

I was so frustrated at losing 26 minutes every time I changed a single setting in the Bitcoin docker; then, in an incident overnight, both dockers crashed, didn’t restart, and took over a day to catch up to the blockchain again. At this point I had more or less decided to give up on it.

SSD or don’t bother

Interestingly, my oldest son pointed out that all of the kits for sale used SSDs for the Bitcoin data storage - even the cheapest versions. A bit more research and it turns out that crunching through the blockchain is less a CPU-intensive exercise and more a data store read/write-intensive one. I had a 512GB Samsung USB 3.0 SSD laying around, and in a fit of insanity decided to connect it to the Synology’s rear port, shift the entire contents of the docker shared folders (containing all of the blocks and indexes) to that SSD, and try again.

Oh My God it was like night and day.

Both docker containers started, synced and were running in minutes. Suddenly I was interested again!

Bitcoin Node Take 2

With renewed interest I returned to my previous headache: linking the docker containers properly. The LNCM/bitcoind docker had precompiled ZMQ support, and it was surprisingly easy to set up the docker shared folder to expose the token I needed for authentication with the c-Lightning docker image. It started up referencing the same docker folder (now mounted on the SSD) and honestly seemed to “just work” straight up. So far so good.
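For reference, the ZMQ side of linking bitcoind to a Lightning daemon boils down to a couple of lines in bitcoin.conf along these lines (the ports shown are the conventional defaults; exact paths and lines will vary by docker image and setup):

```ini
# bitcoin.conf - ZMQ publishers a Lightning daemon subscribes to
zmqpubrawblock=tcp://0.0.0.0:28332
zmqpubrawtx=tcp://0.0.0.0:28333
# With RPC user/password auth phased out, bitcoind writes a .cookie
# token into its data directory; exposing that directory as a shared
# docker folder lets the Lightning container authenticate.
```

The Lightning container then points its bitcoind backend settings at those two ZMQ endpoints and the shared cookie file.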

Lightning Node Take 2

This time I went with the better-supported LND, picked a quite popular image by Guggero, and spun it up rather quickly. The funds on my old c-Lightning node would simply have to remain trapped until I could figure out how to recover them in future.

Host-Network

The instructions I had read all related to TestNet and advised not to use money you weren’t prepared to lose. I set myself a starting budget of $40 AUD and tried to make this work. Using the Breez app on iOS and its MoonPay integration, I managed to buy about 110k Sats. The next problem was getting them from Breez to my own Node: my attempts over Lightning failed with “no route” (I learned later I needed channels…d’uh), so sending via BitCoin - “on-chain”, as they call it - was the only option. This cost me a lot of Sats, but I finally had some Sats on my Node.

Satoshis

BitCoin has a few quirky little problems. One interesting one is that a single BitCoin is worth a LOT of money - currently 1 BTC = $62,000.00 AUD. It’s not a practical measure, so amounts are more commonly quoted in Satoshis, each being 1/100,000,000th of a BitCoin. BitCoin is a crypto-currency transacted on the BitCoin blockchain via the BitCoin network; Lightning is a Layer 2 network that also deals in BitCoin, but in smaller amounts, peer-to-peer via channels, and because the values are much smaller it is regularly transacted in Satoshis.
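The conversion is just a fixed power of ten, which a couple of lines make concrete (the AUD rate below is simply the article’s quoted figure, not a live price):

```python
# 1 BTC = 100,000,000 Satoshis; fiat value follows from the BTC price.

SATS_PER_BTC = 100_000_000

def sats_to_btc(sats: int) -> float:
    return sats / SATS_PER_BTC

def sats_to_aud(sats: int, aud_per_btc: float = 62_000.0) -> float:
    return sats_to_btc(sats) * aud_per_btc

print(sats_to_btc(110_000))             # 0.0011 BTC
print(round(sats_to_aud(110_000), 2))   # 68.2 AUD at the quoted rate
```

So the 110k Sats bought earlier was a little over a thousandth of a BitCoin.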

Everything you do requires Satoshis (Sats). It costs Sats to fund a channel; it costs Sats to close a channel. I couldn’t find a way to determine the minimum amount of Sats needed to open a channel without first attempting to open one via the command line, and I only had a limited number of Sats to play with, so I had to choose carefully. Most channels wanted 10,000 or 20,000, but I managed to find a few that only required 1,000. The initial thought was to open as many channels as I could, make some transactions, and let inbound liquidity improve as others in the network transact.

Services exist to help build that inbound liquidity, without which, you can’t accept payments from anyone else. Another story for a future post.

Anything On-Chain Is Slow and Expensive

For a technology that’s supposed to reduce fees overall, Lightning costs you a bit up-front to get into, and any time you want to shuffle things around, it costs Sats. I initially bought in wishing to fund my own node and try for that oft-touted “self-sovereignty” of BitCoin, but to achieve that you have to invest some money to get started. In the end, however, I hadn’t invested enough, because the channels I opened didn’t allow inbound payments.

I asked some people to open some channels to me and give me some inbound liquidity however not a single one of them successfully opened. My BitCoin and Lightning experiment had ground to a halt, once again.

At first I experimented with TOR, then with publishing on an external IP address, port-forwarding to expose the Lightning external access port 9735 to allow incoming connections. Research into why it failed suggested I needed to recreate my dockers connected to a custom Docker network and then resync the containers, otherwise the channel-open attempts would continue to fail.

I did that and it still didn’t work.

Then I stumbled across the next idea: modifying the Synology DSM Docker implementation to allow direct mounting of the Docker images without forcing them through a double-NAT. Doing so was likely to impact my other, otherwise perfectly happily running dockers.

UGH

That’s it.

I’m out.

Playing with BitCoin today feels like programming COBOL for a bank in the 80s

Did you know that, as of 2017, COBOL was behind nearly half of all financial transactions? Yes, and the world is gradually ripping it out (thankfully).

IDENTIFICATION DIVISION.
   PROGRAM-ID. CONDITIONALS.
   DATA DIVISION.
     WORKING-STORAGE SECTION.
     *> I'm not joking, Lightning-cli and Bitcoin-cli make me think I'm programming for a bank
     01 SATS-JOHN-HAS PIC 9(5) VALUE ZERO.
   PROCEDURE DIVISION.
     MOVE 20000 TO SATS-JOHN-HAS.
     IF SATS-JOHN-HAS > 0 THEN
       DISPLAY 'YAY I HAZ 20000 SATS!'
     END-IF
     *> I'd like to make all of my transactions using the command line, just like when I do normal banking...oh wait...
     EVALUATE TRUE
       WHEN SATS-JOHN-HAS = 0
         DISPLAY 'NO MORE SATS NOW :('
     END-EVALUATE.
   STOP RUN.

There is no doubt there’s a bit of geek-elitism amongst many of the people involved with BitCoin. Comments like “Don’t use a GUI, to understand it you MUST use the command line…” remind me of those who whined about the Macintosh having a GUI in 1984. A “real” computer used DOS. OMFG, seriously?

A real financial system is as painless for the user as possible. Unbeknownst to me, I’d chosen perhaps the least advisable method: the wrong hardware running the wrong software, running a less-compatible set of dockers. My conclusion is that setting up your own Node, one that you control, is not easy.

It’s not intuitive either, and it will make you think about things like inbound liquidity that you never thought you’d need to know about, since you’re a geek, not an investment banker. I suppose the point is that owning your own bank means learning a bit about how a bank needs to work, and that takes time and effort.

If you’re happy to pay someone else to build and operate a node for you, that’s fine - it’s just what you’re doing today with any bank account. I spent weeks learning just how much I don’t want to be my own bank, thank you very much; or at least, not using the equipment I had laying about, while living in the Terminal.

Synology as a Node Verdict

Docker was not reliable enough either. In some instances I would modify a single docker’s configuration file and restart the container, only to get “Docker API failed”. Sometimes I could recover by picking the container I thought had caused the failure (most likely the one I modified, but not always), clearing it and restarting it.

Other times I had to completely reboot the Synology to recover, and sometimes I had to do both before Docker would restart. Every restart of the Bitcoin container cost another half an hour, after which the container would “go backwards” to 250 blocks behind, taking a further 24-48 hours of resynchronising with the blockchain before the Lightning container could in turn resynchronise with it. All the while the node was offline.

Unless your Synology is running SSDs, has at least 8GB of RAM, is relatively new and you don’t mind hacking your DSM Docker installation, you could probably make it work, but it’s horses for courses in the end. If you have an old PC laying about, use that. If you have RAM and an SSD in your NAS, build a VM rather than use Docker, maybe. Or better yet, get a Raspberry Pi and have a dedicated little PC to do the work.

Don’t Do What I Did

Money is Gone

The truth is that in an attempt to get incoming channel opens working, I flicked between Bridge and Host and back again, opening different ports amid SOCKS failure errors, and finally gave up when, after many hours, the LND docker just wouldn’t connect via ZMQ any more.

And with that my $100 AUD investment is now stuck between two virtual wallets.

I will keep trying and report back but at this point my intention is to invest in a Raspberry Pi to run my own Node. I’ll let you know how that goes in due course.

]]>
Podcasting 2021-02-01T12:30:00+10:00 #TechDistortion
Podcasting 2.0 Addendum https://techdistortion.com/articles/podcasting-2-0-addendum https://techdistortion.com/articles/podcasting-2-0-addendum Podcasting 2.0 Addendum I recently wrote about Podcasting 2.0 and thought I should add a further amendment regarding their goals. I previously wrote:

To solve the problems above there are a few key angles being tackled: Search, Discoverability and Monetisation.

I’d like to add a fourth key angle, which I didn’t think at the time should be listed as its own item; however, having listened more to Episodes 16 and 17 and their intention to add XML tags for IRC/Chat Room integration, I think I should add it: Interactivity.

Interactivity

The problem with broadcast historically is that audience participation is difficult given the tools and effort required: pick up the phone, make a call. You need a big incentive (think cash prizes, competitions, discounts, something!) or audiences just don’t participate. It’s less personal, and with less of a personal connection the desire for listeners to connect is much lower.

Podcasting, as an internet-first application and a far more personal medium, sets the bar differently, and we can think of real-time feedback methods as either verbal, via a dial-in/patch-through to the live show, or written, via messaging like a chat room. There are also non-real-time methods, predominantly webforms and EMail. With contact EMails already in the RSS XML specification, adding a webform submission entry might be of some use (perhaps < podcast:contactform > with a url=“https://contact.me/form”), but real-time is far more interesting.
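As a sketch, such a tag could sit at the feed level alongside the other namespace tags. To be clear, this tag name and attribute are my suggestion only, not part of the Podcasting 2.0 namespace specification:

```xml
<!-- speculative: a feed-level webform tag, not in the namespace spec -->
<podcast:contactform url="https://contact.me/form" />
```

A client supporting it would simply render a “Contact the show” form posting to that URL.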

Real Time Interactivity

Initially in podcasting (like so many internet-first technology applications), geeks who understood how it worked led the way. That is to say, with podcasts there was originally a way for a percentage of listeners to use IRC as a chat room (Pragmatic did this for the better part of a year in 2014, as did other far more popular shows like ATP, Back To Work etc.) to get real-time listener interaction during a podcast recording, albeit with a slight delay between the audio going out and listener responses in the chat room.

YouTube introduced live streaming and live chat with playback, integrating the chat room with the video content to lower the barrier to entry for their platform. For equivalent podcast functionality to go beyond the geek-% of podcast listeners, podcast clients will need to do the same. And for podcast clients to be pressured into supporting it, standardisation of the XML tags and backend infrastructure is a must.

The problem with interactivity is that whilst it starts with the tag, it must end with the client applications otherwise only the geek-% of listeners will use it as they do now.

From my own experience with live chat rooms during my own and other podcasts, the people able to tune in to a live show and actually be present (lots of people just “sit” in a channel and aren’t really present) amount to about 1-2% of your overall downloads, and that’s for a technical podcast with a high geek-%. I’ve also found there are timezone effects: whether you podcast live during different times of the day or night directly impacts those percentages even further (it’s 1am somewhere in the world right now, so if your listeners live in that timezone, chances are they won’t be listening live).

The final concern is that chat rooms only work for a certain kind of podcast. For me, it could only potentially work with Pragmatic, and in my experience I wanted Pragmatic to be focussed, and chat rooms turned out to be a huge distraction. Over and over my listeners reiterated that one of the main attractions of podcasts is the ability to time-shift and listen when they want to. Being live was, to them, a minus not a plus.

For these reasons I don’t see that this kind of interactivity will uplift the podcasting ecosystem for the vast majority of podcasters, though it’s certainly nice to have and attempt to standardise.

Crowd-sourced Chapters

Previously I wrote:

The interesting opportunity that Adam puts forward with chapters is he wants the audience to be able to participate with crowd-sourced chapters as a new vector of audience participation and interaction with podcast creators.

Last time I looked at this from the practical standpoint of “how would I as a podcaster use this?”, concluding that I wouldn’t, since I’m a self-confessed control-freak; but I didn’t fully appreciate the angle of audience interaction. For podcasts with a truly significant audience and listeners that really want to help out (but can’t help financially), this feature provides a potential avenue to assist in a non-financial way, which is a great idea.

Crowd-source Everything?

(Except recording the show!)

From pre-production to post-production, any task in the podcast creation chain could be outsourced to an extent. For pre-production, this could look like a feed-level XML tag, < podcast:proposedtopics >, pointing to a planned topic list (popular podcasts currently use Twitter #Tags like #AskTheBobbyMcBobShow), cutting centralised platforms like Twitter out of the creation chain in the long term. Again, only useful for certain kinds of shows, but it could also include a URL link to a shared document (probably a JSON file) and an episode index reference (i.e. if the currently released episode is 85, proposed topics are for Episode 86; it could also be an array covering multiple episodes).
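A sketch of what that feed-level tag could look like, with the pieces floated above combined. This is entirely speculative; the tag and attribute names are mine, and the URL is a placeholder:

```xml
<!-- speculative: proposed-topics tag, not part of any namespace spec -->
<podcast:proposedtopics url="https://example.com/topics.json" episode="86" />
```

The JSON document at that URL would hold the topic list itself, editable by the podcaster and guests.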

The post-production dilemma generally consists of show notes, chapters (solution in progress) and audio editing. Perhaps a similar system to crowd-sourced chapters could be used for show notes that could include useful/relevant links for the current episode that aren’t/can’t be easily embedded as Chapter markers.

In either case there’s no reason it couldn’t work the same way as crowd-sourced chapter markers. The podcaster would have administrative access to add, modify or remove content from either of these, with guests also having read/write access. With an appropriate client tool this would eliminate the plethora of different methods in use today: shared Google documents, quite popular with many podcasters at the moment, will not be around indefinitely.

All In One App?

Of course, the more features we pile into the podcast client app, the more difficult it becomes to write and maintain. Previously, an excellent programmer-cum-podcaster-cum-audiophile like Marco Arment could create Overcast. With Lightning network integration, plus crowd-sourced chapters, shared document support (notes etc.) and a text chat client (IRC), the application quickly becomes much heavier and more complex, with fewer developers having the knowledge in every dimension to create an all-in-one client app.

The need for better frameworks to make feature integration easier for developers is obvious. There may well be a need for two classes of app, or at least two views (the listener view and the podcaster view), or simply multiple apps for different purposes. Either way it’s interesting to see where the Tag + Use Case + Tool-chain can lead us.

]]>
Podcasting 2021-01-01T12:15:00+10:00 #TechDistortion