Jony Ive Leaving Apple https://techdistortion.com/articles/jony-ive-leaving-apple

Apple announced recently that Jony Ive is leaving Apple. He’s worked there for nearly three decades and is 52 years old. Having now read and listened to quite a lot of commentary on the subject, I find it odd how much is credited to Jony. Industrial design alone cannot create a successful product. Industrial design often takes a back seat to engineering tradeoffs like cost, manufacturability, repairability and so on; however, since its comeback in the early 2000s Apple has been happy to put a higher priority on industrial design than it did during the boring beige-box era of the 1990s. This approach played a part in that comeback and influenced the design of other products in the same and similar markets for years afterwards.

Jony led a team of industrial designers, and many of the ideas they championed, ideas that would have been shot down at other companies, were upheld and supported at Apple. If you were to take the industrial designers at Apple and put them at Dell, there is no doubt in my mind many of their designs never would have seen the light of day. If you were to take the industrial designers at Dell and put them at Apple, there is no doubt in my mind that their designs would have been supported too, and the resulting products would perhaps have been almost as successful. But not quite.

Jony’s influence elevated several of Apple’s products and Apple could and did support his team in that endeavour. Industrial design and engineering is a symbiotic relationship, and at Apple Jony was in a position where his name carried some of the success of what was, and will always be, a team effort. His leaving changes little of the design ethos, principles and focus on industrial design at Apple. It’s an opportunity for other great designers at Apple to step up and out of Jony’s shadow.

It seems unlikely that in future roles Jony will enjoy the same success, in terms of units of his designs sold or product revenue, or have the backing of the unique relationship he had at Apple. That’s okay - he doesn’t need to. Jony has earned the chance to make whatever he wants - to pick and choose, and to enjoy designing products more on his own terms than he could had he stayed at Apple. It’s an opportunity that few get.

I wish him all the best.

Thanks Jony.

And for those concerned about Apple…don’t be…Apple will be just fine.

Technology 2019-07-08T21:40:00+10:00 #TechDistortion
Auto Things Shortcut https://techdistortion.com/articles/auto-things-shortcut

I actually automate a bunch of things in my many workflows, but mostly they’re so specific I tend to think most people wouldn’t be interested. This is an exception.

Workflow (don’t call it Workflow, it’s Shortcuts now) is something I’ve dabbled with, at best, in the past few years. In the past few months, as I’ve added multiple steps to the post-production of my podcasts, I found I was occasionally forgetting a step or two. I started looking for apps that create convenient templated checklists and came up with no really viable out-of-the-box options that appealed.

I’ve been using Things to plan my life and tasks for many years now, and having it on all of my devices (iPhone, Apple Watch, iPad and MacBook Pro) is extremely handy, so rather than change that I started digging into automating input into Things. I’d read that Things supported a URL Scheme, so I dove in.

First I built a test JSON Text string that was static, to prove that it would work:

{"type":"to-do","attributes":{"title":"Episode","when":"today","tags":["Podcasting"],"checklist-items":[{"type":"checklist-item","attributes":{"title":"Publish Notes"}},{"type":"checklist-item","attributes":{"title":"Publish Ad-Free"}},{"type":"checklist-item","attributes":{"title":"Test Item 1"}},{"type":"checklist-item","attributes":{"title":"Test Item 2"}},{"type":"checklist-item","attributes":{"title":"Test Item 3"}},{"type":"checklist-item","attributes":{"title":"Test Item 4"}},{"type":"checklist-item","attributes":{"title": "Test Item 5"}}]}}

The shortcut was really simple: the above JSON in a Text field, wrapped with “things:///json?data=[ JSON ]”, feeding into a URL action, then into Open X-Callback URL with no custom callback or success URL, and it worked perfectly.
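For anyone wanting to experiment with the same URL outside of Shortcuts, here’s a minimal sketch in Python of that wrapping step. The things:///json?data= scheme and the surrounding array brackets are as described above; the trimmed checklist and the need to percent-encode the JSON into the data parameter are my own assumptions for illustration:

import json
import urllib.parse

# The static test to-do from above, trimmed for brevity.
# Note the outer list: it matches the [ ] wrapping around the JSON.
payload = [{
    "type": "to-do",
    "attributes": {
        "title": "Episode",
        "when": "today",
        "tags": ["Podcasting"],
        "checklist-items": [
            {"type": "checklist-item", "attributes": {"title": "Publish Notes"}},
            {"type": "checklist-item", "attributes": {"title": "Publish Ad-Free"}},
        ],
    },
}]

# Wrap the JSON in the Things URL scheme, percent-encoding it as the data parameter
url = "things:///json?data=" + urllib.parse.quote(json.dumps(payload))
print(url)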

A few experiments showed the order of the keys in the JSON wasn’t important; so long as the nesting levels conformed to the URL Scheme, it worked like a charm.

Creating a more sophisticated Shortcut was a bit more annoying. I’ve attached the Shortcut image and file, but to walk through each section and why it’s there:

  • List: Some of the shows I edit
  • Choose from List: Prompt the user to pick one from the list, only one though
  • Set Variable (Podcast): As a programmer, I don’t like Magic Variables since they hide their source when you’re visually reading the Shortcut. This variable is the selected podcast from that List
  • Ask for Input (Episode Number): Ask for what episode we’re working on as a user input
  • Text: Here we combine the “Podcast” variable with the text ‘Episode’ followed by the result from the line above (the episode number)
  • Set variable (To Do Main Title): Save the full main title of the To Do List item for later.
  • Text (list of Carriage Return separated items): Building a checklist underneath a To-Do requires a list of items. This will be the template for every checklist. Add/modify as needed in the Shortcut.
  • Set Variable (Checklist Items): Save that checklist
  • Get Variable (Checklist Items): Use that in the next line.
  • Split Text: Using the new line separator (aka a Carriage Return) we split the text ready to run a repeat for each entry in the checklist
  • Repeat with Each: Cycle through each checklist item from the text field
  • Dictionary (repeat until done): Build the JSON dictionary with type “checklist-item”, and “attributes” with a single text item with the Key “title” and the value “Repeat Item” from the repeat loop (aka the actual line of text for this checklist item)
  • End Repeat: What it says on the tin
  • Set Variable (Checklist Items Dictionary): This is now a complete dictionary of all of our checklist items we’ll embed later into the final JSON dictionary.
  • Dictionary: This is our second-level down of the JSON, where we define the To-Do’s Title (saved from earlier), we set the due date to today, then we set an array for what tags we want applied to it. I use an imaginatively named tag in Things called “Podcasting” which is added as a Text entry in the array. You could add more entries for multiple tags if you like.
  • Set Dictionary Value: Adds the Checklist Items Dictionary we created earlier against the key “checklist-items”
  • Set Variable (Attributes Dictionary): Save this to our Attribute level of the Things JSON
  • Dictionary: This is the top-level of our JSON dictionary, where we simply create the item of “type” “to-do”
  • Set Dictionary Value: Adding in our second-level JSON Dictionary we prepared earlier, the Attributes Dictionary, under the key “attributes”
  • Set Variable (JSON Output): The final Dictionary now saved as a Variable
  • Text: Build the final text string and wrap the JSON around the URL Scheme
  • URL: Interpret the above Text as a URL
  • Open X-Callback URL: Call the URL, but I didn’t want any custom callbacks or success URLs, because I just didn’t.

And we’re done. Yes, I could tidy up some bits, and yes, you can use Magic Variables, and yes, I could embed variables directly rather than using Get Variable every now and then, but never mind that. The result is hopefully more readable than most of the other examples I came across and tried to follow, before building this from the ground up so it made sense.
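If it helps to see the same logic as code rather than as a list of actions, here’s a rough Python equivalent of what the Shortcut builds. The podcast name, episode number and checklist template below are illustrative inputs only; the structure mirrors the dictionaries described above and would be wrapped into the things:///json URL exactly as in the earlier sketch:

import json

def build_things_payload(podcast, episode_number, checklist_text):
    # One checklist-item dictionary per non-empty line of the template text
    checklist_items = [
        {"type": "checklist-item", "attributes": {"title": line}}
        for line in checklist_text.splitlines() if line.strip()
    ]
    # Second level: the to-do's title, due date, tags and embedded checklist
    attributes = {
        "title": f"{podcast} Episode {episode_number}",
        "when": "today",
        "tags": ["Podcasting"],
        "checklist-items": checklist_items,
    }
    # Top level: a single to-do item, wrapped in an array
    return [{"type": "to-do", "attributes": attributes}]

# Illustrative template; add or modify lines as needed
template = "Publish Notes\nPublish Ad-Free\nPublish Transcript"
print(json.dumps(build_things_payload("Pragmatic", 93, template), indent=2))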

Finished To-Do List

Hopefully that’s useful for someone (other than just me) at some point in the future who wants to make checklists from a standard template as a repeatable task in Things.

Technology 2019-06-22T20:30:00+10:00 #TechDistortion
BrusFri Limitations iOS https://techdistortion.com/articles/brusfri-limitations-ios

For several years I’ve been editing all of my podcasts on iOS using the wonderful Ferrite app. For a brief time I trialled Adobe Audition’s noise remover, and whilst it was excellent, the switch to a subscription put it out of reach for me. I switched to Audacity for a time; it wasn’t as good, however it was passable. Moving to a full iOS workflow at that time was unfortunately not practical, so I went without for quite a while.

Then, listening to another podcast by Tim Chaten, I heard him mention the Klevgrand BrusFri audio plugin, which is both a standalone app for noise reduction and an Audio Unit Extension, meaning it would work perfectly within Ferrite.

I began using it and have been using it happily for just under a year now, although recently I found some inexplicable audio artefacts when it was used in Ferrite as a plugin. The noise blip lasted about half a second, and it was random. I did some research and found that I wasn’t alone - it appeared to be related to the bit depth, sample rate and potentially also the size of the raw audio file Ferrite was using. If you removed the plugin the audio was output correctly, hence it wasn’t a Ferrite problem, but appears to be an Audio Unit Extension integration bug present in iOS 12 since that release.

No problem: BrusFri is also an independent app that can be used to import audio, reduce the noise and export the result. Adding this step to the workflow solved the problem and also significantly reduced Ferrite’s processing time when exporting the final audio. Noise reduction on 1 hour of raw audio takes about 8 minutes in BrusFri on a 2nd generation 12.9” iPad Pro.

So far so good, however earlier this year I started to record some longer episodes of Pragmatic. When importing the audio into BrusFri it would spontaneously crash and once it did, attempting to reopen the program would result in an instant crash. The only way to recover from this was to completely delete BrusFri from the device, download and reinstall it again. At this point I began to investigate the app itself and found many others had exactly the same problem.

Not wishing to give up on my amazing noise reducer I iteratively changed every parameter I could think of when pre-converting the raw audio in an attempt to empirically determine the cause of the crash. The aforementioned reinstallation process became somewhat tiring after the 25th time or so.

Turns out the problem stemmed from the length of the audio, not the format. The longest raw audio file I am currently able to import to v1.1.0 of BrusFri is 1hr 40mins. I’ve been working predominantly in WAV but I tried a few other formats and they all appear to work fine provided they’re less than that duration. So for any podcasts that I record, if I want to use non-glitchy noise reduction on the iPad then I need to split the audio into pieces not greater than 1hr 40mins.

Ordinarily I’d throw my hands up in the air and go back to using my MacBook Pro for noise reduction via Audacity, but it’s not as good as BrusFri, and I’ve already paid for BrusFri and it does an amazing job! I’ve contacted the developer, however I have yet to hear back. Given that the last update to BrusFri was over a year ago, I don’t hold out much hope for a quick response. I’ll keep you posted.

Podcasting 2019-06-21T10:10:00+10:00 #TechDistortion
Optimal Interface Part 3: Devices https://techdistortion.com/articles/optimal-interface-part-3-devices

To be released shortly

Technology 2019-05-16T20:02:00+10:00 #TechDistortion
Optimal Interface Part 2: Output https://techdistortion.com/articles/optimal-interface-part-2-output

To be released shortly

Technology 2019-05-16T20:01:00+10:00 #TechDistortion
Optimal Interface Part 1: Input https://techdistortion.com/articles/optimal-interface-part-1-input

This article is posted in conjunction with Episode 93 of Pragmatic.

I’ve been fortunate in recent years to have tried the vast majority of consumer user interfaces and also the software running on each platform that’s widely regarded as best in class for each interface. I’ve written previously about going Back To The Mac and spoken about using a Microsoft Surface Pro and even tried going Phoneless with just an Apple Watch.

One aspect of my job has been user interface design, conceptualisation and controls, and in this series of posts I’d like to explore inputs, outputs and devices in turn, looking at what has worked well, why I think that is, and what the next inflection points might be.

Part 1: Input

Input from a person to a device must be in a form the person can actually produce, and hence has to come via a mechanism we can perform:

  • Sound
  • Touch
  • Movement
  • Neural

We shall exclude attempts to convey meaningful information utilising smell by projecting a scent of some kind since that’s not a trick most people can do and likewise for taste.

Sound

The first popular device to perform control inputs from sound was the Clapper. “Clap on, Clap off” to turn lights on and off. Spoken word has proven to be significantly more difficult, with many influencing factors: local accents, dialects, languages, speaking speeds, slurring, variable speech volume and, most difficult of all, context. The earliest effective consumer products came in the early 1990s from Dragon Dictate, which used an algorithmic approach that required training to improve the speed and accuracy of recognition. Ultimately algorithmic techniques plateaued until machine learning, utilising neural network techniques, finally started to improve accuracy through common-language training.

Context is more complex: in human conversation we infer much from previous sentences spanning minutes or even hours. For speech input, tracking context requires consistently high recognition accuracy and the ability to associate contexts over long periods of time. The reliability of speech recognition must be consistent, and faster than other input methods, or people will not use it. Sound commands are also not well suited to scenarios where discretion is advised, nor to noisy environments where isolating a subject is difficult even in a human conversation, let alone for speech detection by software.

Despite improvements, Apple’s Siri ‘feature’ remains inaccurate and generally slow to respond. Amazon Alexa, Google Assistant and Microsoft Cortana also offer varying degrees of accuracy, with heavier use of machine learning in the cloud providing the best results to date at the expense of personal privacy. As computational power improves and both response time and accuracy improve, sound will become the preferred input method for entering long-form text in draft (once it keeps up with the average human speaking rate of about 150 words per minute), since without additional training this is faster and more convenient than a physical keyboard. Once these things improve it will also be the preferred method for short commands, such as turning home automation devices on or off, in scenarios where no physical device is immediately accessible.

Touch

Touch involves anything that a person can physically push, tap, slide across or turn and encompasses everything from dials to mechanical sliders, to keyboards to touch screens. Individual buttons are best for dedicated inputs whereby that button represents a single command or very similar command, with a common example of a button grid being a keyboard.

Broadly, touch can be grouped into either direct or indirect. Examples of direct touch include light pens, and resistive and capacitive touch screens. Light pens required the user to hold them and they were tethered, slow, and not very accurate. Resistive touchscreens still needed a stylus to be accurate; some could use the edge of a fingernail, however the pad of a finger wasn’t very accurate, and it was not possible to detect more than a single touch point at a time. Capacitive touch had better finger accuracy and allowed multiple simultaneous touch points, which enabled pinch and other multi-finger gestures. Although no stylus was needed, a stylus was still recommended to achieve high levels of accuracy.

Indirect inputs include keyboards and cursor-positioning devices such as mice, trackpads, trackballs and positioning sticks. Keyboards mimicked typewriter keyboards and have remained essentially unchanged from the first terminal computers through to personal computers; apart from user preferences for particular key-switch mechanisms, little has changed in decades.

Cursor pointing devices allow for precise cursor positioning with the ability to “nudge” a cursor which is not possible without zooming on a touch interface.

Hence for precision pointing, indirect methods are still more accurate than a stylus due to “nudging”. However, precision pointing is generally not a strict requirement for most users in most applications. Non-precision pointing for most tasks therefore benefits from the simplicity of direct touch, which is faster and requires no training, making direct touch the most accessible method.

For bulk text input, physical keyboards remain the fastest method, however training is necessary to achieve this. Keyboards will remain the preferred bulk text entry method until speech recognition improves, noting that the fastest English typing speed recorded on a computer is 212 wpm, set in 2005 using a Dvorak simplified keyboard layout. The average typing speed is about 41 words per minute, hence speech recognition that is faster than this with a high degree of accuracy will become the preferred dictation method in most use cases.

Movement

Movement requires no physical connection of the body to the input device and includes gestures of different parts of the body. The PlayStation Move was an example where the user held a device that wasn’t tethered to the machine but had its movement tracked directly. Other examples are Virtual Reality systems that use handheld controllers with gyroscopes and accelerometers to track the movement of hands and arms.

The most popular natural free-standing movement tracking device so far has been the Microsoft Kinect, released for both the PC and the Xbox. Its movement tracking had issues differentiating backgrounds and was thrown off by people walking past, in front of, or behind those it was tracking. Room size and other obstructions also created a challenge for many users: to use movement tracking reliably, couches, chairs and tables needed to be moved or removed to accommodate a workable space within which it would function.

This form of movement tracking is useful for individuals or small groups of people in enclosed environments with no thoroughfare, though the acquisition time for precise positioning, even with an Xbox One Kinect 2, was still too slow, and the Kinect 2 was discontinued in 2017. The newest development kit for the next generation of Kinect is the Azure Kinect, which was announced in February 2019.

Current technology is still extremely inaccurate, easily confused and immature, with a limited set of standalone use cases. Extremely accurate natural free-standing position tracking is unlikely to be useful as a mass input device, however in conjunction with speech recognition it could provide vital contextual information to improve command interpretation accuracy. It also has applications in noisy environments, where an individual is isolated in front of a device such as a television and wishes to change channels with a gesture without using a physical remote control.

Neural

Brain Computer Interfaces (BCIs) allow interaction through the measurement of brain activity, usually via electroencephalography (EEG). EEGs use electrodes placed on the scalp and are cheaper and less intrusive than functional MRI (fMRI), which tracks blood flow through different parts of the brain and, whilst more accurate, is far from straightforward.

In the mid-1990s the first neuroprosthetic devices for humans became available, but they took a great deal of concentration and the results were extremely difficult to repeat reliably. By concentrating intensely on a set thought it was possible to nudge a cursor on the screen in a certain direction, however this wasn’t very useful. In June 2004 Matthew Nagle received the first implant of Cyberkinetics’ BrainGate, intended to overcome some of the effects of tetraplegia by stimulating the nervous system. In 2016 Elon Musk invested $27M USD in a company called Neuralink, which is developing a “neural lace” to interface the brain with a computer system.

It remains extremely dangerous to interface directly with the brain, however it is necessary to explore if this is to become useful in future, since the amount of data we can reliably extract from sensors sitting on our scalp is very limited due to noise and signal loss through the skull. We therefore need implants that connect directly with neurones before we can get data in and out at a rate that will ever be useful enough to overtake our conventional senses.

Attempting to guess how far off that inflection point is at this moment is extremely difficult. That said, when it comes it will come very quickly, and some people will decide to have chips implanted that allow them to out-perform other people at certain tasks. Even once the technology becomes safer and affordable, there will always be ‘unenhanced’ people who choose not to have implants, and mass adoption might still take a long time depending on the rewards versus the risks.

Despite many claims, no one really knows exactly how fast a human can think. Guesstimates, as our brains relate thought to speech, are somewhere between 1,000 and 3,000 words per minute, however this is a very broad range. In terms of writing as a task there’s the raw word-thinking rate, but when you’re writing something conventionally you will also be reading back, reviewing, revising and rewriting, as these are key parts of the creative process; otherwise what you end up with is most likely either gibberish or just not worth publishing.

Beyond that, there’s an assumption that descrambling our thoughts coherently is even possible. More than likely some training will be necessary: in the same fashion that we currently have to rephrase our words for a machine to interpret a command, re-ordering our thinking might be required, initially at least, to get a usable result. Add to this that multi-lingual people may think words in a specific language or mix languages in their thinking, and how a neural interface could even begin to interpret that is a very long way off, most likely not in our lifetimes.

More in Part 2

Next we’ll look at outputs.

Technology 2019-05-16T20:00:00+10:00 #TechDistortion
Back To The Mac https://techdistortion.com/articles/back-to-the-mac

It’s been a long series of experiments beginning in the mid-2000s when I moved from Windows Vista to MacOS Tiger, then to the iPad in 2011 running iOS, back to Windows 10 on a Surface Pro 4, back to an iPad Pro in 2016, trying a sole Apple Watch LTE as my daily device and finally now back to a Macbook Pro Touchbar running Mojave.

Either I’m completely unprincipled in my use of technology, or perhaps I’d prefer to think of myself as one of the few stupid and crazy enough to try every different mainstream technological option before reaching a conclusion. Whilst I admit that Everything is Cyclic, it is also a quest for refinement. Beyond that sentiment, naturally, as the field of technology continues to evolve, whatever balance can be found today is guaranteed not to last forever.

If you want the TL;DR then skip to the Conclusion and be done with it. For the brave, read on…

Critical Mass for Paperless

Ideally computers would replace paper and ink for communicating ideas in smaller groups in person, and replace overhead projectors and whiteboards as well for larger groups, but they haven’t. The question is simply: which is easier?

We are all able to pick up a pencil and write as we were taught at school, yet despite typing being an essential skill in the modern world, many people cannot touch type, and with keyboards on small glass screens now coming in all manner of non-standard sizes, even that 80s/90s typing skill presents difficulties for skill-level equalisation among the populace. (I’m now beating most 15-25 year olds in typing speed tests, as they’ve learned on smartphones, away from standardised physical keyboards.)

The iPad Pro with the Apple Pencil represented the best digital equivalent of an analogue pen or pencil, and hence for nearly 2½ years now I have not needed to carry an ink-based pen with me. At all. As an engineer I’m not (generally) interested in sketching, and whilst that’s something I can do I’m not particularly good at it, so I use the Apple Pencil to take notes. Unlike ink-on-paper notes though, I can search through all of my notes easily with handwriting recognition.

The use of iPads for this purpose has increased significantly in our office (no, not entirely because of me, though I was the first I’m aware of to do it there), and it has increased because it is so much better than ink on paper. The amount of photocopier and scanner usage has dropped significantly and it’s only a matter of time before there is a transition away from them altogether. Like the fax machine, soon there will be one photocopier per floor, then one for the building, and then none at all, within a decade.

The paperless office may finally arrive; a few decades behind schedule, but better late than never.

Fighting the Form Factor

A term I’ve come across in programming is “Fighting the Framework” which is meant to illustrate that Frameworks and APIs are written with an intent, with data structures, methods and objects within all cohesively designed around a specific model, view and/or controller, inter-object messaging and so on. If you choose to go around these structures to create your own customised behaviours, doing so represents significantly more work and is often far more error-prone as you are going against the intended use and nature of the frameworks.

I’d like to propose that there are people that love technology that are obsessed with taking devices with a specific form factor and making them “bend” to their will and use them in ways that fundamentally conflict with their design intention. Irrespective of whether you believe pushing the boundaries is a good practice or not, there are limits to: what is possible; what is practical; and what can be expected realistically when you fight the form factor.

Examples include the commentary around the iPad or tablets in general, still “just being a tablet” meaning that they are predominantly intended to be used as consumption devices. Of course that’s a reductive argument since content comes in many forms, written, audible, visual at a very basic level, and within each there are blends of multiple including newspapers, comic books, novels, TV Shows and Movies. The same argument works in reverse whereby according to the currently popular trope, it’s “too hard” to create content on a tablet and therefore it is and can only be a consumption device.

The fundamental structure of the iPad (iOS more specifically), the constraint of a single viewport and the requirement to cater for the lowest common denominator input device, the human finger, make it difficult to directly copy ideas and concepts from desktop devices, which have 20 or more years of trial, error and refinement behind them. As time goes on, more examples of innovation in that space will develop for audio (e.g. podcast editing in Ferrite) and video (Luma Fusion), and although these will not satisfy everyone, only a few years ago there were no equivalent applications on iOS at all.

In the end though, there is no easy way for the iOS form factor (both physical and operating system) to permit certain important, proven capabilities to a specific class of application designs and use cases. For these unfortunate classes, fighting the form factor will yield only frustration, compromise and inefficiency.

Multiple-Screen

You can’t beat pixels (or points). Displaying information on multiple screens, side-by-side (or in near proximity if not perfectly aligned), and importantly being able to visually compare, copy and paste seamlessly between them, is a feature that has existed and been taken for granted on desktop computers for decades.

On larger-screened iOS devices this feature has been added (to an extent) with slide-over and side-by-side views, however copy and paste between applications isn’t widely supported and comes with several caveats, and most importantly there aren’t enough pixels for a large number of side-by-side review tasks. The larger the documents or files you need side by side, the worse it is on an iPad.

iPads have supported application-specific monitor output which isn’t just a mirror of the iPad screen, however support for this is rare and bound to the application. There’s no generic way to plug in a second, independent monitor and use it for any general purpose. Then again, there’s no windowing system like on the desktop so without a mouse pointer or a touch-interface on the connected screen, how could the user interact with it?

Some have proposed that in future multiple iPads could be ‘ganged’ together, but apart from this being cost-prohibitive, it’s unlikely for the same reason that ganging iMacs together isn’t supported anymore (Target Display Mode ended in 2014). Beyond this, no existing iPad (even if it supports USB-C) can be chained to support more than one additional monitor. Most current laptops and desktops support two additional displays, at a combined cost significantly less than a multiple-ganged-iPad-Pro solution.

Navigation Methods

Scrolling and navigating around large documents is slow and difficult on an iPad, with few shortcuts; many applications lack search functionality, loading large files can take a long time, and there’s a lot of fast-flick-swiping to get around a document. These aren’t issues on a desktop operating system, with search baked into practically every application, Page Up/Down, and scrolling via scrollbars, trackpads and mouse wheels, all of which are less obtrusive and overall much faster than flicking for 30 seconds to move a significant number of pages in a document.

Functional Precision

The capacitive touch screen introduced with the iPhone and subsequently with the iPad made multi-touch with our highly inaccurate built-in pointing devices (our fingers) a reality for the masses. As an input method though it is not particularly precise and for that a stylus is required. The Apple Pencil serves that function for those that require additional precision, however pixel-perfect precision is still faster and easier with an indirect positioning mechanism like a cursor.

Conclusion

My efforts to make Windows work the way I needed it to (reliably) weren’t successful and the iPad Pro met a great many of my computing needs (and still does for written tasks and podcast editing). However I was ultimately trying to make the system do what I needed, when it fundamentally wasn’t designed to do that. I was fighting the form factor and losing too much of the time.

Many see working on the iPad Pro exclusively as a challenge, with complex workarounds and scripts to do tasks that would be embedded or straightforward on a Mac. Those people get a great deal of satisfaction from getting those things to work, but if we are truly honest about the time and effort expended to make those edge cases function, taking into account the additional unnecessary friction in so doing, they would be better off using a more appropriate device in most cases.

For all of the reasons above I came back to the Mac and purchased a MacBook Pro 13” 2018 model, and I have not regretted that choice. I am fortunate that my company has provided a corporate iPad Pro 2, which I also use every day for written tasks. I feel as though I am no longer fighting against the form factor of my machines, making my days using technology far less stressful and far more productive. Which, in the end, is what it should be about.

Technology 2019-05-11T13:45:00+10:00 #TechDistortion
From GitLab to GitHub https://techdistortion.com/articles/from-gitlab-to-github

I previously wrote about a new website publishing workflow that used the Netlify CDN as the front end, backed by my own private GitLab repository, hosted on an OpenVZ 2GB SSD VPS from Hosted Simply. I wanted to have my own fully private repo for projects and to host the websites I maintain for as little expense as possible.

So about that.

After having my VPS shut off by the Hosting company due to high CPU usage, and experiencing multiple build failures and web interface non-responsiveness, I tweaked every setting that I could, disabled everything non-essential to running GitLab, and finally gave in.

It turns out that following GitHub’s acquisition by Microsoft in late 2018, they decided to make private repos completely free and announced that in January this year. By that time I’d already built my GitLab instance, but with the issues I was having I decided to switch between the two. The switch took all of about one hour, Netlify integrated without complaint, and my GitLab is now disabled.

I don’t blame the hosting company for protecting other users in the OpenVZ Shared environment, that’s totally fine. Ultimately the 2GB VPS simply wasn’t enough for the GitLab instance to function on. Looking back there were some updates applied that fixed a bug I was experiencing but bundled with that bug fix was new functionality that caused higher memory and CPU usage. Hence what used to work (just barely) on my VPS, would no longer function reliably without a higher spec.

GitLab has a lot of enterprise-type features that run in the background and consumed all of the memory, causing a lot of performance issues on the VPS I had available. If I didn’t mind spending more money I could have reinstalled it (and maybe in future I will do that), but for now GitHub is working much better with Netlify and technically it’s free - so there’s that.

Technology 2019-03-16T07:45:00+10:00 #TechDistortion
Engineered Space Take 2 https://techdistortion.com/articles/engineered-space-take-two

Some time ago I started my own Mastodon server and opened it up for invites. What I learned quickly is that I wasn’t alone and plenty of others were doing the exact same thing. There was no shortage of options for anyone wishing to join the Fediverse, including lots of bigger servers with far more funding than mine. I then learned more about the problems Mastodon faces on a server - there are a LOT of moving parts, and the gentleman driving the Mastodon standard was (and still is) having some trouble with direction now its popularity has exploded. My little server had only a handful of users, yet the VPS it was installed on was struggling; with constant delays, timeouts and an overall lack of reliability, I started looking for other options. My original instance used the address @chidgey@engineered.space, which was associated with Mastodon and was part of the spoken outro-ductions of all of my podcast episodes as the way to get in touch with me.

I investigated and fell in love with Pleroma, and wrote about how you can Own Your Social Graph late last year, mentioning Pleroma as my now-preferred Fediverse server; at the time it easily outperformed Mastodon on a VPS with only 256MB of RAM (Mastodon was slow even with 1.6GB of RAM). I tried it briefly to confirm its functionality on a sub-domain, @chidgey@pleroma.engineered.space, and after a few weeks tried a backend switch (moving Pleroma underneath the original Mastodon address and domain), only to discover that followers wouldn’t and couldn’t be migrated between the servers. Messaging was a complete mess and I was unable to follow, or be followed by, anyone that had followed me previously. I hence ended up sticking with my “new” Pleroma sub-domain for longer than I’d planned, and asked people to follow me there instead.

Since I wrote the social graph article there have been a few incidents with Pleroma as they progress towards a formal release. The first was a series of backend changes that meant it would no longer operate as reliably on low-spec VPSs like mine. The second was when the Pleroma team changed the ID data type for posts, which broke a lot of apps and scripts that I had come to rely on for various things (Auto-Posting, iOS apps, MacOS apps). Given how unreliable it had become at that point I decided it was time to shift to a newer, bigger VPS, and to try shifting back to my original domain again.

Now I have a freshly installed Pleroma instance on my original Mastodon domain, @chidgey@engineered.space, and my Pleroma sub-domain will be deactivated by the end of this month. For people wanting to do what I’ve done, effectively switching the backend whilst keeping their domain, it remains impossible to do so without losing followers. Interestingly, the sequence below is the only way I know of to pull it off:

  1. Start on Domain X
  2. Create a different Domain Y, then ask followers to follow you there instead
  3. Re-create your original Domain X, then ask followers to follow you there again

There’s currently no option that I’m aware of to import followers, or to auto-re-follow, if you swap out the server-side components. I have exports of my original timeline posts for both the Mastodon and first Pleroma accounts, but to date I have not been able to successfully import them. On the plus side, the broken apps and scripts have now been mostly fixed, with everything I need to use back up and running, fast and reliable again.

So in the end, apologies to all, but I’m done shifting servers and instances around. I think that for the broader Fediverse these sorts of issues when moving servers will inevitably lead to the same problems as EMail addresses. There is no one true EMail address for most people, and knowing someone’s address on the Fediverse will never be as simple as a single siloed solution, because it cannot be. Coming from a siloed world it’s annoying, but a small price to pay for more control over your own social graph.

If you’re looking for me I’m back at @chidgey@engineered.space and you can follow me there on the Fediverse.

General 2019-02-16T14:00:00+10:00 #TechDistortion
The Need For Speed https://techdistortion.com/articles/the-need-for-speed

After a lot of deliberation and consideration I’ve decided it’s time to push the web front-end further forward for all of my sites. Not happy with just going static with Hugo, and after many months of pushing local caching as far as I could on NGINX, I’ve finally joined the rest of the web developers from 3 or so years ago. All of my sites are now backed by the Netlify CDN.

Ultimately you just can’t beat a distributed, high-performance, low-latency Content Delivery Network. The website tests show anywhere from a 5x to an 11x page-load improvement on average from multiple points around the globe. Locally for me it’s been amazing, but then packets generally traverse the Pacific to get to my backyard so that’s not really surprising.

Wishing to have control of my environment (yeah, I know) I snagged an OpenVZ 2GB SSD VPS from Hosted Simply for a New Year’s special of $15USD/yr and built my own private GitLab repository, then linked that to Netlify. I’m now using a well-known developer workflow, with each site in its own self-contained Git repository and the GitLab remote origin mapped to the Netlify CDN, with a webhook for auto-deployment whenever I commit a file to the repo. In addition, since it’s Hugo and I want to publish pages into the future, I’ve also added a webhook to trigger a page rebuild periodically.
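The periodic rebuild can be as simple as an empty POST to a Netlify build hook, run from cron or similar. Here’s a minimal sketch (not my actual setup) assuming a build hook has been created in the site settings; the hook ID below is a placeholder:

from urllib.request import Request, urlopen

# Placeholder build hook URL; substitute the real one from the Netlify site settings
BUILD_HOOK = "https://api.netlify.com/build_hooks/REPLACE_WITH_HOOK_ID"

def trigger_rebuild():
    # An empty POST body is sufficient to trigger a fresh deploy of the latest commit,
    # which lets Hugo publish any future-dated posts that have since come due.
    request = Request(BUILD_HOOK, data=b"", method="POST")
    with urlopen(request) as response:
        print("Netlify responded:", response.status)

if __name__ == "__main__":
    trigger_rebuild()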

On the Mac I’m using the passable SourceTree App for Source Control and the awesome Sublime Text 3 for text editing, and on iOS I’m using the excellent Working Copy App with Textastic for text editing. To be honest I feel a lot more in control of what I’m doing now, and being able to run up my changes locally with Hugo, create custom development branches for public/private testing through Netlify and with the ability to rollback changes at the source code level, well, it makes web page maintenance and blogging a lot more like programming.

And as a programmer, I have to say I finally get why so many others do the same workflow. It’s pretty neat :)

Technology 2019-01-27T22:30:00+10:00 #TechDistortion
Fediverse Series: Definition https://techdistortion.com/articles/fediverse-definition

This third post in a series about the Fediverse focuses on micro-blogging platforms. My first introduction was to Mastodon, then Pleroma and finally most recently to Misskey. Let’s look briefly at each in turn.

Mastodon

Currently the most popular in terms of active users, Mastodon (approximately 2,500+ servers) originated in late 2016 and is a complex application that uses a long list of frameworks and components to deliver what is considered the best web user interface experience at the moment, for both end users and administrators. However, scaling the platform remains a concern and it is driven effectively by a single developer. It originally supported OStatus, but added ActivityPub support in v1.6, about a year after it launched.

Pleroma

Launched informally in 2017, Pleroma, like Mastodon, originally supported OStatus but adopted ActivityPub (through a tighter subset known as LitePub) in March 2018. At the time of writing, despite there being over 400 instances, it is still pre-v1 software with no formal release to date. Installation however is much simpler than Mastodon’s, and as a result it can run on extremely low-capacity, low-performance hardware. It has a native web user interface which is similar to Twitter in some aspects, however it also comes with Mastodon-FE (Front-End) and supports the Mastodon v1 API, allowing most Mastodon-compliant client and server applications to work with it seamlessly.

Misskey

Reaching v1.0 in April 2018, Misskey is developed predominantly with a strong Japanese influence, elegant styling and a very tidy web interface design, and conforms to the ActivityPub protocol. It has similar installation requirements to Mastodon, though is considered easier to install and maintain, and at the time of writing has only 40 servers in operation with posts predominantly in Japanese, though it is gaining popularity in other regions.

Server vs Instance and Application vs Fediverse

Let’s be clear: a server running the software for any of these three platforms is “an instance” of that software. Hence you can usually consider an instance to be a server, though technically if you’re load-balancing then things get more hazy. Each instance is for a single domain or subdomain, so it still makes sense to think of an instance by its domain name rather than calling it a server (technically).

It’s also better to separate the application names, such as Misskey, Pleroma and Mastodon, from the federated protocols they utilise, such as OStatus and ActivityPub. During the OStatus era (which technically we’re still in, however OStatus use is on the decline in favour of ActivityPub/LitePub) the term “Fediverse” was coined to describe the network of federated messaging between different platforms and applications using a common protocol. As naming goes it seems to have stuck, despite suggestions to use IndieWeb or ActivityWeb, alternative naming conventions based on the respective protocol names.

What I’ve Installed

Well, the Fediverse sees all, including my Mastodon and my Pleroma servers, and tells the story. I’ve had no end of problems with my Mastodon server: with the higher VPS specification needed to run it, problematic upgrades and poor availability, I decided to give Pleroma a shot and haven’t regretted it. They recently added web push notifications, which is really great, and my script authentication issues are also resolved, so my automation scripts are behaving at last. Having said that, make no mistake, they aren’t claiming it’s done yet, and their current optimistic GitHub tag of v0.9.9 tells the story indirectly, though the Pleroma development team are keen to ensure it’s as solid as possible before touting a 1.0 release.

Under-the-hood Migration

To date, swapping the server software and messages under the hood, as it were, isn’t supported. Meaning if you start up an instance using Mastodon, with posts/toots/messages on that instance, then because of how messages are represented by the software on the server it’s currently not possible to take that message list and migrate the entire lot to a Pleroma server running on the same domain. I tried this and too many things broke.

I suspect migration may someday be possible but for now at least shifting to a different domain (or in my case, sub-domain) was the next best option.

No More Mastodon: FEDIVERSE

The truth is that I might set up a Misskey server someday, I might set up a blog that federates using Plume or WriteFreely, or a Hugo-ActivityPub bridge might be developed, and I want to be able to describe the means to find me, NOT the technology. In modern conversation we might say “Send me an EMail”; we don’t say “Send me an Outlook” or “Send me a Thunderbird”, which, well, could be interesting. In the same fashion I no longer intend to tell people to find me on Mastodon, or Pleroma, or whichever platform I’m using, since they all federate. You can find me now, on the Fediverse.

Updates Across the Board

To reflect this I’m adopting the proposed Fediverse iconography on all of my sites, and will be updating URLs, podcast intros/outros, you name it, to reflect the Fediverse, so when you hear me mention it you’ll know what and why. TEN was updated recently to reflect this.

So if you’re looking to get in touch, you can follow me on the Fediverse @chidgey@pleroma.engineered.space, just log into your Fediverse account on any instance of Misskey, Pleroma or Mastodon, type that into the search box and you’ll find me, follow/remote follow me and say ‘Hello’.

Catch you on the Fediverse everyone :)

Technology 2019-01-01T22:10:00+10:00 #TechDistortion
Fediverse Series: Facebook https://techdistortion.com/articles/fediverse-facebook

This second post in a series about the Fediverse (this one somewhat more tangentially) focuses on the usefulness of Facebook pages as they relate to the future of TEN, which has been used as a Full-Length Blog Link MicroBlogging page (of sorts). NOTE: I’m not going to be looking at all of the other ways Facebook is a problem, and if you want to look into Fediverse alternatives there are a few, including Diaspora.

Not wishing to re-hash the entirety of my previous post, a quick refresher about Twinkblogs: links to full-length blog posts, posted as microblog entries, that aren’t intended to convey much other than a title and some brief text, drawing potential listeners to the episode in question. In that regard it’s the size of the audience you can reach through that channel that matters the most.

So far as feedback via mentions goes, if you’re interested in comments on your podcast then that’s something worth exploring, and whilst Facebook had this functionality I seldom got comments via that page. Any feedback from listeners is welcome, either via the feedback form or via the Fediverse directly to me personally.

Federation support may someday include embedded audio, and the simplicity of being able to consolidate everything into a single window is quite appealing. Unfortunately I remain concerned that such functionality is unlikely to be as fully featured or as useful as a dedicated podcast client application. For this reason, until federated posting via Hugo with embedded audio becomes a reality, it will remain off the table.

Facebook Page Algorithm

Lifting the mostly uninteresting curtain behind the TEN Facebook Page, the same number of posts occurred in 2018 as in 2017. In 2018 there were only 3 Likes in 12 months, and all but six of the Notifications I received on the page came from Facebook helpfully suggesting “…people who like Engineered haven’t heard from you in a while…Write a post…” Uh-huh. Thanks. The reach of these posts, expressed as a percentage of Likes, averaged 22% in the month of December. Some 15 months earlier it regularly exceeded 100%.

Early in its life, Facebook encouraged businesses, groups and organisations to host their pages on Facebook for organic growth and wide distribution. However, changes to Facebook’s algorithms in the past few years, with dozens of weighting factors now used to tweak what people see in their timeline, make trying to get organic visibility essentially impossible, unless you want to A) try to game the system (sounds like a full-time job) or B) pay $43AUD to reach an additional 3,400 people per day, so claims another ‘helpful’ Notification from Facebook on the page. Uh-huh. No thanks.

Future Plans

Currently when a podcast episode goes up on TEN, an RSS feed scraper takes a copy of the title and a URL link to the episode, then publishes it to a Mastodon account. From there a second script takes that and cross-posts it to the Engineered_Net Twitter account; Facebook is updated manually later. With a significant following on Twitter, the Engineered_Net account will remain for the immediate future. However the same cannot be said of Facebook.
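For anyone curious what a scraper like that involves, here’s a rough sketch of the first step (not my actual script): read the newest item from the feed and post its title and link as a status. The feed URL, instance address and access token are placeholders, and the endpoint is the standard Mastodon statuses API, which Pleroma also implements:

import feedparser
import requests

FEED_URL = "https://example.com/podcast/feed.xml"   # placeholder feed address
INSTANCE = "https://example.social"                  # placeholder Fediverse instance
ACCESS_TOKEN = "REPLACE_WITH_ACCESS_TOKEN"           # application token with write scope

def post_latest_episode():
    feed = feedparser.parse(FEED_URL)
    latest = feed.entries[0]                         # newest item in the feed
    status_text = f"{latest.title} {latest.link}"    # just the title and the link
    response = requests.post(
        f"{INSTANCE}/api/v1/statuses",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        data={"status": status_text},
    )
    response.raise_for_status()

if __name__ == "__main__":
    post_latest_episode()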

Based on the above Twinkblog rationale, the manual posting requirements (Facebook’s API requires regular re-authentication, which is annoying), Facebook asking for money to ‘give back’ organic reach, and finally my move to gradually step away from Facebook, I’ve decided to close The Engineered Network page on Facebook. All other subscription methods will remain unchanged, including RSS for Causality, Analytical and Pragmatic as well as the TEN Master Feed. My recommendation is that people who have Liked TEN on Facebook and use it for show notifications either follow the TEN Twitter account @Engineered_Net or, better still, jump on the Fediverse somewhere and follow me @chidgey@engineered.space where I’m active every day.

Failing that, just subscribe in your podcast player app of choice. There’s PocketCasts on Android and iOS, Overcast on iOS, and Apple’s Podcasts app is also much improved in recent times.

Reflecting on podcast distribution for a moment: it’s funny (okay, it isn’t…it’s brilliant!) how an open standard like RSS, which powers podcast subscription and distribution, remains the best option, whilst centralised platforms like Facebook, once they get big, turn coat on everyone and charge for visibility. Hopefully this explains why so many people are leaving their Facebook pages and highlights some of the risks of using centralised, company-controlled sites for notifications and distribution.

Technology 2019-01-01T20:35:00+10:00 #TechDistortion
Fediverse Series: TechDistortion https://techdistortion.com/articles/fediverse-techdistortion

This first post in a series about the Fediverse focuses on three aspects as they relate to the future of TechDistortion (this blog): Full-Length Blog Link MicroBlogging, WebMentions and Federation support (ActivityPub/LitePub/OStatus).

Twinkblog

Links to full-length blog posts, posted as microblog entries, aren’t intended to convey much other than a title and some brief text, drawing potential readers to the full article. I mentioned the phenomenon of Twinkblogs 5 years ago, but really it’s an avenue for communicating that an article exists, not the content of the article itself. In that regard it’s the size of the audience you can reach through that channel that matters the most.

WebMention

The IndieWeb community is popularising the WebMention as a method of allowing users to reply to a blog or article, with the article then able to aggregate all comments, mentions and reblogs as part of the article. Any WebMention-compliant site would allow that interaction to occur, creating a common point for all comments in a federated way between users from different accounts on different systems, like Disqus but decentralised and more flexible. If you’re interested in comments on your blog then that’s something worth exploring. I’ve never had comments enabled on TechDistortion in the decade I’ve been writing articles and don’t intend to add them now. Any feedback from readers is welcomed, either via the feedback form or via the Fediverse directly to me personally.
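For anyone unfamiliar with the mechanics, sending a WebMention is refreshingly simple: the sender discovers the target page’s webmention endpoint (advertised via an HTTP Link header or a link rel="webmention" tag) and POSTs two form-encoded URLs to it. A minimal sketch follows, with all URLs as placeholders and endpoint discovery glossed over:

import requests

SOURCE = "https://example.com/my-reply"        # the page doing the replying (placeholder)
TARGET = "https://example.org/some-article"    # the article being replied to (placeholder)

# Endpoint discovery is simplified here; a real sender checks the target's
# Link headers and HTML for rel="webmention" as per the W3C recommendation.
ENDPOINT = "https://example.org/webmention"    # placeholder endpoint

response = requests.post(ENDPOINT, data={"source": SOURCE, "target": TARGET})
print(response.status_code)                    # a 2xx response means the mention was accepted or queued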

Federation

Not all platforms are as text-length restrictive as Twitter (280 characters) and Mastodon (500 characters), with Pleroma allowing administrators to set whatever limit they like. On my Pleroma instance I’ve left it at the default 5,000 characters but might change that at some point in the future. The idea is that using ActivityPub/LitePub a blog could be subscribed to as if it were a regular account on the Fediverse. That seems convenient, however scrolling through a 9,000-character article in a smartphone application intended for short posts might not be as clean an experience as a dedicated long-article reading application like Unread (for example). That said, the simplicity of being able to consolidate everything into a single window is quite appealing. Unfortunately, when moving away from Statamic to Hugo, Federation wasn’t something I had in mind, and since neither supports it, Federation will not be explored in the short term.

Future Plans

Currently when a blog entry goes up on TechDistortion, an RSS feed scraper takes a copy of the title and a URL link to the article, then publishes it to a Mastodon account. From there a second script takes that and cross-posts it to the TechDistortion Twitter account. Counting the number of actual people and lists following the TechDistortion Twitter account, there are more real people subscribed to the site’s RSS feed directly, and also to both my personal Mastodon and old Twitter accounts.

Based on the above Twinkblog rationale, and also with my move to gradually step away from Twitter, I’ve decided to close the TechDistortion Twitter account. I will instead be posting those links only to my personal Fediverse account, which is copied to my ‘old’ personal Twitter account. RSS will always remain for anyone to subscribe to. My recommendation is that people following the blog on Twitter either follow my ‘old’ Twitter account @johnchidgey or, better still, jump on the Fediverse somewhere and follow me @chidgey@engineered.space where I’m active every day.

In future if a Hugo–>Federation intermediary service is developed I’ll probably look into that, since I really like Hugo ;)

Thanks everyone.

Oh yeah…Happy New Year.

Technology 2019-01-01T17:30:00+10:00 #TechDistortion
7-11 Slurpees https://techdistortion.com/articles/7-11-slurpee

Being that it’s the middle of summer in my hemisphere, after a hard day’s work in the yard a nice cold frozen drink is always well received. Recently the pricing war between McDonald’s, Hungry Jack’s and the old-faithful 7-11 has led to an over-supply of, and low prices for, frozen drinks. All that’s lovely for consumers, and if you’re keenly interested in the zilch-sugar (sugar-free) options then 7-11 is the way to go (or, if large amounts of sugar don’t worry you, I still think 7-11’s Slurpees have more and nicer syrups).

7-11 Slurpees

They offer three primary sizes, but if you actually measure the cost per volume it shows how 7-11 are making their money: they want you to upgrade to the bigger drink.

Name    Size (mL)   Cost   Cost/L
Large   650         $1     $1.54
Super   850         $2     $2.35
Mega    1150        $3     $2.61

Based on the above, I’d suggest that if you’re REALLY thirsty, getting two Large drinks is the clear winner (1.3L for $2 versus 1.15L for $3). Otherwise stick to the $1 size and save your money.
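
If you’d like to check the arithmetic (or rerun it next time the prices change), it’s only a couple of lines:

    # Cost per litre for each size, plus the "two Larges vs one Mega" comparison.
    sizes = {"Large": (0.65, 1), "Super": (0.85, 2), "Mega": (1.15, 3)}
    for name, (litres, cost) in sizes.items():
        print(f"{name}: ${cost / litres:.2f}/L")
    print(f"2 x Large = {2 * 0.65:.2f} L for $2, vs Mega = 1.15 L for $3")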

Oh yeah, and don’t drink it too quickly either…

]]>
Technology 2018-12-20T21:30:00+10:00 #TechDistortion
Fediverse: Own Your Social Graph https://techdistortion.com/articles/own-your-social-graph https://techdistortion.com/articles/own-your-social-graph Fediverse: Own Your Social Graph Imagine a world where you could pick and choose what server backend you wanted for your social media (if you want to - like picking a bank to bank with?), pick a social media identity that is truly canonical for all time (you know, like your name is in the real world), and pick whatever application(s) you want to use on your platform of choice so you get to interact the same way no matter who you’re talking to. They’re ALL your choices. Are we there yet?

Nope.

This is the story so far as we all collectively (hopefully) move towards that goal.

In April 2017 I wrote about Engineered Space and recorded an episode of Pragmatic about my experiment with Mastodon. I was attempting to ‘take control’ of my Social Graph and Mastodon held a promise of that.

The reality hasn’t entirely lived up to expectations for me so far, although I still prefer it to Twitter and Facebook. The truth is that currently Mastodon is still a silo of a sort, which I discovered as I attempted to move to a different platform.

One EMail-like social address to rule them all

When I started @chidgey@engineered.space I had a longer-term intention in mind: purchase a domain that I liked, and then with OStatus and now ActivityPub, it should be possible to use whatever standards-compliant backend server setup I wanted, and I should be able to retain the same Fediverse username for all time.

Not only that, I could also then choose whatever front-end client I wanted to and it would connect to the standards-compliant backend server infrastructure I was running.
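
Under the hood that address-to-server mapping is done with WebFinger: any compliant server answering for the domain will tell you where the actor actually lives, which is what makes the backend swappable. A quick sketch of the lookup, assuming the requests library and using my own address as the example:

    # WebFinger lookup: ask the domain in the address where the actor lives.
    # The "self" link in the response points at the ActivityPub profile on
    # whichever backend currently hosts the account.
    import requests

    handle = "chidgey@engineered.space"
    domain = handle.split("@")[1]

    resp = requests.get(
        f"https://{domain}/.well-known/webfinger",
        params={"resource": f"acct:{handle}"},
        timeout=10,
    ).json()

    actor_url = next(
        (link["href"]
         for link in resp.get("links", [])
         if link.get("rel") == "self" and "activity+json" in link.get("type", "")),
        None,
    )
    print(actor_url)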

What’s Wrong With Mastodon?

There are three issues I have with it: how its feature-set is being prioritised, a lack of testing for upgrades leading to regular mis-steps, and finally that it’s resource-hungry. I was running an instance with only my account and about 10 others on it, with minimal traffic, on a VPS with 1.6GB RAM and a reasonable CPU, and if I tried to refresh my timeline it would regularly throw a 502 error. Image posts regularly failed, and it would also completely fall over once or twice every week, requiring a server reboot to recover with no obvious cause. In short, it became a hassle.

The production guide to install Mastodon is very good though, with plenty of examples for different Linux distros, but installation takes a bit of effort, requiring Rails, PostgreSQL, Redis, Sidekiq, NodeJS and ElasticSearch (if you want search functionality at all). It also wouldn’t install and run on CentOS 6, and whilst I don’t mind admitting that CentOS 6 has had its day, sometimes you can snag a cheap VPS that won’t run CentOS 7. Upgrading required a series of git pulls, rake commands and database migrations, and could take half an hour to fully compile, requiring me to kill the NGINX server or it would never complete.

I was advised to throw more money at the problem: I could upsize my VPS at more expense, or I could shift my hosting elsewhere and let someone else deal with it. Alternatively, I could look for a different ActivityPub-compliant platform…

Pleroma

Lain walks through what Pleroma is and I won’t repeat that, but essentially it’s 90% of what Mastodon is while only requiring Elixir and PostgreSQL. It runs on CentOS 6 (although you won’t find any production guides for that) and it’s happily running on a Speedy KVM VPS (DAL-VOL0) with an E3-1230 3.2GHz CPU, 256MB ECC RAM and a 12GB HDD for $18USD/yr. If it keeps chugging along nicely, I’ll fork out for three years at $36USD ($1/month).

Not only is it cheap to run, it’s quick. I can refresh and refresh and fill gaps in my timeline and it responds in a second or two and never fails. Uploading images works every time now, and if you’re like me and not really into the TweetDeck-esque Mastodon FrontEnd (Pleroma offers that front-end option if you really want it) then it has a far more Twitter-esque Pleroma FrontEnd that I much prefer.

Before you think “John’s ready to marry Pleroma…” stop. It’s not perfect. In fact there’s a few significant drawbacks:

  • There are no dedicated Pleroma client applications that I’ve found, but because Pleroma also implements the Mastodon API, most Mastodon client applications will mostly work with Pleroma
  • Web Push Notifications aren’t implemented yet (since most Mastodon clients use this for push, that’s annoying). More on this in a minute…
  • Many site layout tweaks are buried in the config.exs file on the server
  • Documentation is generally lacking in a lot of areas if you want to deploy/understand it
  • It’s v0.9 at time of writing (Yes, it’s not ‘officially’ released yet…)

On the plus side some of my favourite Mastodon apps work almost perfectly with it (notifications generally notwithstanding):

iOS

MacOS

All of the above notwithstanding, there’s a strong beating of the open-source drum by the Pleroma development team. Whilst Gargron on Mastodon makes no bones about the fact that he wouldn’t mind if Twitter collapsed tomorrow, he supports whatever clients, forks of Mastodon and other projects support ActivityPub, in whatever form they might take. The Pleroma team, on the other hand, have actively and aggressively shamed non-open-source developers trying to get more involved with Pleroma. I’ve seen sole developers making apps that are free but closed-source, or paid and closed-source, and even federated services like Micro.Blog trying to open up connectivity with Mastodon, all shunned because they aren’t open source.

The future of federation will ultimately be a blend of open and closed source software, running on servers and clients from different groups, individuals and companies around the world, all talking a common standard or sub-set of standards. The fear that one closed-source player will “take over” neglects the nuance that people vote with their feet: if a corporation does wrong by its users, they will eventually abandon that server for another (like many have already abandoned Twitter for Mastodon).

The “Open Source” mantra is an ideology, not

Pleroma need to consider their position in the cross-platform game and support other standards to improve interoperability and usability, otherwise they will be outgrown by Mastodon and become irrelevant before they’ve really started.

Attempting to Migrate

Mastodon provides the ability to export the list of accounts you follow as a CSV: this worked as expected. Pleroma imported what it could, but when instances are offline (I discovered mine wasn’t the only Mastodon instance that was regularly offline) and Pleroma couldn’t verify that an imported user actually existed, it wouldn’t add them to the follows list. Over the course of a week I progressively added all but 6 of the accounts I follow, with the import script in Pleroma smart enough not to create duplicates.
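
If you’re attempting the same migration, it’s worth pre-checking the export to see which accounts live on instances that are currently unreachable, since those are the ones the import will quietly skip. A rough sketch, assuming the requests library and a one-address-per-row CSV (the filename and exact column layout vary between Mastodon versions):

    # Pre-flight check of a Mastodon follows export: for each account address,
    # see whether its home instance currently answers a WebFinger lookup.
    # Accounts on dead or sleeping instances will show up here. Filename and
    # CSV layout are assumptions; adjust for your export.
    import csv
    import requests

    unreachable = []
    with open("following_accounts.csv", newline="") as f:
        for row in csv.reader(f):
            if not row or "@" not in row[0]:
                continue  # skip header or blank rows
            handle = row[0].strip().lstrip("@")
            domain = handle.split("@")[-1]
            try:
                requests.get(
                    f"https://{domain}/.well-known/webfinger",
                    params={"resource": f"acct:{handle}"},
                    timeout=5,
                ).raise_for_status()
            except requests.RequestException:
                unreachable.append(handle)

    print(len(unreachable), "accounts on unreachable instances:", unreachable)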

Exporting my “Toot” history proved impossible through the web interface in Mastodon. I tried many times and it failed every single time.

]]>
Fediverse 2018-10-31T07:00:00+10:00 #TechDistortion
Accessibility Driven Opportunism https://techdistortion.com/articles/accessibility-driven-opportunism https://techdistortion.com/articles/accessibility-driven-opportunism Accessibility Driven Opportunism Originally Drafted 13th October, 2016

We’re lazy creatures. That, and things cost money. When things take too much effort or cost too much money, we don’t take advantage of them; only those people with enough spare time or money do. I first came across this phenomenon when studying traffic engineering. Widen a freeway and the amount of traffic it conveys will increase to utilise the new capacity. The newly accessible capacity of the road quickly becomes known to local residents that previously took public transport, rode bicycles, walked or just didn’t travel at all, and they decide to utilise this additional capacity. The opportunity to travel more directly, in more comfort or more quickly than the alternative drives the opportunistic behaviour to utilise that additional capacity. Theoretically it should be possible to build a freeway with an extremely large number of lanes, whose capacity far outstrips the physical quantity of vehicles that could ever use that route between two set locations, even allowing for external visitors. The sheer cost of doing so generally precludes this from ever happening on a macro scale, but the limit still exists. Hence increasing accessibility eventually reaches a point of diminished potential, beyond which it is unlikely to ever be exceeded.

A more popular example I came across recently relates to watch bands on an Apple Watch. The watch itself is quite expensive, however unlike many other watches in the world, its bands can be easily replaced in less than a minute when the wearer needs to exercise, change into a dressier outfit or go off to work. Changing the band changes the appearance, feel and usefulness of the watch without having to own a second watch, as was previously the tradition: two watches, one for normal day use and one as a dress watch. Replacing bands on a traditional watch is a cumbersome, frustrating exercise, but with this watch in particular that’s no longer the case. As changing the bands becomes more accessible, people change bands more often. As cheaper alternative bands become available, this drives accessible choices for even more people. Of course people eventually reach a limit whereby they have more than enough bands to cater for every circumstance they personally desire, at which point the maximum potential is once again reached.

A final example is changing code in mass-deployed devices. When I was starting out in my career, software updates were handled by physical ROM ICs attached by sockets to the motherboards of the control cards in the field. Changing out the firmware was a manual, slow, annoying task that was very expensive. Many locations didn’t have a network connection of any kind, wireless was very uncommon and even less common for data connectivity, so this was just accepted as reality. As time progressed and the internet became what it is today, with mobile data networks becoming wide-spread, there was a more and more accessible data path from manufacturers to end devices. Over-the-air updates then became the preferred method of fixing problems, and this accessibility drove opportunistic updating of end devices. This seems like a good thing at first, with manufacturers able to correct problems even after their devices had left the factory, however it drove manufacturers and engineering companies down another route: minimally tested software. As the speed of fixing bugs after the device shipped improved, management pushed the key features (heavily tested, we hope) out the door with the devices quickly, leaving many features far less tested and requiring future OTA updates. Provided these were low-impact bugs then that’s probably a good trade-off, but end users don’t always see it that way.

As always, no one complains about good software; they only complain when it breaks, and just because you can ship something less tested today with the aim of “fixing it later” doesn’t mean that you should. The opportunity to quickly fix problems is tempting, but rigorous testing and qualification will generally save time and money in the long run. The only question to ponder is whether that accessibility has driven opportunistic thinking and, if it has, what opportunity cost you will incur for it. Opportunity cost cuts both ways.

]]>
2018-10-15T16:45:00+10:00 #TechDistortion
Three Site Strategy https://techdistortion.com/articles/three-site-strategy https://techdistortion.com/articles/three-site-strategy Three Site Strategy After a lot of deliberation and consideration I’ve decided it’s time to refine (slightly) where I keep what on my sites. In the past I’ve maintained two primary web-presences: TechDistortion and TEN. The problem was that I didn’t feel like grouping all of my podcasts together under a single site in 2015 made sense, so I kept older pre-TEN episodes of shows under TechDistortion, with only newer episodes kept on TEN. The other problem was that TD had blog posts on a wide variety of topics including Statamic guides, cartooning (it was a brief fancy for a while), tech-related blog posts and engineering-related blog posts.

Under this grouping, someone visiting TD would find podcasts, articles/posts on a huge variety of topics and a few references to TEN, and someone visiting TEN would find podcasts and the occasional TEN-specific post, but miss some back-catalogues of shows. Based on years of feedback, and with the excuse of migrating away from Statamic, I’ve finally finished re-organising my online web miscellany as follows…

TEN

The Engineered Network TEN will now be the sole repository for all podcasts I’ve ever made, past and present with a new archived section that contains all past episodes of shows long since ended. The hosts and guests list has been extended to include all shows, past and present. I intend to do more with TEN in the future including transcriptions and transcription search which I am determined to complete. (For those receiving the NewsLetter, you already know the sad story there…)

Control System Space

A new site, Control System Space, launched in August this year and its focus is entirely engineering-specific articles. (I’m going through a ‘space’ phase, clearly…) In truth it was my first real attempt at a Hugo website and I’ve learned a lot since then. I’ll probably revisit/tweak/refine it in coming months, but the intentions behind it are three-fold:

  • Be a repository for professional White-Papers, supporting independent knowledge-sharing in Control Systems Engineering
  • Remove J-O-B “job” related posts from TechDistortion and keep them together in a single place
  • Be a professional-facing outlet that I can point those I work with, and the greater CSE industry, towards

As a litmus test I posted two articles on LinkedIn and distributed links within the organisation, both in and beyond the Automation Systems Team at work, and they were well visited and very well received. In this way engineers that are less interested in my thoughts on Apple or Microsoft will see the most heavily polished, relevant articles for them.

TechDistortion

TD will remain for blog posts, however there will be no podcast episodes and no engineering-specific articles there any more. In addition the whole site has been completely redone with a new, darker, high-contrast theme, with all articles merged into a common article feed.

The Future

It’s been an interesting journey from static (1996) to dynamic (WordPress, 2000s) to Statamic (2013-2018) and back to static again (Hugo, 2018-?), and with everything I’ve learned along the way, the tools we use aren’t always as important as the content. With Hugo, though, my life is easier, site maintenance is easier, and the sites are more responsive and reliable, which should leave more time for content. And now with the content hopefully more logically grouped by type and audience, anyone visiting will be more likely to find exactly what they’re looking for.

]]>
Technology 2018-10-14T16:25:00+10:00 #TechDistortion
LTE Apple Watch App List https://techdistortion.com/articles/lte-apple-watch-app-list https://techdistortion.com/articles/lte-apple-watch-app-list LTE Apple Watch App List With my aforementioned goal to ditch my phone when outside the house and use the watch for as much as possible, I am compiling a list of all of the Apps that I’m using that meet my current needs, and noting gaps where they exist. The configuration I use is a 42mm Stainless Steel Apple Watch Series 3 with LTE enabled and AirPods connected.

An app’s usability is assessed against three criteria:

  • (Create) Can create items on Watch
  • (Modify) Can modify (including delete) items on Watch
  • (Sync) Can sync new/changed items to Cloud via LTE

In addition on the Watch there are three primary methods of data input:

  • (Siri) Siri voice dictation (speech-to-text)
  • (Num) Numeric Keypad (where applicable)
  • (Scr) Scribble finger drawn letters, one by one, on the watch screen

The following table is listed in order of Apple native apps first (denoted with an asterisk *), followed by installed third-party apps, followed by notification-initiated interactions with apps not physically installed on the Watch.

Function App Create Modify Sync Notes
EMail Mail* Y Y Sometimes Exchange/GMail (non-Apple) EMail generally works but not consistently. Read items don’t reliably sync their read marker status with the Cloud. Moved my work EMail across until Outlook gets LTE capability.
Music Music* N N N Synced playlist music only; streaming is coming in watchOS 4.1. Possible to add music to a playlist via the iPhone.
Locating Find My Friends* N N N/A Shows map, photos, names and distances, but the map sometimes doesn’t load. Huge update and I’m pleasantly surprised how well it works.
Messaging iMessage* Y N Y Emotional reactions, replies, scribble, Siri dictation
Navigation Maps* Y N/A N/A Siri can create new navigation requests, provides Turn-by-Turn Steps, Location on Map. No Live Map Navigation, but this is an understandable restriction given GPS and screen power drain.
Digital Wallet Apple Pay* Y N/A N/A Per Series 0, 1 and 2 it works without any wireless connectivity by design
Appointments Calendar* Y (Siri) Y (Delete only) Y Previously used Fantastical due to its configurability however creating Reminders/Events via Fantastical (Siri only) didn’t work over LTE. Can’t use Scribble to create appointments. Can’t modify appointment times on Watch, though can delete.
Calls Phone* Y (Siri/Num) N/A N/A Works via Speaker or either/both AirPods. Possible to pick up calls with AirPods even if they aren’t in your ear when the call comes in.
Weather Weather* N N N/A Locations have to be configured on iPhone first. Previous favourite apps were BeWeather, Rain Parrot, and Weather AU but none work on LTE. Still no app that shows the radar map on the watch that works in Australia. Dark Sky doesn’t work here. Alas.
Reminders Reminders* Y (Siri) N Y Can only create using Siri not via app. Can not modify anything once created and always put in Default reminder list.
Web Search Siri* N N N/A Only basic Siri answers are possible via the Watch. As there is no browser on the watch, there’s no mechanism to get detailed search results returned to the Watch, and you’re directed to the iPhone. On-watch functionality works over LTE (setting timers, music playback etc)
Calculator PCalc Y Y N/A Never required iPhone other than to configure.
Podcasts WatchPlayer N Y (Delete) N Sometimes loses its place between listens, Syncing episodes is annoying. Previously used Overcast but for the moment the Watch playback functionality is being worked on by its developer.
Passwords 1Password N N N/A Doesn’t use data connection. Can’t create logins on the Watch, not sure I want to anyway. Need to set up on the phone first
Digital Wallet Stocard N N N/A Doesn’t use data connection. Can’t create cards on the Watch though could be a useful feature provided no photo is needed. Need to set up on the phone first.
Sleep Tracking Autosleep N N N/A Provides basic report of sleep duration, but requires iPhone to perform sleep analysis. Limited to showing last night, as well as 7 day average.
Notetaking Drafts Y (Siri/Scr) Y (Siri/Scr) N Syncs to iPhone only when in range, however there is no other note-taking app on the Watch as a first-party app and Drafts works well in that respect, except for Cloud sync.
Voice Recording Just Press Record Y Y (Delete) N/A Records audio notes quickly and easily and allows playback via the speaker or AirPods. Only syncs with iPhone when in range.
What’s The Song? Shazam Y N N Can’t ask Siri to identify what song is playing on the Watch, but Shazam works perfectly and more discreetly. Syncs the list of Shazam’d songs when iPhone is in range.
Twitter Tweetbot (Not On Watch) N/A N/A N/A Notifications from Tweetbot allow basic reactions like Favouriting and Retweeting.

Biggest misses for me at the moment:

  • Reminders isn’t a good To Do app, and I can’t wait for Things (or similar) to support cloud sync directly from the Watch, but knowing they rolled their own cloud sync this may not happen for a while (if ever)
  • Inability to modify anything about a Reminder or Calendar appointment
  • Composing a basic tweet, mention or direct message isn’t possible (same for Mastodon)
  • Notes absent; even a stripped down text-only version would be fine

With time, developers will update their apps to use direct data interaction with servers rather than via the paired iPhone so the list of third-party apps should get much longer in due course. I’ll endeavour to update this list every few weeks or if a major app update is released.

]]>
2017-10-22T15:00:00+10:00 #TechDistortion
Phoneless https://techdistortion.com/articles/phoneless https://techdistortion.com/articles/phoneless Phoneless I’ve always loved my Apple Watch. When Apple announced LTE in the Series 3 I was initially disappointed that they hadn’t given us always on screens, but also shocked that they’d managed to get energy efficient LTE into the device at all without killing the battery in 5 seconds flat. Truly impressive. Without going into the details of how I’ve routed what to where (it’s convoluted trust me) I’ve upgraded from my 42mm Silver Stainless Steel Series 2 to the equivalent Series 3 model earlier this week, and also linked it to an iPhone.

My goal: ditch my phone when outside the house and use the watch for as much as possible.

An Apple Watch paired with AirPods (or even a single AirPod) is already lighter and more convenient than a phone for phone calls since it’s more discreet and less intrusive. I’ve made phone calls both on AirPods and on the speaker and they’re both passable, though the AirPods are better; you could live without AirPods in a pinch. In which case, you’ve got a fully waterproof phone on your wrist that you can’t lose, that’s harder to break/scratch/damage, and that with the sound off is totally silent when notifications come through to your wrist.

I’ve thought at length over the past month since the announcement about what, exactly, I use my phone for. It’s a longer list than I initially thought, but I use my iPhone for:

  • Taking photos (less these days since I bought a DSLR)
  • To Do Lists (Things 2 was my favourite)
  • EMail (Outlook for work, Spark for TEN, Apple Mail for Personal)
  • Music (Apple Music)
  • Find My Friends
  • iMessage
  • Navigation (Sygic/Apple Maps)
  • Passwords (1Password)
  • Stocard (Wallet reduction)
  • Apple Pay
  • Social media (Facebook/Twitter/Mastodon)
  • Autosleep (Sleep Tracking)
  • Checking the Weather (BeWeather, Rain Parrot, Weather AU)
  • Calendar Appointments (Calendar/Fantastical/Outlook)
  • Playing Podcasts (Overcast)
  • Notetaking (Notes)
  • Surfing the Web (Safari)
  • Making/Receiving Phone calls
  • Checking Bank Balances
  • Calculator
  • Light

That’s it. Not a trivial amount, for sure.

Of the above, I can do all of those items now, using the Apple Watch on LTE with no phone nearby, except:

  • Checking Bank Balances (rare thing but could get annoying)
  • Social Media (have stopped using it anyway)
  • Outlook for work (I still get the notifications though, so that’s fine and my work calendar is mapped to Calendar for Fantastical anyway)
  • Spark Mail (Will migrate to Mail)
  • Things (migrated already to Reminders)
  • Playing Podcasts (Reluctantly moving to WatchPlayer, but it works okay)

With time, developers will update their apps to use direct data interaction with servers rather than via the paired iPhone so that list should get shorter in due course.

The main idea here is that at work I’ve gone full iPad Pro anyway, and I’ll have that with me on work days and at home. When I’m out on personal errands I won’t have it, but under those circumstances, the ONLY thing that I’ll miss is web searching, and Siri can help with a small number of those searches, but that’s really the only big hole.

There are other niggly holes though, like having to abandon Overcast for podcast playback, but I know its developer (Marco Arment) is working hard on a solution as we speak (so to speak). Preparing to listen to podcasts must now be done ahead of time: preload episodes and transfer them to the Watch over WiFi (not Bluetooth, unless you’re a masochist) and it works okay. (Podcast spontaneity will be on hold for now.)

I had to add each song in Apple Music to a monster playlist to force it all onto my Watch, but that works fine now and the 16GB of storage is enough for the vast majority of the music collection I’d want to listen to regularly. It’s easy to add songs via my iPad and it will sync up when I get home, plus watchOS 4.1 will bring streaming to the Watch which will be very nice as well.

I realise that Apple isn’t trying to make the smartphone obsolete, and I and many others are going to use the watch as a standalone device when that’s not really its intent. But really, if it’s going to work for practically everything I need, I’ll leave my iPhone at home, plugged in and just use my Watch for everything else. In time the Watch won’t be tethered to a phone anymore, and apps will all communicate directly to servers rather than via a proxy system. At which point I probably won’t bother with a phone, but that’s probably a few more years away - and that’s okay.

I’m not the first nor will I be the last person to try this, but this is going to be a fun experiment. Let’s see how it turns out…

]]>
2017-10-14T16:05:00+10:00 #TechDistortion
BubbleSort https://techdistortion.com/articles/bubblesort https://techdistortion.com/articles/bubblesort BubbleSort Today, Vic Hudson, Clay Daly and I are launching a new podcast called BubbleSort. Vic has been my most regular co-host on Pragmatic over the past four years and also hosted the wonderful App Story Podcast for 14 episodes in 2014-2015. Clay Daly is one of the hosts of the wonderful Cybrcast which has been running since 2014.

We all wanted to catch up to discuss what’s happening in the world of technology in a medium that was better than Twitter, Mastodon or Facebook, and it turns out that if you talk on Skype, press record and share it with anyone else that’s interested, you have a podcast. (Okay, maybe trim out some bits and pieces in post…)

Bubblesort can be found at bubblesort.show and on Twitter at @bubblesortshow.

Bubblesort is not part of TechDistortion nor part of TEN. It is its own standalone collaborative effort. We’re not trying to take the world by storm, we’re not trying to make money. We’re doing it because it’s fun, and if we’re having fun, maybe you will too.

My thanks to Vic for tackling the audio editing and musical score, to Clay for developing the artwork and to both of my co-hosts for making time in their busy schedules to catch up every two weeks or so to make a thoroughly fun and relaxing podcast.

]]>
General 2017-07-11T07:00:00+10:00 #TechDistortion