
Herein you’ll find articles on a very wide variety of topics about technology in the consumer space (mostly) and items of personal interest to me.

If you’d like to read my professional engineering articles and whitepapers, they can be found at Control System Space

I have also participated in and created several podcasts, most notably Pragmatic and Causality, and all of my podcasts can be found at The Engineered Network.

Back To The Mac

It’s been a long series of experiments, beginning in the mid-2000s when I moved from Windows Vista to Mac OS X Tiger, then to the iPad in 2011 running iOS, back to Windows 10 on a Surface Pro 4, back to an iPad Pro in 2016, trying an LTE Apple Watch as my sole daily device, and finally now back to a MacBook Pro with Touch Bar running Mojave.

Either I’m completely unprincipled in my use of technology, or perhaps I’d prefer to think of myself as one of the few stupid and crazy enough to try every mainstream technological option before reaching a conclusion. Whilst I admit that Everything is Cyclic, it is also a quest for refinement. Beyond that, as the field of technology continues to evolve, whatever balance can be found today is guaranteed not to last forever.

If you want the TL;DR then skip to the Conclusion and be done with it. For the brave, read on…

Critical Mass for Paperless

Ideally computers would replace paper and ink for communicating ideas in smaller groups in person, and replace overhead projectors and whiteboards as well for larger groups, but they haven’t. The question is simply: which is easier?

We are all able to pick up a pencil and write as we were taught at school. Despite typing being an essential skill in the modern world, many people cannot touch type, and with keyboards on small glass screens now coming in all manner of non-standard sizes, even that 80s/90s typing skill no longer puts everyone on an equal footing. (I’m now beating most 15-25 year olds in typing speed tests, as they’ve learned on smartphones, away from standardised physical keyboards.)

The iPad Pro with the Apple Pencil represented the best digital equivalent of an analogue pen or pencil, and hence for nearly 2½ years now I have not needed to carry an ink-based pen with me. At all. As an engineer I’m not generally interested in sketching, and whilst that’s something I can do I’m not particularly good at it, so I use the Apple Pencil to take notes. Unlike ink on paper though, I can search through all of my notes easily with handwriting recognition.

The use of iPads for this purpose has increased significantly in our office (no, not entirely because of me, though I was the first I’m aware of to do it there), and it has increased because it is so much better than ink on paper. Photocopier and scanner usage has dropped significantly and it’s only a matter of time before there is a transition away from them altogether. As with the fax machine, soon there will be one photocopier per floor, then one for the building, and then none at all, all within a decade or so.

The paperless office may finally arrive; a few decades behind schedule, but better late than never.

Fighting the Form Factor

A term I’ve come across in programming is “Fighting the Framework” which is meant to illustrate that Frameworks and APIs are written with an intent, with data structures, methods and objects within all cohesively designed around a specific model, view and/or controller, inter-object messaging and so on. If you choose to go around these structures to create your own customised behaviours, doing so represents significantly more work and is often far more error-prone as you are going against the intended use and nature of the frameworks.

I’d like to propose that there are people who love technology and are obsessed with taking devices of a specific form factor and making them “bend” to their will, using them in ways that fundamentally conflict with their design intention. Irrespective of whether you believe pushing the boundaries is good practice or not, there are limits to what is possible, what is practical, and what can realistically be expected when you fight the form factor.

Examples include the commentary that the iPad, and tablets in general, are still “just tablets”, meaning they are predominantly intended to be used as consumption devices. Of course that’s a reductive argument, since content comes in many forms (written, audible and visual at the most basic level), and within each there are blends of multiple, including newspapers, comic books, novels, TV shows and movies. The same argument works in reverse: according to the currently popular trope, it’s “too hard” to create content on a tablet, and therefore it can only ever be a consumption device.

The fundamental structure of the iPad (iOS, more specifically), with the constraints of a single viewport and the requirement to cater for the lowest-common-denominator input device (a human finger), makes it difficult to directly copy ideas and concepts from desktop devices, which have 20 years or more of trial, error and refinement behind them. As time goes on, more examples of innovation in that space will develop, such as Ferrite for audio (e.g. podcast editing) and Luma Fusion for video, and although these will not satisfy everyone, only a few years ago there were no equivalent applications on iOS at all.

In the end though, there is no easy way for the iOS form factor (both physical and operating system) to support certain important, proven capabilities that specific classes of application designs and use cases depend upon. For those unfortunate classes, fighting the form factor will yield only frustration, compromise and inefficiency.

Multiple Screens

You can’t beat pixels (or points). Displaying information on multiple screens in a way that allows a user to view information side-by-side (or in near proximity, if not perfectly aligned), and importantly to visually compare and to copy and paste seamlessly between views, is a feature that has existed and been taken for granted on desktop computers for decades.

On larger-screened iOS devices this feature has been added (to an extent) with slide-over and side-by-side views; however, copy and paste between applications isn’t widely supported and comes with several caveats, and most importantly there aren’t enough pixels for a large number of side-by-side review tasks. The larger the documents or files you need side by side, the worse it is on an iPad.

iPads have supported application-specific monitor output which isn’t just a mirror of the iPad screen, however support for this is rare and bound to the application. There’s no generic way to plug in a second, independent monitor and use it for any general purpose. Then again, there’s no windowing system like on the desktop so without a mouse pointer or a touch-interface on the connected screen, how could the user interact with it?

Some have proposed that in future multiple iPads could be ‘ganged’ together, but apart from being cost-prohibitive, it’s unlikely for the same reason that ganging iMacs together is no longer supported (Target Display Mode ended in 2014). Beyond this, no existing iPad (even with USB-C) can be chained to support more than one additional monitor. Most current laptops and desktops support two additional displays, at a combined cost significantly less than a multiple-ganged-iPad-Pro solution.

Navigation Methods

Scrolling and navigating around large documents is slow and difficult on an iPad: there are few shortcuts, many applications lack search functionality, loading large files can take a long time, and there’s a lot of fast-flick-swiping to get around a document. None of these are issues on a desktop operating system, with search baked into practically every application, and Page Up/Down, scrollbars, trackpads and mouse wheels all less obtrusive and much faster than flicking for 30 seconds to move through a significant number of pages in a document.

Functional Precision

The capacitive touch screen introduced with the iPhone and subsequently with the iPad made multi-touch with our highly inaccurate built-in pointing devices (our fingers) a reality for the masses. As an input method though it is not particularly precise and for that a stylus is required. The Apple Pencil serves that function for those that require additional precision, however pixel-perfect precision is still faster and easier with an indirect positioning mechanism like a cursor.

Conclusion

My efforts to make Windows work the way I needed it to (reliably) weren’t successful and the iPad Pro met a great many of my computing needs (and still does for written tasks and podcast editing). However I was ultimately trying to make the system do what I needed, when it fundamentally wasn’t designed to do that. I was fighting the form factor and losing too much of the time.

Many see working on the iPad Pro exclusively as a challenge, with complex workarounds and scripts to do tasks that would be built in or straightforward on a Mac. Those people get a great deal of satisfaction from getting those things to work, but if we are truly honest about the time and effort expended to make those edge cases function, taking into account the additional unnecessary friction in doing so, they would be better off using a more appropriate device in most cases.

For all of the reasons above I came back to the Mac and purchased a MacBook Pro 13” (2018 model), and I have not regretted that choice. I am fortunate that my company has provided a corporate iPad Pro 2, which I still use every day for written tasks. I feel as though I am no longer fighting against the form factor of my machines, making my days using technology far less stressful and far more productive. Which, in the end, is what it should be about.

From GitLab to GitHub

I previously wrote about a new website publishing workflow that used Netlify CDN as the front end, using my own Private GitLab repository, hosted on an OpenVZ 2GB SSD VPS from Hosted Simply. I wanted to have my own fully private repo for projects and to host the websites I maintain for as little expense as possible.

So about that.

After having my VPS shut off by the hosting company due to high CPU usage, and experiencing multiple build failures and web-interface unresponsiveness, I tweaked every setting I could, disabled everything non-essential to running GitLab, and finally gave in.

It turns out that following GitHub’s acquisition by Microsoft in late 2018, they decided to make private repos completely free, announcing it in January this year. By that time I’d already built my GitLab instance, but given the issues I was having I decided to switch. The switch took all of about an hour, Netlify integrated without complaint, and my GitLab is now disabled.
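Mechanically, a move like this is mostly a matter of re-pointing the repository’s remote and then re-linking Netlify to the new repo in its dashboard. A minimal sketch (the repo paths are invented placeholders, and a throwaway directory stands in for the real site so the commands are safe to run anywhere):

```shell
# Create a disposable repo to stand in for the real site.
cd "$(mktemp -d)"
git init -q site && cd site

# Hypothetical old remote: a self-hosted GitLab instance.
git remote add origin git@gitlab.example.com:me/site.git

# Re-point the same repository (history intact) at GitHub instead.
git remote set-url origin git@github.com:me/site.git

# Both fetch and push now show the GitHub URL.
git remote -v
```

Because Git history travels with the repo, nothing else changes: a `git push` to the new origin and a re-link in Netlify is essentially the whole migration.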

I don’t blame the hosting company for protecting other users in the OpenVZ Shared environment, that’s totally fine. Ultimately the 2GB VPS simply wasn’t enough for the GitLab instance to function on. Looking back there were some updates applied that fixed a bug I was experiencing but bundled with that bug fix was new functionality that caused higher memory and CPU usage. Hence what used to work (just barely) on my VPS, would no longer function reliably without a higher spec.

GitLab has a lot of enterprise-grade features running in the background, and these consumed all of the memory on the VPS I had available, causing numerous performance issues. If I didn’t mind spending more money I could have reinstalled it (and maybe in future I will), but for now GitHub is working much better with Netlify, and technically it’s free - so there’s that.

Engineered Space Take 2

Some time ago I started my own Mastodon server and opened it up for invites. What I quickly learned is that I wasn’t alone, and plenty of others were doing the exact same thing. There was no shortage of options for anyone wishing to join the Fediverse, including lots of bigger servers with far more funding than mine. I then learned more about the problems Mastodon faces on a server: there are a LOT of moving parts, and the gentleman driving the Mastodon standard was (and still is) having some trouble with direction now that its popularity has exploded.

My little server had only a handful of users, yet the VPS it was installed on was struggling; with constant delays, timeouts and an overall lack of reliability, I started looking at other options. My original instance used the address @chidgey@engineered.space, which was associated with Mastodon and formed part of the spoken outro-ductions of all of my podcast episodes as the way to get in touch with me.

I investigated and fell in love with Pleroma, and wrote about how you can Own Your Social Graph late last year, mentioning Pleroma as my now-preferred Fediverse server; at the time it easily outperformed Mastodon on a VPS with only 256MB of RAM (Mastodon was slow even with 1.6GB of RAM). I tried it briefly to confirm its functionality on a sub-domain, @chidgey@pleroma.engineered.space, and after a few weeks tried a backend switch (moving Pleroma underneath the original Mastodon address and domain), only to discover that followers wouldn’t and couldn’t be migrated between the servers. Messaging was a complete mess and I was unable to follow, or be followed by, anyone that had followed me previously. I hence ended up sticking with my “new” Pleroma sub-domain for longer than I’d planned, and asked people to follow me there instead.

Since I wrote the social graph article there have been a few incidents with Pleroma as they progress towards a formal release. The first was a series of backend changes that meant it would no longer operate as reliably on low-spec VPSs like mine. The second was when the Pleroma team changed the ID data type for posts, which broke a lot of apps and scripts that I had come to rely on for various things (Auto-Posting, iOS apps, MacOS apps). Given how unreliable it had become at that point I decided it was time to shift to a newer, bigger VPS, and to try shifting back to my original domain again.

Now I have a freshly installed Pleroma instance on my original Mastodon domain, @chidgey@engineered.space, and my Pleroma sub-domain will be deactivated by the end of this month. Switching the backend whilst keeping your domain, as I’ve effectively done, remains impossible to perform without losing followers; the sequence below is the only way I know of to pull it off:

  1. Start on Domain X
  2. Create a different Domain Y, then ask followers to follow you there instead
  3. Re-create your original Domain X, then ask followers to follow you there again

There’s currently no option that I’m aware of to import followers, or to auto-re-follow, if you swap out the server-side components. I have exports of my original timeline posts for both the Mastodon and first Pleroma accounts, but to date I have not been able to successfully import them. On the plus side, the broken apps and scripts have now mostly been fixed, with everything I need to use back up and running, fast and reliable again.

So in the end, apologies to all, but I’m done shifting servers and instances around. I think that for the broader Fediverse these sorts of server-moving issues will inevitably lead to the same situation as email addresses: there is no one true email address for most people, and knowing someone’s address on the Fediverse will never be as simple as in a single siloed solution, because it cannot be. Coming from a siloed world it’s annoying, but it’s a small price to pay for more control over your own social graph.

If you’re looking for me I’m back at @chidgey@engineered.space and you can follow me there on the Fediverse.

The Need For Speed

After a lot of deliberation and consideration I’ve decided it’s time to push the web front-end further forward for all of my sites. Not content with just going static with Hugo, and after many months of pushing local caching as far as I could on NGINX, I’ve finally joined the rest of the web developers from 3 or so years ago. All of my sites are now backed by the Netlify CDN.

Ultimately you just can’t beat a distributed, high-performance, low-latency Content Delivery Network. Website tests show anywhere from a 5x to an 11x page-load improvement on average from multiple points around the globe. Locally for me it’s been amazing, but then packets generally traverse the Pacific to get to my backyard, so that’s not really surprising.

Wishing to have control of my environment (yeah, I know), I snagged an OpenVZ 2GB SSD VPS from Hosted Simply in a New Year’s special at $15USD/yr and built my own private GitLab repository, then linked that to Netlify. I’m now using a well-known developer workflow, with each site in its own self-contained Git repository and the GitLab remote origin mapped to the Netlify CDN, with a webhook for auto-deployment whenever I push a commit to the repo. In addition, since it’s Hugo and I want to publish pages into the future, I’ve also added a webhook to trigger a page rebuild periodically.
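As a concrete sketch of that periodic rebuild: Hugo only renders posts whose publish dates have passed at build time, so a scheduled hit on a Netlify build hook is what actually publishes future-dated posts. The build-hook URL below is an invented placeholder (Netlify generates the real token under its Build hooks settings), and the hourly schedule is just one choice; a crontab entry along these lines does the job:

```shell
# Hypothetical crontab entry: POST to the Netlify build hook at the top of
# every hour, so any post whose publish time has since passed goes live.
0 * * * * curl -s -X POST -d '' https://api.netlify.com/build_hooks/EXAMPLE_TOKEN
```

The empty POST body is all a build hook needs; Netlify pulls the repo and rebuilds the site exactly as it would after a push.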

On the Mac I’m using the passable SourceTree app for source control and the awesome Sublime Text 3 for text editing, and on iOS I’m using the excellent Working Copy app with Textastic for text editing. To be honest I feel a lot more in control of what I’m doing now, and being able to run up my changes locally with Hugo, create custom development branches for public/private testing through Netlify, and roll back changes at the source-code level makes web page maintenance and blogging a lot more like programming.

And as a programmer, I have to say I finally get why so many others do the same workflow. It’s pretty neat :)