Herein you’ll find articles on a very wide variety of topics about technology in the consumer space (mostly) and items of personal interest to me.
If you’d like to read my professional engineering articles and whitepapers, they can be found at Control System Space.
This first post in a series about the Fediverse focuses on three aspects as they relate to the future of TechDistortion (this blog): posting full-length blog links as microblog entries, WebMentions, and Federation support (ActivityPub/LitePub/OStatus).
Links to full-length blogs posted as microblog entries aren’t intended to convey much beyond a title and some brief text, drawing potential readers to the full article. I mentioned the phenomenon of Twinkblogs five years ago, but really it’s a way of announcing that an article exists, not of conveying the content of the article itself. In that regard, it’s the size of the audience you can reach through that channel that matters most.
The IndieWeb community is popularising the WebMention: a method of allowing users to reply to a blog or article, with the article then able to aggregate all comments, mentions and reblogs. Any WebMention-compliant site allows that interaction to occur, creating a common point for all comments, federated between users on different accounts and different systems (like Disqus, but not centralised and more flexible). If you’re interested in comments on your blog then that’s something worth exploring. I’ve never had comments enabled on TechDistortion in the decade I’ve been writing articles and don’t intend to add them now. Any feedback from readers is welcome, either via the feedback form or via the Fediverse directly to me personally.
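For the curious, the WebMention protocol itself is pleasantly simple: the sender discovers the target’s webmention endpoint, then POSTs the source and target URLs to it. Here’s a rough Python sketch of the discovery step; it only handles the HTTP Link header case and the function name is my own, so treat it as illustrative rather than a full implementation:

```python
import re

def discover_webmention_endpoint(link_header):
    """Find the webmention endpoint advertised in an HTTP Link header.
    A complete implementation would also scan <link> and <a> tags in the
    HTML body, per the W3C WebMention recommendation."""
    # Link header entries look like: <https://example.com/wm>; rel="webmention"
    for entry in link_header.split(","):
        match = re.search(r'<([^>]+)>\s*;\s*rel="?([^";]+)"?', entry.strip())
        if match and "webmention" in match.group(2).split():
            return match.group(1)
    return None

# Sending the mention is then a single form-encoded POST, e.g. with requests:
#   requests.post(endpoint, data={"source": my_post_url, "target": their_post_url})
```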
Not all platforms are as text-length restrictive as Twitter (280 characters) and Mastodon (500 characters), with Pleroma allowing administrators to set whatever limit they like. On my Pleroma instance I’ve left it at the default 5,000 characters but might change that at some point in the future. The idea is that, using ActivityPub/LitePub, a blog could be subscribed to as if it were a regular account on the Fediverse. That seems convenient, however scrolling through a 9,000-character article in a smartphone application intended for short posts might not be as clean an experience as a dedicated long-article reading application like Unread (for example). That said, the simplicity of consolidating everything into a single window is quite appealing. Unfortunately Federation wasn’t a thought I had in mind when moving from Statamic to Hugo, and since neither supports Federation it won’t be explored in the short term.
Currently when a blog entry goes up on TechDistortion, an RSS feed scraper takes a copy of the title and a URL link to the article, then publishes it to a Mastodon account. From there a second script takes that and re-tweets it to the TechDistortion Twitter account. Counting the actual people (as opposed to bots and lists) following the TechDistortion Twitter account, more real people are subscribed directly to the site’s RSS feed, and to both my personal Mastodon and old Twitter accounts.
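As a rough sketch of that first hop (RSS feed → Mastodon), the status-building helper below is real code, while the posting steps in the comments assume the feedparser and Mastodon.py libraries with a hypothetical token and instance; it’s not the actual scraper I run:

```python
def format_status(title, url, limit=500):
    """Build the short announcement post: title plus link, trimmed to the
    instance's character limit (Mastodon's default is 500)."""
    # Mastodon counts any URL as 23 characters regardless of its length.
    url_cost = 23
    room = limit - url_cost - 1  # one space between title and link
    if len(title) > room:
        title = title[: room - 1] + "…"
    return f"{title} {url}"

# Posting side (hypothetical credentials), using feedparser and Mastodon.py:
#   import feedparser
#   from mastodon import Mastodon
#   feed = feedparser.parse("https://techdistortion.com/rss")
#   masto = Mastodon(access_token="TOKEN", api_base_url="https://example.social")
#   entry = feed.entries[0]
#   masto.status_post(format_status(entry.title, entry.link))
```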
Based on the above Twinkblog rationale, and with my move to gradually step away from Twitter, I’ve decided to close the TechDistortion Twitter account. I will instead be posting those links only to my personal Fediverse account, which is copied to my ‘old’ personal Twitter account. RSS will always remain for anyone to subscribe to. My recommendation is that people following the blog on Twitter either follow my ‘old’ Twitter account @johnchidgey or, better still, jump on the Fediverse somewhere and follow me @email@example.com where I’m active every day.
In future, if a Hugo→Federation intermediary service is developed I’ll probably look into it, since I really like Hugo ;)
Oh yeah…Happy New Year.
Being that it’s the middle of summer in my hemisphere, after a hard day’s work in the yard a nice cold frozen drink is always well received. Recently the pricing war between McDonald’s, Hungry Jack’s and the old-faithful 7-11 has led to an over-supply of cheap frozen drinks. That’s all lovely for consumers, and if you’re keenly interested in the zilch-sugar (sugar-free) options then 7-11 is the way to go (and if large amounts of sugar don’t worry you, I still think 7-11’s Slurpees have more/nicer syrups).
They offer three primary sizes, but if you actually measure the cost per volume it shows how 7-11 make their money: they want you to upgrade to the bigger drink.
Based on the above, I’d suggest that if you’re REALLY thirsty, getting x2 Large drinks is the clear winner. Otherwise stick to the $1 size and save your money.
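The comparison behind that suggestion can be sketched with some hypothetical numbers; the prices and volumes below are illustrative assumptions, not 7-11’s actual menu:

```python
def cost_per_litre(price_dollars, volume_ml):
    """Dollars per litre for a given drink size."""
    return price_dollars / (volume_ml / 1000)

# Assumed sizes: (price in dollars, volume in millilitres)
sizes = {"Small": (1.00, 350), "Large": (2.00, 650), "Mega": (3.00, 750)}
for name, (price, ml) in sizes.items():
    print(f"{name}: ${cost_per_litre(price, ml):.2f}/L")
```

On numbers like these, the $1 size is the cheapest per litre, and two Larges deliver more volume per dollar than a single Mega.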
Oh yeah, and don’t drink it too quickly either…
Imagine a world where you could pick and choose what server backend you wanted for your social media (if you want to, like picking a bank to bank with), pick a social media identity that is truly canonical for all time (you know, like your name in the real world), and pick whatever application(s) you want to use on your platform of choice so you get to interact the same way no matter who you’re talking to. They’re ALL your choices. Are we there yet?
This is the story so far as we all collectively (hopefully) move towards that goal.
In April 2017 I wrote about Engineered Space and recorded an episode of Pragmatic about my experiment with Mastodon. I was attempting to ‘take control’ of my Social Graph and Mastodon held a promise of that.
The reality hasn’t entirely lived up to expectations for me so far, although I still prefer it to Twitter and Facebook. The truth is that Mastodon is currently still a silo of sorts, which I discovered as I attempted to move to a different platform.
One email-like social address to rule them all
When I started @firstname.lastname@example.org I had a longer-term intention in mind: purchase a domain that I liked, and then with OStatus and now ActivityPub, it should be possible to use whatever standards-compliant backend server setup I wanted, and I should be able to retain the same Fediverse username for all time.
Not only that, I could also choose whatever front-end client I wanted, and it would connect to the standards-compliant backend server infrastructure I was running.
What’s Wrong With Mastodon?
There are three issues I have: how its feature set is being prioritised, a lack of testing for upgrades with regular mis-steps, and finally how resource-hungry it is. I was running my instance, with only my account and about 10 others with minimal traffic, on a VPS with 1.6GB RAM and a reasonable CPU, and if I tried to refresh my timeline it would regularly throw a 502 error. Image posts regularly failed, and it would also completely fall over once or twice every week, requiring a server reboot to recover, with no obvious cause. In short, it became a hassle.
The production guide for installing Mastodon is very good though, with plenty of examples for different Linux distros, but installation takes a bit of effort, requiring Rails, PostgreSQL, Redis, Sidekiq, NodeJS and ElasticSearch (if you want search functionality at all). It also wouldn’t install and run on CentOS 6, and whilst I don’t mind admitting that CentOS 6 has had its day, sometimes you can snag a cheap VPS that won’t run CentOS 7. Upgrading required a series of git pulls, rake commands and database migrations, could take half an hour to fully compile, and required me to kill the NGINX server or it would never complete.
I was advised to throw more money at the problem: I could upsize my VPS at more expense, or shift my hosting elsewhere and let someone else deal with it. Alternatively, I could look for a different ActivityPub-compliant platform…
Lain walks through what Pleroma is and I won’t repeat that, but essentially it’s 90% of what Mastodon is while only requiring Elixir and PostgreSQL. It runs on CentOS 6 (although you won’t find any production guides for that) and it’s happily running on a Speedy KVM VPS (DAL-VOL0): 1 E3-1230 3.2GHz CPU, 256MB ECC RAM, 12GB HDD for $18USD/yr. If it keeps chugging along nicely, I’ll fork out for three years at $36USD ($1/month).
Not only is it cheap to run, it’s quick. I can refresh and refresh and fill gaps in my timeline, and it responds in a second or two and never fails. Uploading images works every time now, and if you’re like me and not really into the TweetDeck-esque Mastodon FrontEnd (Pleroma offers this front-end option if you really want it), it has a far more Twitter-esque Pleroma FrontEnd that I much prefer.
Before you think “John’s ready to marry Pleroma…” stop. It’s not perfect. In fact there are a few significant drawbacks:
- There are no dedicated Pleroma client applications that I’ve found, but because Pleroma also implements the Mastodon API, most Mastodon client applications mostly work with Pleroma
- Web Push Notifications aren’t implemented yet (since most Mastodon clients use this for push, that’s annoying) More on this in a minute…
- Many site layout tweaks are buried in the config.exs file on the server
- Documentation is generally lacking in a lot of areas if you want to deploy/understand it
- It’s v0.9 at time of writing (Yes, it’s not ‘officially’ released yet…)
On the plus side, some of my favourite Mastodon apps work almost perfectly with it (notifications generally notwithstanding).
All of the above notwithstanding, there’s a strong beating of the open-source drum by the Pleroma development team. Whilst Gargron on Mastodon makes no bones about the fact he wouldn’t mind if Twitter collapsed tomorrow, he supports whatever clients, forks of Mastodon and other projects support ActivityPub, in whatever form they might take. The Pleroma team, on the other hand, have actively and aggressively shamed non-open-source developers trying to get more involved with Pleroma. I’ve seen sole developers making apps that are free but closed-source, or paid and closed-source, and even federated services like Micro.Blog trying to open up connectivity with Mastodon, be shunned all because they aren’t open source.
The future of federation will ultimately be a blend of open and closed source software running on servers and clients from different groups, individuals and companies around the world, all talking on a common standard or sub-set of standards. The fear that one closed-source player will “take over” neglects the nuance that people vote with their feet: if a corporation does wrong by its users, they will eventually abandon that server for another (as many have already abandoned Twitter for Mastodon).
The “Open Source” mantra is an ideology, not a strategy.
Pleroma needs to consider its position in the cross-platform game, supporting other standards to improve interoperability and usability, otherwise it will be outgrown by Mastodon and become irrelevant before it starts.
Attempting to Migrate
Mastodon provides the ability to export a following list as a CSV, and this worked as expected. Pleroma also imported what it could, but when instances were offline (I discovered mine wasn’t the only Mastodon instance that was regularly offline), if Pleroma couldn’t verify that an imported user actually existed it wouldn’t add them to the follows list. Over the course of a week I progressively added all but 6 of my follow list, with the import script in Pleroma smart enough not to create duplicates.
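I don’t know the internals of Pleroma’s import script, but the dedupe-and-retry behaviour I observed amounts to something like this sketch (the function, the reachability check and the CSV layout are my assumptions, not Pleroma’s actual code):

```python
def merge_follows(existing, csv_rows, is_reachable):
    """Merge rows from a Mastodon follows CSV export into an existing follow
    set: skip duplicates, and defer accounts whose instance can't currently
    be verified so a later re-run can pick them up."""
    added, deferred = [], []
    for row in csv_rows:
        handle = row[0].strip()       # e.g. "user@instance.tld"
        if not handle or handle in existing:
            continue                  # blank line or already following
        if is_reachable(handle):
            existing.add(handle)
            added.append(handle)
        else:
            deferred.append(handle)   # instance offline: retry later
    return added, deferred

# Re-running the import over a few days, feeding the deferred handles back in,
# gradually whittles the list down as instances come back online.
```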
Exporting my “Toot” history proved impossible through the web interface in Mastodon. I tried many times and it failed every single time.
Originally Drafted 13th October, 2016
We’re lazy creatures. That, and things cost money. When things take too much effort or cost too much money, we don’t take advantage of them; only those people with enough spare time or money can. I first came across this phenomenon when studying traffic engineering. Widen a freeway and the amount of traffic it conveys will increase to utilise the new capacity. The newly accessible capacity of the road quickly becomes known to local residents who previously took public transport, rode bicycles, walked or just didn’t travel at all, and they decide to utilise it. The opportunity to travel more directly, in more comfort or more quickly than the alternative drives the opportunistic behaviour to use that additional capacity. Theoretically it should be possible to build a freeway with so many lanes that its capacity far outstrips the number of vehicles that could ever use that route between two locations, even allowing for external visitors. The sheer cost of doing so generally precludes this from ever happening at a macro scale, but the limit still exists. Hence increasing accessibility eventually reaches a point of diminishing returns, beyond which demand is unlikely ever to exceed capacity.
A more popular example I came across recently relates to watch bands on an Apple Watch. The watch itself is quite expensive, however unlike many other watches in the world, its bands may be easily replaced in less than a minute when the wearer needs to exercise, change to a dressier outfit or go off to work. Changing the band changes the appearance, feel and usefulness of the watch without needing a second watch, as was previously the tradition: one for normal day use and one as a dress watch. Replacing bands on a traditional watch is a cumbersome, frustrating exercise, but with this watch in particular that’s no longer the case. As changing bands becomes more accessible, people change them more often, and as cheaper alternative bands become available, that accessibility extends to more people. Of course people will eventually have more than enough bands to cater for every circumstance they personally desire, at which point the maximum potential is reached once again.
A final example is changing code in mass-deployed devices. When I was starting out in my career, software updates were handled by physical ROM ICs attached by sockets to the motherboards of control cards in the field. Changing out the firmware was a manual, slow, annoying task that was very expensive. Many locations didn’t have a network connection of any kind, and wireless (let alone wireless data) was very uncommon, so this was just accepted as reality. As time progressed and the internet became what it is today, with mobile data networks becoming widespread, manufacturers gained an ever more accessible data path to end devices. Over-the-air updates then became the preferred method of fixing problems, and this accessibility drove opportunistic updating of end devices. That seems like a good thing at first, with manufacturers able to correct problems even after their devices had left the factory, however it drove manufacturers and engineering companies down another route: minimally tested software. As the speed of fixing bugs after a device shipped improved, management pushed the key features (heavily tested, we hope) out the door quickly, leaving many features far less tested and requiring future OTA updates to be applied. Provided these were low-impact bugs, that’s probably a fair trade-off, but end users don’t always see it that way.
As always, no one complains about good software; they only complain when it breaks, and just because you can ship something less tested today with the aim of “fixing it later” doesn’t mean that you should. The opportunity to quickly fix problems is tempting, but rigorous testing and qualification will generally save time and money in the long run. The only question to ponder is whether availability has driven opportunistic thinking and, if it has, what opportunity cost you will incur for it. Opportunity cost cuts both ways.