
Herein you’ll find articles on a wide variety of topics about technology in the consumer space (mostly) and items of personal interest to me. I have also participated in and created several podcasts, most notably Pragmatic and Causality, all of which can be found at The Engineered Network.

Optimal Interface Part 3: Devices

I’ve been fortunate in recent years to have tried the vast majority of consumer user interfaces and also the software running on each platform that’s widely regarded as best in class for each interface. I’ve written previously about going Back To The Mac and podcasted about using a Microsoft Surface Pro and even tried going Phoneless with just an Apple Watch.

One aspect of my job has been user interface design, conceptualisation and controls and in this series of posts I’d like to explore inputs, outputs and devices in turn, looking at what has worked well and why I think that is as well as what the next inflection points might be.

Part 3: Devices

With the newest iPad Pro outperforming many laptop machines, the question has come up again regarding whether iOS on an iPad can replace a macOS desktop or laptop machine. It’s a debate that’s been circulating for some time and having tried this for years I’ve reached my own conclusions.

As much of my recent experience is in the Apple ecosystem, I’ll focus on Apple’s product lines, however the conclusions should apply relatively similarly by product type across all vendors. In addition, areas where there are no mass-consumer interface devices available for use will be excluded (i.e. neural and smell interfaces etc). We’ll break devices down by several criteria, then look at the optimal interface for each use case:

  • Robustness
  • Portability
  • Connectability (without adding adaptors)
  • Input Methods
  • Visual Screen Real-estate
| Device | Robustness | Portability | Connectability | Input Methods | Screens |
| --- | --- | --- | --- | --- | --- |
| Apple Watch | WP, SR | Wearable | LTE/WiFi | Touch/Voice | 1 Small |
| iPhone | WR | Pocketable | LTE/WiFi | Touch/Voice | 1 Smartphone |
| iPad/Pro | Nil | Bag/Backpack | LTE/WiFi | Touch/Voice/Stylus/Keyboard | 1 Small Laptop |
| MacBook Air/Pro/One | Nil | Bag/Backpack | WiFi | Mouse/Trackpad/Voice/Keyboard | 1 Small Laptop/Expandable 2+ Large |
| iMac/Pro/Mini | Nil | Nil | WiFi/Ethernet | Mouse/Trackpad/Voice/Keyboard | 1 Large/Expandable 2+ Large |
| AppleTV | Nil | Nil | WiFi/Ethernet | Remote/Voice | 1 Large |

Key:

  • WP Water Proof
  • WR Water Resistant
  • SR Scratch Resistant (Ceramic/Stainless Steel)

Notes:

  • Some people have been known to carry their iMac Pro in a bag (of sorts) but that does not make it portable
  • iPhones will mostly fit in most pockets in most clothes hence they are pocketable
  • Only LTE Apple Watches connect to LTE
  • iPhones technically can support keyboard input via Bluetooth, but a keyboard isn’t dedicated to the device and carrying one breaks pocketability

Exclusions and Fails

There are multiple voice assistants in the market today, from Amazon (Alexa), Google, Microsoft (Cortana) and Apple (Siri), however they all share the same problem: they don’t always work. Until voice recognition improves (refer to Part 1), this interface will remain a curiosity, or at best be useful only for the handful of commands it can reliably deliver on. Users need certainty of cause and effect before trusting an interface, and for a TV they still overwhelmingly prefer the definitive result of a click on a physical remote, with the recent AppleTV Touch-remote also being sub-optimal due to its indirect touch/swipe pad interface.

Adding Siri/Cortana etc functionality to all devices does not present an actual benefit if the underlying technology still can’t reliably set a timer, for example. Where other methods of input are lacking due to size (eg the Apple Watch) it’s a useful addition; on a HomePod, where it’s the only interface, it’s absolutely necessary; whereas on every other device type listed it will never be used as a primary input as it is outperformed by all other methods available on that device.

Pre-Clarifications

Many of the following use-cases can be accomplished with software and techniques on more than one platform. To say one is better than another is a judgement independent of the software used; rather it reflects the ease with which the majority of users can execute a given task on the best platform for the job. Fighting the form factor is always an option, but it is not optimal. Additionally, a “desktop-connected” machine could also be a powerful portable machine connected to one or more external displays. There are many reasons why users might prefer one powerful laptop rather than a desktop and a laptop, or two desktops, depending on their daily use requirements across all of their use cases - not to mention their personal budget.

Notifications: Apple Watch

A device capable of notifying the user of a message in complete silence, one that is physically the most robust, always connected and always attached to the body, is the clear winner. The watch can then be the optimal gateway device for notifications to other devices.

Video Editing: Desktop

Large screen real-estate and precision pointing make a desktop-connected machine the best choice for this activity.

Music Editing: Desktop

Large screen real-estate and precision pointing make a desktop-connected machine the best choice for this activity, especially with many audio tracks in the edit.

Podcast Editing: Desktop

Despite the fact I know an increasing number of podcasters are editing on an iPad (myself included), for the majority of podcast editors the larger screen real-estate and precision pointing make a desktop-connected machine the best choice for this activity.

Photo Editing: Tablet

This is a close call with the desktop, however a large-screen iPad with good software can now edit photos directly, both quickly and easily.

Photo Library Management: Desktop

Large screen real-estate makes a desktop-connected machine the best choice for this activity.

Notetaking: Tablet with Stylus

For those accustomed to taking written notes, a piece of “paper” that syncs to the cloud, is text searchable, offers the flexibility of writing, sketching and diagramming, and with some apps can attach audio recorded at the time of note-taking, is the ultimate notetaking tool.

Drawing: Tablet with Stylus

Hand-drawing on an iPad Pro with an Apple Pencil is more accurate than a Cintiq, which used to be the preferred interface via a desktop machine. The tablet is also more portable and comfortable to use, such that many artists have switched to tablets for creative drawing.

Documentation: Desktop

Large screen real-estate makes a desktop-connected machine the best choice for this activity, for larger documents in particular. For smaller documents with a single contributor, and with improvements to tablet keyboards, it’s difficult to differentiate between a desktop and a tablet for document editing in some instances.

EMail: Desktop

(Longer form messaging blurs the line between EMail and Messaging, hence this applies to long-form messaging as well as EMail.) The desktop interface still provides more flexibility and more configuration options than mobile interfaces permit. That said, tablets are a close second and improving. Both desktop-connected machines and tablets with physical keyboards provide superior text input over a smartphone.

Short Messaging: Smartphone

The portability of the smartphone is close to that of a smartwatch, but its text input makes it the better option, and the built-in cameras for sending photos of either the rear camera view or the face view (as Apple likes to call it: the ‘FaceTime’ camera) make it the best device for Short Messaging.

Social Media: Smartphone

Per the Short Messaging use case, the portability of the smartphone combined with its text input makes it the better option, and the built-in rear and front (face) cameras make it ideal for social media like Twitter, Facebook, SnapChat or Instagram.

Watching Movies or Television Shows: TV

The comfort, large screen and generally better audio make this the optimal interface for this activity.

Personal Audio Listening: Wireless In-Ear Headphones

The move from wired to wireless improves portability and convenience, and with many models now available the vast majority of users can find a pair that fits their ear canals without causing discomfort. They are also the most comfortable to wear in the majority of physical environments, with the only two drawbacks being a 2-3 year lifespan (battery technology) and relative cost.

All-round Apocalypse Device: iPhone/Smartphone

It may seem odd considering it wasn’t the optimal interface device for very many of the above but it remains capable of performing every single one of them in the widest range of environments and use cases. Hence if you ONLY had one device, it’s the clear choice.

Conclusion

A disgruntled tech-writer once said, “only my use case matters”, tongue firmly in cheek, and whilst I imagine he was feeling exasperated when he said it, the idea is that every person has a different set of use cases; whilst any one person can assess their own needs for their own use cases, articles such as these can’t take that position.

The point of these articles and episodes of Pragmatic is to highlight that people need to be honest about what truly is the optimal interface for their specific use cases, and to stop trying to justify to themselves or others that they can force a device to be the right device for everything they do, just because they own one or chose it.

If you have more than one use case, there is no single device that is optimal for all.

If we can collectively agree on that, then we can (financial budget permitting) pick the devices that best suit each use case and get on with whatever we’re doing, albeit now with the most efficient and effective device for the job.

Or as a tradie once told me, the right tool for the right job.

So get on with it already - back to work ;)

Optimal Interface Part 2: Output

I’ve been fortunate in recent years to have tried the vast majority of consumer user interfaces and also the software running on each platform that’s widely regarded as best in class for each interface. I’ve written previously about going Back To The Mac and podcasted about using a Microsoft Surface Pro and even tried going Phoneless with just an Apple Watch.

One aspect of my job has been user interface design, conceptualisation and controls and in this series of posts I’d like to explore inputs, outputs and devices in turn, looking at what has worked well and why I think that is as well as what the next inflection points might be.

Part 2: Output

Output from a device to a person must be in a form the person can receive and interpret and hence has to be via one of our senses:

  • Sight
  • Smell
  • Taste
  • Sound
  • Touch
  • Neural

Sight

Visual information works for most of the population; for those that are visually impaired, sound and touch are the next most common output mechanisms. Visual information can come as shapes, lines, colours or written language, and can be conveyed by a single flat representation on a flat surface, or can mimic human stereoscopic vision with a representation for each eye independently, capable of appearing as a 3-dimensional image in our mind.

The distance between the eye and the target is ultimately the deciding factor for the optimal interface. Below are the suggested optimal interfaces by use case:

Watching movies, TV shows, Entertainment

Large television screen, viewed whilst seated. The user is relaxed while seated, with minimal physical fatigue; the screen must be sufficiently large such that, at the selected seating distance, eye strain is not a concern. Average dwelling sizes and the most common room sizes dictate an optimal screen size between 40-55". As costs reduce, larger screens may become possible, however they become sub-optimal due to increased pixel count requirements. Once screens are wall-sized (maximum height 2m), driving for additional pixel count becomes pointless beyond the human ability to discern pixel boundaries at a standard seating distance. Screens that require too much head movement for the user to observe every detail from their comfortable seating position are sub-optimal. The lack of adoption of IMAX/OmniMax theatres, which support a much larger format (approximately 1,500 screens globally) compared to standard cinemas (multiple hundreds of thousands), is in part due to this issue. (Beyond that, licensing and format issues also contribute.)
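As a rough sanity-check on the “pointless pixels” claim, the usual rule of thumb is that the eye resolves about one arcminute of detail. From that and the seating distance you can derive the pixel density beyond which individual pixels can’t be discerned. A minimal sketch of the arithmetic (the 1-arcminute figure is a common approximation, not a hard limit, and the code is mine, not from any article):

```swift
import Foundation

// Rule-of-thumb threshold: at ~1 arcminute of visual acuity, the smallest
// feature the eye resolves at distance d is d * tan(1'), so any pixel
// density finer than 25.4mm / that size is indiscernible at that distance.
func maxUsefulPPI(viewingDistanceMetres d: Double) -> Double {
    let oneArcMinute = (1.0 / 60.0) * Double.pi / 180.0   // in radians
    let smallestFeatureMM = d * 1000.0 * tan(oneArcMinute)
    return 25.4 / smallestFeatureMM                       // pixels per inch
}

print(maxUsefulPPI(viewingDistanceMetres: 3.0))  // ≈ 29 ppi from a 3m couch
print(maxUsefulPPI(viewingDistanceMetres: 0.6))  // ≈ 145 ppi at arm's length
```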

Handheld devices are sub-optimal for maximum comfort, however being handheld allows for smaller screens, though these are only useful for short viewing periods. Their portability also results in more damage and shorter lifespans. Head-mounted devices (VR headsets) are the least comfortable, due to neck strain and poor air circulation around the eyes and face. Whilst some of these concerns can be improved with lighter devices, stronger materials and better design, current technological limitations will delay improvements for some time to come.

Information Dense Tasks

Small to moderate screen size, viewed whilst seated or standing, is best for this use case. The user is within arm's reach of the screen, which calls for high resolution, brightness and sharpness for detailed language and text display, reducing eye-strain. A seated position allows minimal long-term fatigue (though standing desks are increasing in popularity to address RSI concerns for some people), and the closer viewing position allows more economical screens with higher pixel densities, which in turn further reduces eye strain. More modern screens are now UHD (4K) and at 28" diagonal have 157ppi, whereas the 24" HD (1080p) screens traditionally popular in the 2000s and early-mid 2010s had only 92ppi.
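Those pixel density figures follow directly from the native resolution and the diagonal size; a quick sketch of the arithmetic for anyone wanting to check other screens:

```swift
import Foundation

// ppi = diagonal pixel count / diagonal inches
func pixelsPerInch(width: Double, height: Double, diagonalInches: Double) -> Double {
    (width * width + height * height).squareRoot() / diagonalInches
}

print(pixelsPerInch(width: 3840, height: 2160, diagonalInches: 28))  // ≈ 157 (28" UHD)
print(pixelsPerInch(width: 1920, height: 1080, diagonalInches: 24))  // ≈ 92 (24" HD)
```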

Handheld devices are restricted in screen size by portability and handheld fatigue, and hence can be optimal when all required information can be displayed efficiently on a smaller screen; however long-term use drives fatigue and is sub-optimal.

Immersive Gaming

The definition of what is or isn’t immersive can be debated, however Virtual Reality is the best example of being fully immersed for most of our senses. Ultimately though, gamers still prefer the comfort of console gaming for long periods on a large television whilst comfortably seated, and others prefer the higher resolutions and frame rates afforded by dedicated gaming desktop machines. The ultimate solution is Virtual Reality, however the technology will remain sub-optimal until it becomes lighter, with improved air circulation, and is then able to be worn for longer periods.

Glanceable Information

Small screens where the visual target is known and direct, with minimal other information to distract the user - smartphone screens or watches. The more information on a small screen, the less glanceable it becomes. Larger screens may have subsets of glanceable information, however due to the visual seeking time they are less optimal for this application. For glanceable information to be truly glanceable it must be instantly presented at the moment of the glance, and the information must be clear, concise and readily locatable. In this manner a standard wristwatch that displays the time on its watchface is the ultimate expression of glanceable information. Modern smartwatches that do not have always-on displays introduce a delay as the screen lights upon turning the wrist, which is sub-optimal. In addition, some watchfaces (eg the Infograph watchface on the Apple Watch Series 4 and 5) can become too information-dense in such a small area and become less glanceable as a result.

Even the Apple Watch Series 5 implementation, which includes an “Always On” display, does not actively display any second-by-second information (including notification indications) when in its part-asleep mode, choosing to update the minute hand only as each minute passes. Whilst this is glanceable for the time in minute increments, it is still not ideal compared to a mechanical watch for telling the time to the second, or even sub-second, reliably at a glance.

Smell and Taste

Admittedly some have argued the inter-relatedness of these two senses, but considering them together is reasonable given that scent generation and taste generation are both technologies in their infancy, and therefore there are few examples of commonly used applications that utilise these senses.

Future inflection points might be scent generation to provide a sample of how a bunch of flowers will smell based on an online order, or similarly how a meal ordered online might taste before ordering. No prior knowledge of the flowers or food would be required.

Sound

Sound output from a device to a person can be via tones, music or speech. This audible information can be delivered via several technologies: speakers (broadcast), or personal/individual-only devices such as over-ear headphones, in-ear headphones or bone conduction headphones. Whilst each can have different features, such as noise-cancelling technology, sealed, non-sealed, open back and so on, the key features of each will be mentioned where relevant but not all will be discussed.

Speakers

The oldest and most common method for conveying sound, speakers provide the most flexibility in terms of accurate frequency response, and when multiple speakers are positioned around a room they can also provide the most realistic reproduction of real-world recorded sounds. They are the most comfortable to listen to as they do not require anything to be placed on the head or in the ear, so they are ideal for longer listening tasks with minimal fatigue. They would be used exclusively if not for the fact that they can only be used when everyone within earshot is interested in listening to what they are playing. Hence in multi-person situations they have fallen out of favour, with preference given to personal devices such as headphones.

Over-ear Headphones

The oldest style of headphone encapsulates the entire outer ear with a padded cup to contain and seal the sound against each ear. This allows for full stereo separation and large speaker elements providing the best audible spectrum of sound reproduction whilst fitting the widest range of ears possible; however they are bulky, can be heavy, and have poor air circulation over the ears, leading to discomfort when worn for longer periods, or even short periods in hot or humid conditions. They are optimally suited to temperature-controlled environments where accurate sound reproduction is a priority and where isolation from the outside world is desirable. Unsurprisingly these are the optimal interface for podcasting and radio, and many prefer them in noisy open-office environments.

In-ear Headphones

These are sometimes called bud headphones, ear-buds or in-ear monitors (IEMs), and have become the most widely used of all headphones due to their low cost, portability and disposability. However, other than IEMs they do not fully seal the ear canal, which causes sound to leak out such that passers-by will hear some audio. Sealed ear-buds exert some outward pressure on the ear canal; whilst they far better isolate outside sounds, they are also less comfortable for longer wearing durations. The variable nature of individual ear canals can make in-ear headphones problematic, being either too loose or too tight for some individuals. Some models include different tips (foam or silicone), and some offer moulding services to perfectly match the wearer’s own ear canal. In addition, the significantly smaller speakers dictated by in-ear designs restrict accurate sound reproduction, particularly at low frequencies. More recently, fully wireless ear-buds have gained popularity due to their small size, light weight and portability, albeit with a limited lifespan and at a significant cost.

Bone Conduction Headphones

A newer technology consisting of two pads that press gently against the temples, above and in front of the ears, and vibrate at complex frequencies, accounting for changes in sound due to bone and fluid density impacting resonance in the skull beyond 10kHz. At low volume levels the sound is imperceptible to passers-by, however as volume increases they are not silent to non-wearers. As they do not impede the ear canal nor the ears themselves in any way, the wearer can clearly hear the world around them, and these are widely considered the optimal interface for listening to audio as a background sound without disturbing others nearby whilst still maintaining full awareness of surroundings. Examples include busy city streets and some work environments where interruption by others is a job requirement.

Touch

Touch output from a device to a person is most commonly via haptics and vibrations, or in more advanced interfaces intended for the visually impaired, via a braille device such as a Refreshable Braille Display (RBD). Whilst RBDs are only of use to those that are visually impaired and can read braille, for them they are extremely useful. Those that can read braille can become extremely fast, averaging between 125 and 200 words per minute. Whilst this is still slower than sighted reading rates, which average just above 200, it’s close enough to demonstrate that braille is an effective replacement for the sighted written word, though not a faster one to consume.

The first pagers, popularised by Motorola in the 1980s, used a rotating vibration mechanism to indicate when a paged message was received, alerting the wearer to the message, after which they would call back the provided number (if given) via the nearest landline phone. As mobile phones gained popularity they also added vibration, introducing different stepped vibrations to indicate different events, like an SMS message vs an incoming phone call or an alarm. Haptics are driven by linear actuators, can be more precisely controlled, and are increasingly found in smartwatches.

The main advantage of haptics vs rotational vibration messaging is the reduction in noise generated, and the complexity possible in a haptic sequence allows for more subtlety in message types. The amount of information that can be conveyed remains extremely small, and is therefore only useful for notification of events.
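As an illustration of just how small that vocabulary is, watchOS exposes haptics to developers only as a fixed set of predefined patterns via WatchKit; a minimal sketch (the event type here is my own, purely for illustration):

```swift
import WatchKit

// A hypothetical app event type, mapped onto the Taptic Engine's
// small, fixed set of predefined haptic patterns.
enum AppEvent { case message, taskComplete, error }

func notifyByHaptic(for event: AppEvent) {
    let device = WKInterfaceDevice.current()
    switch event {
    case .message:      device.play(.notification)  // distinct tap pattern per event type
    case .taskComplete: device.play(.success)
    case .error:        device.play(.failure)
    }
}
```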

Neural

Technically not a sense, but rather the brain that collects and makes sense of the senses; nonetheless, progress in direct neural technology is improving each year. Current technology allows a user to train a cursor to move by thinking (as mentioned in Part 1: Input), and though it is slow and inaccurate, it’s inevitable that neural interfaces will someday allow all sensory information to be provided directly to controllers or computers rather than via our body physically.

The future inflection point will be when that interface performs the equivalent function of our body, via gestures or touches, without the need for us to move. At that point we may well truly be in the simulation, because we won’t be able to tell the difference. It will be both interesting and concerning when that happens.

Devices

For final conclusions about a subset of devices many people use, relative to optimal interfaces for inputs and outputs, refer to Part 3: Devices.

Optimal Interface Part 1: Input

This article is posted in conjunction with Episode 93 of Pragmatic.

I’ve been fortunate in recent years to have tried the vast majority of consumer user interfaces and also the software running on each platform that’s widely regarded as best in class for each interface. I’ve written previously about going Back To The Mac and spoken about using a Microsoft Surface Pro and even tried going Phoneless with just an Apple Watch.

One aspect of my job has been user interface design, conceptualisation and controls and in this series of posts I’d like to explore inputs, outputs and devices in turn, looking at what has worked well and why I think that is as well as what the next inflection points might be.

Part 1: Input

Input to a device from a person must be in a form the person can send to the device, and hence has to be via a mechanism we can physically perform:

  • Sound
  • Touch
  • Movement
  • Neural

We shall exclude attempts to convey meaningful information utilising smell by projecting a scent of some kind since that’s not a trick most people can do and likewise for taste.

Sound

The first popular device to accept control inputs from sound was the Clapper: “Clap on, Clap off” to turn lights on and off. Spoken word has proven to be significantly more difficult, with many influencing factors: local accents, dialects, languages, speaking speeds, slurring, variable speech volume, and most difficult of all, context. The earliest effective consumer products appeared in the early 1990s with Dragon Dictate, which used an algorithmic approach that required training to improve the speed and accuracy of recognition. Ultimately algorithmic techniques plateaued until machine learning, utilising neural network techniques, finally started to improve accuracy through common language training.

Context is more complex, as in human conversation we infer much from previous sentences spanning minutes or even hours. For speech input to track context requires consistently high recognition accuracy and the ability to associate contexts over long periods of time. The reliability of speech recognition must be consistent, and faster than other input methods, or people will not use it. Sound commands are also not well suited to scenarios where discretion is advised, nor to noisy environments where isolating a subject is difficult even in a human conversation, let alone for speech detection by software.

Despite improvements, the Apple Siri product ‘feature’ remains inaccurate and generally slow to respond. Amazon Alexa, Google Assistant and Microsoft Cortana also offer varying degrees of accuracy, with heavier use of machine learning in the cloud providing the best results to date, at the expense of personal privacy. As computational power, response time and accuracy improve, sound will become the preferred input method for entering long-form text in draft (once it keeps up with the average human speaking rate of about 150 words per minute), since without additional training on a physical keyboard this is faster and more convenient. Once these things improve it will also be the preferred method for short commands, such as turning home automation devices on or off, in scenarios where no physical device is immediately accessible.
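For what it’s worth, dictation of this kind is already exposed to third-party developers on Apple platforms via the Speech framework; a minimal sketch transcribing a pre-recorded file (live microphone input additionally needs an AVAudioEngine tap, and all of it assumes the user has granted authorisation first):

```swift
import Speech

// Transcribe a pre-recorded audio file with Apple's Speech framework.
// Assumes SFSpeechRecognizer.requestAuthorization(_:) has already been
// granted by the user.
func transcribe(fileAt url: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-AU")),
          recognizer.isAvailable else { return }
    let request = SFSpeechURLRecognitionRequest(url: url)
    _ = recognizer.recognitionTask(with: request) { result, _ in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}
```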

Touch

Touch involves anything that a person can physically push, tap, slide across or turn, and encompasses everything from dials to mechanical sliders, keyboards and touch screens. Individual buttons are best for dedicated inputs whereby each button represents a single command or very similar commands, with a keyboard being a common example of a button grid.

Broadly, touch can be grouped into either direct or indirect. Examples of direct input include light pens, and resistive and capacitive touch screens. Light pens required the user to hold them, and they were tethered, slow and not very accurate. Resistive touchscreens still needed a stylus to be accurate; some could use the edge of a fingernail, however the pad of a finger wasn’t very accurate, and it was not possible to detect more than a single touch point at a time. Capacitive touch brought better finger accuracy and allowed multiple simultaneous finger touches, which enabled pinch and other multi-finger gestures. Although no stylus was needed, to achieve high levels of accuracy a stylus was still recommended.

Indirect inputs include keyboards and cursor positioning devices such as mice, trackpads, trackballs and positioning sticks. Keyboards mimicked typewriter keyboards and have remained essentially unchanged from the first terminal computers through to personal computers; apart from user preferences between key-switch mechanisms, little has changed in decades.

Cursor pointing devices allow for precise cursor positioning with the ability to “nudge” a cursor which is not possible without zooming on a touch interface.

Hence for precision pointing, indirect methods are still more accurate than a stylus due to “nudging”. However precision pointing is generally not a strict requirement for most users in most applications. Most tasks therefore benefit from the simplicity of direct touch, which is faster and requires no training, making direct touch the most accessible method.

For bulk text input, physical keyboards remain the fastest method, however training is necessary to achieve this. Keyboards will remain the preferred bulk text entry method until speech recognition improves, noting that the fastest English typing speed record on a computer was 212wpm, set in 2005 using a Dvorak simplified keyboard layout. The average typing speed is about 41 words per minute, hence speech recognition any faster than this with a high degree of accuracy will become the preferred dictation method in most use cases.

Movement

Movement requires no physical connection of the body to the input device and includes gestures of different parts of the body. The PlayStation Move was an early example where the user held a device that wasn’t tethered to the machine but had its movement directly tracked. Other examples are Virtual Reality systems that use handheld controllers with gyroscopes and accelerometers for tracking movement of the hands and arms.

The most popular natural free-standing movement tracking device so far has been the Microsoft Kinect, released for both the PC and the XBox. Its movement tracking had issues differentiating backgrounds, and was thrown off by people walking past, in front of, or behind those it was tracking at the time. Room size and other obstructions also created a challenge for many users, whereby couches, chairs and tables needed to be moved or removed in order to accommodate a workable space within which tracking would function reliably.

This form of movement tracking is useful for individuals or small groups of people in enclosed environments with no thoroughfare, though the acquisition time for precise positioning, even with an Xbox One Kinect 2, was still too slow, and the Kinect 2 was discontinued in 2017. The newest development kit for the next generation of Kinect is the Azure Kinect, announced in February 2019.

Current technology is still extremely inaccurate, easily confused and immature, with a limited set of standalone use cases. Extremely accurate natural free-standing position tracking is unlikely to be useful as a mass input device on its own, however in conjunction with speech recognition it could provide vital contextual information to improve command interpretation accuracy. It also has applications in noisy environments, where an individual isolated in front of a device such as a television wishes to change channels with a gesture, without using a physical remote control.

Neural

Brain Computer Interfaces (BCIs) allow interaction through the measurement of brain activity, usually via electroencephalography (EEG). EEGs use electrodes placed on the scalp and are cheaper and less intrusive than functional MRI (fMRI), which tracks blood flow through different parts of the brain and, whilst more accurate, is far from straightforward.

In the mid-1990s the first neuroprosthetic devices for humans became available, but they took a great deal of concentration and the results were extremely difficult to reliably repeat. By concentrating intensely on a set thought it was possible to nudge a cursor on the screen in a certain direction, however this wasn’t very useful. In June 2004, Matthew Nagle received the first implant of the Cyberkinetics BrainGate to overcome some of the effects of tetraplegia by stimulating the nervous system. Elon Musk invested $27M USD in 2016 in a company called Neuralink, which is developing a “neural lace” to interface the brain with a computer system.

It remains extremely dangerous to interface directly with the brain, however exploration is necessary for it to become useful in future, since the amount of data we can reliably extract from sensors sitting on our scalp is very limited due to noise and signal loss through the skull. We therefore need implants that connect directly with neurones before we can get data in and out at any rate useful enough to overtake our conventional senses.

Attempting to guess how far off that inflection point is, at this moment, is extremely difficult. That said, when it comes it will come very quickly: some people will choose to have chips implanted, allowing them to out-perform others for certain tasks. Even once the technology becomes safer and more affordable there will always be ‘unenhanced’ people that choose not to have implants, and mass adoption might still take a long time depending on the rewards vs the risks.

Despite many claims, no one really knows exactly how fast a human can think. Guesstimates, in so far as our brains relate thought to speech, sit somewhere between 1,000 and 3,000 words per minute, however this is very broad. In terms of writing as a task, there’s the word-thinking rate, but when you’re writing something conventionally you will also be reading back, reviewing, revising and rewriting, as these are key parts of the creative process; otherwise what you end up with is most likely either gibberish or simply not worth publishing.

Beyond that, there’s an assumption that descrambling our thoughts coherently is even possible. More than likely some training will be necessary, in the same fashion in which we currently have to rephrase our words for a machine to interpret a command; initially at least, re-ordering our thinking might be required to get a usable result. Add to this that multi-lingual people may think words in a specific language or mix languages in their thinking, and how a neural interface could even begin to interpret that is a very long way off - most likely not in our lifetimes.

More in Part 2

Next we’ll look at outputs.

Back To The Mac

It’s been a long series of experiments, beginning in the mid-2000s when I moved from Windows Vista to Mac OS X Tiger, then to the iPad in 2011 running iOS, back to Windows 10 on a Surface Pro 4, back to an iPad Pro in 2016, trying a sole Apple Watch LTE as my daily device, and finally now back to a MacBook Pro with Touch Bar running Mojave.

Either I’m completely unprincipled in my use of technology, or perhaps I’d prefer to think of myself as one of the few stupid and crazy enough to try every different mainstream technological option before reaching a conclusion. Whilst I admit that Everything is Cyclic, it is also a quest for refinement. Beyond that sentiment, naturally, as the field of technology continues to evolve, whatever balance can be found today is guaranteed not to last forever.

If you want the TL;DR then skip to the Conclusion and be done with it. For the brave, read on…

Critical Mass for Paperless

Ideally computers would replace paper and ink for communicating ideas in smaller groups in person, and replace overhead projectors and whiteboards as well for larger groups, but they haven’t. The question is simply: which is easier?

We are all able to pick up a pencil and write as we are taught to at school, yet despite typing being an essential skill in the modern world, many people cannot touch type; and with keyboards on small glass screens now all non-standard sizes, even that 80s/90s typing skill presents difficulties for skill-level equalisation among the populace. (I’m now beating most 15-25yr olds in typing speed tests as they’ve learned on smartphones, away from standardised physical keyboards.)

The iPad Pro with the Apple Pencil represents the best digital equivalent of an analogue pen or pencil, and hence for nearly 2-1/2 years now I have not needed to carry an ink-based pen with me. At all. As an engineer I’m not generally interested in sketching, and whilst that’s something I can do I’m not particularly good at it, so I use the Apple Pencil to take notes. Unlike ink-on-paper notes though, I can search through all of my notes easily with handwriting recognition.

The use of iPads for this purpose has increased significantly in our office (no, not entirely because of me, though I was the first I’m aware of to do it in our office), and it has increased because it is so much better than ink on paper. Photocopier and scanner usage has dropped significantly, and it’s only a matter of time before there is a transition away from them altogether. Like the fax machine: shortly there will be one photocopier per floor, then one for the building, and then none at all, within a decade.

The paperless office may finally arrive; a few decades behind schedule, but better late than never.

Fighting the Form Factor

A term I’ve come across in programming is “Fighting the Framework” which is meant to illustrate that Frameworks and APIs are written with an intent, with data structures, methods and objects within all cohesively designed around a specific model, view and/or controller, inter-object messaging and so on. If you choose to go around these structures to create your own customised behaviours, doing so represents significantly more work and is often far more error-prone as you are going against the intended use and nature of the frameworks.

I’d like to propose that there are people who love technology and are obsessed with taking devices of a specific form factor and making them “bend” to their will, using them in ways that fundamentally conflict with their design intention. Irrespective of whether you believe pushing the boundaries is good practice or not, there are limits to what is possible, what is practical, and what can realistically be expected when you fight the form factor.

Examples include the commentary around the iPad, or tablets in general, still being “just a tablet”, meaning that they are predominantly intended to be used as consumption devices. Of course that’s a reductive argument, since content comes in many forms - written, audible and visual at a very basic level - and within each there are blends of multiple, including newspapers, comic books, novels, TV shows and movies. The same argument works in reverse: according to the currently popular trope, it’s “too hard” to create content on a tablet, and therefore it is and can only be a consumption device.

The fundamental structure of the iPad (iOS more specifically), with the constraints of a single viewport and the requirement to cater for the lowest common denominator input device (a human finger), makes it difficult to directly copy ideas and concepts from desktop devices, which have 20 years or more of trial, error and refinement behind them. As time goes on, more examples of innovation in that space will develop for audio (eg Ferrite for podcast editing) and video (eg Luma Fusion), and although these will not satisfy everyone, only a few years ago there were no equivalent applications on iOS at all.

In the end though, there is no easy way for the iOS form factor (both physical and operating system) to permit certain important, proven aspects required by a specific class of application designs and use cases. For these unfortunate classes, fighting the form factor will yield only frustration, compromise and inefficiency.

Multiple-Screen

You can’t beat pixels (or points). Displaying information on multiple screens in a way that allows a user to view information side-by-side (or in near proximity, if not perfectly aligned), and importantly to visually compare, copy and paste seamlessly between views, is a feature that has existed and been taken for granted on desktop computers for decades.

On larger-screened iOS devices this feature has been added (to an extent) with slide-over and side-by-side views, however copy and paste between applications isn’t widely supported and comes with several caveats; most importantly, there aren’t enough pixels for a large number of side-by-side review tasks. The larger the documents or files you need side by side, the worse it is on an iPad.

iPads have supported application-specific monitor output that isn’t just a mirror of the iPad screen, however support for this is rare and bound to the application. There’s no generic way to plug in a second, independent monitor and use it for any general purpose. Then again, there’s no windowing system like on the desktop, so without a mouse pointer or a touch interface on the connected screen, how could the user interact with it?
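For the curious, that application-specific (non-mirrored) output is driven by UIScreen connection notifications, with the app attaching a second UIWindow to the external screen; a minimal sketch of the pre-iOS-13 approach (scene-based APIs changed this later):

```swift
import UIKit

// Keep a strong reference, or the external window is deallocated.
var externalWindow: UIWindow?

func observeExternalScreens() {
    NotificationCenter.default.addObserver(
        forName: UIScreen.didConnectNotification, object: nil, queue: .main
    ) { note in
        guard let screen = note.object as? UIScreen else { return }
        let window = UIWindow(frame: screen.bounds)
        window.screen = screen                          // bind the window to the external display
        window.rootViewController = UIViewController()  // app-specific content goes here
        window.isHidden = false
        externalWindow = window
    }
}
```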

Some have proposed that in future multiple iPads could be ‘ganged’ together, but apart from being cost-prohibitive, it’s unlikely for the same reason that ganging iMacs together isn’t supported anymore (Target Display Mode ended in 2014). Beyond this, no existing iPad (even if it supports USB-C) can be chained to support more than one additional monitor. Most current laptops and desktops support two additional displays, at a combined cost significantly less than a multiple-ganged-iPad-Pro solution.

Navigation Methods

Scrolling and navigating around large documents is slow and difficult on an iPad, with few shortcuts; many applications lack search functionality, loading large files can take a long time, and there’s a lot of fast-flick-swiping to get around a document. These aren’t issues on a desktop operating system, with search baked into practically every application, plus Page Up/Down and scrolling via scrollbars, trackpads and mouse wheels, all of which are less obtrusive and much faster overall than flicking for 30 seconds to move a significant number of pages in a document.

Functional Precision

The capacitive touch screen introduced with the iPhone, and subsequently the iPad, made multi-touch with our highly inaccurate built-in pointing devices (our fingers) a reality for the masses. As an input method though, it is not particularly precise, and for that a stylus is required. The Apple Pencil serves that function for those that require additional precision, however pixel-perfect precision is still faster and easier with an indirect positioning mechanism like a cursor.

Conclusion

My efforts to make Windows work the way I needed it to (reliably) weren’t successful, and the iPad Pro met a great many of my computing needs (and still does for written tasks and podcast editing). However I was ultimately trying to make the system do what I needed when it fundamentally wasn’t designed to do that. I was fighting the form factor and losing, too much of the time.

Many see working on the iPad Pro exclusively as a challenge, with complex workarounds and scripts to do tasks that would be embedded or straightforward on a Mac. Those people get a great deal of satisfaction from getting those things to work, but if we are truly honest about the time and effort expended to make those edge-cases function, taking into account the additional unnecessary friction in so doing, they would be better off using a more appropriate device in most cases.

For all of the reasons above I came back to the Mac and purchased a MacBook Pro 13" 2018 model, and I have not regretted that choice. I am fortunate that my company has provided a corporate iPad Pro 2, which I also use every day for written tasks. I feel as though I am no longer fighting against the form factor of my machines, making my days using technology far less stressful and far more productive. Which, in the end, is what it should be about.