
Herein you’ll find articles on a very wide variety of topics about technology in the consumer space (mostly) and items of personal interest to me. I have also participated in and created several podcasts, most notably Pragmatic and Causality, and all of my podcasts can be found at The Engineered Network.

Traction and Doom

Much has been written about when companies are doomed and who has traction in the market with what. This is my attempt to bring order to that chaos as it relates to Apple and a few other tech companies.

For a product to initially succeed it must solve a problem.

The Wii solved two problems: 1) Games traditionally required excellent finger-eye co-ordination and took lots of practice; 2) Games were marketed to gamers, not families. By introducing a wireless, motion-based controller Nintendo opened up gaming to people who had never picked up a game controller. Now people could interact in a far more natural way, and most games were designed NOT to require a D-Pad or small buttons. Nintendo bundled Wii Sports with their product to guide developers in how to make the best games for their new platform. It was easy to use, fun, about sports that many people had played, and required little to no finger-button co-ordination, which levelled the playing field with traditional gamers (compare and contrast the audience with the Xbox and Halo).

The iPhone solved three big problems: 1) Laptops were inconvenient for accessing the internet outside the office/home even though they were portable; 2) Mobile phones used ‘overloaded1’ keys for data entry into SMS, Notes, and emails, increasing the learning curve for new users; 3) The very popular iPod music player required people to carry (and keep charged) an additional device for their music. Perhaps there are more, but those three are key. At the time there was always Blackberry, there was always WAP, and the Palm Treo was also popular. None of those experiences was even remotely close to that of a laptop/desktop web browser at the time. The ‘full web’ experience was only possible by developing a low-power native web browser for the new product: Mobile Safari. The timing of the iPhone’s initial release, if anything, was too early: the initial EDGE data (2G) was very slow, and to get reasonable page load times it leaned too heavily on WiFi, which was not widely deployed at that point. Thankfully the second generation model had 3G, and with enough networks around the world supporting 3G data, page load times became acceptable. Solving the data entry problem required a brilliant soft-keyboard; nothing short of that would solve the key overloading problem. One key, one purpose, but fully configurable based on the application and input field that was currently selected. Predictive text and auto-correction that worked well were also key, and add to that a keyboard that disappears when not required, freeing a massive amount of extra space for applications to use in its absence. On the keyboard front the iPhone delivered.
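To illustrate just how painful key overloading was, here’s a toy sketch (in Python, purely illustrative and my own invention, using the standard multi-tap keypad layout) of what typing a single word used to cost:

```python
# Standard multi-tap keypad: each key is overloaded with several letters.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def multitap(text: str) -> str:
    """Return the key presses needed to type `text` on a multi-tap keypad."""
    presses = []
    for ch in text.lower():
        for key, letters in KEYPAD.items():
            if ch in letters:
                # One press per position: 'c' on the 2 key means pressing 2 three times.
                presses.append(key * (letters.index(ch) + 1))
                break
    return " ".join(presses)

print(multitap("hello"))  # 44 33 555 555 666 -> 13 presses for 5 letters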

Once we have these building blocks the rest are “me-too” items: Camera-phones existed (the initial iPhone camera wasn’t particularly good - my Nokia N73 at the time had a better camera); Better email existed (some would argue that Blackberry email is STILL a better experience); Contact management existed through syncing (ActiveSync/sync-services with many major phone brands at the time); Map services and mobile phones that could store and play music also existed. Of these only music was a significant step ahead of the competition, as the iTunes library was formidable at that time. No longer did people need to carry a separate device to listen to their favourite music, and for existing iPod owners the existing ecosystem integrated perfectly with the iPhone. Offline maps had existed for several years on competing devices and were in many ways practically better than Apple’s solution, as they didn’t consume download bandwidth and were always there irrespective of your physical location. The downside was the expense of both storing the maps data and purchasing it. On the iPhone it was essentially free.

If one needs more proof of where Apple saw the key features in the original iPhone, then consider: Apple’s initial camera sensor was cheap and unrefined, and GPS was not included, which would have shown intent to push Maps as a priority. Clearly Apple did NOT see these other pieces as key, and with time they were proven to be correct.

Identify the Barrier and Remove It

Nintendo made it easier for people not versed in gameplay to enjoy family games on their TV; Apple made it easier for people to access the internet without a computer, allowed quick and easy data entry on a mobile device, and eliminated the need to carry an additional device. They identified the barriers and then removed them. That is what primarily drove their successes, at least initially.

The Weight of Learned Behaviour

Innovation is at odds with the experience of learned behaviour and known user interface paradigms. The trick is to push the user down a less familiar road but one that leads to a better place - not just a different place.

Many have speculated about why Microsoft’s “Metro” interface, first seen on the Zune and then morphed into Windows Phone 7, 8 and now Windows 8, has not been a resounding success. Whilst it is quite excellent in many important respects, it is perhaps TOO different from what people are used to on the desktop, which means people trained in the paradigms of Windows XP, Vista and Windows 7 will struggle initially. Where’s the red “X” to close the window? For that matter, where the heck is my START BUTTON?? Resistance was always going to be high, but then Microsoft is playing the long game. They recognise they must innovate in usability and their user interface, but the massive weight of learned behaviour in their own customers is a difficult obstacle to overcome.

Metro on mobile devices has not taken off either. As on Windows 8, Metro offers a great way to consolidate your notifications/updates in tiles that are actively refreshed (Hubs). This makes communicating with other people quicker and simpler for many by changing the access hierarchy for information, but apart from this it adds little else. The market seems to have passed judgement on its effectiveness as a whole, but the question is why? Some pundits assert that it is simply a bad design, but I think that’s dismissive and misses the point. The problem Microsoft faces is that people have been trained from the desktop that applications are the doorway to data. One uses Skype to call people; Word to write documents; Excel to write spreadsheets and so on. Certainly the files and folders metaphor confuses the issue slightly, but selecting a file isn’t viewing or editing it but rather selecting which program to use to work on it (if more than one option even exists). On the web we visit a website to communicate with friends, starting with MySpace, Facebook, Twitter and so on. When Apple made the iPhone it knew that it had to be consistent with that, and continued the “app” paradigm.

We have been well trained to view the data of our lives using the application as a window. Take that window away and we feel lost - even if the new road leads to a better place the discomfort drives us to not take that road if there is not another compelling enough reason to unlearn what we have already learned.

Innovative Products Won’t Always Win

No matter how good your product is; no matter how innovative, sleek, beautiful or trendy it may be, it is all for nought if it does not provide a compelling reason for people to use it. Conversely, the fact that a product does not succeed does not mean it wasn’t innovative: the Palm Pre (and WebOS) is a recent example of this, and in some ways so is the Metro user interface. So many words are spent deriding one design over another, but fundamentally that debate is flawed. Products succeed because they fill a void that existed in the market for a significant proportion of the populace. Such products were not immediately intuitive to all of their users and yet they succeeded. The point is that the problem they solved was enticing enough for users to learn new behaviours and interfaces to access what the product made possible. In time that learning spreads through the populace, and shared knowledge drives momentum, which drives more success. In my mind, that is traction.

Spinning The Wheels

Every company and individual desires some degree of success - the definitions vary of course, but suffice it to say that when developing a product, people and companies want to understand if and how their product will gain traction. In other words: which are the problems that most need to be solved, and can they make something that solves them? The problem is that there IS no easy way to know. Steve Jobs at D2: “I’m as proud of the products that we have not done as I am of the products we have done.” Apple test extensively in-house and assess their own creations amongst themselves. If they fail to find their own product usable or useful in addressing the gap it was intended to fill, they kill it and move on. This type of iterative development takes big budgets, lots of time and a willingness to accept you’ve made a mistake, but the end result is that they only ship that which they believe is both usable and useful. All too often I have observed products that get shipped simply because they were “done”, not because they were usable or useful - milestones, and management wanting to save face, protect completion bonuses or recoup investment costs. “We’ve just spent $1M on this, we need to recover some of that cost…” is an understandable sentiment. Throwing something against the wall and seeing if it sticks is fine, but using the entire market to witness your success or failure is just foolhardy. Reputation matters and Apple understand this. For humans it seems it is easier to remember a lesser failure than it is to remember a bigger success, and that’s a problem.

Perception of Quality Affects Judgement

Young start-ups with no products in the field have no choice - they must press ahead with their product to the market and be judged. Larger companies with existing product revenues and successes do not need to be so hasty. Focus on product quality is always touted by companies, but the number that truly mean it is much smaller. Ongoing successes with good quality products (even those that are few and far between) lead to consumer confidence that the company in question creates good quality products (even when they have flaws). No one is perfect and no company gets it right all of the time; however, if such a well-respected company releases a product with several flaws, brand-loyal customers may well overlook them, but release a second, then a third flawed product and the loyalty disintegrates quickly. Many companies have been through this rise and fall, including Sony, Nokia and perhaps someday Apple (I’m not a clairvoyant). If history tells us anything it’s that no success lasts forever.

Initial Success Does Not Equate to Long-Term Success

Breaking into markets with revolutionary products is very hard and doesn’t happen often, but when it does, keeping the first-mover advantage can be very difficult. In order to stay ahead it is critical to find an aspect of the product or service that is difficult for competitors to replicate and hope that it is a key feature consumers want (presumably the company in question thought about this back during their initial brainstorming phase). The ongoing success of the iPod was ensured by the iTunes music store, which was the largest digital music store in the world. This also gave a leg-up to the iPhone, iPad and AppleTV. As Amazon and Google both catch up, Apple is now off pushing different boundaries. They know they cannot remain the biggest in town forever. Google’s competition with iOS through Android is fierce, and to many onlookers Google has already beaten Apple’s offerings in the mobile space in terms of popularity. Apple recognise this and are working on new products, as they should. Despite their initial success with the iPhone, that success was never going to last forever.

The final flaw in prevalent thinking is that this could mean Apple is doomed. If all Apple were doing was iterating its iPhone a bit at a time and absolutely nothing else, then I’d probably agree. They aren’t. The new Mac Pro shows they are still pushing the limits of convention. Apple has pre-announced a new product category. They aren’t relying on existing products, and they are continuing to iterate and search for the next big problem to solve. A while back I was worried. I’m not anymore. Apple don’t rush their development process and they only release when they are ready. Some people see time-gaps between products as Apple being lost and unable to innovate. The truth is that Apple are holding their nerve and only releasing what they believe will improve their existing market traction, when they are ready to do so. Apple is so very, very far from being doomed.


  1. Saying a button is overloaded means one key has several personalities, like on a standard PC/Mac keyboard: on its own, 1 = ‘1’ but when holding Shift, 1 = ‘!’ instead. On a traditional mobile phone the 2 key was also A, B or C (with other keys carrying punctuation marks), making it more difficult to learn. ↩︎

Another Elementary vs Sherlock Article By A Geek

(Contains Spoilers)

I have been a fan of Sherlock Holmes the character for as long as I can remember. I’ve read all of the books multiple times and seen many of the TV/film adaptations of the character, but two recent ones have become quite popular and now, having seen all of the episodes of both series, I’d like to consider the differences between them and why different people will be attracted to each.

Firstly, unlike most other adaptations of the character, both Elementary and Sherlock pose the question: what would Sherlock Holmes look like in a modern setting? Rather than going back to the time the books were written, 100 years ago, we are given two perspectives of a modern Sherlock. To make it more interesting, Sherlock is the British interpretation and, spurred on by the success of its first season, Elementary became the American interpretation.

The actors: Benedict Cumberbatch plays Sherlock Holmes in the UK series. He is tall and commands a presence, speaks quickly yet very clearly and rarely stops to take a breath when he’s on a roll. Excited and energetic, yet cold and calculating much of the time. Jonny Lee Miller plays the role in the US series. He is of average height, somewhat of a nail-biter and almost nervous, but his presence conveys more of a quiet condescension from the corner of the room rather than the confident impatience Cumberbatch portrays. Equally though, he tackles the dialogue very well and brings a decidedly different flavour to the role - further from the books’ depiction but nevertheless very entertaining.

The role: In Sherlock the lead role is a man who is obsessed with mobile phones, who lacks even the most basic interpersonal skills (insulting pretty much everyone at some point), who is fighting a nicotine addiction (somewhat of a faint nod to the ‘pipe’ from the books) but does use patches as a way to focus on difficult problems, and who enjoys having Watson around to compliment him on his brilliance. He regularly plays his violin to help him think and there’s a funny twist on the infamous Sherlock Holmes hat that ends up being quite endearing. In Elementary the lead role is a man who, whilst he lacks many interpersonal skills, is prepared to admit he made mistakes and even apologises to those he is closest to, is a recovering heroin addict and enjoys training Watson on how to be a better detective. He isn’t seen playing his violin in the entirety of the first season and there are no nods to the infamous hat or pipe. What’s interesting is that the drug usage is closer to the novels, though Sherlock was never in recovery (then again, perhaps recovery as we know it didn’t exist in those days), whereas in the UK version the writers toned it down considerably and just stuck with cigarettes.

Watson and the rest: In the UK version Watson, Lestrade and Mrs Hudson are all very close to their character descriptions from the books, but in a more modern setting. In the US version it’s almost as though the writers/producers thought: “How can we make these characters different somehow?” Being in New York meant no Scotland Yard and hence no Lestrade; instead, the character of Captain Gregson is far less openly reliant on Holmes and much less of a friend to him than Lestrade ever was in the books or the UK series. Mrs Hudson is a tall blonde woman with an Adam’s apple, and Watson is not a man but Joan Watson, a woman. Watson is played extremely well by Lucy Liu, no doubt about it, but her character could not be more different from the Watson of the UK series or the books. That Watson is fascinated by Holmes and can’t stop being amazed at what he does. The character depicted in the US version sees Holmes as a broken man who needs fixing, and there’s plenty of touchy-feely dialogue about how much they respect each other. It feels quite off to listen to because it detracts from the story being told and, worse, it’s nothing like how the characters interacted in the books.

The Final Nail: Moriarty

Every hero needs a great villain, and Moriarty is considered by many to be the pop-culture pinnacle of villainy. The UK portrayal has Andrew Scott playing the part of a small but brilliant man who has violent mood swings, is creepy in the extreme, and is cold as ice at the core. In the US version, in somewhat of a ‘shock’ (spoiler alert: skip to the next paragraph if you haven’t seen the end of Season 1 of Elementary), it turns out that Irene Adler is actually Moriarty - yes, not only is Watson a woman but so is Moriarty. Well, why not? The character is played by Natalie Dormer, confidently portrayed: cold, assured and arrogant. Funny thing is, I’d say that was more in line with the books’ portrayal, but the issue I have is the amalgamation of one brilliant character from the books (Irene Adler) with another brilliant character (Moriarty). It seems wasteful to say the least that we the audience get one for the price of two, as it were, and worse than that it’s wrapped up in one episode at the end. In the UK version at least Moriarty shows up in multiple episodes before (spoiler alert if you haven’t seen the end of Season 2 of Sherlock) he eats his own bullet. In addition, the UK Moriarty pushed Sherlock’s buttons far more effectively and intensely than in the US version, which was awesome to watch.

To summarise: there is no doubt that the US version is harder-hitting, more politically correct and more gender balanced, and it’s much easier to pick “whodunit” early in most of the episodes. It lacks the polish of the dialogue from the UK version, but then the writers cranked out 22 episodes in 1 year as opposed to 6 episodes in 2 years. Clearly the US writers earned their paycheques.

Personally I find both Elementary and Sherlock to be entertaining; however, in making the US version more PC and bending too many of the accepted rules for portraying Sherlock Holmes and the associated characters, Elementary ceases to be a modern-day Sherlock and becomes instead a Law and Order/CSI show with people who just happen to be called Sherlock and Watson. This gives the writers a certain creative freedom, but at the same time it carries the risk of losing something of what made the original stories so enduring and popular. This show will appeal to those that like the CSIs and Law and Orders of the world, but Sherlock Holmes aficionados be warned: it may taste strange.

On the other hand, the UK version soft-pedals the drug angle and over-plays the whole “we’re not a couple” bit between Sherlock and Watson; handling both differently could have made it sharper than it is. Couple that with the fact that the BBC don’t fund shows in the same quantities as the US (the writers have different motivators too), and a television-hungry audience ends up annoyed that currently there are only 6 episodes of Sherlock to Elementary’s 22. That said, Sherlock has more engrossing stories, is closer to the original, uses many clever modern twists effectively, and is more entertaining overall than its US counterpart.

I see great things in both series and will continue to watch them both. If you haven’t watched them yet, you should.

Design Theft and Attribution

An author signs their work, whether novels, essays or journalism articles, as does any artist, and so does any engineer. The difference in engineering is that sometimes the responsibility for costly mistakes or personal injury goes with that signature, but inevitably that is the playground in which engineers choose to play. Why is it then that so often it is deemed okay to change the name after the thing has been designed?

The designer of a system may well copy something from another, similar system, but the moment they put their name on it, the full responsibility of its success or failure lies with them. There needs to be traceability of where it started for legal reasons but also so that the original designer(s) can be tracked down if necessary to ask key questions that may include, “What the hell were you thinking?” and/or “Which CornFlakes packet did you rip this off of?”.

Seriously though, traceability matters. For this reason drawing title blocks and documents should have an Original Designer/Author field that is never changed, with each subsequent revision in a separate table that lists the revision number/letter, the names of those making the changes and a brief description of the changes made. Many such documents do, but that alone isn’t ‘problem solved.’ Who is allowed to write their names in the boxes needs to be controlled. An example: a pressing deadline requires that the drawing is submitted today, but to pass through the document release system it requires the sign-off of the original designer; alas, they are sick today and won’t be back until next week. Too often the easy path is taken and someone handy fills their name in the box and signs it instead. After that, and for all time, someone else’s name is in the box and the original designer gets no attribution. I’ve seen that scenario, and others besides: the designer temporarily assigned to another project, team leads pulling rank or seniority, and management reallocating design work throughout the team are all real-world situations in which it is done.
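To make that structure concrete, here’s a minimal sketch (in Python; the field names are my own and purely illustrative) of a title block where the originator field is set once and can never be reassigned, while revisions accumulate in their own table:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Revision:
    number: str        # revision letter/number, e.g. "A" or "2"
    author: str        # who made THIS change - never the originator field
    approved_by: str   # signing authority, which may differ from the author
    changed: date
    description: str   # brief description of the changes made

class Drawing:
    def __init__(self, title: str, original_designer: str):
        self.title = title
        # Set once at creation; no setter exists, so attribution survives
        # every later revision, sign-off and departmental reshuffle.
        self._original_designer = original_designer
        self.revisions: list[Revision] = []

    @property
    def original_designer(self) -> str:
        return self._original_designer

    def revise(self, revision: Revision) -> None:
        # New names only ever land in the revision table.
        self.revisions.append(revision)
```

The point of the sketch is the separation: the sick-designer scenario above is handled by putting the stand-in’s name in approved_by on a new revision, leaving the original designer’s attribution untouched.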

So why change the names at all? Surely one can leave the original name in place and have another authority, with documented permission, sign in their absence? Indeed some companies allow this; however, the state of document control systems and drafting procedures varies, and recently I’ve even been told this is not acceptable - the signature MUST match the name.

Irrespective of whether it’s the easy way out for whatever reason; irrespective of whether it’s a failing of a badly defined document-control system; and irrespective of whether the document structure is set up correctly to accommodate good originator and revision controls, there remains one HUGE issue.

ATTRIBUTION.

If another author copies the whole or any part of any of my posts without attribution, it is called plagiarism. How then is it possibly okay to take someone else’s design work and put your name on it? It’s theft. Design theft, pure and simple.

The shocking truth is that whenever I’ve brought this up with other engineers, the normal response seems to be one of surprise that I would care. What stuns me is that not enough other engineers care about it. I don’t steal other people’s work and I expect the same in return. Alas, deals with the universe don’t seem to pan out that well in reality, but I can live in hope that someday the engineering disciplines will take attribution far more seriously than they currently do.

All Money on the Alpha

I’m a great believer that good design comes from having a design lead that sets a high standard and won’t compromise, is open to different ideas and is humble enough to admit when their designer has the better idea. The team of designers is diverse and people are free to be themselves, but are also inspired by their lead designer to accept the umpire’s call when things don’t go their way.

If this sounds a bit too much like fairyland to you, you’ve clearly worked in the real world at more than just one company. Of course situations like that described above do happen but they are the exception rather than the rule. It depresses me greatly to observe this happening and over the years I’ve tried to understand why this is so rarely the case.

The best designers rarely enter management because they usually realise it’s either not what they want to do or not their strength. Managing people is hard, but managing people that do work you don’t understand is a recipe for frustration and pushes the better designers and leads out the door. The problem is that with so few talented designers moving into management positions (there is a growing trend for talented designers to simply leave and start their own companies), those management roles are filled by people who lack grass-roots experience of the job they are managing.

Management push the leads because that’s their job. They push on schedule (and hence cost) and they respond to action and to deadlines being met. An experienced and good design lead would consider a younger designer’s inspiration even late in the game if the innovation was a good one and the trade-offs were worth the disruption - but try to sell THAT to a budget- and time-sensitive management chain. A grass-roots manager might understand the trade-offs, but one who isn’t probably won’t, and will default to focusing on the schedule.

When this starts to happen the good leads leave and management tend to appoint someone who is confident. All of their money is on the Alpha. Seemingly perfect for the job, they arrange the chess board (whether it needs arranging or not - and yes, that’s a metaphor for the people in their department) just the way they want. It’s their way or the highway. Certainly they can be very inspirational; just don’t tell them you’ve got a better way to do something. Before you know it there are new dress standards, everyone in the team must be in the building between fixed hours, lunch hours are now at their discretion and everything is your fault and not theirs.

Before you say that’s over the top: I’ve seen people become the Alpha. I’ve seen them change from reasonable designer to ruthless dictator over the course of a year in control of a department. It’s terrible to watch, but pride and ego are things all of us possess, hence we can all be sucked in by power, and it does corrupt. The final problem though isn’t how much you like your boss (or how many shades of Alpha your boss may possess), it’s this: what happened to the innovation?

The more people are forced to conform to one person’s way of thinking, one method, one set of rules, the faster innovation dies. Of course there’s always the possibility that our Alpha is themselves innovative. The sad truth of the matter though is that most individuals aren’t a tap (faucet) of innovative ideas - most ideas are half-baked, stupid ones, and that includes loads of mine. People can get lucky, for sure, but truly innovative companies don’t rely on one individual to innovate for the rest of them.

It sounds like a cliché, but it is true that a company’s greatest asset is its people, and with them their ideas. Letting those ideas be shared means the good leads can pick the most innovative ones. When balanced fairly against end-to-end cost, these small innovations in turn lead to successes and the company will prosper. Forcing everyone to conform to one way of thinking stifles innovation and, although the mileage may vary, the company will inevitably do the opposite.