10th April 2013
Over the last decade I have developed HMI (Human Machine Interface) and SCADA (Supervisory Control and Data Acquisition) user interfaces [amongst many other things] as well as a few desktop/mobile apps - one of which made it to the App Store a few years back. As I’m embarking on yet another piece of desktop software development I’ve been thinking a great deal about how end-users (aka customers) see, use and react to the work I do. This is healthy and, I suppose, nothing new or particularly interesting. Today I had an epiphany: the problem so often isn’t satisfying your customer, it’s identifying your customer in the first place.
When developing software for a client there are often meetings [sometimes far too many] with different people representing the client, and all too often they are not the end users of the product itself or, worse, have no experience with what you’re doing. When developing your own software the first people that use the software in question are usually the alpha and/or beta testers. They say a good beta tester is worth their weight in gold: they exercise functionality and report bugs accurately, making it easy to track them down and fix them. That’s all well and good, but testers like that don’t provide the balance you need. The problem is the selection process for beta testers and for client representatives is usually A) influenced too heavily by ourselves or B) out of our control entirely.
Clients will often "allocate" someone who is interested in the product from the client side and for our own Beta testers we typically choose those within our close circles: i.e. geeks like us. In both cases these people will often bend your ear and say "I’d really like this feature" or "I think this would work better this way". They distort what your application is trying to achieve. As such, feedback from them will give a distorted view of what end-users will actually think and how they use the end product.
With clients I find the best choice is to request two representatives: one who is well versed with similar products/software in their organisation or the market, and one who is going to use the software regularly but is preferably NOT well versed in similar software. The tension between their views should hopefully guide the product to a more balanced end result. Admittedly one does not always get a choice, but it doesn’t hurt to ask nicely - they can only say no.
With beta testers, involve as many friends (geeks, probably) as you like, but try to get older people (aunts, uncles, parents, grandparents) who aren’t as familiar with technology to have a look if you can. They will provide more balance and push and prod the software into places it wasn’t supposed to go. These are the sorts of beta testers people should value the most.
My only other piece of advice is to remember that you are also your own customer. Think to yourself: Am I proud to put my name on this? Would I use this software and recommend it to others? (selflessly of course). If you make a design decision that you strongly believe is right (even against the opinion of one or more of your testers or client representatives) sticking with it is usually the right call. Understanding who your end customer honestly is helps you to focus. Be careful who you choose as the customer(s) to satisfy. Pick the wrong one(s) and you’ll burn time and money and waste effort. Pick the right one(s) and your chances of success will improve dramatically.
5th August 2011
Time and again lately I’ve been hammered at work regarding software schedules. Whilst it’s difficult to explain scheduling to management who don’t understand software, maybe it is possible to explain it to everyone else. I thought it would be interesting to put down my thoughts on how it is possible to generate a software schedule that is realistic and achievable.
Civil engineering is a great analogy to draw on, so let’s start by setting the scene with a project where they’re building a road. The designers start by doing investigation and determining the requirements for the road (testing soil, how many cars per day it needs to carry, how heavy those cars are, etc). Once this is done and agreed the design work begins - detailed drawings, specifications and reports that are handed over to a constructor. The constructor takes these drawings, orders the bits they need and starts building. Unfortunately it rains on and off for weeks and they are unable to lay any pavement, so there is a big delay. Once the road is built and the finishing touches are put on (lines, barriers etc) it needs to be tested to ensure it’s the right thickness and consistency and complies with the design. Once that’s done the road is finished and everyone is (hopefully) happy.
Still with me? I hope so because now it’s time for software development.
First the programmers define the scope by investigating the requirements for the software: this includes the interfaces to people, other software/hardware systems, databases etc. Once these basic requirements are agreed the framework design begins: how they will break the code down into objects (blocks), as well as specifications describing how the code will finally be assembled and then tested. Once this is done the programmers begin the task of programming the objects. When the objects are done they are tested and then assembled together into larger, functional pieces of code. These pieces of code are then tested and, once the finishing touches are done, the final system is acceptance tested and then hopefully everyone is happy and we are done.
The similarities are clear enough, so why is scheduling software so difficult? I think there are two drivers behind the problem: 1) Because software can be easily changed, it often is - even though the change can have massive implications; and 2) Time for "inclement weather" is rarely allowed for in software schedules. In software it doesn’t rain but bugs are found that need to be fixed. Think of it like someone raining on your otherwise perfect piece of code. This time needs to be allowed for. In Civil projects weather delays are often included in the schedule from the very beginning, but time for code rework in software projects isn’t.
If one was building a road and it was discovered they forgot to fill a big hole in the ground and laid the pavement over the top of it, they would have little choice but to rip up that bit of road and start again. So it is also for software. If we find a bug it needs to be fixed. Let’s say the road constructor was doing their final testing of the road when they discovered the hole - they would have to fix it before they could finish the road. The same is true of software. Allowances must always be made for rework - no matter how good the programmer is or claims to be, they will always make mistakes and there will always be some rework. Like the weather, the size of the delay is hard to quantify, but some delay is a guarantee.
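One way to build that "inclement weather" allowance into an estimate is to pad each task with a rework contingency, much as a Civil schedule adds weather days. A minimal sketch - the task names and the 20% rework factor are purely illustrative assumptions, not figures from any real project:

```python
# Sketch: pad each task estimate with a rework contingency,
# the software equivalent of scheduling for inclement weather.
# Task names and the 20% rework factor are illustrative only.

REWORK_FACTOR = 0.20  # assume 20% extra time goes to finding and fixing bugs

def schedule_with_rework(tasks, rework_factor=REWORK_FACTOR):
    """Return per-task estimates padded for rework, plus the padded total."""
    padded = {name: days * (1 + rework_factor) for name, days in tasks.items()}
    return padded, sum(padded.values())

tasks = {"design": 10, "implementation": 25, "testing": 8}  # days, made up
padded, total = schedule_with_rework(tasks)
print(padded)  # each task padded by 20%
print(total)   # 51.6 days instead of the "perfect world" 43
```

The point isn’t the arithmetic - it’s that the contingency is in the schedule from day one, not discovered at final testing.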
In the land of Civil Engineering there is a well understood concept that late changes to the design, like "oh, we wanted the road to go a bit to the left around this forest" at the 11th hour, are just impossible. The roadbase is likely done, some pavement is laid, and you just can’t change it, right? Even if the scope change comes earlier in the design, once the road clearing has begun there is usually a big cost involved in making a change. The problem with software is that there is a belief that "it’s not a physical thing I can touch, so I can change it." Perhaps the problem is similar to the difference between typing a document on a typewriter and typing a document in a word processor. We have become so used to fixing typographical errors in a Word document with a simple keystroke or two and a fresh print-out, we think that changing software is just as easy.
There may well be some changes to software design that are minimal in impact, but every change should be assessed for its time and cost implications. If objects are already written, tested and ready for final integration, and a change is then made, all of that code needs to be regression tested again and this must be costed. The schedule should be adjusted and extended in that instance, as with every change to the scope of the design - no differently to a Civil schedule.
The other problem comes back to design vs implementation. Some people say that the code is the design, but in that sense it is also its own implementation. The problem is how does one separate design, implementation and testing in software? In the Civil world it’s clear: the hole you just dug is your implementation and you only dig holes when your design is done and you only test the hole is correct when the hole is finished.
The problem with software is that in this day where multiple programmers are usually needed to complete a project and programs are quite large - there is no choice but to break it down into objects and smaller pieces. This means that not only do the objects have design, implementation and test stages for each object, but so does the overall final product delivery. Add to that the fact that good programmers test continuously as they program and it gets difficult to track time.
How does one track hours? Tracking software as one blob of time (like most projects do) will artificially blow out the time taken to develop the software. If it is broken down into costs for object development and overall development, and those are broken down into design, implementation and testing, the hours spent can be correctly tracked. Certainly the piecemeal testing of sub-functions within an object is not always formal or rigorous and may well be counted as code development - no system will be perfect.
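To make that breakdown concrete, here is one hypothetical way to record hours against (object, phase) pairs rather than one blob, so design, implementation and testing can each be reported separately. The object names and hours are invented for illustration:

```python
# Sketch: track hours per (object, phase) instead of one project-wide blob.
# Object names and hours are invented; phases mirror design/implementation/testing.
from collections import defaultdict

hours = defaultdict(float)

def log(obj, phase, h):
    """Record h hours against a given object and phase."""
    hours[(obj, phase)] += h

log("CommsDriver", "design", 4)
log("CommsDriver", "implementation", 12)
log("CommsDriver", "testing", 3)
log("Database", "implementation", 6)

def phase_totals():
    """Sum hours by phase across all objects."""
    totals = defaultdict(float)
    for (obj, phase), h in hours.items():
        totals[phase] += h
    return dict(totals)

print(phase_totals())  # {'design': 4.0, 'implementation': 18.0, 'testing': 3.0}
```

Any timesheet tool can do the same thing - the design choice that matters is that the object and the phase are both recorded at the time the work is done, not reconstructed later.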
There are also things common to all kinds of scheduling, such as accounting for staff turnover and training of new staff, illness, loss of data etc. That said, if one accounted for all of the real-world issues in a schedule and used that schedule as the basis for final project costing in a tender response, there is a good chance the company would not win the job based on their price alone. This doesn’t mean people shouldn’t try to cost things accurately. If the project is won for a cheaper price and the allowances have been removed to win the job, in the end the managers need to understand that costs were cut and they should not take it out on the programmers who are just trying to finish the job.
Thoughts; advice; venting; now over. I hope this helps someone in the future…
22nd May 2011
If a customer asks me how best to provision spare parts and on-going support for their control system, the first question I ask is: "What kind of mean time to repair are you able to live with?" The second question is: "Does your control system hardware and software have a single integrator or multiple integrators able to support it?" Why is it that no-one else thinks about these things?
Technology will always die in the end. Capacitors and resistors drift, dry out and die over time, and these are failure mechanisms that cannot be prevented. Whilst every effort is made to protect equipment from damage, lightning strikes do occur, as do power surges and brown-outs, and all of these are outside the control of the customer and can cripple their control system assets. Hence it is not a question of: "Well it’s really reliable and we’ve never had a problem with it" or "So and so have always supported us so that’s all we need to worry about". Natural events, external factors and businesses going out of business are always going to happen. The real question is: have you invested wisely in your control system software and hardware, beyond price and features? Most customers haven’t.
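The mean-time-to-repair question can be made concrete with the standard steady-state availability formula, availability = MTBF / (MTBF + MTTR). A rough sketch follows - the failure and repair figures are made-up examples, not data for any real control system:

```python
# Sketch: steady-state availability from mean time between failures (MTBF)
# and mean time to repair (MTTR). All figures below are made-up examples.

def availability(mtbf_hours, mttr_hours):
    """Fraction of time the system is up: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Spares on site: a failed card is swapped in roughly 4 hours.
on_site = availability(mtbf_hours=20_000, mttr_hours=4)

# Sole-source supplier ships a spare: repair takes roughly 2 weeks (336 hours).
sole_source = availability(mtbf_hours=20_000, mttr_hours=336)

print(f"{on_site:.5f}")      # prints 0.99980
print(f"{sole_source:.5f}")  # prints 0.98348
```

The hardware is identical in both cases; only the support arrangement changes, yet the downtime differs by almost two orders of magnitude. That is what "what MTTR can you live with?" is really asking.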
I recently visited a process plant with a somewhat rare control system. When asked, "Do you keep spares on site?" their answer was: "Why would we do that? The supplier/integrator (the same people in this case) keeps them all for us". The problem was, this supplier/integrator is the only party that supplies, services and maintains their equipment and software, and is thus a sole source of supply. If that supplier decides to close up shop tomorrow, the plant will have a completely unsupportable control system. This didn’t seem to bother them.
In the long run these ideas of an expensive single source of supply and support simplify the management side of the site, but leave the business depending upon that control system with no options if something ever goes wrong. It’s short-sighted. It’s dangerous. In the end, it will catch up to the customer and they will lose.
10th April 2011
One of the big parts of the automation industry that has always annoyed me is the seemingly pathetic cash grab for end-users’ and integrators’ money for PLC programming software. Before we dive into that, a quick re-run of how the PC industry works.
The end-user/developer can buy a Mac or a PC for software development, but if they want to develop for the Windows platform they also need to buy the Windows operating system (OSX comes with the Mac and is not licenced with keys etc like Windows, but then, it only works on a Mac). That said, most PCs from major manufacturers (DELL, HP etc) come with Windows pre-installed and it’s part of the price of the PC hardware (effectively what Apple does). The hardware (the PC hardware itself) is useless without an operating system to tell it what to do. Hence both of these approaches make sense: what’s the point of buying hardware that is useless without then buying additional software? If they tried it in the general consumer space the backlash would be immense. To the enthusiast though, building their own PC from parts and running Windows, the OS is an expense they understand they need to pay for separately. It also gives the enthusiast a choice - they could buy a version of the Windows OS that would happily make any PC they assembled work just fine - the hardware wouldn’t matter.
In the PLC industry it’s more the equivalent of the PC hardware enthusiast market where PLCs don’t come in a made to order configuration but in pieces that must be put together: CPU, Power Supply, Digital I/O and Analogue I/O and Communications cards on rack(s). They also follow the Microsoft model selling the software needed to make it work separately (programming software in this case).
The issue I have is this: PLC hardware is proprietary, such that the programming software used to program the PLCs is unique to that PLC (much the same way that OSX is locked to Apple hardware). In the case of OSX this is fine, since there is no additional licence cost for OSX. Yet for PLC programming software there is always an additional cost - in some cases upwards of US$5,000 - without which the PLC hardware is useless.
The only reason that PLC hardware vendors get away with it is because the market is completely closed - they design and manufacture the PLC hardware, firmware and programming software. There is no open standard that allows one PLC Programming package to program them all. This lack of standardisation is fostering a lack of competition and a lack of innovation that has seen the PC industry take off into the distance while PLC/SCADA Automation technology is decades behind.
If the IEC introduced a common PLC programming interface standard (IEC 61131-3 already defines open programming languages, which is a good start) then some innovation might finally happen in this space. Until that time, prepare to be ripped off.