Engineering is about the design, construction and operation of engines, machines and structures. Engineers are bound by a code of ethics and by legislation, and careers can end if an engineer is proven negligent in work that results in death, injury or loss of property. Because engineers create things that the public use (and the public at large are mostly not engineers themselves), the quality of an engineer's work needs to be scrutinised to ensure that it is of an acceptable standard. In essence the challenge is to prove that the work of one or more engineers is competent and, by inference, whether the individual engineers are themselves competent. Whilst the following piece focuses on engineering, it applies to many other professions, if not all.
Be warned that this is an essay piece, not a short-form article. We need to cover the basics before we circle back to competence. Stick with me - it’s worth it in the end.
Before attempting to understand how one might measure competence, it’s good to go over the issues that we all face as members of the human species.
1) Humans Forget
Our brains are continuously bombarded with new information: sights, sounds, smells and events that push other knowledge aside in our minds. Sometimes the information that is pushed aside is critical to the task at hand, and work can slow or stop until sources are cross-referenced to confirm what was once a known fact and has since become a suspicion.
2) Humans Lack Focus
Our bodies require nourishment; we get tired, we get sick, and we become distracted by both work and non-work related issues. In short, our emotions and our stresses cause us to lose focus on the task at hand, and we sometimes have a bad day when we can't focus at all. Yet in every job, time (and hence cost) is measured the same irrespective of how much focus you have on any given day.
3) Humans Are Driven By The Need to Survive
In many developed countries, money drives people to work because it ensures survival and the ability to have the things we would choose to have. It seldom fully satisfies, but the need to survive and to have security is a primary driving force in us all (the so-called survival instinct). Indirectly it causes problems: time is equated to money (usually over far too short a term), corners are cut and established processes are bypassed to save it. Why save money? The higher up the corporate chain, the bigger the financial reward for doing so. Fighting money/greed/survival-driven distraction is too hard for most people to handle. Do I take the extra time and thoroughly recheck my design, or do I just send it off to shut up the manager who's harping on about a deadline, real or manufactured? People cut the corner, submit the design, and mistakes creep in.
4) Not All Humans Are Equal
It’s a great fairytale to tell children: “You can be anything you want to be.” In truth it should be “You can be very good at anything you have a talent for if you work hard at it,” but admittedly the child needs a longer attention span for that one. Not everyone has a talent for problem solving, critical thinking, or inter-personal communication. These traits in particular (in my opinion) are key to being an effective engineer, and hence not everyone is cut out to be one. Some realise this during their careers, and the good ones that do change career. The bad ones sometimes go into management, and this is not always a bad thing so long as they don’t interfere with the actual engineering. (To be clear: many people go into management for other reasons and from other diverse backgrounds, and this does not make them bad engineers by default. This is not a debate about management - perhaps another time.)
5) Humans Form Relationships With Each Other
We are social animals when survival needs are met, and we generally enjoy the company of others. Whether it’s to share a common complaint, tell a story or discuss the topic of the day, socialising is normal behaviour. As relationships grow, friendships can form that change the dynamic and drivers of the engineers involved with the design. Objectivity is often lost and confused when emotions affect judgement. A critical part of professional development is providing critical feedback when mistakes are made, yet all too often I have seen feedback being excessively softened due to relationships between the people giving and receiving it.
If we agree that the above is true then we can begin to address our own shortcomings. Before that, let’s explore the evolution of design as projects increase in size beyond the capacity for a single person to deliver.
One Man Band
Beware the sole design engineer. With no internal checks or reviews, a lapse in any of the first four traits means mistakes will creep into their work. No matter how amazing they may be, they are human and will make mistakes. This may be fine for a smaller project where the budget and the cost of rectifying mistakes are small, but when companies invest many millions of dollars into a project to build a water or gas pipeline or a new manufacturing facility, it is unreasonable to expect that one engineer could deliver such a massive design alone, flawlessly.
From the sole operator we introduce a checker whose sole purpose is to confirm that all calculations and design details are accurate and correct. Who is best placed to check a design? Logically, someone with more experience than the lead designer. This will likely add significant cost, both because the checker attracts a higher hourly rate and because a thorough design check takes at least 50% of the design time (in my experience). To increase throughput we add a second design engineer, now fully utilising the design checker. The percentages can be argued, but the concept stands: a genuine design check needs a dedicated resource, and one for each discipline being designed (electrical, mechanical, process, civil etc). On larger projects, clients also have their own engineers checking design documentation, and such client review can often provide additional design clarifications (or scope creep).
Rather than produce a single design, we now add several gates to pass through. Not literal gates of course, but checkpoints in the progress of the design. Typical values are 35%, 70% and 100%, however these can vary from project to project. A great deal of the detail in a design occurs in the first two steps, but the idea is to get preliminary feedback (client review) before the design is fully fleshed out. This provides multiple opportunities to review the design, catching mistakes and improving our check effectiveness. Unfortunately each step needs to be reviewed, and this takes additional time for both the designers and the client. Gates are often used as payment milestones as well, breaking down the total cost of the job into smaller, more regular chunks.
Version and Document Control
Even in smaller projects with fewer people, working from design documentation without version control can become dangerous. On larger projects, being certain that the client has the correct version, and avoiding clients claiming they never received documentation for review, is vital, especially if milestone payments are at stake. This usually involves traditional wet signatures approving engineering design documents (even though they are subsequently scanned into soft copy) and additional personnel to ensure that version numbering is adhered to and to confirm that the client and internal reviewers received the documentation for their review so the design can progress.
On larger projects, with their many terms and conditions, milestone payments and contractual obligations, design engineers need to focus on the design, and so a contract manager with contract-law experience is employed to let the designers do exactly that.
I find nothing so nebulous in engineering as the concept of a project manager. With document control, engineering design and design checking going on, as well as a contract manager handling financials, someone needs to keep an eye on the budget available and negotiate with the client regarding progress on the project. It’s very, very hard to find (and to be) a good project manager, as they must essentially know a bit about everything that is going on. Inevitably they crack the whip and remind the designers that there is a fixed amount of money to finish the project, so they should hurry up and finish it. Sometimes this introduces mistakes driven by trait three.
Competence By Real World Performance
In reliability engineering we learn that a test is a screen: different test types attract different test effectiveness. We can apply the same concept to a design checker by attempting to measure a checker’s “check effectiveness”. In the following scenarios, assume the design checker is more experienced than the designer.
Scenario 1: Designer “X” is new at this and introduces 100 mistakes into every design document. If design checker “A” has a check effectiveness of 70%, then 30 mistakes will pass through the first check. Assuming there are three gates in the design, no new mistakes are added in each design cycle, and check effectiveness remains the same each time, 9 mistakes remain after the second pass and about 3 after the third.
Scenario 2: Designer “Y” is young but talented and introduces only 27 mistakes per document. Due to budget cuts, design checker “B” is less experienced and achieves only 53.5% check effectiveness. Calculating this through the same three gates, we again end up with about 3 mistakes in the final design.
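The arithmetic behind both scenarios can be sketched as a simple loop. This is a minimal model using the figures assumed above (three gates, constant check effectiveness, no new mistakes introduced per cycle); the function name is mine, not an established formula:

```python
def mistakes_remaining(initial_mistakes, check_effectiveness, gates=3):
    """Mistakes surviving the design process, assuming no new mistakes
    are introduced and check effectiveness is constant at every gate."""
    remaining = float(initial_mistakes)
    for _ in range(gates):
        # Each gate screens out a fixed fraction of the remaining mistakes
        remaining *= (1.0 - check_effectiveness)
    return remaining

# Scenario 1: 100 mistakes, 70% check effectiveness -> 30, 9, 2.7
print(round(mistakes_remaining(100, 0.70), 1))   # 2.7, i.e. "about 3"

# Scenario 2: 27 mistakes, 53.5% check effectiveness -> also "about 3"
print(round(mistakes_remaining(27, 0.535), 1))   # 2.7
```

The point the numbers make is that two very different designer/checker pairings can produce an indistinguishable final defect count, which is exactly the attribution problem discussed next.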
In real life, designers and design checkers are all human and all have good and bad days - irrespective of talent or experience. This leaves us with variable check effectiveness and variable design effectiveness, further muddying the waters. On a large project with large volumes of documentation the numbers should average out, and the scenarios above demonstrate a major flaw in the design check ethos: how can we prove the design checker is or is not competent based on their checking performance if they are paired with an excellent designer? For that matter, how do we determine the competence of the designer if the design checker is not competent?
In practice the designer often gets the blame for mistakes, but I find that unfair. If a design checker is more experienced and employed specifically to check designs, are they not at least as culpable?
Familiarity Breeds Mediocrity
In larger teams it’s natural that relationships will form between the people involved. Where relationships grow, familiarity grows; this can be good for team cohesion but can also be very dangerous, with under-performing engineers protected (to some extent) by their friendships with others in the team. Suddenly feedback becomes less direct and more subtle, and designers don’t get the critical feedback needed to improve the quality of their work. In addition, too much conversation during work periods erodes everyone’s productivity.
Age Does Not Equal Wisdom
Faced with the scenarios played out above, how would an external manager judge the performance of their staff, given that in both scenarios the designer and the design checker each blame the other for letting 3 mistakes out in the final design? As the design checker is more experienced in these examples (which usually means older), preference goes to the older, more experienced engineer, since with age and experience comes wisdom. Correct?
An example from my experience: two design teams from different countries are designing a pump station. Team A uses an older technology that is proven but has known inefficiencies; the other (Team B) proposes a new approach that eliminates those inefficiencies and, whilst it has a higher upfront cost, presents cost savings after only 5 years of service life. Team A have been implementing older systems for much longer (obviously) and are given approval for the older design, despite the fact that Team B has documented examples of the newer technology being successfully used at multiple other plants around the world.
Age and experience means nothing if your experience is outdated or worse, you’ve just been doing it wrong all that time. Automatically trusting the design checker is flawed reasoning and makes it more critical that their competence is measured than that of the more junior staff.
Real World Performance is Only About Opinion
In the final analysis, one cannot reliably measure an engineer’s competence based solely on past performance, because all measures of real-world performance are based on opinion. The question is posed to the ‘senior’ engineer: “How did they perform?” Since the real world wasn’t a standardised exam, there is no fair benchmark and hence no fair answer. Perhaps if you obtained enough opinions from enough qualified people you might reach a consensus, but the result is more likely to be confusing than conducive to a single judgement. Relying on a smaller handful of opinions that you trust is flawed, since opinion is tainted by the relationships between yourself and those ‘trusted’ people, and this affects judgement.
Competence Based On Examination
To remove the emotional judgement element, the only true way to measure competence is by a standardised exam. Exams have either right or wrong answers (in everything except art) and are essentially impartial. If set up correctly, the testing and marking can be done blind (i.e. the name of the individual under test is not known to the marker) to remove any potential bias. For this reason universities, colleges and schools have used standardised testing for a very long time to determine competence when learning a new subject.
Theory vs Practical
It is easier to write an exam that is purely theory: a collection of facts that test the student’s knowledge retention on the subject, usually with no reference material. The problem is that in engineering, that is not how engineers solve problems or do any part of their work. Whilst it is handy to know which engineering standard contains which information, becoming a walking encyclopaedia is less and less useful given that Google search exists and documentation is now available in soft copy that can be easily searched by keyword. In essence, engineers are tested regularly on their ability to find information and then apply it, rather than on being a fount of knowledge on the trivialities of their discipline. For those reasons, when it comes to proving competency, theory exams are essentially worthless.
Practical exams require that the student apply certain rules or formulae to determine various design parameters: for example, how thick does the beam need to be to support that weight, how thick does the cable need to be to carry that current and so on. To do this we refer to a pile of standards and textbooks (hard or soft copies) and reference those in our calculations to provide traceability. In effect, most days in engineering design are practical exams, and practical exams should be written as though they are day to day engineering design activities.
Degrees and Certificates
Revisiting the human traits described previously suggests a problem: theory and practical knowledge get pushed aside with the passage of time. Even if all university degrees had purely practical exams, in 15 years’ time an engineer with many years of field experience would likely fail a great number of those exams if made to sit them again without warning. In my early years of employment I noticed that when applying for jobs my university degree was heavily scrutinised, but in recent years people only seem to care about recent experience. This is a big reason why.
Don’t misunderstand, formal qualifications still have a place, but the value they bring is merely a snapshot in time that suggests potential ability. Many years ago, when I was in my prime, having crammed for dozens of exams I proved that I could pass and obtain a degree. Nearly 20 years later, I have to question the relevance and usefulness of that degree to my current employment, and using that degree as a measure of competence for engineers with several years’ experience is essentially, completely invalid.
I’m an Engineer. Yes But Which Kind?
One of the problems that occurred to me very early on in my career is the sheer breadth of the engineering profession. Early on (pre-industrial revolution) it was just about building roads and bridges and houses, but then there were steam trains and electricity and computers and my god now it’s apparently endless. By degree I am an Electrical Engineer, however within that there is Instrumentation, Control Systems, Low Voltage Electrical Design, High Voltage Electrical Design, and Software, just to name a few. Even those sub-definitions are too broad, especially when you consider software: real-time systems, single and multi-threaded programming, graphics, networking, firmware and driver software. Even those can be broken down further by programming paradigms such as object-oriented programming and memory-safe languages.
Saying one has a degree in “insert kind of engineering here” and that makes them competent is somewhat disingenuous irrespective of the amount of experience. Qualifications need to be specific, current with technology and relevant to the job required.
Continuing Professional Development
One solution in my industry is the Registered Professional Engineer (RPE) and the Chartered Professional Engineer (CPEng) qualification. Essentially, to qualify you must have 5 years of relevant experience; write several essays about projects you’ve worked on and in what capacity, with what you have written confirmed and signed off by a direct supervisor who is also either a CPEng or RPE; and then hand over a bunch of money to IEAust (the Institute of Engineers Australia) for the initial application, and again each year.
Once you have RPE/CPEng status, you need to prove you have performed sufficient CPD (Continuing Professional Development) in the past year in order to maintain it. They suggest training courses but also accept many other kinds of development, none of which is tested. Critically, they only break down the qualification by high-level discipline. In short, an electrical engineer doing LV electrical design can attend a training course on instrumentation, do no such work day to day in their job, and have it count as valid CPD.
Ten years ago it was not a requirement to have a CPEng/RPE involved in your design project (the concept of the RPE was only born about 10 years ago in my state); now it has become a requirement for pretty much every client I’ve worked for. The CPEng/RPE must be directly involved with the design from start to finish if they are to legally sign off on it at any stage. This seems to offer reassurance that the engineer working on your project is competent (or at least that their design checker is), but in all seriousness it doesn’t stop people from treating their CPD with contempt, nor does it stop people from leaving companies mid-design, with a new CPEng/RPE who wasn’t involved with the design then forced into signing off on a design they had nothing to do with. (With high industry turnover, on large projects this is a regular occurrence.)
Don’t misunderstand, I’m glad that CPEng/RPE qualifications exist. Engineering is a better place as a result of their existence (for the most part). The issue is that they are simply not an accurate enough method for determining ongoing competence and the entry into the ranks is highly subjective in the first place. There is a better way.
Let’s review. Humans forget and lack focus: cover this with design checking and multiple design stages to increase the probability that mistakes will be found. Humans form relationships: tackle this by employing design checkers from outside the team to examine work with no emotional bias. Businesses need to focus on quality as well as cost, a more difficult subject for another time. Finally, not all humans are created equal: the focus has to be on design checkers to ensure their competence. As they check multiple engineers’ work, it is vital that they are themselves kept in check regularly and thoroughly.
Whilst we currently only need to rely on CPEng/RPE qualifications, without regulations and client demands to go beyond that, all responsible design organisations need to take additional steps to ensure the quality of their design checker staff and inevitably their competence.
All is not lost
Proving competence is a balancing act between regular, detailed examination with its costs, and the ensuing frustration of engineers having to re-prove their capabilities. Exams should be a regular occurrence to ensure ongoing competence, with different practical questions each cycle. The cycle time between exams is likely to be a subject for debate, as clearly there is an inherent cost overhead in preparing, conducting and marking the exams. Since each is a snapshot of capability at a given moment in time, I wouldn’t suggest spacing them more than 12 months apart.
Exams must also be split by specific discipline. In other words, to be a design checker on a project with high voltage design, the checker must hold a current, exam-assessed competence in that specific subject. This would mean that engineers would need to choose the strands they wish to be qualified for, since it is conceivable that too many exams would leave too little time actually reviewing and earning money for the company. Hence a limit would need to be set on how many strands an engineer could take in certain circumstances.
The exam questions would need to be unique each year, not recycled, and should be set by personnel unrelated to the department under test. Without an external standards body like IEAust taking ownership of such a system, it may be advisable to employ one external consultant to create the questions and another to review them. The key is to remove as much bias as possible and have examinations created by true peers in the engineering field under test. Budgetary constraints would likely restrict the layers of separation required to ensure minimal bias. Inevitably a governing body would be the right way to go and would ensure better question control and consistency.
Of course the usual examination risks remain, such as foreknowledge and other methods of cheating; however, the test environment should be the same as the one designers face every day - in other words, fully open book, with full access to the internet and all required/applicable standards in soft or hard copy for reference.
From these results, companies could then maintain a competency matrix showing the areas in which they are strong and weak, and resource accordingly. Many companies I have worked for have such matrices; however, they have all been self-assessed. “Are you experienced in LV Design?” ‘Yes I am!’ “Rate your experience from 1 to 10.” ‘I am SO a 10!’ Trustworthy data.
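To make the contrast with self-assessment concrete, here is one way such a matrix might be recorded once it is backed by exam results. This is purely an illustrative sketch - the names, disciplines, scores, pass mark and 12-month currency window are all my assumptions, not a real scheme:

```python
from datetime import date

# Hypothetical exam-verified competency matrix: engineer -> discipline -> result.
# Unlike a self-assessed "I am SO a 10" matrix, each entry carries an exam
# score and an exam date, so stale or missing qualifications are visible.
matrix = {
    "Engineer A": {"LV Design": (82, date(2024, 3, 1)),
                   "Instrumentation": (64, date(2023, 2, 15))},
    "Engineer B": {"HV Design": (91, date(2024, 6, 10))},
}

def current_checkers(discipline, pass_mark=70, max_age_days=365, today=None):
    """Engineers whose exam result for a discipline is both passing and current."""
    today = today or date.today()
    return [name for name, results in matrix.items()
            if discipline in results
            and results[discipline][0] >= pass_mark          # passed the exam
            and (today - results[discipline][1]).days <= max_age_days]  # still current
```

For example, `current_checkers("Instrumentation", today=date(2024, 12, 1))` comes back empty: Engineer A sat that exam too long ago (and below the pass mark), which is exactly the kind of gap a self-assessed matrix hides.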
That Amount of Testing Is Over The Top Isn’t It?
Pained as I am to suggest adding more layers of regulation, these steps will surely improve the profession and weed out under-performers. That said, the scale of the problem is huge. I’ve seen million-dollar pipes start rusting due to poor cathodic protection design, treatment plants massively undersized and overloaded the moment they were first turned on after their ‘upgrade’, and power supplies specified that couldn’t possibly power their own load, just to name a few. These were all design mistakes that should have been caught. Worse, these cases had all the measures in place - multiple gates, design checks, independent reviews and client reviews - and mistakes that were very costly to rectify still made it through. The purpose of this essay is to explore how we can stop this from happening.
The sad truth is that large companies are only interested in money and the risk of losing it. No lives were lost and no-one was injured in the mistakes cited above. The cost of rolling out such an exam qualification program would be a guaranteed, ongoing expense for the company and whilst it may improve design check effectiveness significantly, humans aren’t perfect and a mistake could still make it through. There are no guarantees of perfection. If it costs $10 million over 10 years to run a qualification program and a replacement pipe costs $1 million then the company is ahead without the competency program as an ongoing expense. The numbers may well be stabs in the dark, but this is how large companies (for the most part) think. It’s sad, but it’s true.
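That back-of-envelope corporate reasoning can be made explicit. Using the essay's own stab-in-the-dark figures (which is all they are), the question a company asks is how many failures the program must prevent before it pays for itself:

```python
# Break-even sketch using the illustrative figures above - not real data.
program_cost = 10_000_000      # qualification program over 10 years ($)
cost_per_failure = 1_000_000   # e.g. replacing one rusted pipe ($)

# Failures the program must prevent in a decade just to break even
failures_avoided_to_break_even = program_cost / cost_per_failure
print(failures_avoided_to_break_even)  # 10.0
```

Unless the company believes it would otherwise suffer ten such million-dollar mistakes in ten years, the spreadsheet says skip the program - which is the cold logic the essay is lamenting.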
The biggest problem with reputation is that it is intangible, making it difficult to genuinely assess the relative risk of damage to it. The other issue is that although reputation is not greatly expensive to build, it is time-consuming, and with the average tenure of high-level executives in companies today being so short, such long-term concepts generally aren’t their concern.
Whilst it is widely agreed that a company is made up of people, and that those people determine the success or failure of the company, it’s still easier to think of companies as either “good” or “bad” as a whole. The thinking is that if a company screws up badly on a project, then the company (not the individual designers) is blacklisted and gets a bad reputation. Sometimes this is just on an individual client basis, but other times this bad reputation can leak out into the industry and across different markets. The bad designer(s) or bad design checker(s) on the offending project may well be sacked, but the company must then try to regain its reputation after their departure.
Reputation-driven Companies Will Test Their Competencies
If we accept that nobody is competent all of the time, and stop relying on the established methods of assessing competence, then things can improve. Companies that truly care about their reputation will put additional measures in place to ensure the ongoing competence of their key employees. If correctly balanced against cost, they can still remain competitive, and in time that will mean more good engineers are attracted to the company; they will win more work and prosper.