Introduction
It's estimated that 70% of projects globally fail. Seriously. Google it. I believe cognitive biases are the silent killers behind many of these failures, if not all of them.
Even projects that deliver rarely do so smoothly (shock, horror). And why would they? The very definition of a project is something that has not been done before and, therefore, is a path laid from the stones of 'unknowns'. Typically, they are fraught with decisions, risks, assumptions, and uncertainty. Throw into this mix funding limitations, scope changes and quality problems, and you are creating a stewing pot of problems.
Bad thinking seeps in, subtly influencing the decision-making processes and often leading to terrible outcomes.
In project management, detecting and avoiding common cognitive biases is crucial to making decisions with precision and accuracy.
Talking about our successes in projects and deliveries is all well and good, but is that where the real learnings come from? I don't think so. We learn far, far more from our failures and hurdles.
In my experience, project problems are virtually always down to some form of human decision-making failure (and I'm really trying to think here... I want to say ‘exclusively’).
In a series of articles, I want to explore what causes projects to crash, if not fatally, then at least to go wrong in some significant way.
What is a cognitive bias?
Well, in a nutshell, it's 'bad thinking'. It's the mental shortcuts we all take, based on all sorts of factors such as generalisations, that lead us to a bad place, born from our biases, be they overt or subtle.
Common Cognitive Biases Affecting Projects
Confirmation Bias
So, let's start with confirmation bias, the tendency to filter and interpret new information so that it supports our existing beliefs or theories.
It's about more than just agreeing with information that complements our ideas; it also involves giving less consideration to alternative viewpoints.
Imagine a friend who believes strongly in a particular brand (or politician, nudge, wink) despite any evidence to the contrary. Their bias leads them to disregard evidence that doesn't fit their view, regardless of its pedigree.
So, despite contradictory data, someone holds onto their pre-existing beliefs. A study published in the Journal of Personality and Social Psychology found that people are roughly twice as likely to seek out information that confirms their beliefs as information that contradicts them.
Indeed, some people fall deeper into their beliefs the more they are challenged with contradictory evidence. This can be incredibly dangerous and sometimes amusing (reference: Loch Ness Monster).
In project management terms, you'll have individuals or teams doggedly clinging to conclusions, regardless of the evidence available.
Examples of Confirmation Bias in Projects
Ignoring Red Flags
If a project starts to ignore or dismiss issues or risk flags (in practice, probably being reported as amber) and downplay them over a period of time, then you probably have confirmation bias at work.
For example, say we start to get the sense that something isn't going to be delivered on time, and each week the responsible manager's update is that they "hope to pick up the slack". We may continually allow the date to slip because we like and trust that person, but the reality is that the data probably isn't supporting that confidence. Some data might, such as previous experience, but the data in the moment is pointing to another conclusion.
So, we need to be data-driven and try to remove our bias as much as possible from the equation. This could take the shape of 'burndown charts' or other metrics that give us a factual overview of the situation. We also need to put a line in the sand and say, 'If this situation hasn't changed by X, we must...'
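To make that concrete, here is a minimal sketch in Python (with invented numbers) of the kind of check a burndown metric enables: project the finish date from the observed burn rate, compare it with the deadline, and escalate automatically if the pre-agreed line in the sand is crossed.

```python
from datetime import date, timedelta

# Remaining work (e.g. story points) recorded at the end of each week.
# These figures are made up for illustration.
remaining_by_week = [120, 112, 106, 101, 97]
deadline = date(2024, 6, 28)
today = date(2024, 5, 3)

# Average burn rate across the observed weeks.
burned = remaining_by_week[0] - remaining_by_week[-1]
velocity = burned / (len(remaining_by_week) - 1)   # points per week

# Project the finish date from the data, not from anyone's optimism.
weeks_needed = remaining_by_week[-1] / velocity
projected_finish = today + timedelta(weeks=weeks_needed)

# The pre-agreed line in the sand: if the projection breaches the deadline,
# we escalate, regardless of how confident the responsible manager feels.
if projected_finish > deadline:
    print(f"Projected finish {projected_finish} is beyond {deadline}: escalate.")
else:
    print(f"On track: projected finish {projected_finish}.")
```

The specific metric matters less than the fact that the trigger is defined up front, before loyalties and optimism get a vote.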
Making Decisions
Confirmation bias can also limit the range of options considered, leading to poor decision-making. Managers might be biased towards a particular technology or a supplier that isn't suited to the job. They ignore signs or expert advice that another solution is more effective.
Several years ago, I worked on a £1m+ project evaluating suppliers for a rather complex technology delivery. One of the managers decided they wanted an organisation they’d worked with previously to deliver it, despite the fact that they had never undertaken a delivery of this nature. The evidence all pointed to a more expensive but experienced option being the right way forward, but that was pushed to one side with counterarguments about the 'lower cost' and 'flexibility' of working with a partner that hadn't done it before. I'll let you guess how the project outcome went.
According to Irving Janis, who coined the term "groupthink", decision-making groups working together fail to consider all available options in around 70% of cases. That’s sobering.
The bias can also affect how resources are allocated. Those with an undue preference for specific tasks may funnel more resources into them to the detriment of other critical activities.
Overconfidence Bias
Ah, so this is a biggie.
When we are new to something, we tend to have an overconfidence bias, believing that whatever task is in front of us is much easier to accomplish than it is. I suffer this on virtually every DIY job I do around the house.
So, a manager might overestimate her team's abilities or underestimate the complexity of a task.
A particular form of overconfidence bias, known as the Dunning-Kruger effect, leads individuals to overestimate their capabilities, thinking of themselves as experts in a subject where they are not.
Here's an example from the Decision Lab: when a survey was sent out to software engineers at a company, 42% predicted they would be ranked in the top 5%. Well, you don't need me to help you with the maths there...
What can you do? Simply put, seek expert guidance. Talk to people who have been there and done it before (there's always someone). They'll help you see the pitfalls and reset expectations. Or, if no expert is available, have a proof of concept or investigation phase in your project, where you are specifically looking at the feasibility of those aspects you are unfamiliar with.
The Sunk Cost Fallacy
This is one I've seen time and again. You can witness it in all parts of life as well.
When a project faces challenges, we tend to keep investing in it because of the time, effort, and resources already committed. This can lead to a cascade of bad decisions, amplifying the scale of failure rather than mitigating it (Arkes & Blumer, 1985).
It speaks to loss aversion: people will continue with nonsensical investments to avoid crystallising a loss. Studies suggest that people are roughly twice as motivated to avoid a loss as to make an equivalent gain; avoiding a $1 loss feels about as compelling as making a $2 gain.
A few years ago, I wanted to kill an in-flight project (I know, imagine a project manager suggesting it, the heresy!). I thought the business case had a limited shelf life, and the delivery was turning out much harder and longer than anyone had anticipated. I voiced my unpopular opinion and was shot down pretty quickly. The reason given? You guessed it. The cliché was spoken: "We've invested too much into this now to turn back". So we kept throwing good money after bad, doubling down on the mistake and pushing on with the delivery. In the end, the project did deliver, albeit overdue and over budget, but it didn't have the business benefits anticipated. It wasn't just frustrating for me; it was soul-destroying for those who had sunk their efforts into the delivery, and here I'm thinking particularly of the devs, QA and the Product team.
Mitigating Cognitive Biases
So, we know that everyone can be fallible to cognitive biases, and we know that they can destroy projects, but how do we avoid them?
Self-Awareness and Training
As with any pitfall, awareness of these biases is the first step in mitigating their impact.
Training programmes exist that focus specifically on good decision-making practices and avoiding cognitive biases, and they can equip team members with tools to identify and counteract these mental pitfalls. But are they actually practical solutions? I'd say not really. Train a team one week and the knowledge will be lost, dispersed or forgotten within a few months.
Training yourself is the better approach. Constantly ask yourself, 'Is this bad thinking?', 'What is my evidence?' and 'How can I test my assumption?' (note: NOT 'How can I confirm my assumption?' That's a bias right there, but you know that because you've been reading this).
Be mindful. Keep your mind open. Challenge everything.
Call out bad thinking as just that, but present the evidence. Find allies to make your voice louder. One evangelical voice is easily dismissed, but if you can find others, you can likely create momentum.
Checkpoints and Reviews
Incorporating unbiased checkpoints and reviews can offer a reality check for the project status.
I think of it like the rules of drinking. When sober, I would set myself a rule not to text people later in the night, to save myself any embarrassment the next day (less of an issue these days, but it served me well in my younger years). The point is that the rule is made in advance, precisely so it holds when I'm not doing my best thinking.
So, in this instance, I'm a big believer that a project should set its review points, the questions it needs to answer, and the metrics it needs to evidence well in advance of reaching the checkpoint because when it gets there, it might not be thinking clearly.
Having an external consultant or a different team assess the project at various stages can also provide an impartial view.
I provided this kind of assessment on an ERP implementation. The assessment uncovered some major project issues with undefined scope, no sequenced deliveries, and a lack of expertise within the team to deliver. I submitted my report, expressed my concerns and the biases took over within the executive team. They couldn’t accept the evidence presented. They couldn’t accept that all their work was leading to disaster. A few months later the project collapsed and needed to be refactored from the ground up. Eventually, it delivered, but not with the same scope, programme manager or project lead, and in a vastly different manner.
Decision-Making Frameworks
Structured decision-making frameworks can help avoid cognitive biases by encouraging a systematic approach to problem-solving.
Orchestras have known this for years, and many have adopted 'blind auditions' whereby applicants to join an orchestra are evaluated without being seen (either by demo tapes or other methods) because the bias of those recruiting in the past led to a significant lack of diversity.
Here's a common example in projects: using scoring criteria for supplier selection, such as 'cost', 'capability', 'payment terms' and so on. Weight the items that matter most to you, and score the suppliers on as even an 'apples for apples' comparison as possible. It's all too easy to be influenced by a good sales team, or someone you click with, but it's much more important to make decisions based on facts.
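As a minimal sketch of how that weighted comparison might work in Python (the criteria, weights and scores below are entirely made up for illustration):

```python
# Weighted supplier scoring: weights reflect what matters most to you; raw
# scores (out of 10) are agreed by the evaluation panel against written criteria.
weights = {"cost": 0.4, "capability": 0.4, "payment terms": 0.2}

suppliers = {
    "Supplier A": {"cost": 8, "capability": 5, "payment terms": 7},
    "Supplier B": {"cost": 6, "capability": 9, "payment terms": 6},
}

for name, scores in suppliers.items():
    total = sum(weights[criterion] * scores[criterion] for criterion in weights)
    print(f"{name}: {total:.1f}")

# Supplier A: 6.6, Supplier B: 7.2 -- the less charming but more capable
# option wins once capability carries the weight the panel agreed up front.
```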
Tools like decision trees or the Pugh matrix can assist in evaluating options more objectively.
Here's a simple example of a Pugh Matrix for selecting a tool within a project. Assume the baseline tool is "Tool A", and we are comparing it against "Tool B" and "Tool C":
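The criteria and scores below are purely illustrative; substitute whatever actually matters to your selection:

| Criterion    | Tool A (baseline) | Tool B | Tool C |
|--------------|-------------------|--------|--------|
| Cost         | 0                 | +1     | -1     |
| Ease of use  | 0                 | 0      | +1     |
| Integration  | 0                 | -1     | +1     |
| Support      | 0                 | +1     | +1     |
| Total        | 0                 | +1     | +2     |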
In this matrix:
"0" indicates that the alternative tool performs the same as the baseline for the criterion.
"+1" indicates that the alternative tool performs better than the baseline.
"-1" indicates that the alternative tool performs worse than the baseline.
Conclusion
What have we learnt? Well, cognitive biases, if left unchecked, can stealthily derail projects. They can infest every aspect of decision-making, at every level.
Biases impact judgments and influence decisions, often without us even being aware of their influence.
By understanding and addressing these biases, project managers can improve the likelihood of successful outcomes in complex IT projects, especially in areas like governance and security.
As you engage with situations going forward, ask yourself: "Is the decision-making based on balanced evidence, or is it based on a preference?"