What disasters can teach us about mission critical UX

Let’s begin with a simple question: what is the purpose of the brain? We can come up with fancy answers, but its real job is to keep us alive. Think about the time you were chased by a dog and ran like Usain Bolt, or the time you fought off a burglar like Mary Kom. We’ve all been in situations where we acted beyond our physical capabilities to save ourselves from immediate harm.

When we are exposed to such situations, the brain activates the pituitary gland, which sends hormonal messengers to the adrenal glands. The cortisol and other stress hormones released by the adrenal glands help us think clearly, send energy to the muscles that need it, and increase our heart rate and breathing. Put simply, if we are not able to detect harmful or stressful objects or situations, we as a system will perish. So it is safe to say that our brain is our superhero, ready to save the day.

Now, if the brain enables us to act beyond our estimated capabilities in times of stress, how do we, as UX designers, conduct user studies to determine how our target group will behave in an emergency? In mission critical situations, we cannot simply interview users or observe them for answers. Yet if we do not understand who the users are and what they need, it is impossible to build the right solution. This is where mission critical UX comes in.

So, what do we mean by mission critical UX?

Mission critical refers to a factor or a system whose failure will seriously disrupt a business operation or an organisation, or cause social turmoil and catastrophe. The user experience of such a factor or system must leave no room for error. Thus, mission critical UX refers to an experience that leaves close to zero chance of human error by accounting for the complexity of the human factors involved.

But what happens when UX is left out of mission critical systems?

We often read about accidents and disasters and hear people debate how they could have been avoided. Court cases drag on for years before the operator or the system is declared guilty. There are plenty of examples that show why it is not a good idea to ignore user experience, especially when lives are at risk.

  • The Challenger shuttle disaster
    On the 28th of January 1986, millions of viewers watched the launch of the Space Shuttle Challenger live. But to everyone’s horror, 73 seconds after lift-off, the Challenger exploded, killing all seven crew members. The investigation that followed revealed that the ‘O-rings’ were not able to withstand the extremely cold launch temperature of 31 degrees Fahrenheit and were the cause of the failure. The problem was a decision based on poor visualisation: the information visualisation did not lay out the data clearly enough to be understood, which led to the disaster. Image A shows the visualisation used at the time, and Image B shows how the data could have been plotted to support the correct decision.

    Image A
    Image B
  • Station nightclub fire
    The Station nightclub fire, which occurred on the 20th of February 2003, has been described as the fourth-deadliest nightclub fire in US history. Pyrotechnics used by a live band as part of its act started the fire. In a venue where even the farthest exit was only 60 feet away, 100 people died. The club had three exits: the main entrance, a secondary exit that was three feet wide, and a door leading outside from the kitchen whose presence wasn’t obvious even under normal circumstances. When the fire broke out, large quantities of dense black smoke were generated, which must have made it harder for people to locate any exit other than the one they had used to enter. As a result, most of them pushed towards the main exit. In their panic, they did not realise that the door they were pushing was designed to be pulled, and many of them lost their lives to it.

  • Hawaiian missile crisis
    On the 13th of January 2018, the people of Hawaii woke up to a bizarre and alarming message: an emergency alert that had been accidentally issued. It read, “Ballistic missile threat inbound to Hawaii. Seek immediate shelter. This is not a drill.” At 8:07 am this message was all over television, radio and cell phones. Naturally, it caused fear, panic and confusion among residents, who were left unsure of what to do. Thirty-eight minutes and thirteen seconds later, state officials blamed a miscommunication during a drill, and the Governor apologised for the erroneous alert. The operator had to pick the option for a drill from an interface that also listed the one for an actual alert, and mistakenly selected the latter. Who or what do you think is to blame? (One possible interface safeguard is sketched just after this list.)

  • Duct tape plane crash
    Aeroperú Flight 603 took off from Lima on the 2nd of October 1996 but never reached its destination. Shortly after take-off it crashed into the Pacific Ocean, killing all 70 people on board. Minutes after the plane left Lima, the pilot radioed to report that the instruments had malfunctioned and that he could not tell what altitude they were flying at, or even whether they were over land or sea. Sadly, the crash could have been avoided if only the workers had remembered to remove the duct tape they had put over key sensors (the static ports) while cleaning the outside of the plane.
  • Butterfly ballot of Florida – US Elections
    The presidential election ballot used in the state of Florida in 2000 caused some people to vote for the wrong candidate. On the butterfly ballot, the punch holes that voters pressed to mark their choices were misaligned with the rows of the candidates. A voter who wanted to vote for Al Gore (the second name in the first column) was supposed to press the third button! The ballot design arguably cost Al Gore the presidency and worked in favour of George W. Bush, who won Florida by a 537-vote margin.
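
The Hawaii incident above is a textbook case of why a drill and a live alert should never be a single menu click apart. As a rough illustration only (the class names, option labels and flow below are hypothetical, not the real HI-EMA software), this sketch shows one common safeguard: a live alert is blocked until the operator types the alert name back as a confirmation, while drills stay frictionless.

```python
# Hypothetical sketch of a "live alert needs a typed read-back" guard.
# Nothing here reflects the actual Hawaii alert-origination software.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AlertTemplate:
    name: str
    is_live: bool  # True = real public alert, False = internal drill

def send_alert(template: AlertTemplate, confirm_phrase: Optional[str] = None) -> str:
    """Dispatch an alert; live alerts require the operator to type the template name back."""
    if template.is_live and confirm_phrase != template.name:
        return f"BLOCKED: type '{template.name}' to confirm a LIVE alert."
    channel = "PUBLIC BROADCAST" if template.is_live else "INTERNAL DRILL LOG"
    return f"SENT to {channel}: {template.name}"

drill = AlertTemplate("DRILL - Ballistic missile (state only)", is_live=False)
live = AlertTemplate("Ballistic missile threat - LIVE", is_live=True)

print(send_alert(drill))                            # drills go through with no extra friction
print(send_alert(live))                             # blocked: no read-back was given
print(send_alert(live, confirm_phrase=live.name))   # a deliberate, confirmed live send
```

The friction is deliberately asymmetric: it protects the rare, irreversible action without slowing down the routine one.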

People rush to assign blame for such disasters, but often it is not a person or a group that is at fault, as the examples above show. So, what does it take to create error-free experiences, and how do we design for them?

Here are 6 points to keep in mind!

  • Zero error
    The outcome should be ZERO error. Say our goal is to ‘transport people from A to B by aeroplane’. The underlying objective is to ensure the safety of the passengers, the aircraft and the crew. This is possible only if we ensure that there is no room for error before, during or after the task is completed. Keeping the probability of error as low as possible requires us to look at the problem from multiple angles and perspectives and to understand the human factors involved.
  • The PEAR model
    The aviation industry carries out thousands of mission critical tasks every day, and to reduce the possibility of error it devised the PEAR model. The model gives us a unified view of human factors and supports efficient assessment and mitigation. It considers the People who do the job; the Environment in which they work; the Actions they perform; and the Resources necessary to complete the job. It gives us a 360-degree view of the situation so that we can come up with solutions proactively. The PEAR model allows designers to look at things both qualitatively and quantitatively, and saves them from making assumptions that might lead to disasters. (A minimal checklist sketch follows this list.)
  • Double call method
    Most of us are fans of ‘House MD’ or ‘Grey’s Anatomy’ and have seen the doctor ask for a ‘scalpel’ during surgery and the nurse repeat ‘scalpel’ while handing it over; this is known as the Double Call Method. It is used in the complex system of an operating theatre, where high-risk tasks are performed by a team of individuals with varying levels of information and knowledge. Repeating things ensures that nothing has been misheard and that the correct object is being handed to the doctor, whose focus is on the patient. The double call method ensures smooth communication and effective collaboration among the members of the OT system. It is a classic example of how we do not need to design or build something new every time to solve a problem; we can innovate on practices that already work. (A software read-back sketch also follows this list.)
  • Cocktail party effect & Signal detection theory
    We all have names, and we respond when someone calls ours out. Our name is a signal that we can notice and respond to despite heavy distractions such as loud music or noisy chatter; this is known as the Cocktail Party Effect, and it is closely related to Signal Detection Theory. Humans have the amazing ability to focus attention on a particular stimulus while filtering out a range of others. Signal Detection Theory adds that whether a stimulus is detected depends both on its intensity and on the physical and psychological state of the individual. This thinking is often used to strategically design escape plans and emergency stops. One example is the bright red colour of emergency exit doors and fire extinguishers in an otherwise white hospital, which cannot be missed even in times of stress. (A small worked example of the theory’s d′ measure follows this list as well.)

  • Design for varying levels of alertness
    During an emergency, a person’s cognitive capabilities are under tremendous stress, which makes it difficult to think rationally and make the right decisions. As described by J. Leach, researcher and author of the book ‘Survival Psychology’, there are three typical behaviours in response to a threat: some people remain calm and retain their situational awareness, some struggle to understand what is happening and think sluggishly, and some freeze or exhibit counterproductive behaviours. Mission critical UX needs to accommodate and design for these varying levels of alertness.

  • TEST! TEST! TEST!
    Mission critical UX demands testing theories as well as prototypes. Simulation training can be used to create a true-to-life environment and mirror real-life scenarios, producing realistic and immersive experiences. Using the latest technology, such as AR/VR, we can achieve greater control over, and insight into, how our proposed solutions perform and fail. Simulation-based testing allows us to test multiple scenarios and provides a safe way to produce accurate, dependable results.
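
To make the PEAR model a little more concrete, here is a minimal checklist sketch (the field names and the completeness check are an illustration, not an official PEAR tool) showing how a team might capture the four factors for a task and spot the ones it has not considered yet.

```python
# Minimal, hypothetical PEAR checklist: People, Environment, Actions, Resources.
# The data structure and the gap check are illustrative only.

from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class PearAssessment:
    task: str
    people: List[str] = field(default_factory=list)       # who does the job
    environment: List[str] = field(default_factory=list)  # where they work
    actions: List[str] = field(default_factory=list)      # what they must do
    resources: List[str] = field(default_factory=list)    # what they need to do it

    def gaps(self) -> List[str]:
        """Return every PEAR factor that has not been considered at all."""
        factors = {k: v for k, v in asdict(self).items() if k != "task"}
        return [name for name, notes in factors.items() if not notes]

assessment = PearAssessment(
    task="Pre-flight static port inspection",
    people=["maintenance technician on night shift"],
    environment=["poorly lit apron", "time pressure before departure"],
    actions=["remove protective tape", "visually confirm the ports are clear"],
)

print(assessment.gaps())  # ['resources'] - e.g. no torch or sign-off sheet has been listed yet
```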
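
The double call method also translates naturally into software. The sketch below is a hypothetical read-back guard, not taken from any real clinical system: a critical request is only acted on once the other party echoes it back and the two match, mirroring the nurse repeating ‘scalpel’.

```python
# Hypothetical read-back ("double call") guard for critical requests.
# Pattern: state the request, have it repeated back, act only on a match.

def double_call(requested: str, read_back: str) -> bool:
    """Proceed only if the read-back matches the request (whitespace/case ignored)."""
    if requested.strip().lower() != read_back.strip().lower():
        print(f"Mismatch: asked for '{requested}', heard '{read_back}'. Stopping.")
        return False
    print(f"Confirmed '{requested}'. Proceeding.")
    return True

double_call("scalpel", "scalpel")                   # confirmed, proceed
double_call("10 mg adrenaline", "10 mg atropine")   # mismatch caught before any harm is done
```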
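
Signal Detection Theory also gives us something to measure. A standard sensitivity index is d′ = Z(hit rate) - Z(false-alarm rate): the better the signal separates from the noise, the higher d′. The sketch below computes d′ for two exit-sign designs in an imagined smoke-simulation test; the hit and false-alarm rates are invented purely for illustration.

```python
# d' (d-prime) from Signal Detection Theory: Z(hit rate) - Z(false-alarm rate).
# A higher d' means the signal (e.g. an exit sign) is easier to separate from noise.
# The rates below are made up, purely to show the arithmetic.

from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

print(round(d_prime(hit_rate=0.95, false_alarm_rate=0.10), 2))  # bright red sign: ~2.93
print(round(d_prime(hit_rate=0.60, false_alarm_rate=0.25), 2))  # low-contrast sign: ~0.93
```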

The human brain can act in unprecedented ways, and with ever-evolving technology we face the challenge of designing for it. Mission critical UX is imperative to the success of any system, business operation or organisation. Disasters and mishaps often occur not because of human error but because UX is left out. The PEAR model and signal detection theory are two of the methods that can be used to gain a well-rounded view of the problem and identify areas of potential threat. By using the latest tech to our benefit and testing our solutions, we can reduce the probability of unfortunate incidents. Mission critical UX is not about adding delight or beauty…it’s a matter of life and death!

References:

  • https://www.nngroup.com/articles/minimize-cognitive-load/
  • https://thepsychologist.bps.org.uk/volume-24/edition-1/survival-psychology-wont-live
  • https://uxstudioteam.com/ux-blog/vr-in-ux-testing/
  • https://www.nfpa.org/Public-Education/Staying-safe/Safety-in-living-and-entertainment-spaces/Nightclubs-assembly-occupancies/The-Station-nightclub-fire
  • https://www.nasa.gov/centers/langley/news/researchernews/rn_Colloquium1012.html
  • https://www.sciencedirect.com/topics/medicine-and-dentistry/cocktail-party-effect
  • https://www.skybrary.aero/index.php/PEAR_Model
  • https://blog.prototypr.io/designing-for-emergencies-a-ux-case-study-398b780f3c2f
