A few years ago, information architect and product design manager Ron Bronson missed his train at a station in Chicago. With a few hours to spare, he watched people try to use a self-service ticket machine. Fascinated by how passengers from all walks of life — young and old — struggled to figure out what to do, he decided to give it a go himself, just to see what they were experiencing.
“I didn’t think it could be that hard, but the UI was a disaster,” Ron remembers. “It was miserable, it just didn’t work. Even if you were trying to use the kiosk’s voice-activated system, it was really difficult. I already had my ticket, but what if I had been in a rush or had to run? I asked myself a few questions: Who designed this machine? Why did they think it was a good idea? Did they test it, and if so, how? And why did nobody fix it?”
The incident led Ron to analyse other service design experiences, online and offline, examining why they weren’t designed in a more humane way. He studied dark patterns (also known as deceptive patterns) in user interfaces: flows purposefully designed to deceive users into doing something they might not otherwise do.
Realising that it’s all connected and always comes back to preventing harm from the start, Ron began looking into ways to reduce the hostility that’s frequently baked into the design of everyday experiences, and to quantify the cost of friction in design, which can often end up ruining the UX. He also began questioning how designers have abdicated their responsibility for destructive patterns across digital products, exploring ways to put integrity back into the design process.
The result is an evolving practice that Ron calls Consequence Design. He’s currently working on a book to bring together his research.
“Consequence Design is the mat in front of the door,” Ron explains. “When you look underneath, insects that weren’t visible scurry in all directions. All designed interactions have consequences, whether it’s someone getting stuck buying a train ticket from a kiosk, or hidden menus inside web applications. The consequences might be unintended, but they cost time and money, and erode trust in our platforms. We need to uncover how a product can cause harm and fix it.”
Common friction in user experiences
Digital experiences are full of friction: anything that prevents users from accomplishing their goals or getting things done. Sometimes this friction is intentional, and sometimes it isn’t. Service design and user experience often fall short, for example, when interactions that were once human to human are delegated to digital systems, such as chatbots or self-driving cars. That delegation introduces friction that didn’t exist previously.
Ron points out that these days we’re so focused on the transactional relationship with our users that all we care about is the customer journey — but only as far as onboarding and converting them. We put all our effort into getting someone into a funnel and persuading them to use a service. When people want to leave, we make that process really hard and add an extra layer of emotional burden, even using design patterns that employ guilt friction as a user retention strategy.
“We don’t think about humane ways to offboard folks,” Ron sighs. “We guilt-trip people! I got a message from my barber saying, ‘I miss you’. I mean, I have great conversations with him, but we’re not best friends. It’s not personal, we’re not breaking up. But that’s how we treat it. We need to respect our users at every stage of the process, not just at onboarding or when we’re trying to convince them to sign up, but also when they decide to leave.”
Another type of bad friction Ron mentions is coercive friction: interactions meant to coerce or induce users into counterintuitive decisions. We assume that someone doesn’t really want to unsubscribe, so we create a modal that makes them answer 25 questions before they’re able to actually do it. Or we prompt users to read something once they’ve been in our app long enough, but demand money before we let them look at it.
Then there’s attention theft: seemingly benign interactions that siphon attention in small increments. Individually, they’re not that big a deal; a single notification on your phone isn’t too bad. But in an always-on world we are bombarded with notifications for everything, and if UI patterns force you to click every single item to disable them (as Instagram does), you might just give up because it’s simply too hard.
Problems also occur when designers copy a pattern or interaction they found online. As a result, the same mistakes and bad information architecture are perpetuated all over the web. And what works for one project doesn’t necessarily work for another: teams need to think about their particular use cases and tailor their products accordingly.
“I see this all the time,” Ron warns. “People look at what Silicon Valley companies do and treat it like the gold standard. They take everything someone else built and design their own version of it. Did you test with anybody? No, but Google did it and they have tons of user researchers, so it must work, right? But you need to think about whether it actually works for your audience. Do the modals and calls-to-action make sense, or are you really just copying it all?”
Why design ethics and empathy alone won’t solve the issues
On their own, the problems we deal with on a daily basis can seem small, but Ron argues that, spread over hundreds or thousands of experiences, they compound into something bigger that can be very stressful and alienating. He therefore calls for designers to be more rigorous.
If a dentist accidentally hurts you, you’re covered by insurance. But nothing covers you against badly designed products and services. If a product team designs something that causes harm because they didn’t test it well enough, it might be time to improve our processes to prevent it from happening again.
“It’s death by a thousand cuts,” Ron cautions. “There are a lot of small issues, and the crux of it is that they’re all micro-interactions that nobody owns, so they get short shrift. Designers often say it’s not their fault and that they didn’t mean for something bad to happen, but we can’t keep passing the buck. We need to stop making excuses. At what point do we become accountable? Unintentional design friction, patterns and mistakes are inevitable, but any poorly researched interaction can harm. It’s our responsibility as designers, with the help of researchers, to continue to interrogate our work and figure out what we screwed up, so that we can keep iterating on our designs.”
Ron finds that the lens through which designers operate can be very narrow. We tend to design for our own experiences rather than solve actual problems.
Many products are designed assuming good intent, but that’s a flaw in service design. Ron is skeptical that ethical design and empathy, which a lot of conversations in the industry revolve around, are enough to improve products and services.
He argues that we can’t take it for granted that everyone is operating in good faith, or that all we really need to do is demonstrate a level of empathy; that isn’t sufficient to get rid of the consequences of hostile design. “Trying to solve design’s problems using ethics as a framework is problematic. It presumes that everyone is on the same page and the disagreements are just on the margins. But that’s not the reality of the world we live in. It’s idealistic to say we all mean well. No, actually not everyone’s playing fair. Some friction is designed on purpose. By assuming everyone’s good intentions, services allow toxicity to proliferate on their platforms. So it’s going to take a bit more than that to create better, more humane experiences.”
While Ron acknowledges that the work of ethicists is important (and recommends Cennydd Bowles’ book Future Ethics), he cautions against ceding control to them. Instead, he stresses that designers, engineers, user researchers and product managers will have to do the hard work themselves, making sure questions around consequences are reflected at every stage of the product development process.
Defining Consequence Design
Consequence Design builds resilience into systems that lack it. Ron argues that it’s not good enough to think about harm after the fact and fix problems later. We should be thinking about and mitigating challenges much earlier in the process. We need to understand the root cause, who we could potentially impact and why, and how to design products and services that don’t adversely affect users.
According to Ron:
“Consequence Design defines adverse frictions to quantify and reduce harm built into systems and platforms.”
The goal of Consequence Design isn’t to remove all of the negatives, because Ron concedes that’s not possible. “We’re not trying to achieve the impossible. The idea, though, is to create some kind of framework that helps us to continually improve our tools whether it’s early stage or post-release. One of the things we start with is definitions. Every use case has a set of consequences. It’s not a monolithic framework, it adapts to the challenges being addressed in whatever tool, service, or product we build.”
Improving iterative design methods
Research plays a big part in Consequence Design. Ron says that one of the first steps is “identifying the ideal state and working backwards”. To do that, he recommends using a consequence frame: a collaborative activity that enables teams to dig deep into their own (or their organisation’s) assumptions about why something was decided. It helps identify the actors involved in making the process come to life, and figure out who might have been missed.
“The reason I like this modeling is that it can be used for large-scale change, but also for something as small as a feature within a tool or application,” Ron explains.
We also have to broaden the base of the people that we talk to in our research. It’s another way to challenge our assumptions. “Often we’re just trying to validate them and aren’t willing to accept that they might be wrong,” Ron points out. “It’s important to engage feedback in a way that gets us not the answer we want, but the answer we need to build better products.”
To overcome this design myopia, the tendency to design for our own experiences rather than solve actual problems, it’s crucial to bring other perspectives into the early stages of building a product as much as possible. Ron acknowledges the pressures on, and the often small size of, the teams working on products and services, in both the private and public sector. But, as he explains, this research doesn’t necessarily require a lot of resources: you can do it in a DIY way and canvas just a few people to test your product.
“Just let them use it without directing the research and asking leading questions,” Ron suggests. “Don’t focus on where you want it to go; let the users take the lead. Take note of their feedback, then go back, process it and iterate. This research isn’t comprehensive, but by watching actual humans use your product, you’ll discover ways it may be used that you might not have thought of before.”
Ron advises expanding the user base you test your product with. Review how and where you recruit test participants (do you use third parties? how diverse is your recruitment pool?). If everyone seems to like your product, conduct guerrilla user testing in the street. Don’t just talk to your ideal audience; include participants from different backgrounds to get a better sense of where in your design you might be messing up. Gather different types of feedback so you can make informed decisions across the entire product lifecycle.
In order to identify gaps and conflicts in our designs, Ron also recommends improving and diversifying teams. If a team is monochromatic, you need to find ways to adapt and bring talent into the organisation, which may include inventing new roles.
Rather than hiring an ethicist, Ron suggests nominating a designated dissenter: a member of every project team who imagines ways the product can be abused. It’s a sprint exercise that can be used in different design phases to check against unconscious bias and question assumptions. Another solution could be bringing in someone whose job is to try to break the UI, much as game testers, or quality assurance testers, play games that are under development to find errors, glitches, or bugs.
Other iterative design methods include harm analysis (a good way to speculate and assess how a feature might harm somebody or be abused before it gets launched) and friction audits (entire UX research sprints around friction, deception and mining for hostility where we don’t anticipate it). If we continually document and log intended or unintended consequences, and also include products and features that have already shipped, we can identify actual problems and create conversations that help us improve the products and services we design.
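The continual logging Ron describes can be made concrete. As a minimal sketch — the structure, field names and severity scale here are illustrative assumptions, not part of Ron’s framework — a team might keep a simple structured log of observed frictions and summarise the unintended ones as candidates for the next iteration:

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class FrictionEntry:
    """One observed consequence, intended or not (fields are hypothetical)."""
    feature: str       # where it was found, e.g. "unsubscribe modal"
    description: str   # what happens to the user
    intended: bool     # was this friction designed on purpose?
    severity: int      # 1 (minor annoyance) to 5 (real harm)

@dataclass
class FrictionLog:
    entries: list = field(default_factory=list)

    def record(self, feature, description, intended, severity):
        self.entries.append(FrictionEntry(feature, description, intended, severity))

    def audit_summary(self):
        """Count unintended frictions per feature: candidates for the next sprint."""
        return Counter(e.feature for e in self.entries if not e.intended)

log = FrictionLog()
log.record("ticket kiosk", "voice flow fails for rushed users", intended=False, severity=4)
log.record("unsubscribe modal", "25 questions before opt-out", intended=True, severity=3)
log.record("ticket kiosk", "refund option buried in hidden menu", intended=False, severity=2)

print(log.audit_summary())  # Counter({'ticket kiosk': 2})
```

Even a log this simple separates intentional friction (which needs an ethical conversation) from unintentional friction (which needs a fix), and surfaces which features keep generating harm.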
Be transparent and don’t stop iterating
Ron calls for more transparency and suggests baking it into the process from the start. He points to Basecamp’s new email service HEY as a good example. It includes a manifesto that explains the product’s philosophy, covering issues such as consent and attention. This design stance, which includes a product’s attitude and personality, can be part of the product story and helps the team align around what they’re trying to achieve. The user, on the other hand, doesn’t have to guess or trust that the company has their best interest at heart.
In the end, designers need to accept that harm is going to happen; it doesn’t have to be deliberate. To prevent it as much as possible, we need to give our products and services a bit more care and attention. Ron urges us to think about the ideal state of what we’re designing and then work backwards through all the consequences it might cause.
Ongoing research can lead to more humane design decisions. We need to reconsider our assumptions, view our products through the lens of edge cases, try and uncover hidden problems, diminish them, and repeat this process. We need to constantly ask questions and keep ourselves in check. The sooner we expand who we talk to in our research and understand the many ways people live in the world, the better our products and services are going to be.