At this point, there may be no other way to describe our relationship with technology than to admit that it is often toxic. Whether it’s 87 million Facebook users having their data harvested by Cambridge Analytica, the growing addiction many people have to their phones, or the general security threat we expose ourselves to every time we use an app, there’s a risk (to our data or to our mental health) that we seem willing to accept in order to stay in a relationship with our favorite products and platforms.
But at a time when Silicon Valley and Big Tech have an approval rating on par with that of the U.S. Congress (which is to say, not very favorable), we may be at the beginning of a sea change in how our products and platforms are built, one that places UX and UI designers on the front lines of making our technology not just better, but better for us. Michael J. Fordham, a software engineer and UX designer, illustrates this shift with the example of designing a button for a product.
This button could be made “more prominent” or made more “tempting to click,” or placed “in an area of the screen that we know the user will scan when they land on the site,” Fordham writes. While these are all reasonable things for a UX designer to consider when building a product, they leave out one important question, according to Fordham.
“We could do a lot with a button,” he continues. “Now all we need to decide is should we do something with this button?”
Fordham’s point is that designers need to grapple with a button’s purpose (“imagine if the button signed away all your personal information and allowed whoever is in control of the button to sell your information on to another third party”) and whether they should in good conscience allow it to be part of the design when it could compromise the user’s privacy.
“Suddenly, making the button way more tempting to click might be a little unethical, mightn’t it?” Fordham adds. “It would effectively be coercing the user into doing something they might not actually want to do, but feel they have no choice but to do it.”
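To see where that coercion lives in practice, consider a minimal sketch in TypeScript. Everything here is invented for illustration (the ConsentPrompt shape, the labels, the resolveConsent helper); the point is that the ethics sit in the defaults and in the asymmetry between the two choices, not in the button’s color alone.

```typescript
// Hypothetical sketch: these names are invented for illustration and
// don't come from Fordham's article or any real codebase.

type ConsentChoice = "granted" | "declined" | "undecided";

interface ConsentPrompt {
  message: string;
  acceptLabel: string;
  declineLabel: string;
  defaultChoice: ConsentChoice; // what happens if the user just dismisses the dialog
}

// Dark pattern: consent is effectively pre-granted, and the one prominent
// button simply confirms it.
const coercivePrompt: ConsentPrompt = {
  message: "Continue to enjoy a personalized experience",
  acceptLabel: "Continue",          // big, bright, exactly where the eye lands
  declineLabel: "Manage settings",  // vague, buried, several screens deep
  defaultChoice: "granted",         // sharing happens unless the user digs in
};

// Ethical alternative: the choice is explicit, the options are symmetric,
// and silence defaults to "no."
const explicitPrompt: ConsentPrompt = {
  message: "May we share your profile data with third-party partners?",
  acceptLabel: "Yes, share my data",
  declineLabel: "No, don't share",
  defaultChoice: "declined",
};

function resolveConsent(
  prompt: ConsentPrompt,
  clicked?: "accept" | "decline"
): ConsentChoice {
  if (clicked === "accept") return "granted";
  if (clicked === "decline") return "declined";
  return prompt.defaultChoice; // the default is where the ethics live
}

// Dismissing the coercive prompt still shares data; dismissing the
// explicit prompt shares nothing.
console.log(resolveConsent(coercivePrompt)); // "granted"
console.log(resolveConsent(explicitPrompt)); // "declined"
```

Both prompts use the same button component; only the defaults and the framing differ, which is exactly why the ethical question falls to the designer rather than to the technology.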
Most of us are familiar with the warning Peter Parker’s uncle gave him as he began his life as Spider-Man: with great power comes great responsibility. For companies and their digital products, the power does not necessarily come from the technology itself, as designer Jack Strachan points out, but from all the data that technology carries on its platform.
The reason that Microsoft paid $26.2 billion to acquire LinkedIn back in 2016, for example, was not because LinkedIn is considered a technological or design marvel (the general consensus seems to be that LinkedIn is a terribly designed, clunky website), but because it has a massive treasure trove of data that any software company in the world would kill for. (Or at the very least spend $26.2 billion for.)
Strachan notes that “we are inevitably still stuck in this cycle of technological power but fortunately, some are now questioning the price of being stuck.” In other words, many designers are now questioning whether the technology that enables all this data sharing is being designed ethically.
“Designers are starting to realise their power behind decision making in organisations and the more conscientious of them are starting to consider if they have had any effect on the negative events that have occurred through this technology in the first place,” he explains.
The best designers already know that UX design without ethics is not only bad design that creates a negative experience for users; it can also create a PR and legal nightmare for a client. Dark patterns, design tricks used (nefariously) to nudge users into doing things they don’t want to do, as Fordham alluded to in his button example, cost LinkedIn $13 million when the company was hit with a class-action lawsuit several years ago over spammy emails aimed at signing up as many people as possible to its site.
Chris Keiss, a Chicago-based UX designer working in the healthcare industry, suggests that the design profession develop an “ethical framework” to use when building products. Keiss points to three primary categories to consider: existential values, ill or misdirected intent, and benevolent intent.
Existential values are your own system of values, the ones that determine whom you are willing to work for and where you draw the line on the kinds of projects you will take on. Ill or misdirected intent is fairly straightforward: does the intent embedded in your design look out for the user’s best interests? Keiss notes, though, that the ethics of a project “often fall beyond our control as designers” because the intent of the design “is often shaped or diverted by teams and individuals who do not bear the title of designer, nor have the word ‘designer’ anywhere in their job description.” Benevolent intent “is the ideal state we should strive for where the intent is to place the user’s needs first,” according to Keiss.
By considering your values and intent, you can begin to think about building an ethical framework for product design.
“This framework can help guide design teams and organizations in the development of products where harm, exploitation, and deception are minimized or non-existent,” Keiss explains. “It can also serve as a foundation for a code of ethics—a code that will guide us in how we choose and approach design projects, as well as how we decide who to work for and what our personal values are.”
An ethical framework gives you a general guideline for how to approach product design, taking into account your values and the user’s wellbeing, but the truth is that each product presents its own unique ethical challenges. How you respond to those challenges will depend not only on what’s best for the user, but also on what a company is looking to accomplish with its product.
The key, and this is where having an ethical framework helps, is to find a solution that takes into account what is best for all parties involved. But since there are no set rules or an established framework for what constitutes ethical design, deciding what’s best for the user can be complicated.
Designer Hila Yonatan points to two approaches you can take when building out the user experience: give users as much freedom as possible when navigating a site, or give them less freedom and essentially hold their hand more than they might like, which may be necessary to protect not only the user’s best interests but also the interests of you and the client for whom you’ve designed the product.
In the former approach, the designer is “placing the liberal principles of providing maximum personalization options and complete freedom,” which means that users “claim responsibility for their actions, whereas we, the designers of the system, are presenting the full scale of the system in the easiest and clearest way we can arrange the interface.” The latter approach adheres to the “principles of conservatism” where the designer “will not assume the user comprehends the entire gravity of their choices” which means that most of the choices on the site are “pre-made,” with “a common feature being a closed circuit of onboarding/tutorial that showcases the options, allowing the user to confirm only the most crucial choices (while being asked just once).”
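The contrast is easier to see in miniature. Below is a purely illustrative TypeScript sketch (the settings and field names are invented, not taken from Yonatan’s writing): the liberal approach surfaces every option for the user to decide, while the conservative approach pre-makes safe choices and asks the user to confirm only the most crucial one, once.

```typescript
// Illustrative only: these settings and names are hypothetical, not from
// any real product.

interface Setting {
  value: boolean;
  askDuringOnboarding: boolean; // does the user confirm this choice up front?
}

// "Liberal" approach: every option is surfaced and the user owns each decision.
const liberalDefaults: Record<string, Setting> = {
  emailNotifications: { value: false, askDuringOnboarding: true },
  publicProfile:      { value: false, askDuringOnboarding: true },
  dataSharing:        { value: false, askDuringOnboarding: true },
  twoFactorAuth:      { value: false, askDuringOnboarding: true },
};

// "Conservative" approach: safe choices are pre-made, and the onboarding
// tutorial asks the user to confirm only the most crucial one, exactly once.
const conservativeDefaults: Record<string, Setting> = {
  emailNotifications: { value: true,  askDuringOnboarding: false },
  publicProfile:      { value: false, askDuringOnboarding: false },
  dataSharing:        { value: false, askDuringOnboarding: false },
  twoFactorAuth:      { value: true,  askDuringOnboarding: true }, // the crucial choice
};

function onboardingQuestions(settings: Record<string, Setting>): string[] {
  return Object.entries(settings)
    .filter(([, s]) => s.askDuringOnboarding)
    .map(([name]) => name);
}

console.log(onboardingQuestions(liberalDefaults));      // all four: the user decides everything
console.log(onboardingQuestions(conservativeDefaults)); // ["twoFactorAuth"]: one confirmation
```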
When deciding how to design the user experience, it’s important to keep in mind whether giving the user too much freedom (or rather, too little guidance and therefore too little protection) would be unethical. Yonatan notes that the challenge of designing with the user’s best interests in mind becomes “more and more significant when the actions become more ‘critical.’”
She uses the example of a platform that allows a user to access money—like a banking app—where “it is advisable to make the process slightly longer to eliminate mistakes, even though it might be tempting to design a system that ‘compels’ the user to make such an action quickly.” This means giving the user less freedom and autonomy in exchange for better protections.
“Everyone is rooting for ease of access and lightness of processes,” Yonatan explains. “Still, you might not want to be held responsible for making an instant loan request process too easy and impulsive, even though it might make the financial institution that provides the loan euphoric.”
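As a rough sketch of what that deliberate friction could look like, here is a hypothetical TypeScript loan-request flow; the LoanFlow class and its method names are invented for illustration, not drawn from any real banking API. Each step must happen in order, and the user re-types the amount before anything can be submitted.

```typescript
// Hypothetical "deliberate friction" flow for a critical financial action.

interface LoanRequest {
  amountCents: number;
  aprPercent: number;
  termMonths: number;
}

class LoanFlow {
  private termsReviewed = false;
  private amountConfirmed = false;

  constructor(private readonly request: LoanRequest) {}

  // Step 1: the full cost is shown and acknowledged before anything else
  // becomes possible.
  reviewTerms(): void {
    const { amountCents, aprPercent, termMonths } = this.request;
    console.log(
      `Borrowing $${(amountCents / 100).toFixed(2)} at ${aprPercent}% APR over ${termMonths} months.`
    );
    this.termsReviewed = true;
  }

  // Step 2: the user re-types the amount, a small speed bump that catches
  // both typos and impulse decisions.
  confirmAmount(typedAmountCents: number): void {
    if (!this.termsReviewed) throw new Error("Review the terms first.");
    if (typedAmountCents !== this.request.amountCents) {
      throw new Error("Typed amount does not match the request.");
    }
    this.amountConfirmed = true;
  }

  // Step 3: only now is the request allowed through.
  submit(): string {
    if (!this.amountConfirmed) throw new Error("Confirm the amount first.");
    return "Loan request submitted.";
  }
}

const flow = new LoanFlow({ amountCents: 250_000, aprPercent: 9.5, termMonths: 24 });
flow.reviewTerms();
flow.confirmAmount(250_000);
console.log(flow.submit()); // "Loan request submitted."
```

A one-tap “instant” flow would collapse all three steps into a single button; this design deliberately refuses that shortcut, trading a few seconds of convenience for a decision the user has actually confirmed.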
Product design that adheres to a set of ethics—whether it’s a fully fleshed out ethical framework or just a rough set of values you and your team follow—is not only good UX, it’s good business. Users are becoming much more protective of their data as they’ve learned how valuable it is to the companies that want it on their platforms.
As designers, you have great power; the important thing is to always use it responsibly.