Designing Ethically with AI

Artificial intelligence increasingly powers many real-world applications, from facial and image recognition to language translation and smart assistants. Companies are deploying AI across their operations, from frontline services to in-house support functions, to achieve productivity growth and innovation. But while AI promises benefits, it also poses urgent challenges. That is why we need to talk about designing ethically with AI.

Key statistics on AI

According to Gartner, 25 per cent of customer service operations will use virtual customer assistants by 2020 (in six months’ time!). Management consulting firm McKinsey says that ‘AI has the potential to deliver additional global economic activity of around $13 trillion by 2030.’ With investment in AI increasing, and its immense potential to impact humans and society, are designers already too late to get involved with AI?

Building trust with machines

AI and machines are intertwined with our lives. We rely on machines to complete our tasks; we allow machines to make key decisions for us; we trust them with our personal data in exchange for a service from an app, website, or machine.

As our interactions with machines become invisible, the decisions these systems make should not rest on biased data sets that are sexist, racist, or otherwise discriminatory. People must be able to engage with these platforms confident that the data they share will be used for the right purposes, and that they will not be judged or penalised for it. Being transparent about the invisible logic behind these decisions, and setting expectations of what the systems we design can and cannot do, helps to foster trust in using them.

Amplifying human biases

AI and other emerging technologies can empower and include more people. For example, voice-enabled devices can give people who are blind access to the internet, while allowing a parent carrying a baby to place a grocery order hands-free. Yet, due to the limitations of present technology, AI has often excluded people at the ‘edge cases’ instead of providing accessibility.

The present technology is described as Narrow AI: an agent, such as a chatbot, that serves humans without understanding the context. This is worrying, because it means that AI will only be as good as the information we feed it. A biased set of data provided to the machine will return a biased set of analysis from the machine, or worse, an amplification of those biases by the machine.
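To make the amplification point concrete, here is a minimal, hypothetical sketch (the data and the "model" are invented for illustration): given a historical hiring record in which 80% of past hires come from group "A", a naive majority-class predictor does not merely reproduce the 80/20 skew, it hardens it into 100%.

```python
from collections import Counter

# Hypothetical historical hiring data: 80% of past hires from group "A",
# 20% from group "B" -- an already-biased sample.
history = ["A"] * 80 + ["B"] * 20

# A naive model that always predicts the majority class.
majority = Counter(history).most_common(1)[0][0]
predictions = [majority for _ in range(100)]

# The 80% skew in the data becomes a 100% skew in the output:
# the model has amplified the bias, not just mirrored it.
share_a_in_data = history.count("A") / len(history)            # 0.8
share_a_predicted = predictions.count("A") / len(predictions)  # 1.0
```

Real models are of course more nuanced than a majority vote, but the same dynamic applies whenever a system optimises against skewed historical data.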

Empathy vs diversity

Could we then address AI’s inherent biases? Perhaps we could if we practice empathy in our designs, and co-create with a more diverse team.

Empathy in design refers to our understanding of the users we are designing for. When we practise empathy, we get inside a person’s head and understand how they feel and what they think. Applying empathy to our designs, we approach the project from our users’ perspective and with their best interests in mind.

Yet empathy has its limitations. We may have very little in common with our users; things that are a given in our country may not be so elsewhere. Our own privilege, background, and environment may form biases that hinder our ability to be empathetic.

In a recent article, Don Norman says that ‘We can’t get into the heads and minds of millions of people, and moreover we don’t have to: we simply have to understand what people are trying to do and then make it possible.’ It may not be possible to be empathetic and design for anyone and everyone.

Instead of recommending solutions based on what we think users need, we should ‘facilitate, guide, and mentor’ users to come up with their own designs. When we co-create with a team that is more diverse in race, religion, gender, abilities, age, and perspective, we are better placed to assess what the problems with a design are and to identify who is being excluded. Through diversity, AI and machine learning can be freed from any one individual’s (or the designer’s) biases.

Design leadership

As designers get a seat at the table (or have I been mistaken?), we are in a great position to make a positive impact on how businesses run and how a company’s products are built. The role comes with great responsibility: it allows us to step up as ethical design leaders who empathise with users and advocate for what is right for both the users and the business.

In the book Ruined by Design, author Mike Monteiro suggests that designers should practise their craft while adhering to a set of principles and conduct. In his ‘Designer Code of Ethics’, he writes:

Designers should evaluate the economic, sociological, and ecological impact of their design and provide guidance to the client or business in the public’s best interest.

Managing polarities and unintended consequences

The more we automate tasks with AI, the more we are deskilling people from their basic craft.

The more we demand personalised services, the more personal data we are giving away.

The more we demand the servitude of AI, the greater the harm a functionally incomplete AI might do to humans.

The practice of design is inherently about managing tradeoffs, polarities, and the unintended or unplanned consequences of our designs. As we seek to resolve a problem with a new solution, the new design or process may create another problem that was not originally anticipated. For example, Henry Ford did not design cars imagining they would cause traffic jams. Ideally, products perform the way they were designed; humans, being humans, will find ways to break and misuse them.

New solutions often mean choosing the lesser of two evils. For example, cities adopting smart cameras with facial recognition technology could ensure greater public security by identifying criminals, but doing so also increases the surveillance of their citizens. Should freedom be forgone for greater public safety?

The Human-Machine Handoff

AI promises benefits, particularly in performing complex calculations and handling routine tasks. However, humans should still keep their hands on the steering wheel and think about what is right for others.

As artificial intelligence becomes deeply embedded in our lives, we need to think about how to design ethically with AI. Being empathetic and co-creating with a diverse group can help us identify and address the ethical issues in a design. This duty does not fall only on the person with the job title ‘designer’, but on anyone involved in the process of designing a product, process, or service. Let’s design something positive for a better world today.

Source : uxplanet.org
