The risks of AI adoption
Today we are facing rapid adoption of emerging technologies into our daily lives, and with the implementation of AI, certain ethical questions arise.
Cases of AI algorithms that are biased against certain groups of people have already been reported, and concerns are rising about how much power we should give AI over important, irreversible and potentially life-threatening decisions.
Handing over control
Imagine the following scenario: the owner of a self-driving car relies on the system to avoid dangerous road situations and puts their trust in the car’s abilities. When the car cannot recognise an unfamiliar situation, it unexpectedly hands control back to the driver, creating a dangerous, potentially fatal scenario.
This raises the question of whether we want AI to be in control of high-stakes, irreversible or potentially fatal decisions, like deleting all our files, unintentionally transferring our money or losing control in a road situation. How do we prevent a human from getting into a dangerous situation because they put their complete trust in AI?
Another scenario to be aware of is that AI learns from our historical data. A system will reinforce the bias that exists in the data it learned from. And because that data has been created by humans, bias is inherent. If racism, sexism or other discrimination is present in historical data, AI will amplify it and apply it to everyone, which is a worse outcome than one biased individual.
For example: throughout history, mortgages were provided less often to people from a certain background. AI will learn from this data and as a result never grant a mortgage to anyone from this background, whereas an unbiased bank employee would give the mortgage to this individual.
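The mortgage example above can be made concrete with a minimal sketch. The data, approval rates and threshold below are all hypothetical; the point is that a model which simply learns historical approval rates per group turns a statistical bias into an absolute rule.

```python
# Minimal sketch (hypothetical data): a naive model that learns approval
# rates per group from biased historical records will reproduce that bias.
from collections import defaultdict

# Hypothetical historical loan decisions: group A was approved far more
# often than group B, for otherwise identical applicants.
history = [("A", 1)] * 90 + [("A", 0)] * 10 + [("B", 1)] * 20 + [("B", 0)] * 80

def train(records):
    """Learn the historical approval rate for each group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def predict(rates, group, threshold=0.5):
    """Approve only when the learned group rate clears the threshold."""
    return rates[group] >= threshold

rates = train(history)
print(predict(rates, "A"))  # True  – group A keeps getting approved
print(predict(rates, "B"))  # False – historical bias is now a hard rule
```

Where the human loan officer applied the bias only some of the time, the trained system applies it to every applicant, every time.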
A framework to prevent negative AI impact on humans
At this year’s UXLX conference in Lisbon, Carol J. Smith, a senior research scientist in human-machine interaction, provided us with valuable tools to create ethical, trustworthy AI.
By following a design framework, we can reduce the risks AI poses to human wellbeing and support mitigation planning.
The main aspects of this framework are:
Accountability to humans
When designing for AI experiences, we first of all want to distinguish between low-stakes and high-stakes decisions. We can hand the last word about our ideal outfit for a night out to AI, but humans should not overtrust a system with important tasks or decisions. Humans must be accountable and have ultimate control over decisions regarding the life, quality of life, health and reputation of users. These decisions, if made by AI, should be explained in plain language and be able to be overridden or reversed by humans. Finally, the user should always be able to unplug the machine.
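One way to sketch this principle in code is a gate that routes high-stakes actions through a human confirmation step and keeps them reversible. The action names, payloads and undo handle below are purely illustrative, not part of any real API.

```python
# Minimal sketch (hypothetical actions): high-stakes actions require
# explicit human confirmation and stay reversible; low-stakes actions
# may run directly.
HIGH_STAKES = {"delete_all_files", "transfer_money"}

def execute(action, payload, confirm):
    """Run `action`; for high-stakes actions, defer to the human
    `confirm` callback and return an undo handle so the decision
    remains reversible."""
    if action in HIGH_STAKES:
        if not confirm(f"AI wants to run '{action}'. Allow?"):
            return {"status": "blocked_by_human"}
        return {"status": "done", "undo": lambda: f"reverted {action}"}
    return {"status": "done"}

# The human declines the transfer: the system must respect that.
result = execute("transfer_money", {"amount": 100}, confirm=lambda msg: False)
print(result["status"])  # blocked_by_human
```

The design choice here is that the human veto sits in the control flow itself, so no model output can bypass it.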
Analysis of speculative risks and benefits
Thinking about blind spots and unintended consequences of the system is crucial to creating ethical AI. By conducting abusability testing and brainstorming abuse and misuse scenarios with the team, we can eliminate ethical issues before they have the chance to impact a user.
Respectful and secure experience
An ethical system must be inclusive and value equity, humanity and accessibility, as well as respect privacy and data rights. The system must be transparent about its decisions, and harmful bias should be avoided.
To create an unbiased experience, we have to be selective with the data we provide to AI, do our best to remove biased data from the system, make it easy to report any bias and reward the team for finding ethical bugs.
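Two of those practices, being selective with data and making bias easy to report, can be sketched as follows. The attribute names and the reporting channel are assumptions for illustration only.

```python
# Minimal sketch (assumed column names): drop protected attributes
# before training so the model cannot condition on them directly,
# and keep a simple channel for flagging suspected bias.
PROTECTED = {"gender", "ethnicity", "postcode"}

def strip_protected(record):
    """Remove protected attributes from a training record."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

bias_reports = []

def report_bias(description):
    """Let users and teammates flag suspected bias for review."""
    bias_reports.append(description)

row = {"income": 42000, "gender": "f", "postcode": "1234", "years_employed": 6}
print(strip_protected(row))  # {'income': 42000, 'years_employed': 6}
```

Note that dropping protected attributes is only a first step: remaining features can still act as proxies for them, which is why the reporting channel and ongoing review matter.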
Honesty and usability
When designing for AI, it is important to value transparency with the goal of engendering trust. We need to explicitly state the system’s identity as AI and regularly remind the user that they are interacting with AI. This prevents any confusion about whether the user is interacting with a machine or a human.
People who create AI may not represent the end user, or may be biased themselves. One individual may not consider nuances that result from different social statuses, genders or pay gaps. Diverse teams are more innovative and help us become aware of our potential bias. It is therefore good to have a team with individuals from different backgrounds and with different thinking processes. In this way the team will represent larger groups of users and create more inclusive products.
By adopting the above technology ethics, we ask ourselves questions like “What do we value?”, “Who could be hurt?”, “What lines won’t AI cross?” and “How are we shifting power?”.
Applying ethical thinking to the practical concerns of technology helps us embrace the power of AI responsibly and build trustworthy products, ensuring a human-centered future with AI.
Are you looking to get started with an AI application? We're happy to guide you through the ins and outs of development or integration. Let's talk!