
Bias and Status in Conversational Agents

Perception is Everything


It’s been several years since the first voice skills went live. Back when Alexa first came to be, tutorials taught us how to scrape information from the web or wire up a ready-made API to create basic fact skills. Without consequence, we connected our skills and published them for all to use, never giving a second thought to the data our skills responded with, to how the user’s experience might be affected, or to how another group of people might perceive the response.

Every now and then a user’s utterance (voice command) wouldn’t match what the skill was expecting, and one of two things would happen: Alexa would apologize for not understanding and prompt the user to repeat the command or try another, or the error sound would play and the skill would shut down. I never thought about it back then, but in recent debates the question keeps popping up: why would a bot apologize? Often, apologies are coded into the responses by developers in an effort to make the interaction feel more human.
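The two error paths described above can be sketched in plain Python. This is an illustration only, not the real Alexa Skills Kit API; every function and parameter name here is hypothetical:

```python
# Hypothetical sketch of a skill's fallback handling, NOT the real
# Alexa Skills Kit API. An unmatched utterance either reprompts the
# user or, once retries run out, ends the session (the "error sound
# and shut down" path).

def handle_request(intent_name, known_intents, retries_left):
    """Return a (speech, should_end_session) tuple for one request."""
    if intent_name in known_intents:
        return (f"Running {intent_name}.", False)
    if retries_left > 0:
        # Reprompt and keep the session open.
        return ("Sorry, that command wasn't recognized. Please try another.", False)
    # Out of retries: close the session.
    return ("Goodbye.", True)
```

Whether the reprompt opens with an apology like the one above is exactly the design choice the post is questioning.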

It’s just a bot! On the Humanization of Voice Assistants

To developers, an error message or the error sound would have sufficed as an indication that something went wrong. To users, the apology paired with the sound signals the system’s state (not unlike a GUI’s confirmation dialog or notification card). More than that, it is a classic example of status at work.

Who would write such biased and status-driven apps?

...we did. Developers did. Here is how this came to be.

Accidental Bias

Charles Hannon, a professor at Washington &amp; Jefferson College, published a piece in ACM’s Interactions magazine in 2018 called Avoiding Bias in Robot Speech. It focused on the complications of scraping data from the web that may contain bias, and thereby passing that bias on to the consumers of the data. By allowing AIs free rein to pull from, and reinforce their learning with, online text sources that contain bias, we allow the AIs to mirror and disseminate that same bias in their interactions with humans.


Our own bias, whether intentional or not, can be picked up in many forms and turned back on us. The bias we condemn and refute is the very same bias fed back to us, a disaster of our own making. The solution the article proposes is to avoid introducing bias into our own writing and data sources, so that our systems cannot pick it up and repeat it in our future interactions with them.

What does this look like in practice? It would require a great number of community members to change their writing style and thought process, and our data stores would need a thorough cleaning. In short, it would take a lot of clean-up. What we can do presently is keep working to avoid bias in systems that disseminate information and, as developers, consider carefully which resources we allow our bots to extract data from. As consumers of the data, we have to become more cognizant of how we disseminate or use it and how it affects our end users.
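As one concrete illustration of considering which resources a bot may pull from, a scraper could check an allowlist of domains the team has actually reviewed before extracting anything. The domain names below are hypothetical:

```python
# Sketch: vet data sources before a bot scrapes them.
# The allowlist entries and URLs are hypothetical examples.
from urllib.parse import urlparse

VETTED_DOMAINS = {"example-encyclopedia.org", "example-dataset.net"}

def is_vetted_source(url):
    """Allow extraction only from domains the team has reviewed for bias."""
    host = urlparse(url).netloc.lower()
    return host in VETTED_DOMAINS
```

An allowlist doesn’t remove bias from the sources themselves, but it forces a human review step before any source feeds the bot.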

The Establishment of Status


Just two years prior, Hannon had submitted a piece to the same magazine called Gender and Status in Voice User Interfaces. The problem presented in this one was that our drive to humanize our AIs introduced the concept of status in a negative way. When developers put I-words and cognitive words like ‘believe’ or ‘think’ into their interface’s speech, they mark it with what is considered lesser status. Hannon points out that female-voiced AIs are assigned this lesser status as a direct result of our desire for more human-like conversation with these interfaces.


Alexa has a female voice by default, as do Siri and Cortana. Changes since launch allow customized voice settings, including male voices. Each assistant has unique characteristics beyond voice that let frequent users distinguish which AI they are interacting with. Something they have in common, unfortunately, is the set of speech characteristics associated with “lesser” status.

These agents are set up as our assistants, and in many cases developers have designed the interactions to keep them in that role. In his 2016 article, Hannon proposed preventing the introduction or replication of status, whether gender or any other form of status classification found in real-world interactions, by avoiding I-words and avoiding the implementation of social and cognitive logic.
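Hannon’s suggestion could even be applied mechanically, as a check that flags I-words and cognitive verbs in a bot’s responses before they ship. The word list below is illustrative, not taken from the article:

```python
# Sketch of a response linter for status-marking language.
# The marker list is an illustrative assumption, not Hannon's own list.
import re

STATUS_MARKERS = {"i", "i'm", "i've", "think", "believe", "guess"}

def flag_status_markers(response):
    """Return the status-marking words found in a bot response."""
    words = re.findall(r"[a-z']+", response.lower())
    return [w for w in words if w in STATUS_MARKERS]
```

A response like “I think that’s the answer” would be flagged for both the I-word and the cognitive verb, while a plain “The answer is 42” passes clean.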

This issue goes beyond voice assistants: chatbots and other forms of conversational agents are affected similarly. Much like with bias, limiting the introduction and replication of status across our voice and chat agents means changing our way of thinking and our development habits. This too will take time, but with the conversations already taking place and the introduction of customizable agents, we are heading in the right direction!

Read this article on the original source.