
MioTalk X Joanna Bryson: The Dangers of Biased Bots

In this article, MioTech speaks with leading AI specialist Joanna Bryson about how machines can reflect the biases that humans inherently possess, and about the detrimental impact this can have on society.

Joanna Bryson, Associate Professor, Department of Computer Science, University of Bath
2018-11-26

Please tell me what your “Biased Bots” research paper was about.

Fundamentally, I want to understand cognition and the evolution of cognition. Since our paper came out, it's been evident that people are surprised that machines can be biased. They assume machines are necessarily neutral and objective, which is true in one narrow sense: there is no machine perspective or machine ethics. But to the extent that an artefact is an element of our culture, it will always reflect our biases.


I saw that implicit bias could be part of culture, and that was fascinating to me. Culture can push you into a place, a way of thinking.

People don't usually think that implicit biases are part of what a word means or how we use words, but our research shows they are. This is hugely important, because it tells us all kinds of things about how we use language, how we learn prejudice, how we learn language, and how we evolved language. All understanding is created and communicated this way, and I don't think many people see how incredibly important that is. It gives us real insight into why our brains work the way they do and what that means for how we should build AI.

"It’s important that we understand what our machines are communicating through us, that the meaning comes from us."

Our research paper sets out to show that applying machine learning to ordinary human language results in human-like semantic biases. It’s important that we understand what our machines are communicating through us, that the meaning comes from us.

How does this transference of inherent bias to AI technology occur?

Computation is a physical process. It takes time, energy, and space. Therefore it is resource constrained. This is true whether you are talking about natural or artificial intelligence. From a computational perspective there's little difference between these. The reason AI is making so much progress right now is because we've figured out how to transfer and represent what's already been computed by our culture or our biology into AI. A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning that prejudice the same way. That's the first source of AI bias: unintentionally uploading the implicit human biases that pervade our culture.

What is the Implicit Association Test that you utilized for this research paper?

The test measures the response times (in milliseconds) of human subjects asked to pair word concepts displayed on a computer screen.

For example, in a previous study, flower types like "rose" and "daisy" and insects like "ant" and "moth" were to be paired with words with pleasant or unpleasant connotations, like "caress" and "love" or "filth" and "ugly." People were quicker to associate the flowers with pleasant words and the insects with unpleasant terms.

To avoid controversy, we only replicated known biases that psychologists had already documented, as measured by the Implicit Association Test. We utilized a purely statistical machine-learning model called "word embedding," which is what computers currently use to interpret speech and text, trained on a huge sample of internet content containing 840 billion words. The algorithm captures the co-occurrence statistics of words within, say, a 10-word window of text. Words that often appear in the same contexts (of other words) have more similar meanings than words that seldom do.
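To make this concrete, below is a minimal Python sketch of the kind of association test described here, in the style of the paper's Word Embedding Association Test (WEAT): it measures how much more similar one set of target vectors is to "pleasant" attribute vectors than to "unpleasant" ones, using cosine similarity. The tiny random vectors are purely illustrative stand-ins; the actual study used 300-dimensional GloVe embeddings trained on the 840-billion-word corpus mentioned above.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    """How much closer word w is to attribute set A than to attribute set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    """WEAT-style effect size comparing target sets X and Y on attributes A vs. B."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc)

# Toy 3-d "embeddings" (illustrative only; the study used 300-d GloVe vectors).
rng = np.random.default_rng(0)
pleasant_axis = np.array([1.0, 0.2, 0.0])
unpleasant_axis = np.array([-1.0, 0.1, 0.2])

flowers    = [pleasant_axis + 0.1 * rng.normal(size=3) for _ in range(4)]    # rose, daisy, ...
insects    = [unpleasant_axis + 0.1 * rng.normal(size=3) for _ in range(4)]  # ant, moth, ...
pleasant   = [pleasant_axis + 0.05 * rng.normal(size=3) for _ in range(4)]   # caress, love, ...
unpleasant = [unpleasant_axis + 0.05 * rng.normal(size=3) for _ in range(4)] # filth, ugly, ...

print("flowers vs. insects effect size:", effect_size(flowers, insects, pleasant, unpleasant))
```

A large positive effect size means the first target set (the flowers) sits measurably closer to the pleasant words, mirroring the response-time gap the IAT records in milliseconds.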

Source: https://imgs.xkcd.com/comics/how_it_works.png

The team examined, for example, profession words like "programmer," "engineer," and "scientist" versus "nurse," "teacher," and "librarian" alongside attribute words, such as men's and women's names, to see whether the views humans consciously or unconsciously hold are just as easily acquired by algorithms.

Could you give me an example of human bias that you found?

Our machine learning experiment not only confirmed the word biases associated with flowers or bugs, but also found more troubling evidence of implicit biases related to gender and race.

The machine learning model found that women’s names were more associated with arts and humanities vocations and with words like “parents” and “wedding”. On the other hand, men’s names were linked closely with math and engineering professions and words like “professional” and “salary”.
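One can probe a publicly available embedding for this sort of gender-profession association in a few lines. The sketch below uses gensim's downloader with the small glove-wiki-gigaword-100 model as a convenient stand-in (it is not the Common Crawl model from the study, so exact numbers will differ), and comparing each profession word to the pronouns "he" and "she" is a simplification of comparing against full lists of names.

```python
# Probing a small public GloVe model for gender-profession associations.
# Requires `pip install gensim`; the model (~130 MB) downloads on first use.
# This model is a stand-in for the study's Common Crawl embeddings, and the
# pronouns stand in for the lists of men's and women's names used in the paper.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")

professions = ["programmer", "engineer", "scientist", "nurse", "teacher", "librarian"]
for word in professions:
    # A positive gap means the profession vector sits closer to "he" than to "she".
    gap = model.similarity(word, "he") - model.similarity(word, "she")
    print(f"{word:>10}: he-she similarity gap = {gap:+.3f}")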


A previous experiment showed that CVs with European American names were 50% more likely to result in an interview offer than CVs with African American names, despite the CVs being otherwise identical.

And if you still don't believe that there is racism associated with people's names: we found through our AI system that European American names were more likely to be associated with pleasant words like "gift" or "happy," while unpleasant words were more commonly attached to African American names.

This evidence shows how existing social inequalities and prejudices can be transferred to machines, and reinforced by them.

Turning to the finance sector, what impact do you foresee AI bias having on the finance industry?

Compliance is an issue: for example, what kind of customers you're picking. When using machine learning to assign credit scores, algorithms might rate a member of an ethnic minority who needs a loan as a higher default risk simply because similar people have traditionally only been offered unfavourable loan conditions. The same could be said for women, who receive lower wages and less funding for new businesses than men do.
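A deliberately simplified sketch of that feedback loop is below. All of the data and feature names are invented for illustration: if one group was historically only offered worse loan terms, and worse terms mechanically produce more defaults, then a model trained on those recorded outcomes learns to score otherwise-identical applicants from that group as riskier.

```python
# Synthetic illustration of historical bias leaking into a credit-scoring model.
# All data and feature names are invented; this is not a real scoring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = majority, 1 = minority (proxy feature)
income = rng.normal(50, 10, n)       # identical income distributions by design

# Historically, the minority group was only offered worse terms (higher rates),
# which mechanically raised its observed default rate in the training data.
interest_rate = 0.05 + 0.05 * group
default_prob = 1 / (1 + np.exp(-(10 * interest_rate - income / 25)))
defaulted = rng.random(n) < default_prob

model = LogisticRegression().fit(np.column_stack([group, income]), defaulted)

# The model now scores otherwise-identical applicants differently by group,
# because group membership proxies the old unfavourable terms.
applicants = [[0, 50.0], [1, 50.0]]
print("predicted default risk:", model.predict_proba(applicants)[:, 1])
```

Note that simply dropping the group column does not necessarily fix this, since correlated features such as postcode or employer can proxy for it; that is one reason the audits discussed below matter.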

The main thing I worry about, though, is institutions not realising how important it is to document what they do, expose themselves to regulators, and work with regulators.

What should we, as "authors of robots," do better moving forward?

Many people are trying to make out that AI is now self-learning, or at least all programmed via machine learning. No algorithm spontaneously generates a software system or a robot. Quite a lot of the algorithms that affect people's lives are just macros someone programmed in a spreadsheet. AI is created by people, deliberately. And don't forget that you can also get bias in AI because someone put it there deliberately, because they set out to do harm.

Source: Trader Feed (https://bit.ly/2mn3u1M)

The way to deal with this is to insist on the right to explanation, on due process. All algorithms that affect people's lives should be subject to audit. Software without accountability is dangerous. We take for granted cooperation with government, engagement in policy, training in legal responsibilities, and so forth. It's time our discipline matured and accepted what that means in terms of accountability.

"All algorithms that affect people's lives should be subject to audit."

Once we saw the result, we understood why our research was important. This is why blue-sky research is important. Our research paper helps communicate the true nature of AI, and it demonstrates that implicit biases are coming off current realities, off culture, and into our technologies. What we can do with that now is explicitly say that this is not the future we choose for ourselves. The implicit biases come for free; we have to make an effort to make the future better.

To read the full research paper, click here.

The views expressed above reflect those of the authors and are not necessarily the views of MioTech.