Why you can't "just fix" machine bias derived from ordinary language

The journalists who interviewed my coauthors and me about our April 2017 Science article, Semantics derived automatically from language corpora contain human biases (cf. the green open access version hosted at Bath), often asked two things:

  1. Where can we find examples of the kinds of bias you documented in real-world AI?
  2. If it's in a machine, can't we just fix it?
Well, David Masad managed to document a brilliant example of both, which I've replicated here – note the translation:
After his election, Macron said "Mes chers compatriotes" (literally, "my dear compatriots"), which Bing translates as "My fellow Americans"
This is exactly the kind of thing we documented with the Word Embedding Association Test (WEAT), and that we suggest the Implicit Association Test (IAT) is probably documenting as well.  In some ways, this is a better translation than any human would make.  A French or American president-elect is feeling and meaning by and large exactly the same thing there; both are reaching for the cliché of their predecessors.  But no human would make that translation, because it is factually, explicitly inaccurate.  The American cliché happens to make explicit the nationality of the compatriots involved, and further, English is a shared language, so even if Americans were willing to accept that translation, New Zealanders would not be.
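
For readers who want the mechanics, here is a minimal sketch of the WEAT effect size from our paper, assuming you already have trained word vectors (e.g. GloVe); the random `embed` lookup below is only a stand-in for real embeddings, and the single-word target and attribute sets are toy examples:

```python
# Minimal WEAT sketch: differential association of two target word
# sets (X, Y) with two attribute word sets (A, B) in an embedding.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, embed):
    # s(w, A, B): mean similarity of w to attribute set A minus to B
    return (np.mean([cosine(embed[w], embed[a]) for a in A])
            - np.mean([cosine(embed[w], embed[b]) for b in B]))

def weat_effect_size(X, Y, A, B, embed):
    # Cohen's-d-style effect size over the two target sets
    sx = [association(x, A, B, embed) for x in X]
    sy = [association(y, A, B, embed) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy usage with random vectors standing in for a trained embedding:
rng = np.random.default_rng(0)
words = ["flower", "insect", "pleasant", "unpleasant"]
embed = {w: rng.normal(size=50) for w in words}
print(weat_effect_size(["flower"], ["insect"],
                       ["pleasant"], ["unpleasant"], embed))
```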

This also demonstrates why it would be effectively impossible to remove all stereotypes from AI derived from large corpus linguistics: how could you enumerate all the ways that the expectations of the majority of one language-using group might not line up with the expectations of another?
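
To see why enumeration is the sticking point, consider the standard projection-style debiasing trick (in the spirit of Bolukbasi et al. 2016's "hard debiasing", not something from our paper): it can only neutralise a bias you have already named as an explicit direction in the vector space. The sketch below assumes NumPy word vectors:

```python
# Projection-based debiasing sketch: removes ONE bias direction,
# and only the direction a human has explicitly defined.
import numpy as np

def bias_direction(embed, pairs):
    # Estimate a bias axis from hand-picked definitional pairs,
    # e.g. [("he", "she"), ("man", "woman")] -- every pair must be
    # enumerated in advance by a person.
    diffs = [embed[a] - embed[b] for a, b in pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def neutralize(vec, direction):
    # Subtract the component of `vec` along the named bias axis.
    return vec - np.dot(vec, direction) * direction
```

Every stereotype you have not thought to write down as a `pairs` list survives untouched, which is the Macron / "fellow Americans" problem in miniature.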

For more discussion of what this tells us about human and machine intelligence, see the last section, "a use for consciousness", of my April 2017 blogpost, We Didn't Prove Prejudice Is True (A Role for Consciousness).

For a discussion of more pernicious problems in "correcting" machine semantics, see my July 2016 blogpost, Should we let someone use AI to delete human bias? Would we know what we were saying? 

Postscript
Having just posted this, I find that Evan Hanfleigh has pointed out that Google translates it as "My dear compatriots, you have chosen to give me your confidence and I would like to express my deepest gratitude to you."  So one way to "fix it" is to be more literal, though on the other hand the Bing text sounds much more native.  Eva Wolfangel has since emailed me that the translation goes back to "Americans" if you translate into German!  Do German leaders not say anything similar?
