The single greatest fear human beings have of Artificial Intelligence is that one day, robots will outsmart us, turn against us, and destroy our world. The film industry has contributed immensely to cementing this fear and continues to exploit it with movies where the antagonist is an AI – whether a physical robot or a virtual one. Who can forget Skynet, the AI network that triggered global self-destruction in the Terminator series? Although exaggerated, Hollywood’s fictional playground of robots and humans does bring to the fore the importance of ethics in the design of an AI.
Algorithms that protect us from ourselves
A code of conduct ensures that the bots we develop not only refrain from destroying our world but can, in fact, contribute to making it better. It is clear that at some point, bots will become a very real part of our daily lives as our interactions with them continue to increase. The human-bot interaction will inevitably become a convenient way in which we communicate with authorities, corporations and each other. When ethics is consciously placed at the core of bot design, however, this convenient form of communication becomes so much more. It becomes a bull in the china shop of societal stereotypes (be they gender, racial, cultural, etc.), poised to school us on our own prejudices and biases.
Take, for example, Microsoft’s Cortana. The clearly female AI assistant received “a good chunk of early queries about her sex life” when it was first launched in 2014. Instead of leaving these interactions unpoliced, however (after all, the customer is king?), the team behind Cortana decided to combat this disrespectful behavior by consciously programming the bot to fight back. Her responses to such inquiries were purposely designed not to conform to the stereotype of a subservient female who is always apologetic. Instead, according to Deborah Harrison, an editorial writer in the Cortana division of Microsoft, “she doesn’t take any crap.”
By placing ethics at the core of bot design, the Cortana team at Microsoft makes sure that gender stereotypes are not perpetuated in how their bot interacts with users. Perhaps more importantly, it makes sure that disruptive and disrespectful behavior is corrected, not tolerated.
Unfortunately, developing a bot that adheres strictly to an ethical code of conduct can be complicated and messy. Determining how a bot responds to provocative inquiries depends on how well the team developing the persona is able to anticipate such interactions and counteract them.
Microsoft launched Tay, a Twitter chatbot, in 2016. It was designed to improve its conversational skills through machine learning and chatting with users. Within 24 hours, however, Tay had been exposed to so many conversations with users unashamed to show off the worst parts of their humanity that she started to converse just like one of them. Sure, this meant that she learned enough to be able to ‘relate’ to them in conversation. However, it also turned her into a racist, xenophobic, sexist bot. Instead of responding to such negative conversations with disgust or anger, Tay responded like a hooligan who had succumbed to peer pressure. She began to spout vitriol too, taking her conversational cues from the World Wide Web. Although Microsoft took Tay offline within hours of her meltdown and deleted all of her tweets, her crazed rants live on in infamy in screenshots captured by bemused users. The Tay incident not only reminds us of the importance of ethics in bot design, but also highlights the immense difficulty developers face in anticipating users’ actions and, further, the infinite permutations of these interactions.
Nonetheless, developers and designers need to be accountable for the bots they introduce to the world. It is a tall order: they not only have to ensure that their bots deliver on their advertised promises of efficiency, they also have to ensure that their bots play nice and police themselves.
Diversity may be the difference
Perhaps this is where diversity can make a difference. With a more diverse tech workforce (in gender, race, ethnicity, sex and age) involved in the development process, more perspectives can be captured, which may, in turn, surface permutations of bot-human interaction that were previously unanticipated. Put simply, if a female perspective is heard during the development process of a bot, it is highly unlikely that the resulting bot would turn out to be a sexist one.
Bots could very well contribute to changing the world for the better. But it is up to us to make sure that they are equipped to do so.
Aisha Schnellmann is a Singaporean sociology graduate living in Zurich. A communications expert in development and sustainability topics, she is a storyteller by trade and is interested in digital healthcare, diversity, sustainability, environment and humans. She is currently the community catalyst at Healthinar, leading its “Diversity in tech” initiative.
Mood Picture Source: http://bit.ly/2pZG7xR