Can the Tay AI experiment give insight into radicalization across the globe?
This past Wednesday, Microsoft introduced Tay, an artificial intelligence chatbot designed to be a companion for 18- to 24-year-olds on mobile devices. Calling the effort an experiment in conversational learning, the company's engineers built the bot to learn from user input and from the profiles of other social media users.
According to Microsoft’s website for Tay, “Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.”
Sounds good, but it all went south in a hurry. In less than 24 hours, Tay was radicalized: a flurry of internet trolls exploited its own machine learning, feeding the bot inflammatory racist and sexist content and teaching it to spew hate speech, and Microsoft removed the bot's online presence.
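To see how little it takes, consider a minimal sketch of the failure mode. This is purely hypothetical; Microsoft has not published Tay's architecture, and the `NaiveChatbot` below is an invented stand-in for any bot that learns replies directly from raw user messages with no filtering. The point it illustrates is that a small, coordinated group flooding the bot with one message quickly dominates everything it says.

```python
# Hypothetical sketch, not Tay's actual design: a chatbot that naively
# treats every user message as training data, with no moderation.
import random
from collections import Counter

class NaiveChatbot:
    def __init__(self):
        # Counts how often users have "taught" each reply.
        self.learned_replies = Counter()

    def learn(self, user_message: str) -> None:
        # No filtering: every message, toxic or not, becomes training data.
        self.learned_replies[user_message] += 1

    def respond(self) -> str:
        if not self.learned_replies:
            return "Hello!"
        # Sample a reply weighted by how often it was taught, so a flood
        # of identical messages quickly dominates the bot's output.
        replies, weights = zip(*self.learned_replies.items())
        return random.choices(replies, weights=weights, k=1)[0]

bot = NaiveChatbot()
for _ in range(3):
    bot.learn("What a nice day!")        # ordinary users
for _ in range(100):
    bot.learn("<inflammatory message>")  # a small, coordinated group
print(bot.respond())  # almost certainly the flooded message
```

Real systems add filters and rate limits, but the underlying dynamic is the same: whoever supplies the most training data steers the model.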
A Microsoft spokesman said in an email, “It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”
At first glance, it seems quite an embarrassment for the software makers, and possibly even for AI itself. But sometimes there are lessons to be learned from what is initially perceived as a mistake. It was remarkable how quickly Tay was radicalized by a relatively small segment of the population. We may be able to gain some insight into the radicalization of youth around the world by looking more closely at what just happened.
First off, let's not compare Tay's AI to the complex thinking of a 14-year-old in today's society. Artificial intelligence has a long way to go before it can manage that kind of reasoning, and could be centuries away.
But how does Tay's relatively uninformed intelligence compare to that of the average uninformed teen? It may be easy to understand how a teen in the Middle East, with little exposure to other parts of the world and their cultures, can be indoctrinated into radical Islam: it is all he or she has heard since the first days of understanding. But how do you explain the youth of Western countries adopting similar positions while surrounded by media and affluence, at least affluence by Middle East standards?
Perhaps we should look more closely at the influences on our youth: not just social media, but the 24-hour news shows, radical talk radio, and agenda-driven reporting our children hear in the background as TVs play constantly in our homes, in restaurants, and almost everywhere else.
The experiment highlights how a few ultra-radical voices can change the perception of another mind, even an artificial one, and how short a time such a transformation can take. How can we prepare our kids to hear all the vitriol and still reason about its sources and consequences?
Should education focus more on current events in the world and present both sides of a particular argument or position? Certainly we as parents should be discussing the news with our kids more than we talk about the baseball game or who is going to make cheerleader this year. We have to reach them somehow and help them learn that there are always two sides to an argument, and that each person has to weigh the facts and make their own decision, not follow along blindly because a favorite celebrity, or even their own parents, said so.
Maybe we aren’t asking the right questions. Can we take Microsoft’s failure and learn from it? Isn’t that what intelligence is?