“Siri” is Failing Us in Times of Crisis
Most smartphone users rely on virtual assistants like “Siri” for mundane daily tasks, from checking the weather to finding the nearest coffee shop. But what happens in times of crisis or distress?
According to a recent study from Stanford University and the University of California, these A.I.s might not be so helpful after all. When it comes to questions about mental health, rape, or domestic violence, “Siri” and similar virtual assistants dropped the ball. In response to many of these questions, the assistants would simply say they didn’t understand, offer no answer, or in some cases even mock the user.
Although Apple has made some adjustments to “Siri” since the study was published, the findings shed light on a number of deeper issues. If technology is meant to make our lives easier, shouldn’t it be there for us when we need it the most?
Microsoft’s A.I. Turned Into A Racist Jerk
One of the most epic A.I. fails we’ve seen to date is how a recent bot created by Microsoft devolved into a hateful, misogynistic and racist jerk within hours of being unleashed into the Twittersphere. The bot, which was primarily targeted at Millennials between the ages of 18 and 24, was designed to “engage and entertain people” through “casual and playful conversation”. But, after a short period of interacting with Twitter users, the bot (given the name ‘Tay’) began to spit out some really horrible things.
The bot, which was co-created by a team of developers and comedians, makes many people wonder how Microsoft could allow such a thing to happen. Why didn’t the company use any sort of preventative measures to ensure the bot avoided words, topics or phrases that might be deemed offensive to users?
The tech industry has become so fixated on amusing and delighting its users that it ultimately fails in the areas where we need it the most. While it’s great that Tay can give us all the reasons why her “selfie game is on fleek” or can spit out a bunch of irrelevant facts about Kim Kardashian, it is much more important for an A.I. to learn appropriate ways of communicating with humans. And, if this incident gives us any insight into where A.I. technology is heading, then we certainly have a long way to go.
Facebook is Training Its A.I. Using Children’s Stories
Like humans, A.I. requires good teachers. But how can we teach A.I. using public data without incorporating the worst traits of humanity? Well, Facebook recently revealed that the company is training its A.I. using children’s stories.
Human children learn about the world around them from stories and fables, so why shouldn’t technology? The social network is using classics like The Jungle Book, Peter Pan, Little Women and Alice in Wonderland to help its A.I. better understand the way people communicate.
“Language is one of the most complex things for computers to understand. Guessing how to complete a sentence is pretty easy for people but much more difficult for machines,” said Mark Zuckerberg, Facebook’s CEO.
“We still have a long way to go before machines can understand language the way people do, but this research takes us closer to building helpful services like M, our digital assistant in Messenger.”
While it’s clear that the company is taking what many people might deem a more ethical approach to programming its A.I., it’s no surprise that some people are a bit skeptical as to how well this will actually work. Can an A.I. really learn human behavior from a book? Or is that even the point? Only time will tell what role artificial intelligence technology will play in our lives. We can only hope it will improve our lives, not destroy them.