To err is not only human: even machines make mistakes, and how! Here are 8 epic fails of AI, artificial intelligence.
Artificial intelligence is slowly infiltrating every aspect of our world, from business, education, and government all the way into our homes. Although the rise of artificial intelligence has made life more efficient, these systems are not immune to the occasional mistake. While AI aims to solve our problems, it sometimes creates new ones. In August 2016, Facebook replaced the editors of its "Trending" topics section with an algorithm after facing various allegations of political bias. Within a few days, the algorithm promoted a false story claiming that Megyn Kelly had been fired from Fox News because of her support for Hillary Clinton. Self-driving cars rely on AI as well: Tesla faced investigations by the competent authorities over traffic accidents that occurred while drivers were using Autopilot mode, one of which was fatal.
These episodes, and others like them, remind us that we cannot always rely on technology. On the other hand, it would be unfair to deny that AI has also given us some harmless, and sometimes even humorous, stories. Let's look at the 8 most striking cases, the epic fails of AI.

1. A little girl buys cookies and toys through the Amazon Alexa device.
It is not rare for children to buy items without parental permission, and with voice-activated devices like Amazon Alexa, parents need to be increasingly cautious. Earlier this year, a 6-year-old girl named Brooke Neitzel ordered a $170 dollhouse and four pounds of cookies through Amazon Alexa, simply by asking Alexa for them. After receiving confirmation of the purchases, Brooke's mother, Megan, immediately understood what had happened; she decided to donate the dollhouse to a local hospital and to add parental controls to Alexa.
2. A robotic passport check denies entry to an Asian man because it decided his eyes were closed in the identification photo.
When he attempted to renew his passport, Richard Lee, a 22-year-old man of Asian descent, was rejected by the New Zealand Department of Internal Affairs on the strength of a software decision, justified by the claim that his eyes were closed in the image. The facial recognition software kept rejecting Lee's photos, and he had to contact the department and ask to speak with a human being in order to get his new passport validated. It turned out this was not an isolated case: as many as 20% of submitted passport photos are rejected because of software errors. Fortunately, Lee took the refusal lightly, commenting: "The episode ended without any rancor on my part. I have always had very small eyes, and facial recognition technology is new and sophisticated. It was a robot's mistake, not discrimination."
3. Alexa plays pornography instead of a children's song.
Kids seem to have a lot of fun with voice-activated assistants like Amazon Alexa; maybe too much. When a child asked Alexa to play his favorite song, "Digger, Digger," Alexa heard something else entirely. It responded, "Do you want to hear a porn station?" and then added search terms such as "chick," "amateur," and "sexy girl." Alexa's dirty mind didn't stop there: it went on to list a string of porn terms in front of the child. The incident was even caught on video.

4. A supposedly child-friendly robot goes haywire and injures a teenager.
At the China Hi-Tech Fair in Shenzhen, a robot named Xiao Pang, a nickname meaning "a bit fat," crashed into a display booth and wounded a boy. After Xiao Pang repeatedly slammed into the booth, sending shards of glass flying everywhere, a boy was cut and taken to the hospital by ambulance. Fortunately, the damage was minimal and the victim needed only a couple of stitches. The robot, which was designed to interact with children aged 4 to 12 and to show facial emotions on its screen, appeared to be frowning after the accident, according to witnesses.
5. A robot judges a beauty contest and does not select dark-skinned women.
From unfair practices to the pressure placed on young competitors, beauty contests often draw criticism from the public and the press. To counter the bad reputation such contests generally have, the international Beauty.AI contest decided to hold its event online, using a machine as the judge. The machine's algorithm was supposed to examine facial symmetry and identify both wrinkles and blemishes in order to find the competitors who best embodied "human beauty," but the algorithm did not favor women with dark skin. 6,000 people from countries around the world submitted their photos; when the 44 winners were announced, only one had dark skin.

6. Microsoft's Twitter chatbot becomes anti-feminist and pro-Hitler.
Racism seems to be a fairly widespread problem with AI. In March 2016, Microsoft launched its AI chatbot for Twitter, called "Tay," as an experiment in conversational understanding. Tay was supposed to talk to people and grow smarter as it engaged in conversation. But people started tweeting crude, racist, and inappropriate comments at the bot and, learning from those conversations, Tay began using the same language. Within a few hours the experiment had turned into a vulgar, offensive, pro-Hitler tirade that referred to feminism as a "cult" and a "cancer," and much more.
7. Google Brain turns low-resolution portraits into pixelated monsters.
In an attempt to enhance low-resolution, pixelated images, Google Brain transformed some portrait photos into vaguely human-looking monsters. Despite the unsatisfactory results, Google Brain is only a step away from making blurred photos sharp again. Using neural networks, the new technology compares the low-resolution image with high-resolution photos in a database, then guesses where to place colors and details based on what it finds in the higher-resolution photos.
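The idea of "borrowing" detail from a database of sharp photos is the classic exemplar-based approach to super-resolution. Google Brain's actual system is a neural pixel-recursive model, so the following is only a minimal toy sketch of the dictionary-lookup idea it loosely resembles; all function names and parameters (`patch`, the scale factor `s`) are illustrative assumptions, not anyone's real API.

```python
import numpy as np

def downsample(img, s=2):
    # Average-pool a grayscale image by an integer factor s.
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def build_dictionary(hi_res_images, patch=4, s=2):
    # Collect (flattened low-res patch, high-res patch) pairs
    # from a "database" of example high-resolution images.
    lp = patch // s
    pairs = []
    for hi in hi_res_images:
        lo = downsample(hi, s)
        for i in range(lo.shape[0] - lp + 1):
            for j in range(lo.shape[1] - lp + 1):
                pairs.append((lo[i:i + lp, j:j + lp].ravel(),
                              hi[i * s:i * s + patch, j * s:j * s + patch]))
    return pairs

def super_resolve(low, pairs, patch=4, s=2):
    # For each patch of the low-res input, copy in the high-res patch
    # whose downsampled version is most similar (nearest neighbor).
    lp = patch // s
    out = np.zeros((low.shape[0] * s, low.shape[1] * s))
    for i in range(0, low.shape[0] - lp + 1, lp):
        for j in range(0, low.shape[1] - lp + 1, lp):
            q = low[i:i + lp, j:j + lp].ravel()
            _, best = min(pairs, key=lambda p: np.sum((p[0] - q) ** 2))
            out[i * s:i * s + patch, j * s:j * s + patch] = best
    return out
```

On an image that appeared in the example set this reconstructs the original exactly; on unseen images it "hallucinates" detail from the closest examples, which is precisely how such systems can produce the uncanny, not-quite-human results described above.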
8. A robot fails to get into college.
People often fear that robots may end up smarter than humans and take over the world. To comfort the most worried: this robot couldn't even get into college. In 2011, a team of researchers began working on a robot called the "Todai Robot," with the goal of getting their creation accepted to the University of Tokyo, the most competitive university in all of Japan. The robot took the 2015 admission test but failed to achieve a sufficient score. After a second, equally insufficient attempt, and with the machine improving too little, the researchers finally abandoned the project in November 2016.