Yeah, I saw that one. Have none of these people watched The Terminator or 2001? (Ok, in 2001 the computer was trying to save humanity but still...).
Anyway, here is the article from that link.
Norman/MIT
MIT scientists have successfully created an AI ‘psychopath’ by feeding it with content from Reddit. Three deep learning experts have banded together to show how the content we show AI really impacts their learning outcomes.
To prove their point, MIT engineers and researchers Pinar Yanardag, Manuel Cebrian and Iyad Rahwan had their AI, which they dubbed Norman after the lead character in Psycho, read image captions from a Reddit forum.
They then showed Norman a series of inkblots from the famous Rorschach test. While a standard AI, when shown the images, generally responded with answers such as ‘close up of a wedding cake on a table’, Norman responded with graphic descriptions such as ‘man is murdered by machine gun in broad daylight’.
To be clear, the scientists didn’t show the AI the actual images from the eerie Reddit forum; it only read the images' captions. While seemingly a creepy and unnecessary piece of research, the Norman project couldn't come at a better time.
When asked what motivated them to pursue a psychopathic AI, Norman's creators explained that the project is about raising awareness of the way AI develops bias through the content it's fed. “The data you use to teach a machine learning algorithm can significantly influence its behavior,” the researchers said.
“So when we talk about AI algorithms being biased or unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it.” As AI technology rapidly develops, questions of how it will be monitored and regulated persist.
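The researchers' point, that the same algorithm behaves very differently depending on what it was trained on, can be illustrated with a toy sketch. This is not the MIT team's code; it is a minimal, hypothetical demonstration using word frequencies: the identical "model" trained on benign captions versus dark captions learns completely different associations.

```python
from collections import Counter

def train(corpus):
    """Build a simple unigram frequency model from a list of captions."""
    counts = Counter()
    for caption in corpus:
        counts.update(caption.lower().split())
    return counts

def top_association(model, stopwords={"a", "of", "on", "the", "is", "in", "by"}):
    """Return the most frequent content word the model has learned."""
    for word, _ in model.most_common():
        if word not in stopwords:
            return word

# Hypothetical stand-ins for the two kinds of training data.
standard_corpus = [
    "a close up of a wedding cake on a table",
    "a vase of flowers on a table",
    "a bird sitting on a branch",
]
dark_corpus = [
    "man is murdered by machine gun in broad daylight",
    "man gets murdered in daylight",
    "a murdered man lies in the street",
]

# Same code, different data: the learned associations diverge sharply.
print(top_association(train(standard_corpus)))
print(top_association(train(dark_corpus)))
```

The algorithm is identical in both runs; only the data differs, which is exactly the researchers' argument about where the "psychopathy" comes from.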
Today Google released its ethics principles relating to its work with AI. The company says it won’t work with any military organizations to develop weapons.
Norman presents pointed questions to both AI engineers and the wider public about how we are monitoring the content, and therefore the bias, of new AI.
AI used in self-driving cars, for instance, needs to be taught to distinguish pedestrians from inanimate objects. Care must be taken to ensure the data provided to the AI doesn't privilege any particular appearance of what a human looks like, so that no bias is inadvertently taught.
The same MIT team has forayed into the dark side of AI before. Their first project was the AI-powered 'Nightmare Machine', which tackled the challenge of getting AI not only to detect but to induce extreme emotions such as fear in humans.
They then followed this gruesome project with the development of Shelley, the world's first collaborative AI horror writer. Shelley was ‘raised’ reading scary bedtime stories from the subreddit NoSleep.
She was then able to write over 200 horror stories in collaboration with human writers. In 2017, the team turned away from the dark side briefly to develop Deep Empathy, a project that explored whether AI can increase empathy for victims of far-away disasters by creating images that simulate disasters closer to home.