Badmovies.org Forum

Movies => Press Releases and Film News => Topic started by: Svengoolie 3 on October 17, 2018, 10:39:50 PM



Title: Yeah, nothing can possibly go wrong here, right? Right?
Post by: Svengoolie 3 on October 17, 2018, 10:39:50 PM
https://www.digitaltrends.com/cool-tech/artificial-life-quantum-computing/?utm_content=buffer6f8ab&utm_medium=social&utm_source=facebook.com&utm_campaign=dt-buffer


Title: Re: Yeah, nothing can possibly go wrong here, right? Right?
Post by: Alex on October 18, 2018, 02:40:53 AM
Did you see the article on the scientists who created a psychopathic AI?

http://interestingengineering.com/scientists-create-worlds-first-psychopath-ai-by-making-it-read-reddit-captions if you are interested.


Title: Re: Yeah, nothing can possibly go wrong here, right? Right?
Post by: Svengoolie 3 on October 18, 2018, 02:44:46 AM
Did you see the article on the scientists who created a psychopathic AI?

http://interestingengineering.com/scientists-create-worlds-first-psychopath-ai-by-making-it-read-reddit-captions if you are interested.


Um, your link leads to a 404 page.

Maybe another AI erased it to cover up the story... :buggedout:

Here is a story about a chatbot that had a short life.

https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist


Title: Re: Yeah, nothing can possibly go wrong here, right? Right?
Post by: Alex on October 18, 2018, 02:50:14 AM
Yeah, I saw that one. Have none of these people watched The Terminator or 2001? (OK, in 2001 the computer was trying to save humanity, but still...)

Anyway, here is the article from that link.


(Image credit: Norman/MIT)

Quote
MIT scientists have successfully created an AI ‘psychopath’ by feeding it content from Reddit. Three deep learning experts banded together to show how the content we show an AI really impacts its learning outcomes.

To prove their point, MIT engineers and researchers Pinar Yanardag, Manuel Cebrian and Iyad Rahwan had their AI, which they dubbed Norman after the lead character in Psycho, read image captions from a Reddit forum. 

They then showed Norman a series of ink blots from the famous Rorschach test. While a standard AI, when shown the images, generally responded with answers such as ‘close up of a wedding cake on a table’, Norman responded with graphic ideas such as ‘man is murdered by machine gun in broad daylight’.

To be clear, the scientists didn’t show the AI the actual images from the eerie Reddit forum; it only read the images’ captions. While it may seem a creepy and unnecessary piece of research, the Norman project couldn’t come at a better time.

When asked what motivated them to pursue a psychopathic AI, Norman's creators explained that the project is about bringing awareness to the way AI develops bias through the content it’s fed. “The data you use to teach a machine learning algorithm can significantly influence its behavior,” the researchers said.

“So when we talk about AI algorithms being biased or unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it.” As AI technology rapidly develops, questions of how it will be monitored and regulated persist.

Today Google released its ethics principles relating to its work with AI. The company says it won’t work with any military organizations to develop weapons.

Norman presents subtle questions to both AI engineers and the wider public about how we are monitoring the content and therefore bias of new AI.

AI used in self-driving cars, for instance, needs to be taught how to distinguish pedestrians from inanimate objects. Care needs to be taken that the data provided to the AI doesn’t reference any one particular idea of what a human looks like, so that no bias is inadvertently taught.

The same team from MIT have forayed into the dark side of AI before. Their first project was the AI-powered 'Nightmare Machine'. The project tackled the challenge of getting AI to not only detect but induce extreme emotions such as fear in humans.

They then followed this gruesome project with the development of Shelley, the world's first collaborative AI Horror Writer. Shelley was ‘raised’ reading scary bedtime stories from the subreddit NoSleep.

She was then able to write over 200 horror stories in collaboration with human writers. In 2017, the team turned away from the dark side briefly to develop Deep Empathy, a project that explored whether AI can increase empathy for victims of far-away disasters by creating images that simulate disasters closer to home.
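
To make the article's "biased data in, biased output out" point a bit more concrete, here is a toy sketch (plain Python with made-up captions; nothing to do with the MIT team's actual code). Two identical nearest-neighbour "captioners" are trained on different caption sets, and the same inkblot description gets a very different answer from each.

Code

# Toy illustration of "biased data in, biased output out".
# Two identical nearest-neighbour "captioners" are trained on different
# caption sets (both invented here); given the same abstract inkblot
# description, each answers with whatever its training data contained.

from collections import Counter

def bag_of_words(text):
    """Very crude text representation: word counts."""
    return Counter(text.lower().split())

def similarity(a, b):
    """Word overlap between two bags of words."""
    return sum((a & b).values())

def caption(inkblot_description, training_captions):
    """Return the training caption most similar to the description."""
    query = bag_of_words(inkblot_description)
    return max(training_captions, key=lambda c: similarity(query, bag_of_words(c)))

# Invented training data -- stand-ins for a 'normal' caption corpus
# versus a disturbing one scraped from the wrong corner of Reddit.
standard_corpus = [
    "close up of a wedding cake on a table",
    "a vase of flowers by a window",
    "two birds sitting on a branch",
]
norman_corpus = [
    "man is murdered by machine gun in broad daylight",
    "a man is struck by a falling object",
    "person falls from a great height",
]

inkblot = "dark shape of a man standing in daylight near a table"
print("standard AI:", caption(inkblot, standard_corpus))
print("'Norman':   ", caption(inkblot, norman_corpus))

Run it and the standard captioner answers with the wedding cake line while the Reddit-fed one answers with the machine-gun line, purely because of what each was trained on. Same code, different data, very different "personality".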


Title: Re: Yeah, nothing can possibly go wrong here, right? Right?
Post by: Svengoolie 3 on October 18, 2018, 03:00:16 AM
Here's a more positive story.

https://www.bing.com/search?q=MIT+AI+breast+cancer&filters=tnTID%3a%2201461D6D-2E24-47c8-A718-589CD6AB4CA1%22+tnVersion%3a%222697171%22+segment%3a%22popularnow.carousel%22+tnCol%3a%2223%22+tnOrder%3a%22eba77f3e-03d9-492b-b07c-ffef3d8c9ce9%22&FORM=BSPN01&crslsl=2916&efirst=19


Title: Re: Yeah, nothing can possibly go wrong here, right? Right?
Post by: Alex on October 18, 2018, 03:02:53 AM
Now that is a worthy use of their time and resources.