Japanese researchers create mind-reading A.I. that can transcribe a person’s thoughts

By now, most people know that artificial intelligence (AI) technology is developing at a remarkable pace. It has been the subject of countless essays and news articles, many of them warning of an eventual robot takeover and what that could mean for humanity.

But in order for advanced AI to truly "take over," it would first have to be able to think like humans do, and a group of researchers has taken an early step toward making that a reality.

Japanese researchers recently published a study titled "Describing Semantic Representations of Brain Activity Evoked by Visual Stimuli," in which AI was used to predict, with considerable accuracy, what people were thinking as they looked at certain pictures.

The idea was simply to find out, through the "eyes" of the AI system, what the human subjects were seeing in front of them as they looked at the pictures. As it turned out, the AI was surprisingly effective at this task.

The researchers noted that such technology is not entirely new, but that their study refined it considerably. Quantitative modeling of human brain activity based on language representations, they said, is already an actively studied subject in neuroscience. The issue is that earlier studies only looked at word-level representations, so very little is known about whether structured sentences can be recovered from a person's brain activity.

That is exactly what the researchers aimed for in their study: an AI able to deliver "complete thoughts" as full sentences after analyzing a person's brain scans. As a report on the study states, the AI was able to generate captions from fMRI brain scans that were recorded while people were looking at pictures.

Examples of the captions generated by the AI include "a dog is sitting on the floor in front of an open door" and "a group of people standing on the beach." The researchers deemed both statements accurate, which shows how effective the system already is, even at this early stage.

According to Ichiro Kobayashi, one of the researchers from Ochanomizu University in Japan, the goal of their study is to better understand exactly how the brain represents information about the real world.

“Toward such a goal, we demonstrated that our algorithm can model and read out perceptual contents in the form of sentences from human brain activity,” he explained. “To do this, we modified an existing network model that could generate sentences from images using a deep neural network, a model of visual system, followed by an RNN (recurrent neural network), a model that can generate sentences.”
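In other words, the pipeline Kobayashi describes is essentially a standard image-captioning setup: a deep neural network turns an image into a feature vector, and a recurrent network turns that vector into a sentence. Below is a minimal sketch of that encoder-decoder idea in Python with PyTorch; the layer sizes, vocabulary size, and class names are illustrative assumptions, not the researchers' actual model.

```python
# Minimal encoder-decoder captioning sketch (illustrative only; sizes and
# names are assumptions, not the model used in the study).
import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    def __init__(self, vocab_size=1000, feat_dim=256, hidden_dim=256):
        super().__init__()
        # "DNN" visual encoder: a small CNN that maps an image to a feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # RNN decoder: generates a caption one word at a time
        self.embed = nn.Embedding(vocab_size, feat_dim)
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).unsqueeze(1)  # (B, 1, feat_dim)
        words = self.embed(captions)               # (B, T, feat_dim)
        # The image feature vector starts the sequence, word embeddings follow
        inputs = torch.cat([feats, words], dim=1)
        hidden, _ = self.rnn(inputs)
        return self.out(hidden)                    # word logits at each step

# Quick shape check with random data
model = CaptionModel()
logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 7)))
print(logits.shape)  # torch.Size([2, 8, 1000])
```

The key point is that the sentence generator never sees the image directly, only the feature vector produced by the encoder, which is what makes it possible to swap in features predicted from brain activity instead.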

In short, they already had a captioning framework in place; what they added was a way to compare analyzed brain-scan data against the network's expected activation patterns, so that the AI could generate captions from brain activity alone.

“Specifically, using our dataset of movies and movie-evoked brain activity, we trained a new model that could infer activation patterns of DNN from brain activity,” Kobayashi added.
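That extra step, mapping measured brain activity to the DNN's internal features, is commonly treated as a regression problem in this line of research: learn a linear map from fMRI voxel responses to the network's activation vectors, then feed the predicted activations to the sentence-generating RNN. Here is a hypothetical sketch using ridge regression on made-up data; it is not the study's code, just an illustration of the idea.

```python
# Hypothetical sketch: ridge regression from fMRI voxels to DNN features.
# Shapes and data are invented; the study's real pipeline is more involved.
import numpy as np
from sklearn.linear_model import Ridge

n_samples, n_voxels, feat_dim = 200, 5000, 256
brain_activity = np.random.randn(n_samples, n_voxels)  # fMRI responses per stimulus
dnn_features = np.random.randn(n_samples, feat_dim)    # DNN activations for the same stimuli

# Fit one regularized linear map from voxel space to DNN feature space
decoder = Ridge(alpha=100.0)
decoder.fit(brain_activity, dnn_features)

# At test time: predict DNN activations from a new brain scan, then hand
# them to the caption-generating RNN in place of real image features.
predicted_features = decoder.predict(np.random.randn(1, n_voxels))
print(predicted_features.shape)  # (1, 256)
```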

It may already seem impressive, but the researchers stressed that the technology is nowhere near ready for prime time; they have not yet found a practical way to apply it. “So far, there are not any real-world applications for this,” Kobayashi said. “However, in the future, this technology might be a quantitative basis of a brain-machine interface.”

yogaesoteric
June 15, 2018
