
Originally Posted by Quark
Absolute far cry.
These 400 pictures were most likely the letters of the alphabet shown in different styles, colours, and sizes, used to arrive at a mean neural response for each letter, albeit with noise.
They then showed volunteers the six letters in the word 'neuron' and succeeded in reconstructing the letters on a computer screen by measuring their brain activity.
It is obvious here that each letter of the word 'neuron' was shown individually. Then, it's likely that the neural response to each observed letter was compared to the mean neural response calculated from the previous pictures.
If the whole word had been shown at once, however, the neural response pattern would have differed greatly, and they would not have been able to reconstruct it. The problem lies in the order of reconstruction and, quite simply, in how the individual letters are compiled into a word.
Ultimately, their technology did not reconstruct the word 'neuron'; it simply matched the neural activity elicited by individual simple stimuli, i.e. letters, to previously recorded neural activity that was most likely also elicited by letters. The experimenters knew what they were looking for, and thus only had to match neural activity.
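If that reading is right, the whole pipeline amounts to nearest-template classification rather than genuine reconstruction. Here is a minimal sketch of that interpretation in Python, using simulated response vectors; every name, number, and function here is my own illustration and not taken from the paper:

import numpy as np

rng = np.random.default_rng(0)
LETTERS = "abcdefghijklmnopqrstuvwxyz"
N_VOXELS = 100  # size of a hypothetical neural response vector

# Hypothetical "true" response pattern for each letter (pure simulation).
true_patterns = {c: rng.normal(size=N_VOXELS) for c in LETTERS}

def simulate_response(letter, noise=0.5):
    """One noisy presentation of a letter (stands in for a recorded response)."""
    return true_patterns[letter] + rng.normal(scale=noise, size=N_VOXELS)

def build_templates(presentations_per_letter=15):
    """Average many noisy presentations per letter into a mean template,
    mirroring the idea of deriving a mean response from the training pictures."""
    return {
        c: np.mean([simulate_response(c) for _ in range(presentations_per_letter)], axis=0)
        for c in LETTERS
    }

def match_letter(response, templates):
    """Return the letter whose mean template correlates best with the response."""
    return max(templates, key=lambda c: np.corrcoef(response, templates[c])[0, 1])

templates = build_templates()
# Present the letters of 'neuron' one at a time and "reconstruct" by matching.
decoded = "".join(match_letter(simulate_response(c), templates) for c in "neuron")
print(decoded)  # very likely prints 'neuron' -- matching, not reconstruction

The point of the sketch is that nothing in it ever draws a letter from scratch; it only picks the best match from templates the experimenter already has.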
Sensory experiences do not produce invariant neural displays, and therefore, such technology is useless.
(ps. I haven't read the original article, so this is speculation).