Action pictures of the mind
(appeared in March 2022)

Projecting thought on the wide screen? asks S. Ananthanarayanan.

When all of us see the same object, we do agree on what it is that we see. But is it the same picture, in the mind’s eye, that each one of us sees? This is clearly a question that we cannot answer, for we cannot make out the forms that things take inside the minds of others.

Yet, Ryohei Fukuma, Takufumi Yanagisawa, Shinji Nishimoto, Hidenori Sugano, Kentaro Tamura, Shota Yamamoto, Yasushi Iimura, Yuya Fujita, Satoru Oshino, Naoki Tani, Naoko Koide–Majima, Yukiyasu Kamitani, and Haruhiko Kishima, from Osaka, Juntendo, Kyoto and Nara Medical Universities, ATR Computational Neuroscience Laboratories, Seika-cho, Japan, and the National Institute of Information and Communications Technology, Suita, Japan, peer inside the brain in what amounts to mind reading. Their paper in the journal Communications Biology describes patterns of electrical activity in the brain that correspond to the way we see, either when we imagine things, which are thoughts, or when the things are before us.

Images that form on the retina are perceived in the form of pixels, each picked up by a separate nerve cell. And the information from all the cells is conveyed through the optic nerve to the brain, where it is processed and stored. Processing involves deriving meaning from the image perceived, and we have some ideas of how the brain learns to understand images, through a process of repetition and feedback. The process itself, however, is not known, nor is the mechanism of storage and memory.

What we do know, and can measure, however, is that there is electrical activity in the brain whenever there is a stimulus, like an image before the eyes, or a thought. Although the link between the activity and things like images is too complex for ordinary methods to unravel, it turns out that patterns can be discerned by the methods of artificial intelligence, which are, in fact, simulations, in powerful computer networks, of how we believe the animal brain works.

The concept is that when nerve cells receive a stimulus, they first randomly fire signals to other nerve cells, which do the same to further layers of nerve cells. The cells then receive feedback, of whether the signal they sent out led to a desirable result. And every time the feedback is good, the probability of that response increases. And over millions of trials, which may be the case when an infant interacts with surroundings during a year of life, for instance, certain responses, which we could term as ‘intelligent,’ become routine. The infant thus learns to recognise objects to reach out for, sounds that mean food is at hand, then words, then sentences, and so on.
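
To picture this feedback-driven strengthening of a response, here is a toy sketch in Python. It is purely illustrative, not drawn from the paper, and assumes a single response whose likelihood is nudged up each time it earns good feedback:

    import random

    # Toy illustration (not the study's method): one possible response to a
    # stimulus, whose probability of being chosen rises with good feedback.
    prob_response = 0.1      # an initially rare, essentially random, response
    learning_rate = 0.05

    for trial in range(5000):
        responded = random.random() < prob_response
        if responded:
            reward = 1.0     # assumed: this response always earns good feedback
            # Good feedback makes the same response more likely next time.
            prob_response += learning_rate * (reward - prob_response)

    print(f"probability of the rewarded response after training: {prob_response:.2f}")

Over the trials the once-rare response becomes nearly certain, a crude stand-in for a behaviour becoming routine.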

Machine learning, or artificial intelligence, consists of software objects that behave like brain cells, with weights that raise or lower the probability of a response according to the feedback. The snippets of software are called neural cells, as they stand in for neurons, or brain cells. And they are arranged in groups which receive the different features of the input, to process and send signals to another group, and so on. And the feedback passes in the reverse direction, to tweak the processes of each cell, till the ensemble begins to find the correct responses to inputs more and more often.
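
The arrangement described above can be sketched, in miniature, with a few lines of Python. The example below is only an illustration of the principle, not the network used in the study: a tiny network of software 'cells' learns the XOR pattern by passing signals forward through two layers and passing the error backward to tweak every weight:

    import numpy as np

    rng = np.random.default_rng(0)

    # Inputs and desired outputs of the XOR problem, a classic toy task.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def with_bias(a):
        # Append a constant column so each cell also carries a bias weight.
        return np.hstack([a, np.ones((a.shape[0], 1))])

    W1 = rng.normal(size=(3, 4))   # input (+bias)  -> hidden layer of 4 cells
    W2 = rng.normal(size=(5, 1))   # hidden (+bias) -> single output cell

    for step in range(20000):
        # Forward pass: signals travel from one group of cells to the next.
        hidden = sigmoid(with_bias(X) @ W1)
        out = sigmoid(with_bias(hidden) @ W2)

        # Backward pass: the error is fed back to nudge every weight a little.
        err_out = (out - y) * out * (1 - out)
        err_hidden = (err_out @ W2[:-1].T) * hidden * (1 - hidden)
        W2 -= 0.5 * with_bias(hidden).T @ err_out
        W1 -= 0.5 * with_bias(X).T @ err_hidden

    print(np.round(out, 2))   # typically approaches [[0], [1], [1], [0]]

After enough rounds of feedback the ensemble settles on the correct responses, which is all that 'training' a network means.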

Methods of AI were hence applied to the electrical activity in brain cells. As picking up the activity calls for placing electrodes, or metal probes, within the brain, the team carried out the experiments with patients with epilepsy, or seizures, where such probes are already in place as part of investigation and treatment. With the help of the probes, the team could record electrical activity in the region of the cerebral cortex, as electrocorticograms, or ECoGs. The cerebral cortex, incidentally, is the outer layering of the brain, which receives most sensory information and connects to structures deeper within the brain.

And then, the paper says, by exposing the AI system to a large number of ECoGs arising from different objects that the participants saw, the system can be ‘trained’ to make out the perceived images that result in particular ECoGs – or neural representations of the images perceived. And further, the paper says, the ECoGs arise not only from perception of images, but also from just concentration, or imagining the image. A case of ECoGs arising from ‘bottom-up’ perception of images and from ‘top-down’ cerebral activity!
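
As a rough sketch of what such 'training' looks like, the Python below uses entirely made-up data: synthetic ECoG-like feature vectors for three categories of images, here labelled faces, words and landscapes, and a simple classifier that learns to tell from the recorded pattern which category was being viewed. The study's actual pipeline, which decodes image representations with deep networks, is considerably more elaborate:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)

    # Assumed, made-up sizes: 300 trials, 64 ECoG features, 3 image categories.
    n_trials, n_features, n_classes = 300, 64, 3
    class_patterns = rng.normal(size=(n_classes, n_features))

    # Each trial: the pattern typical of its category, plus recording noise.
    labels = rng.integers(0, n_classes, size=n_trials)   # 0=face, 1=word, 2=landscape
    ecog = class_patterns[labels] + rng.normal(scale=2.0, size=(n_trials, n_features))

    X_train, X_test, y_train, y_test = train_test_split(
        ecog, labels, test_size=0.3, random_state=0)

    # 'Training': fit a decoder that maps recorded activity to the image category.
    decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("decoding accuracy on held-out trials:", decoder.score(X_test, y_test))

If the decoder scores well above chance on trials it has never seen, the recorded activity carries a readable trace of what was being viewed, which is the sense in which the ECoG is a neural representation of the image.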

How attentive one is, or where one’s attention lies, while one sees and perceives an image is known to modify the patterns of electrical activity, the paper says. In this context, the team looked into what happened when what is imagined is in conflict with what is perceived. For this trial, the participants were asked to create mental pictures, of faces, landscapes or words, that were different from what was being shown to them. Not only were the ECoG readouts when pictures were imagined seen to be distinct from the readouts when the images were perceived, it was found that the difference became wider if the participants received feedback while the trials were in progress.

What this amounts to is that the ECoG can lead to a representation of the thoughts of a person, a means of communication without the intermediate stages of speech or action.

Artificial intelligence has been used earlier to make sense of the complex nerve signals that drive the larynx and the tongue to generate speech. Persons, like patients of ALS, who lose the capacity to speak, manage for some time with a keyboard, and then with a means of spelling out words by movements of the eyes. But even these methods, known as brain-computer interfaces, or BCIs, are not possible after a stage. This is where decoding nerve signals could become a means of directly synthesising speech. While this possibility is still to be actualised, the current work, which deals with brain processes, could enable communication even in patients who are more seriously compromised. “Because visual cortical activity persists for a long time even in patients with ALS, rBCI (representational BCI) using visual cortical activity might be used as a stable communication device for patients with severe ALS,” the paper says.

ALS stands for Amyotrophic Lateral Sclerosis. ‘A’ is a Greek prefix for ‘no’, ‘myo’ refers to ‘muscle’, and ‘trophic’ means ‘nourishment’. ‘Lateral’ refers to the parts of the spinal cord which control muscles and movement, and ‘sclerosis’ means hardening or damage.
ALS thus leads to a breakdown of the communication from the brain to the muscles. Patients progressively lose the ability to move, eat, speak and even breathe.
Inability to communicate aggravates patient discomfort and increases the challenges before caregivers. Technology that could display a patient’s thoughts as images would hence provide substantial relief.

------------------------------------------------------------------------------------------
Do respond to: response@simplescience.in
-------------------------------------------