Modelling The Grey Matter
“The similarities between a human and a computer are more numerous than the differences.”
We often call our brain the control centre of our body, but controlling bodily functions is an incomplete picture of the human brain’s capabilities. All animals with a fully developed central nervous system possess structures capable of controlling and coordinating bodily functions. What sets the human brain apart from all others is its power of analysis, its awareness of its own existence, and its ability to remain resourceful even in the most unfavourable conditions.
In the past 500 years, science has advanced enough to bring our analysis full circle. We started with the brain as the only tool of our study and have arrived at the point of making it the subject of our investigation. As we hold our thinking engine under an analytical microscope, we observe complexity at levels never imagined before. But as time passes, we tackle each of these complexities one at a time and try to explain them compactly and understandably.
The most basic functional unit of our nervous system is the neuron. Every single neuron is biophysically complex and has computational power of its own. How do dendrites and axons form during development? How does the axon know where to conduct the electrical impulse? How do neurons migrate to their exact positions within the central and peripheral nervous systems? And how do synapses work? These questions still baffle us to no end. Attempts to replicate a neuron’s functioning have been made by eminent scientists such as Hodgkin and Huxley, who devised the two-current model: a fast-acting sodium current and a delayed-rectifier potassium current. These currents represent voltage-sensitive ion channels, the glycoproteins spanning the lipid bilayer that allow ions to pass through the axolemma under certain conditions. The model successfully reproduces the qualitative features of the action potential, but it leaves out crucial properties of a real neuron, such as its adaptability and its capacity for shunting. Investigations into the formation and functioning of synapses are still nascent.
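To make the two-current picture concrete, here is a minimal sketch in Python (using the standard textbook parameter values rather than anything from Hodgkin and Huxley’s original fits) of how the sodium and potassium currents, integrated step by step, produce action potentials:

# A forward-Euler integration of the standard Hodgkin-Huxley equations.
# Parameter values are the usual textbook ones (squid giant axon, shifted
# so the resting potential sits near -65 mV).
import numpy as np

C = 1.0                                   # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3         # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4       # reversal potentials, mV

# Voltage-dependent rate functions for the gating variables m, h, n.
a_m = lambda V: 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
b_m = lambda V: 4.0 * np.exp(-(V + 65.0) / 18.0)
a_h = lambda V: 0.07 * np.exp(-(V + 65.0) / 20.0)
b_h = lambda V: 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
a_n = lambda V: 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
b_n = lambda V: 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                        # time step and duration, ms
steps = int(T / dt)
V, m, h, n = -65.0, 0.05, 0.6, 0.32       # initial conditions near rest
trace = np.empty(steps)

for t in range(steps):
    I_ext = 10.0 if 5.0 <= t * dt <= 45.0 else 0.0   # injected current, uA/cm^2
    # Ionic currents: fast sodium, delayed-rectifier potassium, leak.
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    # Euler updates for the membrane potential and the gating variables.
    V += dt * (I_ext - I_Na - I_K - I_L) / C
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    trace[t] = V

print(f"peak membrane potential: {trace.max():.1f} mV")

Injecting a constant current above threshold makes this simulated membrane fire repetitively, which is exactly the qualitative behaviour the model was built to reproduce.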
Software packages such as GENESIS and NEURON allow rapid and systematic in-silico modelling of realistic neurons. The Blue Brain Project aims to construct a biophysically detailed simulation of a cortical column on the Blue Gene supercomputer. Although the structure of individual neurons is enticing to study, building needlessly complicated interconnected models is often infeasible when a simulation of a network of simpler neurons would serve instead.
As the connection between us and our silicon counterparts becomes more and more tangible, we explore other forms of input that let machines perceive the world and become more interactive. The credit for formulating a theoretical framework of sensory processing based on living organisms goes to Horace Barlow. Barlow understood sensory processing as a form of efficient coding, in which neurons encode information so as to minimise the number of electrical spikes required to transmit it.
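One way to read Barlow’s principle is as redundancy reduction: if two input channels carry overlapping information, a good sensory code recodes them so that each output channel says something new. The toy sketch below (the signal statistics and the whitening transform are illustrative choices, not Barlow’s own formulation) shows two strongly correlated channels being decorrelated:

# A toy illustration of redundancy reduction: two sensory channels carrying
# strongly correlated (hence redundant) signals are recoded by a whitening
# transform so that each output channel carries independent information.
import numpy as np

rng = np.random.default_rng(0)
common = rng.normal(size=10000)                            # shared signal source
x = np.stack([common + 0.3 * rng.normal(size=10000),
              common + 0.3 * rng.normal(size=10000)])      # two redundant channels

cov = np.cov(x)                                            # covariance of the raw code
eigval, eigvec = np.linalg.eigh(cov)
whiten = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T       # whitening matrix
y = whiten @ x                                             # recoded, decorrelated channels

print("correlation before recoding:", np.corrcoef(x)[0, 1].round(3))
print("correlation after recoding:", np.corrcoef(y)[0, 1].round(3))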
Biophysical modelling of specific subsystems and theoretical models of perception form the current basis of sensory processing theories. However, further research suggests that perception in the human brain rests on Bayesian inference and the integration of information from several sources, rather than depending on only one form of sensory input.
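A minimal illustration of the Bayesian view, with made-up numbers: two independent Gaussian cues about the same quantity are combined by weighting each with its reliability, and the combined estimate is both pulled towards the more reliable cue and less uncertain than either cue alone.

# A sketch of Bayesian cue integration: two noisy estimates of the same
# quantity (say, the position of an object both seen and heard) are combined
# by weighting each cue by its reliability (inverse variance).
def combine(mu_a, var_a, mu_b, var_b):
    """Posterior mean and variance for two independent Gaussian cues."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b            # reliabilities
    mu = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)   # reliability-weighted mean
    return mu, 1.0 / (w_a + w_b)                   # combined estimate is less uncertain

# Hypothetical numbers: vision says 10.0 (sharp), hearing says 14.0 (blurry).
mu, var = combine(10.0, 1.0, 14.0, 4.0)
print(mu, var)   # 10.8, 0.8 -- pulled mostly towards the more reliable cue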
The amount of data people access daily has grown so rapidly that almost 97% of the data in existence today was generated in the last ten years. The need for efficient memory systems is greater than ever. Earlier models of memory derive from the postulates of Hebbian learning. Models based on biological systems, such as the Hopfield net, incorporate the notion of associative, that is, content-addressable, memory. Other models describe the functioning of the prefrontal cortex in terms of context-related memory, and examine how its relationship with the basal ganglia contributes to working memory.
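A minimal Hopfield-style sketch of content-addressable memory (the network size, number of patterns and noise level below are arbitrary choices): patterns are stored with a Hebbian outer-product rule, and a corrupted cue settles back onto the stored memory.

# A minimal Hopfield network: memories stored with a Hebbian outer-product
# rule can be recalled from a corrupted cue (content-addressable memory).
import numpy as np

rng = np.random.default_rng(1)
N = 100
patterns = rng.choice([-1, 1], size=(3, N))      # three random +/-1 memories

# Hebbian learning: each stored pattern adds its outer product to the weights.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)                           # no self-connections

# Cue: the first memory with 20% of its units flipped.
state = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)
state[flip] *= -1

# Asynchronous updates: each unit takes the sign of its weighted input.
for _ in range(5):
    for i in rng.permutation(N):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("overlap with stored memory:", (state @ patterns[0]) / N)   # close to 1.0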
One of the essential features of our nervous system is its ability to adapt, to learn, and to unlearn. Synaptic plasticity, the ability of synapses to strengthen or weaken over time, plays a significant role in this. Unstable synapses are easy to train but are also prone to stochastic disruption, while stable synapses forget less easily but are harder to consolidate. A recent computational hypothesis involves cascades of plasticity that allow synapses to function on multiple time scales.
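The trade-off can be seen in a deliberately simplified toy model (not the full cascade model): binary synapses that encode each new memory with probability q leave a trace of an old memory that decays as q(1 - q)^t after t further memories, so plastic synapses (large q) learn strongly but forget fast, while rigid synapses (small q) learn weakly but forget slowly.

# Toy model of the plasticity-stability trade-off: a pool of binary synapses
# encodes a memory with probability q per synapse, and every later memory
# overwrites it in the same way.  The expected trace of the first memory after
# t further memories is q * (1 - q)**t.
for q in (0.5, 0.05):
    trace = [q * (1 - q) ** t for t in range(0, 101, 20)]
    print(f"q = {q}:", [round(s, 4) for s in trace])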
Now we come to a topic that overshadows other fields of computational neuroscience: the study of the network behaviour of our neurons. The computation performed by a single neuron may not be wholly understood, but it turns out to matter less than the collective behaviour of chains of neurons. Biological neural networks are recurrent and highly complicated, unlike most artificial neural networks, which are sparse and task-specific. The principles by which information is transmitted across these biological networks remain unknown. Models based on probability and statistics are used to describe the computation of a single neuron. Such models help us make decisions and build predictive systems that, rather than producing a direct binary result from a single computation, use multiple probabilistic analyses to converge on an outcome. The development of more efficient algorithms that lighten the burden on the hardware also moves us closer to the goal of replicating the human neural network.
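As a concrete example of such a probabilistic description, here is a sketch of a linear-nonlinear-Poisson neuron, one common model family (the filter, nonlinearity and rates below are invented for illustration): the stimulus is filtered, turned into a firing rate, and spikes are drawn at random from that rate.

# A probabilistic single-neuron model: linear-nonlinear-Poisson (LNP).
# The same stimulus never produces exactly the same spike train twice.
import numpy as np

rng = np.random.default_rng(2)
dt = 0.001                                   # time step in seconds
stimulus = rng.normal(size=(2000, 5))        # 2 s of a 5-dimensional stimulus
w = np.array([0.8, -0.4, 0.2, 0.0, 0.5])     # the neuron's linear filter (made up)

drive = stimulus @ w                         # linear stage
rate = 20.0 * np.log1p(np.exp(drive))        # softplus nonlinearity, spikes/s
spikes = rng.poisson(rate * dt)              # Poisson spike counts per bin

print("mean firing rate:", spikes.sum() / (len(spikes) * dt), "spikes/s")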
Although problems such as memory management, the physical modelling of neurons, and the replication of interactions between multiple neurons are already being tackled, computational modelling of higher cognitive functions has only recently begun. The frontal and parietal lobes act as integrators of information from multiple sensory modalities.
Our brain seems to discriminate and adapt particularly well in specific contexts; human beings, for instance, have an enormous capacity for memorising and recognising faces. If such features of our memory are to be understood, and abilities such as learning, adapting and forgetting are to be mapped, mathematical models will need to become far more precise, and efficient algorithms implementing the physical computation of neurons will have to be developed.
Fifty years ago, even the idea of trying to understand learning, or any other function of the brain, seemed so far off that it was almost mystical. But today, with the help of mathematical modelling and the growing computational power of machines, we stand at a juncture where new ground is broken in some field of computational neuroscience every day. Soon enough, we might be able to replicate and understand the processes of the brain, and look upon the dawn of the brain of tomorrow.
By Aditya Pandey
Batch of 2024