Computational neuroscience is a field where many successful researchers have a strong physics background. So far, the physics approach has provided a strong foundation from which to understand the brain. Recently, however, the influence of a computer science perspective has become more prominent. How can we understand the different perspectives that these disciplines bring to the field? Can we observe the influence of physics methodologies on the modern study of the brain? And if so, what are the consequences of understanding the brain through the lens of physics versus the lens of computer science?
One consequence may be the way that computational neuroscience models time in the brain. The study of physics generally conceptualizes time as continuous. Time is something to be plotted on the x-axis of a graph where some other quantity of interest is plotted on the y-axis.
In computer science, on the other hand, real time is rarely conceptualized explicitly. Computer scientists do not plot quantities against time unless they are profiling software for performance purposes, and even then, time is generally thought of as a number of operations. Thinking in terms of operations leads computer scientists to treat time as a sequence of discrete events.
I posit that the distinction between continuous and discrete time creates a foundational difference between the physics approach and the computer science approach to understanding how the brain works. Because of the discrete-time conceptualization, computer scientists are more comfortable explaining the function of brain systems in terms of chains of events with definite beginnings and definite ends. Physicists, on the other hand, are more comfortable explaining the brain in terms of dynamics, which do not require definite beginnings or definite ends. Computer scientists care more about what the consequence of an event is in the brain, whereas physicists are more concerned with a concise account of the dynamics of what is occurring.
This divide is visible in the distinct neuronal modeling approaches that derive from these two disciplines. The canonical neuronal model contributed by the physics philosophy is the multi-compartmental, conductance-based (Hodgkin-Huxley-like) model. This model is concerned with matching waveforms of current and voltage traces to those measured in real neurons. It helps us understand how changes in the properties of excitable membranes over time produce changes in neuronal behavior over time. The computational complexity of these models is thought to limit simulations to, at most, a few hundred neurons modeled at this level of detail.
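To make the continuous-time flavor of this approach concrete, here is a minimal sketch of a single-compartment Hodgkin-Huxley simulation using forward-Euler integration. The parameters are the standard squid-axon values from the original Hodgkin-Huxley formulation; the function name `hh_simulate` and the choice of Euler integration are illustrative, not taken from any particular library.

```python
import math

def _exp_div(x, y):
    """Compute x / (1 - exp(-x/y)), using the limit y when x is near zero."""
    return y if abs(x) < 1e-7 else x / (1.0 - math.exp(-x / y))

def hh_simulate(i_ext, t_max=50.0, dt=0.01):
    """Simulate a single Hodgkin-Huxley compartment with constant injected
    current i_ext (uA/cm^2); returns the membrane voltage trace (mV)."""
    # Standard squid-axon parameters.
    c_m = 1.0                                  # membrane capacitance, uF/cm^2
    g_na, g_k, g_l = 120.0, 36.0, 0.3          # max conductances, mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.4        # reversal potentials, mV

    def rates(v):
        # Voltage-dependent opening/closing rates of the m, h, n gates.
        a_m = 0.1 * _exp_div(v + 40.0, 10.0)
        b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * _exp_div(v + 55.0, 10.0)
        b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
        return a_m, b_m, a_h, b_h, a_n, b_n

    v = -65.0                                  # start at rest
    a_m, b_m, a_h, b_h, a_n, b_n = rates(v)
    m, h, n = a_m / (a_m + b_m), a_h / (a_h + b_h), a_n / (a_n + b_n)

    trace = [v]
    for _ in range(int(t_max / dt)):
        # Ionic currents given the current gate states.
        i_na = g_na * m**3 * h * (v - e_na)
        i_k = g_k * n**4 * (v - e_k)
        i_l = g_l * (v - e_l)
        # Forward-Euler update of voltage and gates (continuous dynamics).
        v += dt * (i_ext - i_na - i_k - i_l) / c_m
        a_m, b_m, a_h, b_h, a_n, b_n = rates(v)
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        trace.append(v)
    return trace
```

Even this single compartment requires integrating four coupled differential equations at a fine time step, which illustrates why networks of detailed multi-compartment versions of such models quickly become expensive.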
Alternatively, the canonical neuronal model contributed by the computer science philosophy is the integrate-and-fire neuron. This model does away with modeling conductances explicitly as functions of time and simply performs a weighted sum of its inputs at each time step. Here a time step is a discrete event whose duration is a parameter of the model. The simplicity of this model allows large networks to be constructed, which are useful for modeling systems of many thousands of neurons.
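By contrast, the integrate-and-fire neuron described above can be sketched in a few lines. The class name `LIFNeuron` and the specific leak and threshold values are illustrative choices; the essential point is that each update is a discrete event: a weighted sum, a threshold test, and a reset.

```python
class LIFNeuron:
    """A discrete-time leaky integrate-and-fire neuron (illustrative sketch)."""

    def __init__(self, weights, threshold=1.0, leak=0.9):
        self.weights = weights      # one weight per input line
        self.threshold = threshold  # firing threshold
        self.leak = leak            # fraction of potential retained per step
        self.v = 0.0                # membrane potential

    def step(self, inputs):
        """Advance one time step; returns True if the neuron fires."""
        # Decay the potential, then add the weighted sum of inputs.
        self.v = self.leak * self.v + sum(
            w * x for w, x in zip(self.weights, inputs)
        )
        if self.v >= self.threshold:
            self.v = 0.0            # fire and reset
            return True
        return False
```

Because each step is just a multiply-accumulate and a comparison, thousands of such units can be updated in parallel, which is what makes this abstraction attractive for large-network modeling.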
The physics approach provides insight into the activity of single cells and small networks, whereas the computer science approach provides insight into the activity of large networks. Neither approach alone provides all the tools necessary to truly understand the brain. As these two perspectives become better understood, the field of computational neuroscience can benefit from finding creative ways to merge these two conceptions of time into models that capture both small-scale and large-scale neuronal activity.
In conclusion, I have argued that what begins as a division between discrete and continuous time amounts to a divide between a bottom-up and a top-down approach. Furthermore, I have argued that understanding the relative contributions of different sciences to computational neuroscience is important for understanding the paradigms that pervade the field.