How to Mathematically Measure Consciousness

How can we measure the phenomenon of human consciousness?
ERIK P. HOEL
02.28.16 2:15 PM ET
A month ago I was being introduced for the public presentation of my neuroscience PhD defense. My mentor, Giulio Tononi, noted that it is remarkable that there are now places, including his own lab, where a young scientist can pursue a career studying consciousness.
After all, for nearly a century, scientific work on consciousness was considered too strange, too weird, too philosophical to be “real science.” It used to be that to study consciousness as a scientist you needed the level of immunity afforded not only by tenure but also by a Nobel Prize.
Indeed, the contemporary field was inaugurated in 1990 by Francis Crick (Nobel-Prize-winning co-discoverer of the structure of DNA) and Christof Koch. Together, they published the paper “Toward a Neurobiological Theory of Consciousness,” which argued that the time was ripe to scientifically investigate consciousness. In a genius move of diplomacy, they redirected the goal of the investigation to be merely a description of the neural correlates of consciousness (or NCC). Leave aside philosophical positions, they advocated, and concentrate on finding what neural events correlate with consciousness.
After 25 years, the field is finally verging on mainstream. When I attended the 2010 annual conference of the Association for the Scientific Study of Consciousness (the ASSC), it had barely 300 attendees. Last July it was held in Paris and had doubled in size to over 600 people, so that at the end of the conference there were enough drunken scientists to take over a significant portion of a bank of the Seine and confuse the tourists with talk of consciousness as the sun set.
The fruits of the field are plentiful. For instance, there is now an agreed-upon definition of consciousness: it’s the experiential world of sensations and thoughts you wake up to in the morning, and it’s what disappears when you fall into a deep dreamless sleep. That is, “consciousness” as used in the literature does not mean meta-cognition or self-consciousness or language use or rationality or any of the things it’s been conflated with in the past.
Like many fields, it begins with this working definition and then becomes more precise as it goes along. This communal reference is just one instance of the field’s greatest achievement, which is the development of a useful shared argot between scientists and philosophers. Scientists who work on consciousness now know what “qualia” means (the quality of experience, such as the redness of red) and can often recite a host of philosophical thought experiments, and philosophers of mind now know the difference between fMRI and EEG and can discuss details of experimental design.
However, there has been less progress in finding a single neural correlate of consciousness. This is not just because of the coarseness of our still-primitive neuroimaging techniques, but also because of the challenge of interpreting neural activity.
One problem is that consciousness may not be localized to any particular area, and many of the contemporary methodological tools of neuroscience are implicitly designed for studying localization. Additionally, no matter how many theoretical sandbags you stack up to try to protect your supposedly philosophically neutral endeavor, theoretical and foundational questions about consciousness always seem to seep back in through the cracks, reminding us that there is an ocean out there and we are still on the shore.
For instance, the big issue at the latest ASSC conference was how to measure the neural correlates of consciousness separately from the neural correlates of reportability. Until now, researchers have generally relied on a participant’s immediate introspection to track whether the participant was aware of a change in his or her perception. If the Necker cube is facing toward you, click the right mouse button. If it flips to be facing away from you, click the left. The problem is that every time we probe consciousness we are forcing a report from someone, so how, then, can we tell the difference between report and consciousness?
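To make the paradigm concrete, here is a minimal sketch in Python of what such a report stream looks like once it’s time-stamped. The function names, probabilities, and data are hypothetical, simulated stand-ins, not any lab’s actual task code:

```python
# A schematic sketch of the report paradigm described above: each time the
# simulated percept flips, the "participant" emits a time-stamped button
# press. All names and numbers here are hypothetical.
import random

def run_report_trial(duration_s=60.0, flip_prob=0.1, step_s=0.5):
    """Simulate a participant reporting flips of an ambiguous Necker cube."""
    reports = []        # (time in seconds, button pressed)
    percept = "toward"  # which face of the cube currently appears in front
    t = 0.0
    while t < duration_s:
        if random.random() < flip_prob:  # a spontaneous perceptual flip
            percept = "away" if percept == "toward" else "toward"
            # convention from the text: right click = toward, left = away
            button = "right" if percept == "toward" else "left"
            reports.append((t, button))
        t += step_s
    return reports

if __name__ == "__main__":
    for t, button in run_report_trial():
        print(f"{t:6.1f} s  {button}-click")
```

The time stamps are the whole point: every report event gets aligned against the neural recording, which is exactly where the trouble starts, since the act of reporting and the experience being reported are now entangled in the same data.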
In quantum physics the very act of observation changes the phenomenon that is observed; in consciousness research, the very act of reporting the phenomenon may change its neural signature. Or, farther out along the same track of thought, consider that we are aware of much more than we can report at any one time. This fact was given its academic name by the philosopher Ned Block: “phenomenal consciousness overflows access consciousness.” So while we are constantly immersed in a lush world of sensations that contain information about locations, color, meaning, motion, and so on down a very long list, we can only briefly pick out and summarize small parts of it at any one time. Our cup runneth over.
It’s often these types of abstract knots that cause eliminativists (perhaps the most notable being the philosopher Dan Dennett) to throw up their hands and declare that consciousness simply must be an illusion. But on further inspection most eliminativist claims end up eating their own tails: after all, an illusion is when something appears different from the reality, but in the domain of consciousness the appearance is the reality. Rather than embrace such an extreme and counter-intuitive conclusion, one that would shut down the research entirely, researchers have developed a new scientific approach to consciousness.
Bees, octopuses, the artificial neural networks now driving cars around: these are all systems, some biological, some engineered, and we don’t actually know whether any of them are conscious. Is there anything it’s like to be a self-driving car? Imagine a “conscious-o-meter” that you could point at each of those things and that would tell you both the level and the content of its consciousness.
At the cutting edge of consciousness research people are asking: how would such a hypothetical conscious-o-meter make its decision? This is the search for a mathematical measure of consciousness (or MMC). The field is built on the hope that, like the rest of nature, the book of consciousness is written in the language of mathematics.
For example, consider that in the search for the NCC, the original hypothesis for the correlates was cortical neurons oscillating in the 40-70 Hertz range (what neuroscientists call the “gamma band”). Crick and Koch offered it in their 1990 paper purposefully as a simplistic but useful starting point. The analogous hypothesis in the search for an MMC is that if a system passes a certain level of complexity it becomes conscious. That’s almost certainly not true (just as the gamma-band hypothesis almost certainly isn’t), but it’s a definable starting point. The proposed measures grow in sophistication from there, drawing from computational complexity theory, information theory, and the latest in causal analysis.
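To give a flavor of what such measures look like, here is a toy sketch in Python of a “whole versus its parts” calculation: the mutual information between two binary units, which is zero when the units are independent and positive when they are integrated. The two-unit system and its probabilities are invented for illustration; this captures the spirit of integrated-information-style measures, not any actually proposed metric:

```python
# A toy "whole versus its parts" measure: how much information do two
# binary units carry jointly, beyond what they carry independently?
# The joint distribution below is invented purely for illustration.
import math

def entropy(probs):
    """Shannon entropy in bits of an iterable of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Joint distribution over two binary units (A, B); their correlation is
# what a whole-versus-parts measure picks up on.
joint = {
    (0, 0): 0.4, (0, 1): 0.1,
    (1, 0): 0.1, (1, 1): 0.4,
}

# Marginal distribution of each unit considered alone.
p_a = [sum(v for (a, _), v in joint.items() if a == x) for x in (0, 1)]
p_b = [sum(v for (_, b), v in joint.items() if b == x) for x in (0, 1)]

# Mutual information: entropy of the parts minus entropy of the whole.
# Independent units give zero; integration makes it positive.
mi = entropy(p_a) + entropy(p_b) - entropy(joint.values())
print(f"Information in the whole beyond its parts: {mi:.3f} bits")
```

Real proposals, such as the integrated information theory developed in Tononi’s lab, go much further, searching over every way of partitioning a system and using causal perturbations rather than observed correlations, but the basic move is the same: quantify what the whole does over and above its parts.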
What’s so new about this latest line of research is its focus on formalism, which is why physicists like Max Tegmark and computer scientists like Scott Aaronson are participating in the field. Formalism also helps clear up the hand-wavy notions of “information” or “processing” or “representation” that get thrown around when talking about consciousness and often fool people into thinking that they have some obvious answer to its mystery.