Cambridge University Science Magazine
BlueSci: What’s your background?

Jessie Hall: I did my undergrad in physics and philosophy, focusing on the electromagnetism side of physics. During my Master’s, I found the philosophy of talking to computers very strange and interesting. I wrote my thesis about the computational theory of mind – the idea that brains are computers. For the longest time the prevailing view was that the mind was separate from the body, as in Cartesian dualism. But in the age of physicalism and materialism, the idea is that there is only matter and the forces that act on it, and that the mind is only matter. The questions that arise are: how do we then make sense of the mind and free will? The computational theory of mind is supposed to be realistic and to explain the mind with the tools of science. My own impression of this literature was that it was getting computers wrong. That is what led me to my PhD project exploring ‘what is a computer?’.

B: Is there a bridge between philosophers and scientists on computational problems?

JH: I think the lack of one is part of the problem. For instance, in computer science there are a lot of words that have fundamentally different meanings for philosophers. And that’s sort of curious, because philosophical literature on logic and language, and Heideggerian philosophers, influenced computer science, especially new models for machine learning.

B: You just talked about the end of Cartesian dualism giving rise to materialism. David Chalmers, the author of the book Reality+, is known for formulating the hard problem of consciousness. Could you tell us the main idea behind it? Does it challenge the view that all brain processes are purely physical?

JH: The hard problem has to do with why consciousness arises at all. In a materialist worldview, the idea is that certain arrangements of matter give rise to consciousness, and so the hard problem asks why one object is conscious but not another. One way to understand the ‘Computational Theory of Mind’ (CTM) is as a hypothesis that if matter computes a particular computation, then it will be a mind. Note that panpsychists instead bite the bullet, arguing that no explanation can be given because everything is conscious to some degree or other. In some ways, the CTM, seen through this hard-problem ‘lens’, can be related to the thesis of Reality+, because there Chalmers argues that the virtual is as real as anything we usually call real. He argues this through an examination of what is usually meant by ‘real’ and aims to show that the experiences of ‘virtual reality’ meet those definitions. One of the ways we might mean that something is real is that it is physical. Chalmers does some work (in Reality+) to show that the virtual is still physical insofar as physical substrates are involved in the implementation of the software responsible for virtual experiences. Perhaps a parallel can be drawn between the way ‘virtual’ realities are purported to be real and the way consciousness (or, more loosely, minds) is purported to be the product of computation.

B: In his book Reality+, Chalmers reflects about the possibility of our reality being a virtual world and lists criteria to define what is real. What are these criteria?

JH: Chalmers gives five ‘versions’ of what is meant by real: reality as existence, reality as causal power, reality as mind-independence, reality as non-illusoriness, and reality as genuineness. He uses these not to say whether the ‘normal’, non-virtual reality we live in might be a simulation (though he considers that earlier in the book), but to show how known-to-be-virtual experiences could meet the criteria of realness. Summarising this argument would amount to a book synopsis, so I will just comment on the so-called ‘simulation hypothesis’ (the idea that our current reality is itself a virtual world). If a virtual world can meet these criteria of realness, then there are two additional possibilities with interesting consequences:

A) What we take to be the ‘real world’ could also be virtual (since there is nothing in what it is to be real that excludes simulated reality).

B) These criteria do not capture everything it is to be real.

A and B could both be true, or it could be the case that B is true and the ‘missing criteria’ make A untrue: there is more to being real than the criteria articulated by Chalmers, and those revisions render the virtual/simulated non-real.

B: In Chalmers’ research, is there any link between the hard problem of consciousness and the fact that he cannot rule out the hypothesis that our world is virtual?

JH: Perhaps so. Some versions of the CTM hold that the implementation of computation can explain the cognitive capacities of brains, including consciousness (if ‘consciousness’ does in fact pick out a cognitive capacity – philosophers don’t agree on this). They take the implementation of computations to be a hypothesis about what matter does (philosophers are also in great disagreement on this front – and it is this debate which is my research interest!). Under this picture, the CTM is amenable to a physicalist worldview and can be integrated into ‘purely physical’ explanation. So too for virtual worlds: if virtual worlds are the products of computations, and if it is true that computations are functions of matter, then the virtual is as real as the non-virtual, as real as consciousness. The conclusion about virtual worlds, as with consciousness, relies on a lot of ‘ifs’ that, in my view, all turn on making clearer what we mean by ‘computation’ and ‘implementing computations’.

B: For Chalmers, what is the relationship between consciousness and reality?

JH: The word ‘consciousness’ is sometimes used to mean ‘a continuous awareness of sensory/perceptive stimuli’. In this sense, consciousness is related to reality in that to be conscious is to be conscious of something: events, environments and so on. If you’ve ever seen The Matrix, we could describe the characters as at first being conscious of one version of reality, and then conscious of another – one the production of some kind of grand hallucination, the other the ‘true reality’. The former situation is what is sometimes called the ‘brain in a vat’ hypothesis, famously articulated by Hilary Putnam. It is the idea that maybe what we perceive to be reality is completely at odds with real reality: maybe we are just a brain in a vat, with electrical signals and neurochemical potentials surging throughout, creating the fantastic illusion of the lives we believe we are living. The brain-in-a-vat hypothesis asks us to take very seriously the sensory and brain-mediated nature of the experiences that inform our notions of what is real – how could we ever disprove the hypothesis?

B: What can AI and virtual reality teach us beyond the reality we know? In the news we mostly hear about AI learning from humans, and rarely about the converse.

JH: The prevailing angle we take towards AI now is one of ‘alarm’; we think about the risks, while the project I am working on with Professor Karina Vold tries to look at AI as a tool. For instance, we can think of AI’s potential as a scaffold for learning, so that human beings could learn from and with machines. This is a kind of revival of an idea articulated by J. C. R. Licklider in 1960: ‘man-computer symbiosis’. There are several examples of this, one being AlphaGo and the famous, unexpected Move 37. The intuition here is that these machine learning algorithms, especially deep reinforcement learning systems, do things that produce unexpected but very informative and robust results. In the AlphaGo case, human Go players are now actually training on AlphaGo’s moves and style of play! It is therefore of interest to see the machine as a sort of teacher.

Pauline Kerekes is a post-doc in Neuroscience at the Physiology, Development and Neuroscience department at Cambridge who helps coordinate the art within BlueSci. Artwork also by Pauline Kerekes.