What is the compelling question or challenge?
Discovering models of conscious experience -- not neurological models of how the brain generates conscious experience, but rather more abstract theoretical models of consciousness.
What do we know now about this Big Idea and what are the key research questions we need to address?
Much of today’s scientific research on consciousness focuses on its neurological basis. This work has sometimes been criticized as not addressing what is often called the “hard problem of consciousness”; for example, it is not clear how any work along these lines could possibly explain why there is something it is vividly *like* to be me right now. Whether this criticism is well founded is controversial. But we may also wonder whether we can make progress in other ways. To see why, consider that our understanding of the neurological basis of intelligence is still very limited, but this has not stopped artificial intelligence (AI) researchers from making progress. It is easy to argue that, as a result of AI research, we now have a far better conceptual understanding of intelligence in general than we once did -- specifically, of what types of problems we might expect intelligence to solve, and what some of the pathways to solving them are. Might we hope for a similar development in the study of consciousness? Can we deepen our understanding of conscious experience even if we fail to make progress on understanding its neurological basis? (This is of course not to say that there could not be fruitful interaction with researchers who work on the neurological basis of consciousness; after all, similar interaction has been very helpful for artificial intelligence.)
There are reasons to be skeptical. One is that it is famously difficult to agree on what exactly we mean by “consciousness.” But definitions of “intelligence” are also controversial, and AI research has taught us that our previous conceptions of “intelligence” were often misguided. Indeed, many AI researchers hesitate to produce a definition of “intelligence,” preferring to make progress on concrete problems instead.
Perhaps the bigger concern is whether we will indeed be able to make concrete progress. Scientists in particular often have the impression that, perhaps excepting some progress made on the neurological side, the topic of consciousness is the territory of philosophers, who are making little real progress as they talk past each other and have no way to settle who is right and who is wrong. First, I think this is an unfair characterization of the philosophical work; it in fact continues to make progress, especially in clarifying what questions we really face. But that progress has been slow compared to progress in (say) computer science. Second, I believe that, building on both philosophical insight and new technologies, we can make progress in new, scientific ways.
Such progress may well fail to (completely) resolve some of the hard problems; perhaps these can be sidestepped. For example, in computational complexity theory, progress is often made not by completely resolving how hard a computational problem is to solve, but rather by drawing relationships between different problems -- if we had an efficient algorithm that solves problem A, then we could also use it to solve problem B. I suspect that a similar strategy may be fruitful in new theoretical approaches to studying consciousness. We may not know how to generate conscious experiences from scratch, but if we are willing to assume that they exist in certain forms -- e.g., we are comfortable stipulating that a human being with functioning eyesight has certain visual experiences -- then we may ask which other conscious experiences we can create under that assumption, for example by using virtual reality technology, AI-generated images, or even direct stimulation of the brain.
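To make the complexity-theory analogy concrete, here is a minimal sketch (in Python) of one classic reduction of this kind: if we had an efficient algorithm for the CLIQUE problem (problem A), we could also solve the INDEPENDENT SET problem (problem B) by running that algorithm on the complement graph. The `solve_clique` routine below is a hypothetical black box standing in for the assumed algorithm; the point is the structure of the conditional claim, not the particular problems.

```python
# Sketch of a classic complexity-theoretic reduction: INDEPENDENT SET reduces to CLIQUE.
# 'solve_clique' is a hypothetical black box -- an *assumed* efficient decider for
# CLIQUE (problem A). The reduction shows how such an algorithm would also let us
# decide INDEPENDENT SET (problem B), even though we do not know how to build it.

def complement_edges(vertices, edges):
    """Return the edge set of the complement graph: all pairs of distinct
    vertices that are NOT connected in the original graph."""
    present = {frozenset(e) for e in edges}
    return {frozenset((u, v))
            for i, u in enumerate(vertices)
            for v in vertices[i + 1:]
            if frozenset((u, v)) not in present}

def solve_independent_set(vertices, edges, k, solve_clique):
    """Decide whether the graph has an independent set of size k, assuming only
    that solve_clique(vertices, edges, k) decides whether a graph has a clique
    of size k. A set of vertices is independent in G exactly when it forms a
    clique in the complement of G."""
    return solve_clique(vertices, complement_edges(vertices, edges), k)
```

The analogue for consciousness would be a body of results with the same conditional shape: given that certain experiences exist (the stipulated starting point), which other experiences can we construct from them, and by what transformations?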
Why does it matter? What scientific discoveries, innovations, and desired societal outcomes might result from investment in this area?
In much of the 20th century, these questions were (arguably) not that pressing, because the ways in which we could modify human experiences were limited. However, we are increasingly able to modify our experiences, and there are indications that this trend will accelerate. Virtual reality and augmented reality are coming of age. We are learning how to train blind people to use limited forms of echolocation, or to “see” the world using devices that connect to their tongues. (Intriguingly, one of the most famous papers in 20th-century philosophy, Thomas Nagel’s “What Is It Like to Be a Bat?”, argues that we will never know what it is like to be a bat -- bats being creatures that are presumably conscious but whose subjective experience, relying on echolocation, is very different from ours.) Such techniques can significantly improve life for many people, but a lack of understanding of the nature of conscious experience may result in their failed or suboptimal development.
Relatedly, in the field of artificial intelligence, ethical issues are becoming increasingly prominent. For now, most of these issues do not require a deep understanding of the nature of conscious experience, because (a) their effect on human experience is mostly clear and (b) there appears to be broad consensus that today’s AI systems are not themselves conscious. But this is likely to change; for one, the technology is likely to start integrating with our bodies. Also, our tendency to anthropomorphize is likely to lead people to care more about their AI systems as those systems become more sophisticated. As this happens, we will find the related ethical issues increasingly hard to resolve.
Finally, with an ageing population, Alzheimer’s and similar conditions will become an ever more pressing national (and international) health concern. As long as we fall short of completely curing these conditions, we are likely to develop technology that increasingly takes over tasks from the patient. But how should we think about the ethics of moving the mental life of the patient into assistive technology? If there is a choice between completely safe assistive technology that replaces much of the patient’s mental life, and a risky treatment that has the potential to completely cure the condition, which should be chosen? Our limited understanding of conscious experience makes it difficult to answer such questions. While presumably the final choice should remain with the patient, we still have a duty to inform the patient in the best possible way.
If we invest in this area, what would success look like?
For the National Science Foundation (NSF), the goal should not be to duplicate what (say) the National Institutes of Health (NIH) might do in this sphere, but rather to remain complementary to NIH. Some overlap may be unavoidable, and this agenda will likely involve more human (and possibly animal) subjects research than is typical for NSF. Still, I believe that a different (and, again, complementary) direction from lines of work already undertaken in neuroscience is possible and likely to be fruitful. This direction will be relatively more technology-focused, with virtual/augmented reality and artificial intelligence likely playing major roles. The tools of theoretical computer science may also provide useful insights for establishing a similar theoretical framework in this context, as hinted at in one of the other sections. The discovery of such a theory would be one of the main goals. It should yield testable hypotheses and help guide the development of technology, from both functional and ethical perspectives. The theory may well leave some of the hardest problems related to consciousness unresolved; while it may well provide further insight into them, this would not be necessary for success. It is also unnecessary (and, I suspect, counterproductive) to insist on a strong deflationary / materialist stance towards these issues; rather, the theory should simply be clear about its assumptions.
Why is this the right time to invest in this area?
Besides reasons already discussed in other sections: one thing that today’s AI is not yet good at is having a broad, flexible, integrated understanding of the world. Instead, AI systems so far focus on narrow domains, where they sometimes achieve superhuman levels of performance but have no real understanding of the broader context. For example, AlphaGo knows nothing about the physical properties of the stones on a Go board. This limitation seems closely tied to other things that AI cannot yet do, including exhibiting truly out-of-the-box creativity and commonsense understanding. Is our current inability to create AI systems with a broad, flexible, integrated understanding of the world related to our limited understanding of consciousness?