Henry Shevlin is a research associate at the Leverhulme Centre for the Future of Intelligence (Cambridge).
He completed his PhD at the CUNY Graduate Center in New York with a thesis on “Consciousness, Perception and Short-Term Memory”.
ABSTRACT:
The science of consciousness has made great strides in recent decades, both in the development of theoretical frameworks and in the refinement of our experimental and clinical tools for assessing consciousness in humans. However, the proliferation of competing theories makes it difficult to reach consensus about artificial consciousness. While for purely scientific purposes we might wish to adopt a ‘wait and see’ attitude, we may soon face practical and ethical questions about whether, for example, an artificial agent is capable of suffering. Moreover, many of the methods used for assessing consciousness in humans and even in non-human animals are not straightforwardly applicable to artificial systems. With these challenges in mind, I propose that we adopt an ecumenical heuristic for artificial consciousness, allowing us to make tentative assessments of the likelihood of consciousness arising in different artificial systems. I argue that such a heuristic should have three main features: it should be intuitively plausible, theoretically neutral, and scientifically tractable. I claim that the concept of general intelligence – understood as a capacity for robust, flexible, and integrated cognition and behaviour – satisfies these criteria and may thus provide the basis for such a heuristic, allowing us to make initial cautious estimates of which artificial systems are most likely to be conscious.
Lecture hosted by: Miriam Kyselo