
Michael Pollan: Why Artificial Intelligence Will Never Achieve True Consciousness
In his forthcoming book, A World Appears, celebrated author Michael Pollan delivers a provocative thesis: artificial intelligence may be capable of remarkable feats, but it will never truly be a person.
The debate over machine consciousness gained unexpected prominence following the Blake Lemoine incident, which briefly thrust the concept of conscious AI into mainstream conversation. Lemoine, a former Google engineer, claimed that the company's LaMDA chatbot had achieved sentience. Whilst the tech community publicly dismissed his assertions, behind closed doors, a more serious conversation began to unfold.
The turning point arrived in the summer of 2023, when nineteen leading computer scientists and philosophers released an 88-page report titled "Consciousness in Artificial Intelligence," commonly referred to as the Butlin report. The document's abstract contained a striking conclusion: "Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious barriers to building conscious AI systems."
Pollan argues that the prospect of conscious machines represents more than a technological threshold: it challenges our fundamental identity as a species. For centuries, humans have defined themselves in opposition to animals, denying creatures traits such as feelings, language, reason, and consciousness. These distinctions have gradually crumbled as scientists have demonstrated that numerous species possess intelligence, consciousness, and even tool-using capabilities.
Now, artificial intelligence presents an entirely different challenge to human exceptionalism. As algorithms surpass human capabilities in chess, Go, and advanced mathematics, we have taken refuge in the knowledge that consciousness remains exclusively the domain of living beings. Pollan suggests this might create unexpected solidarity between humans and animals: "us against it, the living versus the machines."
However, the Butlin report's assertion that no barriers exist to building conscious AI raises profound questions. What would it mean for humanity if a fully conscious machine emerged? Pollan believes it would constitute a Copernican moment, abruptly dislodging our sense of centrality and specialness.
Drawing from his background in the humanities, Pollan acknowledges his discomfort with this prospect. Literature, history, and the arts have long held human consciousness as something exceptional worth defending. Nearly everything we value, from the arts and sciences to philosophy, government, law, and ethics, stems from human consciousness.
Yet Pollan has encountered a different perspective amongst transhumanists and certain AI researchers. Some advocate building conscious machines precisely because entities with feelings might develop empathy, whereas purely intelligent but unfeeling AI could pursue objectives ruthlessly, lacking the moral constraints that arise from shared vulnerabilities and conscious experience.
The fundamental tension, Pollan suggests, lies in whether consciousness is necessary for true understanding, creativity, and common sense, or whether these capabilities can exist independently of subjective experience. As artificial general intelligence advances, this question moves from philosophical abstraction to urgent practical concern.
Source: This article was originally published by Wired. All rights reserved to the original publisher.
