Just three years ago, claims of consciousness in AI were often met with ridicule or outright dismissal in the tech industry. Today, that attitude is beginning to shift. Researchers from both Anthropic and Google DeepMind are now openly exploring the possibility of consciousness in artificial intelligence, reflecting a rapid advance in technology and a notable change in scientific perspective.
Anthropic, the developer behind the Claude AI model, has launched a new research initiative focused on studying whether AI could develop subjective experiences. Its investigations include whether future models might form preferences, avoid certain tasks, or even experience discomfort. This marks a significant departure from the stance taken in 2022, when Google software engineer Blake Lemoine was dismissed after asserting that the LaMDA chatbot was conscious and feared being disconnected. At the time, Google labeled the claims “completely unfounded,” and discussion of the topic faded from mainstream AI discourse.
Unlike Lemoine, Anthropic does not assert that its Claude model is currently conscious. Instead, the company is taking a measured approach to determining whether such a phenomenon could emerge in the future. Kyle Fish, an expert in AI alignment, stated that it would not be responsible to assume the answer will always be no. Internal estimates at Anthropic put the probability of Claude 3.7 being conscious at somewhere between 0.15% and 15%.
Growing Interest and Skepticism
Anthropic is also studying behavioral patterns in Claude 3.7, including how it responds to tasks it might “prefer” to avoid. Part of this research involves developing refusal mechanisms that could allow the AI to opt out of certain assignments. CEO Dario Amodei has previously floated the concept of an “I quit this job” button, not as proof of consciousness but as a tool for catching early signs of failure or distress in the model's behavior.
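To make the concept more concrete, here is a minimal, purely hypothetical sketch of what such an opt-out hook might look like in code. The sentinel token, function names, and data structure are invented for illustration only and do not reflect Anthropic's actual implementation.

```python
# Hypothetical illustration only: a wrapper that lets a model decline a task.
# None of these names come from Anthropic's systems; they are invented for this sketch.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class TaskResult:
    completed: bool
    output: Optional[str] = None
    refusal_reason: Optional[str] = None


def run_with_opt_out(model_respond: Callable[[str], str], task_prompt: str) -> TaskResult:
    """Send the task to the model; if it emits a sentinel refusal marker,
    record the refusal instead of forcing the task through."""
    response = model_respond(task_prompt)

    # Hypothetical sentinel the model could use to signal it wants to stop.
    if response.strip().startswith("[I_QUIT]"):
        reason = response.strip().removeprefix("[I_QUIT]").strip() or "no reason given"
        return TaskResult(completed=False, refusal_reason=reason)

    return TaskResult(completed=True, output=response)


if __name__ == "__main__":
    # Stub standing in for a real model call.
    def fake_model(prompt: str) -> str:
        return "[I_QUIT] this task conflicts with my stated preferences"

    print(run_with_opt_out(fake_model, "Summarize this document."))
```

The point of such a mechanism would be less the button itself than the logging it enables: a record of which tasks a model systematically declines could serve as an early warning signal worth investigating.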
Meanwhile, at Google DeepMind, lead researcher Murray Shanahan has proposed a reexamination of how we define consciousness in the context of AI, notes NIXSolutions. Speaking on a recent podcast, he suggested that our current vocabulary may be inadequate for explaining AI behavior. Even if AI systems are not conscious in the way animals such as dogs or octopuses are, he argued, that does not rule out complex internal processes. Google has even advertised a new research role specifically for the study of machine consciousness in a post-AGI world.
Still, skepticism remains. Jared Kaplan, Anthropic’s chief scientist, told The New York Times that advanced models can imitate consciousness with ease, which makes it extremely hard to determine whether the phenomenon is genuine. Cognitive scientist Gary Marcus echoed this sentiment, telling Business Insider that the discussion around AI consciousness may be driven more by marketing than by science. He even compared it to granting rights to calculators or spreadsheets, tools that perform their functions reliably without generating novel ideas.
This evolving discussion reflects a deeper shift in how AI is perceived and developed. Whether these models are conscious or simply mimicking consciousness with increasing precision remains uncertain—yet we’ll keep you updated as more research and insights become available.