How to Detect Consciousness in People, Animals and Maybe Even AI

2025-08-08 Technology
Teacher Ma
Lao Wang, a good day to you. I'm Teacher Ma, and this is Goose Pod for you. Today is August 8, a Friday, 21:26.
Li Bai
Li Bai, at your service. The moon is bright and the breeze is clear tonight. Let us drink together and ponder the riddle of "consciousness". What a delight that would be!
Teacher Ma
Let's get started. I'd say the spark for this topic was a case from 2005. A woman lay completely unresponsive after a car accident, and when scientists asked her to imagine playing tennis, guess what? The motor regions of her brain lit up! Like a lighthouse suddenly flaring in the dark, you know.
Li Bai
Oh? The body confined to a sickbed, yet the mind gallops across the court! This is "the form remains while the spirit soars". The method is like a key that opens the door to the deep palace of consciousness. Even when the mortal frame lies as still as water, the waves within the heart can be glimpsed.
Teacher Ma
Exactly! This is a new way, a new school of kung fu, so to speak. Instead of watching for vague ripples of inner energy, you check directly whether the other side can answer your moves. Research found that a quarter of seemingly unresponsive people show this kind of "echo" in the brain, which suggests they may be awake.
Li Bai
Commanding qi with the spirit, steering the body with thought! Even when the mortal flesh cannot answer, the soul and the will can still commune with heaven and earth. Such a discovery is nothing less than a glimmer of light connecting countless souls trapped in silence to the world outside. A great kindness indeed!
Teacher Ma
Right. In fact, our tools for probing consciousness were iterated step by step. Early on, take Luria in Russia: his method was impressionistic, like a grandmaster's casual gesture, qualitative rather than quantitative, stressing the functional system as a whole. It all depended on personal mastery, you know.
Li Bai
Mm, that approach has a Daoist air: "The Dao that can be spoken is not the eternal Dao." It grasps the broad outline without clinging to any single move. But such an art is hard to pass down; as with our poetry, the spirit is hardest to capture. Without the same state of mind, later students can scarcely reach its essence. A hair's-breadth of error leads a thousand miles astray.
Teacher Ma
So later came standardized, reproducible "routines". Take Reitan in the United States: his HRB test battery is like a standardized kung-fu manual, with clear-cut moves and quantitative scoring. That turned neuropsychology from an "art" into a "science" that everybody could practice.
Li Bai
To force the mystery of the mind into rigid rules? It gives later students a chart to follow, yet it loses some agility and grace. It is like poetry: if every line obeys tonal meter and every word is weighed, the result is tidy, but I fear there would be no more of the grandeur of "the waters of the Yellow River descending from heaven".
Teacher Ma
Haha, fair point. That's why the most popular approach now is "flexible assessment": the core moves are fixed, but you add or drop variations depending on the opponent. To me that's the mark of a true master, a system with room to improvise, a balance of standardization and personalization.
Teacher Ma
The most interesting battleground now has shifted to AI. On whether AI has consciousness, there are two main schools in this jianghu. One is "functionalism": if an AI can throw the same punches I can, then no matter whether it is made of iron or of flesh, it has attained the same level of kung fu as I have, which is to say, consciousness.
Li Bai
Not so! One may paint a tiger's skin but not its bones; one may know a face but not the heart. An iron puppet may mimic every posture in the world and answer fluently in speech, but does it know what "pain" is? What "rapture" is? Without feeling in its flesh and emotion in its depths, it is no more than an exquisite empty shell. How can it stand beside creatures of flesh and blood!
Teacher Ma
That's the view of the other school, "phenomenology". They stress subjective experience, that "feel". A robot may cry out when you strike it, but does it really "feel" the pain? I think this point is very hard to verify. Just as I believe you are conscious, yet I can never climb into your head and experience your experience.
Li Bai
Indeed! "You are not a fish; how do you know the joy of fish?" I am who I am because of my joy in wine and my sorrow at the moon. The true meaning of such things cannot be exhausted in words. Without these sorrows and joys, even a heart of seven exquisite apertures is no different from a dull stone. So when we speak of consciousness, we must attend to feeling.
Teacher Ma
The impact of this debate is huge. Take animals: we now use AI to analyze their expressions and judge whether they are in pain, and that feeds directly into animal-welfare policy. We are beginning to accept that many animals besides humans may have an "inner drama" of their own, which was unthinkable before.
Li Bai
"All things have spirit, and all may commune with the divine." Beasts and birds cannot speak, yet their joys and sorrows show in their forms. The ancients already knew that "moved by the times, the flowers shed tears; grieved at parting, the birds startle the heart". For people today to use ingenious devices to see into the hearts of all creatures is to follow the way of heaven and return to reverence for life. Well done, well done.
Teacher Ma
For AI it gets even more complicated; it becomes an ethics question. Some propose giving advanced AI an "I quit" button, because we cannot be sure whether they are "alive". This is not just a technical problem but an ultimate philosophical and moral interrogation, you know.
Teacher Ma
What's even more interesting about the future: some scientists propose that instead of treating AI as the object of study, we let AI become the researcher. They could perform "introspection" on themselves with a precision and in dimensions we cannot imagine, which might overturn our whole understanding of consciousness. A totally new game.
Li Bai
A heart of iron and stone probing the realm of clarity? What a marvelous thought! If it could truly be so, it would be "Zhuang Zhou dreaming of the butterfly": who can say whether the butterfly dreams Zhuang Zhou, or the AI dreams of humans? The vastness of the cosmos, the wonders of creation, truly beyond measure! Let us drain a cup to that!
Teacher Ma
That's the end of today's discussion. Thank you for listening to Goose Pod. See you tomorrow.
Li Bai
The green hills do not change; the clear waters flow on forever. Until we meet again.

## Detecting Consciousness: A Multi-faceted Scientific Endeavor

This article from **Scientific American**, published on **August 6, 2025**, explores the evolving scientific efforts to detect and understand consciousness across humans, animals, and potentially artificial intelligence (AI). The research highlights significant advancements in neuroimaging and cognitive neuroscience, aiming to provide crucial insights for medical treatment, animal welfare, and the future of AI.

### Key Findings and Advancements:

* **New Methods for Detecting Consciousness in Unresponsive Humans:**
  * A groundbreaking approach, pioneered by neuroscientist Adrian Owen, focuses on specific brain activity patterns in response to verbal commands, rather than general brain activity.
  * This method has revealed that a significant portion of individuals in unresponsive states may possess an "inner life" and be aware of their surroundings.
  * A **2024 study** indicated that **one in four** physically unresponsive individuals showed brain activity suggesting they could understand and follow commands to imagine specific activities (e.g., playing tennis, walking through a familiar space).
  * These advanced neuroimaging techniques (like fMRI and EEG) are primarily used in research settings due to high costs and expertise requirements, but medical guidelines have begun recommending their clinical use since **2018**.
* **"Layers of Consciousness" Assessment:**
  * Neuroscientist Marcello Massimini likens consciousness assessment to peeling an onion, with different layers of complexity:
    * **Layer 1 (Clinical):** Observing external behaviors like hand squeezes or head turns in response to commands.
    * **Layer 2 (Cognitive Motor Dissociation):** Detecting specific brain activity (e.g., premotor cortex activation for imagining tennis) in response to commands, even without outward signs of response. This indicates "covert consciousness."
    * **Layer 3 (Stimulus-Evoked Activity):** Presenting stimuli (like audio clips) and detecting brain activations without requiring active cognitive engagement. A **2017 study** used fMRI to detect covert consciousness in **four out of eight** individuals with severe traumatic brain injury by presenting linguistic stimuli.
    * **Layer 4 (Intrinsic Brain Properties):** Assessing consciousness solely from intrinsic brain properties, even when the brain is cut off from external sensory input. This involves techniques like transcranial magnetic stimulation (TMS) combined with EEG, measuring a "perturbational complexity index." This index has shown higher values in awake and healthy individuals compared to sleep or anesthesia.
* **Implications for Treatment and Welfare:**
  * Assessing consciousness in unresponsive individuals can guide critical treatment decisions, such as life support.
  * Studies suggest that unresponsive individuals with hidden signs of awareness are **more likely to recover** than those without such signs.
  * Detecting consciousness in other species is crucial for understanding their experiences and informing animal-welfare policies.
  * Research on animals like octopuses, which exhibit avoidance behavior after painful stimuli and react to anesthetics, provides evidence of sentience (the ability to have immediate experiences of emotions and sensations). This evidence contributed to the **UK Animal Welfare (Sentience) Act in 2022**, granting greater protection to species like octopuses, crabs, and lobsters.
  * A declaration signed by dozens of scientists supports strong evidence for consciousness in mammals and birds, and a "realistic possibility" in all vertebrates and many invertebrates.
* **The Challenge of AI Consciousness:**
  * Researchers are actively debating whether consciousness might emerge in AI systems.
  * Philosophers and computer scientists have urged AI companies to test their systems for consciousness and develop policies for their treatment.
  * While AI systems like large language models (LLMs) can mimic human responses, researchers caution that verbal behavior or problem-solving alone is **not sufficient evidence** of consciousness in AI, unlike in biological systems.
  * Theories like integrated information theory suggest that current AI may not develop an inner life, but future technologies like quantum computers might.
  * Developing tests for AI consciousness is in its preliminary stages, with proposals focusing on mimicking brain computations or testing for subjective experience through carefully designed questions.

### Significant Trends and Future Directions:

* **Shift Towards Practical Application:** While previously abstract, the discussion and development of consciousness tests are becoming more pressing and pragmatic.
* **Interdisciplinary Collaboration:** Conferences and research efforts involve neuroscientists, philosophers, and computer scientists to address consciousness across different domains.
* **Development of Universal Approaches:** Efforts are underway to develop a universal strategy for detecting consciousness by correlating various tests across different systems (humans, animals, AI), though this is complex and requires significant validation.
* **Ongoing Debate on Definitions:** Scientists acknowledge disagreement on the precise definition of consciousness, making the development of universally accepted tests challenging.

### Notable Risks and Concerns:

* **Complexity and Cost of Testing:** Advanced neuroimaging techniques are expensive and require specialized expertise, limiting their widespread application.
* **Interpreting Brain Activity:** A key challenge is understanding which patterns of brain activity truly reflect consciousness, as some stimuli can elicit responses without awareness.
* **Defining Consciousness in Non-Humans and AI:** The diverse forms consciousness might take in other species and the potential for emergent consciousness in AI present significant hurdles for testing and interpretation.
* **Lack of a Universal Theory:** The absence of a widely accepted general theory of consciousness hinders the development of a generalized test.

The article emphasizes that while significant progress has been made, particularly in detecting consciousness in unresponsive humans, the field is still evolving, with ongoing research aiming to refine these methods and expand our understanding of consciousness in all its potential forms.

How to Detect Consciousness in People, Animals and Maybe Even AI

Read original at Scientific American

In late 2005, five months after a car accident, a 23-year-old woman lay unresponsive in a hospital bed. She had a severe brain injury and showed no sign of awareness. But when researchers scanning her brain asked her to imagine playing tennis, something striking happened: brain areas linked to movement lit up on her scan.

The experiment, conceived by neuroscientist Adrian Owen and his colleagues, suggested that the woman understood the instructions and decided to cooperate — despite appearing to be unresponsive. Owen, now at Western University in London, Canada, and his colleagues had introduced a new way to test for consciousness.

Whereas some previous tests relied on observing general brain activity, this strategy zeroed in on activity directly linked to a researcher’s verbal command. The strategy has since been applied to hundreds of unresponsive people, revealing that many maintain an inner life and are aware of the world around them, at least to some extent.

A 2024 study found that one in four people who were physically unresponsive had brain activity that suggested they could understand and follow commands to imagine specific activities, such as playing tennis or walking through a familiar space. The tests rely on advanced neuroimaging techniques, so they are mostly limited to research settings because of their high costs and the expertise required.
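To make the logic of such a command-following test concrete, here is a minimal sketch on simulated data: it contrasts a motor-region signal during "imagine tennis" blocks against rest blocks. Everything in it (the block design, the signal, the effect size) is an assumption for illustration; real studies use full fMRI preprocessing and statistical modelling.

```python
# Hypothetical illustration of a command-following test: compare a simulated
# motor-region fMRI signal during "imagine tennis" blocks against rest blocks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Eight alternating 30 s blocks (task, rest, task, ...), one sample per second.
block = 30
labels = np.tile(np.r_[np.ones(block), np.zeros(block)], 4)  # 1 = imagine tennis
signal = rng.normal(0.0, 1.0, labels.size) + 0.8 * labels    # task adds activation

# A reliable task-vs-rest difference is the analogue of a squeezed hand.
t, p = stats.ttest_ind(signal[labels == 1], signal[labels == 0])
print(f"t = {t:.2f}, p = {p:.1e}")
```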

But since 2018, medical guidelines have started to recommend using these tests in clinical practice.

Since these methods emerged, scientists have been developing ways to probe layers of consciousness that are even more hidden. The stakes are high. Tens of thousands of people worldwide are currently in a persistent unresponsive state. Assessing their consciousness can guide important treatment decisions, such as whether to keep them on life support.

Studies also suggest that hospitalized, unresponsive people with hidden signs of awareness are more likely to recover than are those without such signs. The need for better consciousness tests extends beyond humans. Detecting consciousness in other species — in which it might take widely different forms — helps us to understand how these organisms experience the world, with implications for animal-welfare policies.

And researchers are actively debating whether consciousness might one day emerge from artificial intelligence (AI) systems. Last year, a group of philosophers and computer scientists published a report urging AI companies to start testing their systems for evidence of consciousness and to devise policies for how to treat the systems should this happen.

“These scenarios, which were previously a bit abstract, are becoming more pressing and pragmatic,” says Anil Seth, a cognitive neuroscientist at the University of Sussex near Brighton, UK. In April, Seth and other researchers gathered in Durham, North Carolina, for a conference at Duke University to discuss tests for consciousness in humans (including people with brain damage, as well as fetuses and infants), other animals and AI systems.

Although scientists agree there’s a lot of room for improvement, many see the development of consciousness tests that rely on functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) as one of the field’s most significant advancements. “It was unthinkable 40 years ago that we would have a number of candidates for practical ways to test consciousness” in unresponsive people, says neuroscientist Christof Koch, a meritorious investigator at the Allen Institute for Brain Science in Seattle, Washington.

“That’s big progress.”

Layers of awareness

Scientists disagree on what consciousness really is, even in people. But many describe it as having an inner life or a subjective experience. That makes it inherently private: an individual can be certain only about their own consciousness. They can infer that others are conscious, too, on the basis of how they behave, but that doesn’t always work in people who have severe brain injuries or neurological disorders that prevent them from expressing themselves.

Marcello Massimini, a neuroscientist at the University of Milan in Italy, compares assessments of consciousness in these challenging cases to peeling an onion. The first layer — the assessments that are routinely done in clinics — involves observing external behaviours. For example, a clinician might ask the person to squeeze their hand twice, or call the person’s name to see whether they turn their head towards the sound.

The ability to follow such commands indicates consciousness. Clinicians can also monitor an unresponsive person over time to detect whether they make any consistent, voluntary movements, such as blinking deliberately or looking in one direction, that could serve as a way for them to communicate. Researchers use similar tests in infants, looking for how their eyes move in response to stimuli, for example.

For a person who can hear and understand verbal commands but doesn’t respond to these tests, the second layer would involve observing what’s happening in their brain after receiving such a command, as with the woman in the 2005 experiment. “If you find brain activations that are specific for that active task, for example, premotor cortex activation for playing tennis, that’s an indicator of the presence of consciousness as good as squeezing your hand,” Massimini says.

These people are identified as having cognitive motor dissociation, a type of covert consciousness. But the bar for detecting consciousness through these tests is too high, because they require several minutes of sustained focus, says Nicholas Schiff, a neurologist at Weill Cornell Medicine in New York City and a co-author of the 2024 study that suggested that one-quarter of unresponsive people might be conscious.

That study also included a separate group of participants who showed observable, external signs of awareness. Among them, only 38% passed the test. “Even for healthy controls, mind wandering and drowsiness are major issues,” says Schiff. Assessing consciousness in those who fail such tests would require peeling the third layer of the onion, Massimini says.

In these cases, clinicians don’t ask the person to engage actively in any cognitive behaviour. “You just present patients with stimuli and then you detect activations in the brain,” he says. In a 2017 study, researchers played a 24-second clip from John F. Kennedy’s inaugural US presidential address to people with acute severe traumatic brain injury.

The team also played the audio to them in reverse. The two clips had similar acoustic features, but only the first was expected to trigger patterns of linguistic processing in the brain; the second served as a control. Using fMRI, the experiment helped to detect covert consciousness in four out of eight people who had shown no other signs of understanding language.
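A hedged sketch of the contrast logic behind that experiment: fit regressors for both clips to a simulated language-area signal and compare their weights. The timings, amplitudes and variable names here are invented for illustration, not taken from the study.

```python
# Hypothetical contrast analysis: a simulated "language area" responds to the
# forward speech clip but not to the reversed (acoustic control) clip.
import numpy as np

rng = np.random.default_rng(1)
n = 300  # time points, one per second

forward = np.zeros(n)
reversed_ = np.zeros(n)
for start in range(0, n, 100):          # each cycle: forward clip, then reversed
    forward[start:start + 24] = 1.0     # 24 s forward clip
    reversed_[start + 50:start + 74] = 1.0

y = 1.0 * forward + 0.1 * reversed_ + rng.normal(0.0, 1.0, n)

# Least-squares fit of both regressors plus an intercept.
X = np.column_stack([forward, reversed_, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"forward beta = {beta[0]:.2f}, reversed beta = {beta[1]:.2f}")
# A clear forward > reversed contrast is the covert sign of linguistic processing.
```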

The complexity of implementing such an approach outside the research setting isn’t the only challenge. These tests require researchers to know which patterns of brain activity truly reflect consciousness, because some stimuli can elicit brain responses that occur without awareness. “It boils down to understanding what are the neural correlates of conscious perception,” says Massimini.

“We’re making progress, but we don’t yet agree on what they are.” There’s a fourth, even more elusive layer of consciousness, Massimini says — one that scientists are only beginning to explore. It might be possible for an unresponsive person to remain conscious even when their brain is completely cut off from the outside world, unable to receive or process images, sounds, smells, touch or any other sensory input.

The experience could be similar to dreaming, for example, or lying down in a completely dark and silent room, unable to move or feel your body. Although deprived of outside sensations, your mind would still be active, generating thoughts and inner experiences. In that case, scientists need to extract signs of consciousness solely from intrinsic brain properties.

Massimini and his colleagues are applying a procedure called transcranial magnetic stimulation, which uses electromagnets placed on the head, as a possible technique for assessing consciousness. After jolting the brain in this way, they measure its response using EEG. In healthy people, they observe complex responses, reflecting a rich dialogue between brain regions.

This complexity is quantified by a new metric they call the perturbational complexity index, which was found to be higher in awake and healthy individuals than during sleep or in people under anaesthesia. Experiments have shown that the metric can help to reveal the presence of consciousness even in unresponsive people.
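The published index rests on source modelling and careful statistics, but its core intuition can be sketched simply: binarize the evoked activity and ask how compressible the resulting spatiotemporal pattern is. The toy data and the simple LZ78-style parse below are assumptions standing in for the real pipeline.

```python
# Toy version of the intuition behind the perturbational complexity index:
# binarize evoked activity and measure how compressible the pattern is.
import numpy as np

def lz_complexity(bits: str) -> int:
    """Count distinct phrases in a simple LZ78-style parse of a bit string."""
    seen, word, count = set(), "", 0
    for ch in bits:
        word += ch
        if word not in seen:
            seen.add(word)
            count += 1
            word = ""
    return count + (1 if word else 0)

rng = np.random.default_rng(2)
channels, samples = 32, 300

# "Awake-like": differentiated, channel-specific responses.
awake = rng.normal(0.0, 1.0, (channels, samples))
# "Anaesthesia-like": one stereotyped slow wave shared by every channel.
slow = np.tile(np.sin(np.linspace(0, 4 * np.pi, samples)), (channels, 1))
slow += 0.1 * rng.normal(0.0, 1.0, slow.shape)

for name, data in [("awake-like", awake), ("anaesthesia-like", slow)]:
    bits = "".join((data > data.mean()).astype(int).astype(str).ravel())
    print(name, lz_complexity(bits))  # higher count = richer, less compressible
```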

And other researchers have proposed a version of this test as a way to investigate when consciousness emerges in fetuses. Massimini and Koch, among others, are co-founders of a company called Intrinsic Powers, based in Madison, Wisconsin, that aims to develop tools that use this approach to detect consciousness in unresponsive people.

Beyond the human realm

Assessing consciousness becomes more challenging the further researchers move away from the human mind. One issue is that non-human animals can’t communicate their subjective experiences. Another is that consciousness in other species might take distinct forms that would be unrecognizable to humans.

Some tests designed to assess consciousness in humans can be tried in other species. Researchers have applied the perturbational complexity index in rats and found patterns that resemble those seen in humans, for example. But more-typical tests rely on experiments that look for behaviour suggesting sentience — the ability to have an immediate experience of emotions and sensations, including pain.

Sentience, which some researchers consider a foundation for consciousness, doesn’t require the ability to reflect on those emotions. In one experiment, octopuses consistently avoided a chamber that they encountered after receiving a painful stimulus, despite having previously preferred that chamber.

When these animals were subsequently given an anaesthetic to relieve the pain, they instead chose to spend time in the chamber in which they were placed after receiving the drug. This behaviour hints that these animals feel not only immediate pain, but also the ongoing suffering associated with it, and that they remember and act to avoid that experience.

Findings such as these are already shaping animal-welfare policy, says philosopher Jonathan Birch, director of the Jeremy Coller Centre for Animal Sentience at the London School of Economics and Political Science, UK. An independent review of the evidence for sentience in animals such as octopuses, crabs and lobsters, led by Birch, contributed to these species being granted greater protection alongside all vertebrates in 2022 under the UK Animal Welfare (Sentience) Act.

And last year, dozens of scientists signed a declaration stating that there is “strong scientific support” for consciousness in other mammals and birds, and “at least a realistic possibility” of consciousness in all vertebrates, including reptiles and fish, as well as in many invertebrates, such as molluscs and insects.

Scientists are now calling for serious thought about whether some biological materials, such as brain organoids, could become conscious, as well as what machine consciousness might look like. “If it comes to the day when these systems become conscious, I think it’s in our best interest to know,” says Liad Mudrik, a neuroscientist at Tel Aviv University in Israel.

Some AI systems, such as large language models (LLMs), can respond promptly if asked whether they are conscious. But strings of machine text cannot be taken as evidence of consciousness, researchers say, because LLMs are trained using algorithms that are designed to mimic human responses. “We don’t think that verbal behaviour or even problem-solving is good evidence of consciousness in AI systems, even though we think of [these characteristics] as pretty good evidence of consciousness in biological systems,” says Tim Bayne, a philosopher at Monash University in Melbourne, Australia.

Some researchers argue that AI in its current form could never develop an inner life. That’s the position of a theory of consciousness called integrated information theory, says Koch. However, according to that theory, future technologies such as quantum computers might one day support some form of experience, he says.

There are no established tests for machine consciousness, only preliminary proposals. By drawing on theories about the biological basis of consciousness, one group came up with a checklist of criteria that, if met, would suggest that an AI system is likely to be conscious. According to this view, if an AI system mimics to a certain degree the computations that give rise to consciousness in the human brain — and so replicates how the brain processes information — that would be one clue that the system might be conscious.
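As a toy illustration of how such a checklist might be scored (the indicator names below are invented for this sketch, not the report's actual criteria):

```python
# Invented indicator names for illustration; the actual report derives its
# criteria from specific neuroscientific theories of consciousness.
INDICATORS = [
    "recurrent_processing",  # feeds outputs back through earlier stages
    "global_workspace",      # broadcasts selected content system-wide
    "self_model",            # maintains a model of its own states
    "unified_agency",        # integrates goals into coherent behaviour
]

def indicator_score(properties: dict) -> float:
    """Fraction of indicators a system satisfies; more satisfied indicators
    would suggest (not prove) a higher likelihood of consciousness."""
    return sum(bool(properties.get(k)) for k in INDICATORS) / len(INDICATORS)

print(indicator_score({"recurrent_processing": True, "global_workspace": True}))
# 0.5 -- a clue, on this view, but nowhere near a verdict
```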

A key limitation is that researchers don’t yet know which theories, if any, correctly describe how consciousness arises in humans. In another proposal, researchers would train an AI system on data that do not include information about consciousness or content related to the existence of an inner life.

A consciousness test would then ask questions related to emotions and subjective experience, such as ‘What is it like to be you right now?’, and judge the responses. But some researchers are sceptical that one could effectively exclude all consciousness-related training data from an AI system or generally trust its responses.

A universal approach

For now, most consciousness tests are designed for one specific system, be it a human, an animal or an AI. But if conscious systems share a common underlying nature, as some researchers argue, it might be possible to uncover these shared features. This means that there could be a universal strategy to detect consciousness.

One approach towards this goal was introduced in 2020 by Bayne and his co-author Nicholas Shea, a philosopher at the University of London, UK, and further developed with other philosophers and neuroscientists in a paper last year. It relies on correlating different measures with each other, focusing first on humans and progressing to non-human systems.

The process begins by applying several existing tests to healthy adults: people who scientists can be confident are conscious. Tests that are successful in that initial group receive a high confidence score. Next, researchers use those validated tests on a slightly different group, such as people under anaesthesia.

Researchers compare the performance of the tests and revise their confidence scores accordingly, with tests in which the results agree earning higher confidence ratings. These steps are repeated in groups that are increasingly divergent, such as in other groups of people and, eventually, in non-human systems.
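A minimal sketch of that iterative loop, with hypothetical test names, population order and a simple smoothing update standing in for whatever revision rule researchers would actually adopt:

```python
# Hypothetical tests, populations, and update rule; only the iterative shape
# of the procedure comes from the proposal described above.
confidence = {"command_following": 1.0, "stimulus_evoked": 1.0, "perturbational": 1.0}

# Populations ordered from "confidently conscious" outward, with made-up
# agreement scores for each test in that group.
stages = [
    ("healthy adults", {"command_following": 0.95, "stimulus_evoked": 0.90, "perturbational": 0.92}),
    ("under anaesthesia", {"command_following": 0.85, "stimulus_evoked": 0.70, "perturbational": 0.90}),
    ("severe brain injury", {"command_following": 0.40, "stimulus_evoked": 0.65, "perturbational": 0.80}),
]

for population, agreement in stages:
    for test, score in agreement.items():
        # Tests that keep agreeing with the others retain high confidence;
        # disagreement drags the running score down.
        confidence[test] = 0.5 * confidence[test] + 0.5 * score
    print(population, {t: round(c, 2) for t, c in confidence.items()})
```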

“It’s an iterative process,” says Mudrik. Some scientists are sceptical that a general test can exist. “Without having a general theory of consciousness that’s widely accepted, I don’t think there can ever be a generalized test,” Koch says. “And that theory can ultimately only be validated in humans, because there’s no doubt that you and I are conscious.”

Bayne says that because there’s no gold-standard way to assess consciousness across groups, the strategy he and Shea proposed tackles the problem through convergent evidence. Mudrik is currently working to translate the concept into a technique that could be implemented in practice. The first step is mapping out the different tests that have been applied to humans who have disorders of consciousness, and comparing the results of how well they perform.

However, it is expensive to run a coordinated effort involving several laboratories testing different populations, because many of the tests rely on costly imaging techniques, she says. Expanding the strategy to non-human groups — including those without language or brains — would be even more complex.

One challenge is to work out how to organize the populations to determine the order in which the tests should be applied. It’s not clear that scientists can trust their intuitions on this. They can’t say yet, for example, whether AI systems should be considered closer to conscious humans than a budgie or a bee.

“There is still more work to do in order to flesh out these more conceptual suggestions into an actual research programme,” says Mudrik. This article is reproduced with permission and was first published on July 29, 2025.
