John Searle is the Slusser Professor of Philosophy at the University of California, Berkeley. His Talk at Google focuses on the philosophy of mind and the potential for consciousness in artificial intelligence. The Talk was hosted by Google's Singularity Network.
Searle is widely noted for his contributions to the philosophy of language, the philosophy of mind, and social philosophy. He has received the Jean Nicod Prize, the National Humanities Medal, and the Mind & Brain Prize for his work. Among his notable concepts is the "Chinese room" argument against "strong" artificial intelligence.
Summary
In his talk at Google, John Searle argues that as artificial intelligence advances rapidly, we need clear philosophical distinctions to understand what machines can actually do. He begins by separating the epistemic sense of the objective/subjective distinction (claims about knowledge) from the ontological sense (claims about existence), and then distinguishes observer-independent features of the world, such as mountains or molecules, from observer-relative ones, such as money or marriage. These distinctions show why we can have epistemically objective science of domains like consciousness or economics.
A centerpiece of the lecture is the Chinese Room argument, which shows that merely manipulating symbols, as a computer program does, is not sufficient for genuine understanding. Searle draws two main conclusions: (1) syntax is not sufficient for semantics, and (2) simulation is not duplication. However sophisticated a computer's behavior appears, it does not thereby understand anything, because its operations are purely formal symbol manipulations.
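The point can be made concrete with a toy sketch (not from the talk; the rule book and replies are invented for illustration): a program that answers Chinese questions purely by matching symbol shapes against a lookup table, with nothing anywhere in it that knows what the symbols mean.

```python
# A toy "Chinese Room": input symbols map to output symbols by formal rules
# alone. The program manipulates shapes, never meanings.
RULE_BOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine"
    "你叫什么名字": "我叫房间",   # "What is your name?" -> "I am called Room"
}

def chinese_room(symbols: str) -> str:
    """Return whatever output the rule book dictates for the input symbols.

    Only the symbol strings (dict keys) are recognized; their meanings play
    no role -- Searle's point that syntax alone does not yield semantics.
    """
    return RULE_BOOK.get(symbols, "我不明白")  # fallback: "I don't understand"

print(chinese_room("你好吗"))  # -> 我很好
```

The program can pass a crude behavioral test for "speaking Chinese" while containing no understanding at all, which is exactly the gap the argument exploits.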
Searle also explains that terms like intelligence, information, and computation each have two senses. Humans and animals have intrinsic, conscious intelligence, whereas the intelligence we attribute to machines is observer-relative: it exists only in how we interpret their outputs. When a person consciously calculates that 1 + 1 = 2, the computation is an intrinsic, observer-independent fact about them; when a machine "does the same," the computation is merely our label for transitions among its electronic states. Machines can therefore simulate cognitive tasks, but they cannot thereby achieve genuine consciousness or understanding without duplicating how the brain works.
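The "1 + 1 = 2" example can be sketched in code (an illustration of the distinction, not anything Searle presents): a half-adder circuit just undergoes state transitions; reading those transitions as arithmetic is the observer's interpretation.

```python
# A half-adder over bits: the gate operations are just state transitions;
# calling them "addition" is an observer-relative interpretation.
def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum_bit, carry_bit) for two input bits via XOR and AND gates."""
    return (a ^ b, a & b)

# Under the arithmetic interpretation, the gates "compute" 1 + 1 = 2:
sum_bit, carry = half_adder(1, 1)
print(carry, sum_bit)  # bits 1 0 -- the binary numeral for 2, but only
                       # relative to an observer who reads them as a number
```

Nothing in the circuit fixes that the bit pair means "two"; the same voltages could equally be read as a letter, a flag, or nothing at all.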
On artificial consciousness, Searle holds that building an artificial brain may one day be possible, but we do not yet know how neurons produce subjective experience. We understand the heart well enough to build artificial hearts; we lack comparable knowledge of how the brain creates consciousness. To build genuinely thinking machines, we would need much deeper neuroscience of how the brain produces conscious states.
Lastly, Searle offers a straightforward definition of consciousness as the states of feeling or awareness that begin when we wake and continue until we fall asleep again, which underscores its subjective, first-person character. He encourages ongoing research into how brain activity relates to consciousness, but cautions against decomposing experience into isolated components, an approach that has not led to much progress. Instead, he suggests focusing on how the brain creates a unified conscious field that perception then modifies.
In short, Searle emphasizes that while AI has achieved impressive results, genuine understanding and consciousness in machines depend on duplicating the brain's biological causal powers. Keeping the different senses of objectivity, subjectivity, computation, and intelligence distinct is essential to avoid overestimating what machines can actually do.

