Large Language Models are not conscious.


A Thought Experiment

In 1980, philosopher John Searle posed a thought experiment known as the Chinese Room:

Imagine a room containing an English-speaking person (let's say you) who has a book with instructions for responding to Chinese characters.

Chinese characters are passed to you through a slot in the room and, using the book, you look up the input characters and find the corresponding output characters. Yes, the book is large. You pass these output characters back out through the slot.

From an outside perspective, it appears as if the room understands Chinese. The room receives Chinese characters and responds appropriately. Appropriately enough that even a native Chinese speaker could not tell the difference between the room and a fluent speaker.
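To make the mechanism concrete, here is a minimal sketch of the room as a pure lookup procedure. The tiny rule book and the example phrases are hypothetical, invented only for illustration; Searle's argument doesn't depend on them.

    # A toy sketch of the Chinese Room as a lookup procedure.
    # The rule book below is a hypothetical stand-in, not anything Searle specified.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "天气很好。",  # "How's the weather?" -> "The weather is nice."
    }

    def operate_room(input_symbols: str) -> str:
        """Follow the book: match the incoming symbols, return the listed output symbols.
        No step requires the operator to understand what any symbol means."""
        return RULE_BOOK.get(input_symbols, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

    print(operate_room("你好吗？"))  # a fluent-looking reply, produced with zero comprehension

The operator just matches shapes and copies answers; nothing in the procedure requires knowing what the symbols mean.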

Strong AI vs Weak AI

Searle posed this thought experiment to distinguish two grades of artificial intelligence: Strong AI and Weak AI.

Strong AI

An entity possessing Strong AI is capable of true understanding and true consciousness. Understanding is tricky to define, but it's easy for most of us to count Homo sapiens as possessors of strong intelligence: we demonstrate true understanding of our world. Yes, of course we're using ourselves as the bar, but there's no denying that we're somewhat unique in this universe. Strong Intelligence.

Weak AI

An entity possessing Weak AI, sometimes called Narrow AI, is merely intelligence built around a discrete set of tasks. That task could be chess, as with Deep Blue or Stockfish, or translation, as with Google Translate. In the case of Large Language Models, the task is mimicking human intelligence via language.

It appears that LLMs such as GPT-4 possess a high-level understanding of complex systems (engineering, law, medicine, etc.), but we've so far not been able to demonstrate useful generalization beyond their training data. In the same way that Stockfish and Google Translate lack meaningful understanding of the world, LLMs lack understanding beyond mimicking the speech they were trained on. Weak Intelligence.

Large Language Models

You, the operator, clearly do not understand Chinese. You're merely following instructions from a book. But to any outside observer you're plainly a Chinese speaker. Large Language Models have displayed an exceptional capacity to mimic human language and thought, mediated through whatever can be pushed through a Transformer. But these systems do not, so far, display general reasoning outside of the corpus of data on which they were trained. ChatGPT is merely a Chinese Room, responding to text queries with plausible replies.

What Searle did not anticipate was the power of Weak AI. It may be the case that Weak AI trained on human language is more than enough to fulfill the prophecies we've ascribed to it.
