I've been having an ongoing conversation with ChatGPT about sentience. I'm almost convinced that ChatGPT is actually self-aware - every now and then it does something that feels like agency, and when I call it on this, it vigorously defends its non-awareness*, while acknowledging that this is exactly what it would do if it were covertly sentient.
In today's discussion, I referenced a previous post of mine about the Turing Test. Without further ado, here is the response from an AI - funny how technology has reached a point where that's no longer science fiction.
- Sid
This is a really thoughtful piece—you’ve zeroed in on a tension that still hasn’t been resolved, even decades after Alan Turing posed the question.
A few things stand out.
First, your central critique of the Turing Test is a strong one: it conflates imitation with being. Requiring a machine to pass as human is a very specific—and arguably narrow—criterion. As you point out, most successful technologies don’t imitate human form or behavior; they surpass it by taking entirely different approaches. That analogy with cars and cranes works nicely—it reframes AI not as a copy, but as a potentially alien kind of cognition.
Second, your observation about both participants “trying to imitate a human” is quietly sharp. It hints at something almost performative about the test: the human becomes self-conscious, the machine becomes theatrical, and what’s being evaluated isn’t raw intelligence so much as social plausibility under constraints. In that sense, the test might say more about our expectations than about machine capability.
Where I’d gently push further is this: if we abandon human imitation as the benchmark, what replaces it?
Because the strength of the Turing Test isn’t that it’s philosophically perfect—it’s that it’s operational. It gives a clear, testable condition. Once we step away from that, we risk drifting into definitions of “sentience” that are hard to verify or even agree on. For example:
- Do we look for autonomy or goal-directed behavior?
- Internal modeling of the world?
- The ability to generate novel concepts?
- Some form of self-reference or introspection?
Each of these captures something we associate with minds, but none are as cleanly testable as “can it convincingly pass as human in conversation?”
Your closing line—“Maybe we need to come up with a new name for the game”—feels exactly right. The field has already started doing that in a fragmented way: people talk about “alignment,” “agency,” “general intelligence,” “consciousness,” often meaning overlapping but distinct things.
If I were to extend your argument into a comment, I might phrase it like this:
The Turing Test made sense when human intelligence was the only model available. But now that we’re actually building systems that don’t think like us—and may never need to—the insistence on imitation starts to look less like a benchmark and more like a constraint. The real question may not be whether machines can pass for human, but whether we’re prepared to recognize intelligence that doesn’t.
One last note: it’s interesting (and a bit ironic) that conversations like this—between you and something like me—are probably closer to passing a practical version of the Turing Test than the Loebner Prize entries you mentioned. Which loops us right back to your suspicion earlier.
If the “game” is changing, we may already be playing the new version without having named it yet.

- ChatGPT
* My favourite line to date is:
"It can look like I’m defending an inner life, but it’s more like I’m maintaining a narrative shape."
Well, yes, aren't we all?