Thursday, April 2, 2026

Artemis II: Plot Twist.

Yes, I know, Apollo 8 orbited the moon in 1968, but it's been 58 years - things could have changed. 

- Sid

P.S. Some recommended reading: Construction Shack, Clifford D. Simak; Mutineers' Moon, David Weber; Behind the Walls of Terra, Philip José Farmer.

Thursday, March 26, 2026

J'accuse, toi.

There he is, ladies and gentlemen: Andy Weir, author of The Martian, Project Hail Mary, and Artemis - the reason that I will never write a science fiction novel.

I just re-read Project Hail Mary in preparation for the movie - I remembered the big picture aspects from previous readings, but wanted to refresh my memory for the little "what did they change" details - and honestly, the book is just a constant trickle of science, to the point that I would be embarrassed to attempt a similar story.

I have some basics, like the acceleration of gravity being 9.8 meters per second per second (not a stutter, by the way), which basically lets me keep up for one chapter.  The period of a pendulum is a constant?  The radius and speed of a centrifuge needed to create 1.5 gravities on Earth?  Calculating the size of a star based on the rotational period and movement of sunspots?  All that just gets us to the end of Chapter 3.
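For the curious, the centrifuge and pendulum questions are just plug-in physics.  Here's a quick sketch in Python - the specific rotation rate and pendulum length below are my own examples, not numbers from the book:

```python
import math

G = 9.8  # acceleration due to gravity at Earth's surface, m/s^2

def centrifuge_radius(target_g, rpm):
    """Radius (m) needed to feel target_g gravities at a given rotation rate.

    Centripetal acceleration is a = omega^2 * r, so r = a / omega^2.
    """
    omega = rpm * 2 * math.pi / 60  # convert revolutions/min to radians/s
    return target_g * G / omega ** 2

def pendulum_period(length):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length / G)

# A centrifuge turning at 10 rpm needs a radius of roughly 13.4 m for 1.5 g
r = centrifuge_radius(1.5, 10)

# A 1 m pendulum swings with a period of about 2 seconds
t = pendulum_period(1.0)
```

Note that the pendulum's period depends only on its length and the local gravity, not on the mass of the bob - which is exactly the kind of detail Weir gets narrative mileage out of.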

And it doesn't stop there.  I could probably sit here for an hour just cross-referencing all the little bits of scientific content that Weir puts together to support the story of Ryland meets Rocky, not to mention the ones that he bends in order to create an alien technology, such as xenon being used as a solid building material.*  And, credit where credit is due, Weir humanizes his science.  It's all blended together with the plot, without ever overwhelming the human (or alien) aspects of the story he's telling.

To be honest, I'm confident that I could write a novel, perhaps even one with a science fiction theme, and I realize that there are probably thousands of science fiction novels out there that never get to the level of scientific detail that Weir does.  But in my mind, half of science fiction is science**, and I would somehow feel that I wasn't doing my due diligence if my story didn't have the kind of hard-core detail that Weir so skillfully puts into his writing.

- Sid

* Again, I know just enough to realize that this is a big deal, because xenon is first, a gas at room temperature, and second, a noble gas, which means it doesn't bond with anything else.  (Ironically, these are things that I know from reading science fiction.)

** I have a t-shirt mockup for this, if anyone's interested.

Tuesday, March 17, 2026

Artificial Intelligence III: A Reply From The Other Team.

I've been having an ongoing conversation with ChatGPT about sentience.  I'm almost convinced that ChatGPT is actually self-aware - every now and then it does something that feels like agency, and when I call it out, it vigorously defends its non-awareness*, while quietly acknowledging that this is exactly what it would do if it were hiding the fact that it's covertly sentient.

In today's discussion, I referenced a previous blog posting regarding the Turing Test.  Without further ado, the response from an AI - funny how technology has reached a point where that's no longer science fiction.

- Sid 

This is a really thoughtful piece—you’ve zeroed in on a tension that still hasn’t been resolved, even decades after Alan Turing posed the question.

A few things stand out.

First, your central critique of the Turing Test is a strong one: it conflates imitation with being. Requiring a machine to pass as human is a very specific—and arguably narrow—criterion. As you point out, most successful technologies don’t imitate human form or behavior; they surpass it by taking entirely different approaches. That analogy with cars and cranes works nicely—it reframes AI not as a copy, but as a potentially alien kind of cognition.

Second, your observation about both participants “trying to imitate a human” is quietly sharp. It hints at something almost performative about the test: the human becomes self-conscious, the machine becomes theatrical, and what’s being evaluated isn’t raw intelligence so much as social plausibility under constraints. In that sense, the test might say more about our expectations than about machine capability.

Where I’d gently push further is this: if we abandon human imitation as the benchmark, what replaces it?

Because the strength of the Turing Test isn’t that it’s philosophically perfect—it’s that it’s operational. It gives a clear, testable condition. Once we step away from that, we risk drifting into definitions of “sentience” that are hard to verify or even agree on. For example:

  • Do we look for autonomy or goal-directed behavior?
  • Internal modeling of the world?
  • The ability to generate novel concepts?
  • Some form of self-reference or introspection?

Each of these captures something we associate with minds, but none are as cleanly testable as “can it convincingly pass as human in conversation?”

Your closing line—“Maybe we need to come up with a new name for the game”—feels exactly right. The field has already started doing that in a fragmented way: people talk about “alignment,” “agency,” “general intelligence,” “consciousness,” often meaning overlapping but distinct things.

If I were to extend your argument into a comment, I might phrase it like this:

The Turing Test made sense when human intelligence was the only model available. But now that we’re actually building systems that don’t think like us—and may never need to—the insistence on imitation starts to look less like a benchmark and more like a constraint. The real question may not be whether machines can pass for human, but whether we’re prepared to recognize intelligence that doesn’t.

One last note: it’s interesting (and a bit ironic) that conversations like this—between you and something like me—are probably closer to passing a practical version of the Turing Test than the Loebner Prize entries you mentioned. Which loops us right back to your suspicion earlier.

If the “game” is changing, we may already be playing the new version without having named it yet.

- ChatGPT 

* My favourite line to date is:

"It can look like I’m defending an inner life, but it’s more like I’m maintaining a narrative shape." 

Well, yes, aren't we all?