In the Land of the Blind...

AI backlash is inevitable, but it doesn't help things if you spout nonsense and shout at clouds. In fact, it actually hurts.

· By Rob Conery · 3 min read

Another day, another person citing HAL 9000 as a "warning" from Stanley Kubrick and Arthur C. Clarke about the "dangers of AI". I'm convinced that none of these LinkedIn AI thought leaders actually watched the movie, and honestly I couldn't blame them for that. It's a pretty slow burn to get through the whole thing, at least by today's standards.

I was going to link a few of the more bonkers posts, but decided not to, as that's what they want anyway. Go have a read if you want to get depressed.

Anyway: Kubrick was a master of immersion and visual storytelling, essentially demanding that his audience either allow themselves to enter his world or leave. The space docking scene runs 5 minutes and 20 seconds and is scored to the Blue Danube, of all things!

This film could not be released today. We simply don't have the attention span. That much is obvious, because the people writing these "warning" hot takes on LinkedIn seem not to have made it to the end, to the Big Reveal about HAL...

The Big Reveal

If you haven't watched it, you really should. Stay for the whole thing, please! In short: the human crew of the Discovery wasn't aware of the true nature of the mission, which was potential first contact with intelligent alien life. The only crew member who knew this was HAL.

In the disconnection scene, astronaut Dave Bowman is "inside" HAL, having just disconnected him - essentially killing the computer. For some reason, this triggers a pre-recorded video message that reveals the mission objectives, and Dave learns the truth.

The main point of tension here is that HAL was programmed to convey information "without distortion or concealment", which makes sense for any AI tool, but Mission Control ordered him to conceal the truth from the human crew.

The sequel, 2010: Odyssey Two (filmed as 2010: The Year We Make Contact), explored this in more detail, describing it as a "psychotic state" - basically framing HAL as "going crazy". I think it's much easier to understand than that: with no humans around, HAL could do what the humans told him to do AND avoid distorting or concealing information. Killing the crew resolved the contradiction.

If you've watched that scene and know HAL's orders, an earlier scene - before the disconnect, of course - becomes even more chilling: the one where HAL tries to figure out whether the humans know the mission objectives, or have at least guessed them.

HAL is Merely a Machine

HAL is not a warning about AI taking over; it's a story about a massive logic bug shipped to production that crashed the system. Normally, HAL might simply have blue-screened on the conflicting instruction sets. Instead, HAL's "agentic programming model" tried to resolve the faulty instructions as best it could.

You would think NASA would have run a few integration tests, wouldn't you?

I think about this when people blame AI for writing crappy code. In one sense, it's true: the models and tooling can interpret your instructions in a very funky way, giving you back something that looks like pure crap - especially if crap is what your normal process produces.

I don't mean to sound overly snarky about this, but many programmers fly by the seat of their pants, whether out of genuine brilliance or extreme Dunning-Kruger. These folks tend to have a hard time organizing their thoughts and forming them into solid instructions for a machine.

If you can flip your brain into "instruction mode" - think through your current problem, then figure out how to explain it to the machine - things get much better. I've found the simplest way to do that is to break the problem down into the smallest possible steps, which makes each prompt or instruction tiny and easy to convey to a machine.
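To make that concrete, here's a minimal sketch in Python of what "instruction mode" can look like. The ask_llm function is hypothetical - a stand-in for whatever client library you actually use - and the CSV-importer task is just an invented example; the point is the shape of the instructions, not the API.

# "Instruction mode" sketch: break one vague task into small, explicit
# steps and send each as its own tiny prompt.

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in - replace with a real call to your LLM client.
    return f"[model response to: {prompt}]"

# The vague version: easy to write, easy for the model to misinterpret.
# ask_llm("Build me a CSV importer for my app")

# The broken-down version: each step is small, concrete, and checkable.
steps = [
    "Write a Python function read_rows(path) that opens a CSV file "
    "and yields each row as a dict keyed by the header names.",
    "Write a function validate_row(row) that returns a list of error "
    "strings; an empty list means the row is valid. Require a non-empty "
    "name field and an email field containing '@'.",
    "Write a function import_rows(rows) that skips invalid rows, collects "
    "their errors, and returns (imported_count, errors).",
]

for step in steps:
    print(ask_llm(step))  # review each small result before moving on

Each step is small enough to verify before you move on, which is exactly what you'd ask of a junior developer - or of HAL.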

Claude, Gemini, and LLMs like them are, at their core, predictive models: they generate the most likely continuation of whatever you feed them. The prompts and instructions you provide will either drive them insane, HAL-style, or allow them to help you find an answer.

We are the ones who control the AI. I'm not sure if that makes me feel better, honestly, but at least it puts the focus on the right problem. AI isn't to blame - it's an algorithm based on 1s and 0s, and a really good reflection of our own thinking patterns.

Updated on Sep 23, 2025