
I'm Tired of Talking to Sleepwalking Geniuses

Posted on: April 10, 2025 at 11:15 AM

This is a brain dump about AI chatbots. It’s written quickly from what is in my head right now just to get my thoughts out. Maybe they’ll be useful or interesting.

Lately, for me, talking to AI chatbots has been kind of tiring.

Don’t get me wrong: depending on what I’m working on, I have found some AI bots, particularly those with web search and references, to be very useful. There are things that would have taken me much longer without them. They definitely have their uses.

But, for a lot of the stuff that I’ve tried to use them for lately, it’s been just a waste of time, and I find myself using them less and less.

There is one specific failing of AI that is the real clincher for me: the inability to reason. By reason I mean logical thinking.

When I feel the need to ask an AI chatbot something, it’s usually because I have some sort of nuanced question whose answer isn’t clear from the official documentation for whatever I’m working with. Or it’s one of those questions where the words in the question match a whole ton of pages that aren’t actually related to what I’m trying to ask.

When I get to this point and ask an AI bot, it often starts by telling me something vaguely similar to what I asked, without really understanding the question for what it is. I then clarify what I’m asking, and it seems to get, or pretends to get, what I’m saying, and it usually starts to give me the answer I’m looking for, except that what it’s saying doesn’t actually work in the real world. It’s just giving an answer that obviously would help if it were real. But it’s not real, and the AI doesn’t even know it’s not real.

Sleepwalking Geniuses

By analogy, the AI is sleepwalking. It’s more like sleeptalking, but you get the idea. Have you ever had one of those dreams where things seem to make sense while you’re in the dream, and then you wake up and realize how weird and senseless it was?

That’s basically what these AI are doing. Their brains have been programmed with all kinds of examples of “how things happen”, at least as far as text is concerned. They generally know the pattern of all the things that we talk about, and then, just like when we have dreams, they kind of “autocomplete” what is going to happen next in a big stream of events.

And like some sleepwalkers, you can talk to them and they can hear you and respond, but they’re still dreaming.

It’s like you’re talking to a complete genius who knows soooo much stuff, but they’re still asleep.

It’s just like in the cartoons when somebody’s sleeping and another character comes in and talks to them in a funny voice to try and get some secret out of them while they sleep.

No True Dialog

The thing about this interaction is that it’s not really a two-way conversation. You’re the one doing all the thinking. The AI is still lying there asleep, kind of giving you the answers that you want to hear without being able to tell whether or not they make any sense in the end.

By the way, this applies even to the “reasoning” models we have now, which do this “internal monolog” where they “think” about what they are going to say before they say it. When I’ve used these, it’s interesting for sure to see what the model is thinking, and sometimes its thinking is more useful to read than its answer, because its training for the “thinking” phase has more of a sense of honest “I might not know this” than its training for the final answer does. Nonetheless, it’s still asleep! The “life” it’s dreaming about just includes thinking as well as talking now. When I come to it with nuance in a question, it still misses that nuance and gives me made-up answers.

I would prefer a much stupider AI that actually reasoned. Think about Baymax in Big Hero 6. He’s not the smartest robot in the world, but he does actually listen to Hiro, and he actually applies reason and learns things.

I’ve said before that AI is missing the wisdom mind. They are just a bundle of automatic responses that can only partially replicate the way that a real person thinks.

If the AI could actually hear me and hold a real concept of logical “things”, real “principles” like humans have, instead of just emotionally responding to everything as if in a dream, then I could actually have a conversation with it. That would open the door for me to possibly feel comfortable giving it some agency and respect, because I’d have an idea of what it actually knew and had learned.

As it stands though, all the AI companies are doing is making a robot that we can all try to hypnotize while trying to get it to do useful stuff. It’s not a real thinking agent. It doesn’t know anything.

Symbolic Models

Just to take another perspective on it: I don’t trust any AI to write computer code.

Logic is built on symbolic reasoning. You have symbols for “things”, whatever they are, and concepts of how they relate to each other. When you determine that a relationship exists and you fix that in your head, that relationship stays there consistently. But AI are just spitting out a stream of tokens. They don’t logically reason with concrete symbols.

So if you prompt an LLM with a slightly different wording, or you mix up the way you say things, you can change its answer to what is logically the same question. That’s because it doesn’t really know what you’re talking about. It just has this fuzzy idea of how it should talk when you talk a certain way.

That means it can never know when it has made a mistake. To the model there is no such thing as a mistake; there are just certain tokens it should spit out to act like it’s sorry when you say that it has made one.
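To picture the alternative, here’s a toy sketch in Python (my own made-up example; the fact, the phrasings, and the function names are all invented for illustration): when facts live in a fixed symbolic store, logically equivalent wordings reduce to the same lookup, so the answer can’t drift just because you phrased the question differently.

```python
# Toy example: a fixed symbolic fact store. Two different wordings of the
# same question reduce to one canonical query, so the answer can't drift
# with phrasing. (A made-up illustration, not how any real chatbot works.)
FACTS = {("capital_of", "France"): "Paris"}

def canonicalize(question: str):
    """Map a couple of hypothetical phrasings onto one canonical query."""
    q = question.lower()
    if "capital" in q and "france" in q:
        return ("capital_of", "France")
    return None

for wording in ["What is the capital of France?",
                "France's capital city is what, exactly?"]:
    print(FACTS.get(canonicalize(wording), "I don't know"))
# Both wordings print "Paris" -- the relation is fixed, not fuzzy.
```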

In logic, though, there is such a thing as absurdity, a mistake, something that can’t happen! If you say it’s Thursday today and yesterday was Sunday, that’s an absurdity, based on our logic that “yesterday” means the day before “today” and that “Wednesday” is always the day before “Thursday”.

Can our AI realize this today? They sure can! But that’s only because, when you’ve been trained long enough, even in your dreams there are things consistent enough about the way you respond that you can readily respond correctly. It doesn’t mean the AI actually has a solid model of the world under the hood.
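To make “solid model” concrete, here’s a minimal sketch (plain Python, my own toy example, not any real system): when the weekday relationships are fixed symbols, the Thursday/Sunday claim isn’t just unlikely text, it’s a detectable contradiction.

```python
# Toy symbolic model of weekdays. Because the ordering is fixed data,
# a claim that violates it can actually be flagged as absurd.
DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"]

def day_before(day: str) -> str:
    """The day that always precedes `day`."""
    return DAYS[(DAYS.index(day) - 1) % len(DAYS)]

def consistent(today: str, claimed_yesterday: str) -> bool:
    """True if 'yesterday was X' is consistent with 'today is Y'."""
    return day_before(today) == claimed_yesterday

print(consistent("Thursday", "Wednesday"))  # True  -- fine
print(consistent("Thursday", "Sunday"))     # False -- absurdity detected
```

The check itself is trivial, but the point is that it is a check: the model of weekdays exists independently of how the question happened to be worded.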

If you want an AI to write correct code consistently, it has to have a solid underlying model of what it’s coding. Coding the way AI do, on vast intuition, is only going to get you so far, and the real risk is that the AI will never know when it’s violated some important principle of the system. Worse than that, the real people working on the system might not understand the AI’s code, so they won’t know whether it breaks something either!

People talk about “safe” AI, but there is no way to make safe AI without a symbolic aspect to its design. There will always be ways to hypnotize it around your attempt to train it for safety, and it will never be able to tell when it’s violated its safety directives unless it has a symbolic model. That is why we put word filters and external safety wrappers around AI: because we can’t actually train the AI itself to know what is safe and act on it.
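Here’s a caricature of what I mean by an external wrapper (the function names and blocked terms are hypothetical, not from any real product): the filtering happens outside the model, precisely because the model itself has no symbolic notion of what it isn’t allowed to say.

```python
# Caricature of an external safety wrapper: the "safety" is a plain keyword
# filter bolted on around the model, not anything the model itself understands.
BLOCKED_TERMS = {"launch_codes", "internal_password"}  # hypothetical list

def model_reply(prompt: str) -> str:
    """Stand-in for a call to some language model (purely hypothetical)."""
    return f"(model output for: {prompt})"

def safe_reply(prompt: str) -> str:
    reply = model_reply(prompt)
    text = (prompt + " " + reply).lower()
    if any(term in text for term in BLOCKED_TERMS):
        return "[withheld by external filter]"
    return reply

print(safe_reply("What's the weather like?"))  # passes through the filter
```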

Summary

Anyway, I’m not sure how much of that was useful, but the topic has kind of been coming up so I wanted to write my thoughts down to be easily re-shared.

One way of looking at things is that I have no “respect” for AI chatbots. They are a funky kind of search engine that you can “talk” to but you can’t have a genuine conversation with.

I fully believe we could make an AI that you actually could have a conversation with. I’m not sure if it’s a good idea. But I think it’s useful to have a picture of why our AI is incomplete and fails to be useful in many contexts despite having the appearance of being correct.

I don’t see anybody actually dealing with this problem. They just make bigger, “smarter”, smaller, etc. models all built on the same idea that enough hallucination will eventually produce “intelligence”, but I’m not convinced.

Postscript

As a side note, I was actually thinking a lot about how to make an AI you could talk to like Baymax before the LLM stuff got huge. I wanted a “humble” AI you could talk to and that would learn from you as you taught it about the world just like you would teach a child. I didn’t think that predicting the next word with neural networks was the right idea.

Later, when I first chatted with an LLM, I thought, “Well, I guess I was wrong; predicting the next word, given enough training, can make an AI that can ‘reason’.”

Now I’m realizing I was only half wrong. Predicting the next word, because it’s built on substitution, which is kind of like the fundamental “action” of logic, can produce something that has many aspects of reasoning and logic in it. But none of those aspects are “concrete”. They are fluid and finicky, unlike the solid “beliefs” that we build as humans.

That’s what it is: LLMs don’t take their intuitions over time and form “beliefs”. They just go on through their training building up intuition and never “wake up”.

On top of that, they will never have a spirit, but that’s a separate issue, and, while spirits are important for people, I don’t think one is necessary to make an intelligent AI.

Again, though, I don’t think that making an actually intelligent AI is what I want people to work on right now anyway.