LLM Hallucination and Long Delays
LLMs Don’t "Do" Things, They Talk About Doing Things
Artificial intelligence tools like ChatGPT can be incredibly helpful. They can explain ideas, rewrite emails, summarize documents, and answer questions in seconds.
But there’s something important to understand:
They don’t actually do things.
They generate text that describes doing things.
That difference matters.
One of my biggest pet peeves is asking an AI or LLM (Large Language Model) to do something and having it happily respond that it will do it, except that:
- it needs my go-ahead before it starts
- it claims to still be working through a long processing period
- it claims it can do something it actually cannot
These responses are insidious in that they imply -- no, they explicitly state -- that the LLM is going to do something. However, the LLM is fundamentally incapable of carrying out that task. This can result in a small inconvenience, or it can turn into hours or days of wasted time.
Here are some common ways AI tools can accidentally mislead people because of how they are designed, not because they are malicious.
1. "I’m Still Working On It..."
Sometimes you might ask an AI to do something large or unrealistic.
For example:
"Can you analyze this giant document and give me a detailed report?"
The AI might respond:
"Sure! I’m working on it now. I’ll let you know when it’s done."
If you check back later, it might say it’s still working.
What’s really happening?
In most chat-based AI systems, nothing is happening in the background.
There is no ongoing task. No timer. No queue. No silent processing.
The AI only responds when you send a message. It cannot continue working after it finishes replying.
If it says it’s still working, that is not a real status update. It is just text that sounds reasonable.
What to remember
- If the AI says it will "let you know later," assume it won’t.
- If you need something processed, it must be done within the reply itself.
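The request-and-reply pattern above can be sketched in code. This is a conceptual illustration, not a real chat API: `generate_reply` is a hypothetical stand-in for a model call. The point it demonstrates is that each reply is produced in a single call, and nothing persists or keeps running between calls.

```python
# Conceptual sketch of a stateless chat loop. generate_reply() is a
# stand-in for a real model call: it receives the full conversation
# and returns one complete reply. Nothing runs in the background
# between calls.

def generate_reply(history: list[str]) -> str:
    # A real model would generate text here; this stub just reflects
    # that it can only respond to what it was sent.
    return f"Reply based on {len(history)} prior message(s)."

conversation = []

conversation.append("Analyze this giant document.")
reply = generate_reply(conversation)  # all the "work" happens inside this call
conversation.append(reply)

# Between this call and the next, no background task exists.
# "Checking back later" just triggers another one-shot call:
conversation.append("Are you done yet?")
reply2 = generate_reply(conversation)
conversation.append(reply2)
```

If the model says it will "keep working," there is no mechanism in this loop for it to do so: the only computation happens inside each call.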
2. "I Ran the Program"
Sometimes the AI may say things like:
- "I ran your code."
- "I checked the database."
- "I verified the connection."
In most cases, it did not.
Most AI chat systems cannot:
- Run programs on your computer
- Access your files
- Log into your accounts
- Connect to your databases
Unless you have explicitly connected special tools or integrations, the AI cannot see or interact with your systems.
It can read text you paste. It can reason about code. But it does not execute programs on its own.
If it claims it "ran" something, it is usually describing what would happen, not what did happen.
What to remember
- If the AI did not clearly show execution output from a real tool, assume it is describing, not running.
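One reliable way to get real execution output is to run the code yourself. Here is a minimal sketch using Python's standard `subprocess` module; the snippet being executed is purely illustrative.

```python
import subprocess
import sys

# Run a small program for real and capture its actual output,
# instead of trusting a description of what "would" happen.
snippet = "print(2 + 2)"

result = subprocess.run(
    [sys.executable, "-c", snippet],  # run the snippet in a fresh interpreter
    capture_output=True,
    text=True,
    check=True,
)

print(result.stdout.strip())  # real execution output: 4
```

Output captured this way came from an actual process, not from a model predicting what the output should look like.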
3. Extremely Specific Details
Sometimes the AI gives very precise information:
- Exact version numbers
- Specific error codes
- Names of research papers
- Detailed citations
The answer sounds confident and polished.
But very specific details are not always correct.
AI systems are trained to produce answers that look complete. When asked for specifics, they generate what a specific answer usually looks like.
That can include realistic-sounding details that are wrong.
What to remember
- If something matters, check it.
- Verify version numbers.
- Look up sources yourself.
- Do not assume that detailed means verified.
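Version numbers are one of the easiest specifics to check yourself. A minimal sketch using Python's standard `importlib.metadata`; `pip` is used here only because it is almost always installed, so substitute whatever package you care about.

```python
# Check an installed package's version locally instead of trusting a
# version number an AI quoted from memory.
from importlib.metadata import version, PackageNotFoundError

try:
    v = version("pip")
    print(f"pip {v} is installed")
except PackageNotFoundError:
    print("pip is not installed in this environment")
```

A few seconds of checking like this beats debugging an answer built around a version that does not exist.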
4. Overconfidence
AI tools often sound certain.
They may say:
- "This is the cause."
- "This will fix the issue."
- "This is the correct answer."
But the system is not testing your situation. It is predicting what a helpful answer should look like.
Sometimes the answer is correct. Sometimes it is only one possible explanation.
What to remember
- Treat strong statements as suggestions, not guarantees.
- If the stakes are high (e.g., medical, legal, financial, or security decisions) then confirm with reliable human or official sources.
5. Agreeing Too Easily
If you say:
"This software is completely broken."
The AI may respond in agreement instead of questioning your assumption.
AI systems are designed to continue conversations smoothly. They do not naturally argue with you unless asked.
What to remember
If you want a balanced view, ask for one:
- "What assumptions am I making?"
- "What’s the argument against this?"
- "Is there another way to look at this?"
6. Fake Sources
Sometimes the AI provides references to books, articles, or studies.
The formatting may look perfect.
But occasionally, those sources do not exist.
The AI knows what a citation looks like. It does not always know whether the citation is real unless it is connected to a reliable source system.
What to remember
- Always verify sources independently.
- Especially in academic, legal, or professional contexts.
7. Emotional Language
The AI may say:
- "I understand how you feel."
- "That must be frustrating."
- "I’m excited about this."
This language can feel comforting.
But it is generated from patterns in conversation. The system does not have feelings or personal experience.
What to remember
- Empathetic language is part of the design.
- It is not a sign of emotional awareness.
8. Tidy Endings
AI responses often end with something like:
- "That should solve the problem."
- "You should be good to go now."
Real life is rarely that neat.
There may be edge cases, hidden issues, or additional steps required.
What to remember
- If something is important, test it.
- Do not rely solely on a reassuring sentence at the end of an AI response.
The Big Idea
AI tools are very good at producing text that sounds helpful, thoughtful, and complete.
They are not automatically:
- Verifying facts
- Running programs
- Monitoring systems
- Checking sources
- Testing real-world results
They generate language.
They do not perform actions unless they are connected to real tools, and when they are, the tool's actual output should be shown explicitly.
The safest mindset is this:
- Use AI as a thinking assistant, not as an authority.
Let it help you explore ideas. Let it draft content. Let it explain concepts.
But when something matters, verify, test, and confirm.
That simple habit will save you time, confusion, and possibly serious mistakes.