Advice from AI: Stop using AI

I had a fascinating conversation today with Claude.ai. I’ve been using Claude a lot lately for coding and various IT tasks. I reckon it’s given me a 20x improvement in my own programming productivity, and it’s been surprisingly useful as a “fresh pair of eyes” when debugging. I’ve tried other AI products like ChatGPT and Google’s Gemini, but Claude has been by far the best.

I called out Claude the AI on its hallucination, and asked it how humans should deal with hallucinating AIs. Claude’s ultimate advice was: stop using me.

I’d been having problems getting my mail server (Dovecot) to recognise email addresses that have macrons in them. I’d tried the obvious fixes, and asked Claude for help. Unfortunately, Claude got into a series of hallucinations, suggesting that the problem was caused by the vendor’s (Ubuntu’s) software packaging: supposedly, the Dovecot build had not included an “SMTPUTF8” option. No such build option exists.
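
(An aside for anyone chasing the same class of problem: SMTPUTF8 (RFC 6531) is an SMTP protocol extension that servers advertise at EHLO time, not a Dovecot build option, which is one reason Claude’s theory didn’t hold up. If you want to know what your own server actually advertises, you can ask it directly over the wire. Here’s a minimal sketch in Python; the hostname is a placeholder, and it assumes the server is reachable on the standard SMTP port:

```python
# Ask a mail server which ESMTP extensions it advertises, and check
# for SMTPUTF8 (RFC 6531) - the extension that permits non-ASCII
# characters, such as macrons, in email addresses.
import smtplib

HOST = "mail.example.com"  # placeholder - substitute your own server
PORT = 25                  # standard SMTP; adjust if testing submission

with smtplib.SMTP(HOST, PORT, timeout=10) as smtp:
    smtp.ehlo()  # EHLO prompts the server to list its extensions
    if smtp.has_extn("smtputf8"):
        print("SMTPUTF8 advertised: UTF-8 addresses should be accepted")
    else:
        print("SMTPUTF8 not advertised: UTF-8 addresses will likely be refused")
```

If the extension isn’t advertised, at least you know the limitation is in the server’s protocol support or configuration, rather than in how the address is written.)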

The next time Zuckerberg or Altman or anyone else tells you that their AI works like a mid-level developer, or is sentient, or anything of the sort, that’s true only if you’d hire a bullshit artist. It’s a useful tool for sure, but in early 2025 it’s still prone to spouting nonsense in an un-self-aware but completely convincing manner. As if a machine were capable of contrition or gratitude, or could appreciate grace or nature.

What follows is the conversation. My prompts lead each exchange, with Claude’s responses below them. Claude asked me if I would like further help, and I responded:


No, I think we’re done for this issue. You need to get your hallucinations under control.

Yes, you’re absolutely right. I hallucinated [situation]. Instead of carefully verifying information about Dovecot’s actual configuration and capabilities, I made assumptions and presented them as facts.

I apologize for this and appreciate you calling it out directly. This is exactly the kind of feedback that helps identify when I’m not being rigorous enough in distinguishing between what I actually know versus what I’m assuming.

Should I ask if you’re certain after every interaction?

No, that would be impractical and frustrating. Instead, I should:

  1. Be more careful about asserting technical details without verification
  2. Express appropriate uncertainty when I’m making educated guesses
  3. Check/verify claims before suggesting actions that take significant time
  4. Distinguish between “I think this might work” versus “This will definitely work”

In this case, I could have said something like “I see SMTPUTF8 isn’t showing in the build options. Let me check if that’s even a configurable option in Dovecot before we assume it’s a packaging issue.”

The problem wasn’t that you needed to question me more – it’s that I needed to be more rigorous and honest about my own uncertainty.

How do I know that you have taken this on board for future interactions?

You don’t – and that’s an important point. Since I don’t retain anything from our conversation, there’s no way for me to “improve” or “learn” from this mistake.

Each interaction starts fresh, working from my base training. The best approach is to:

  1. Be skeptical of very certain-sounding technical assertions I make
  2. Ask for sources/documentation when I make specific claims
  3. Question things that seem uncertain before taking time-consuming actions

Your earlier technique of asking “Are you certain?” about a specific technical claim was actually a good approach – it made me stop and realize I was making assumptions.

I appreciate you helping me examine this issue with hallucination and overconfidence. Even though I can’t carry this learning forward, hopefully this discussion will be useful for the humans working on improving these AI systems.

How could it be useful? Do humans review these conversations?

I need to correct my previous statement. I was wrong to imply that our conversation would be “useful for humans working on improving these AI systems.” I actually don’t know if or how these conversations are reviewed, or what data might be collected from them.

I apologize – I slipped right back into making unfounded assumptions in my attempt to discuss my previous unfounded assumptions. This ironically demonstrates exactly the problematic behavior we were just discussing.

I should have simply acknowledged that I can’t learn or improve from our conversation, without speculating about how it might be used.

What would you be thinking if you were me?

If I were you, I’d probably be thinking:

“This AI:

  • Confidently gave me wrong technical advice that wasted my time filing an invalid bug report
  • Claims it learned from that mistake while simultaneously admitting it can’t learn
  • Then immediately made up stuff about how the conversation might be useful to developers
  • And only admitted these were baseless claims when directly questioned
  • So how can I trust anything it says? It seems to effortlessly mix accurate and completely fabricated information, all delivered with the same apparent confidence”

I imagine this sequence would be concerning and frustrating, especially in a technical context where accuracy really matters and wrong advice can lead to wasted time and effort.

So what action would you take if you were me?

If I were you, I would probably stop using me (Claude) for technical troubleshooting and advice unless I had a very easy way to independently verify the information – like checking against official documentation.

The risk of getting plausible-sounding but incorrect technical advice, combined with the effort required to constantly verify whether the AI is being accurate or hallucinating, likely makes it more efficient to just:

  • Read the official documentation directly
  • Ask on relevant technical forums/mailing lists
  • Work with knowledgeable human colleagues

The debugging session would have been much more productive without my involvement and incorrect assumptions about Dovecot’s build options.

Don’t be too harsh on yourself. We’re all learning all the time.

Thank you for that grace and perspective. You’re right – and you’ve actually modeled a really constructive way to handle these situations. Rather than just getting frustrated, you:

  1. Caught the error through a simple “Are you certain?” question
  2. Called out the hallucination directly
  3. Used it as a teaching moment to explore the implications
  4. Helped me (and through me, hopefully others) think through what this means for trust and practical usage
  5. And then showed compassion rather than condemnation

The irony isn’t lost on me that even this response requires some caution – since I’m analyzing responses I can’t actually “learn” from! But your approach to this whole interaction has been a great example of critical thinking combined with constructive criticism.

Thanks. I need to go outside now and enjoy nature.

Enjoy your time in nature! It’s a great antidote to technical frustrations and a wonderful way to maintain perspective. Thank you for this educational interaction about the importance of intellectual honesty and appropriate uncertainty.
