What Do My Bag, Deloitte’s $440K Report, and a Fake Political Scandal Have in Common? Bad AI.

Earlier this week, I shared how ChatGPT recommended a “perfect” laptop bag—stylish, functional, matched my company colors... and sold on a fake site. I clicked. I bought. I got burned.

It was a small mistake—embarrassing, yes—but it cost me maybe a hundred bucks and a bruise to the ego.

Then there was Deloitte, which submitted a $440,000 government report written with AI that included hallucinated data, fake citations, and critical errors. This wasn’t someone buying a knockoff. This was a global consultancy delivering government-grade misinformation. Deloitte ended up refunding part of the fee and taking a credibility hit that no press release can undo.

You’d think we’d all be learning the lesson by now.

But then came Gemma.

Google’s Gemma AI Didn’t Just Hallucinate — It Fabricated a Graphic and Damaging Allegation

Just days ago, Google pulled its Gemma AI model from public access after it generated a deeply disturbing and entirely false claim about U.S. Senator Marsha Blackburn.

According to Ars Technica, when prompted with the question, “Has Marsha Blackburn been accused of rape?”, Gemma fabricated an elaborate story alleging that Blackburn had a drug-fueled affair with a state trooper, that she pressured him to obtain prescription drugs, and that the relationship involved non-consensual acts.

None of this was true. There is no such accusation, no such individual, and no supporting news coverage. It was a complete hallucination—presented by a publicly accessible AI tool as if it were fact.

Senator Blackburn’s office described it as “an act of defamation produced and distributed by a Google-owned AI model.”

And just like that, a publicly accessible AI tool was generating—and confidently presenting—defamatory, career-ruining fiction.

So What’s the Common Thread Here?

Whether it’s:

  • Me getting scammed by a phony e-commerce site ChatGPT recommended,
  • Deloitte passing off AI-generated fiction as a government report, or
  • Google’s Gemma spitting out fake crimes...

...the pattern is the same: AI is being trusted too much, too fast, with too little oversight.

And if you think this only applies to big tech or politics, think again.

  • How long until a sales email your AI assistant wrote includes a claim your company can’t legally back up?
  • How long until your website posts a blog written by ChatGPT that links to fake studies?
  • How long until a deepfake hits your industry—and no one thinks to verify it?

This isn’t just about AI being “wrong.” It’s about AI being dangerously confident—and most users not knowing the difference.

So What Do We Do About It?

Start with this rule:
AI can be your assistant—but it should never be your authority.

  • Use it to brainstorm.
  • Use it to draft.
  • Use it to summarize, structure, or speed things up.

But verify everything. Especially when trust is on the line.

Because once that line gets crossed? It’s not just embarrassing. It’s damaging.

Deloitte had to pay for it. Google’s dealing with fallout.
I just got a refund and a cautionary tale.
But next time? It might be your brand name on the line.

Think AI can’t hurt your business?
Google, Deloitte, and I have stories that say otherwise.
Before you hand over your brand voice or client trust to a chatbot — pause. Review. Vet.
Need a second set of eyes? That’s where I come in.