AI vs. Common Sense

Don’t get bot hurt. Get bot even.

Glue on pizza. Rocks for dinner. Google’s AI has had some interesting takes.

Last year, Google quietly rolled out its AI Overviews feature to U.S. users—a tool that summarizes search results using large language models. The idea? Save you a few clicks with a chatbot-style answer at the top of your search page.

The result?

🍕 Glue that cheese down (and eat some rocks)

One viral example told users to mix glue into their pizza sauce to keep cheese from sliding off. This suggestion originated from an 11-year-old Reddit joke, highlighting the AI's inability to discern satire from fact.

Another AI Overview recommended consuming rocks daily as a health hack, as reported by the Financial Times and many, many other news outlets.

🧪 The human cost of AI “efficiency”

AI is built to save time, but sometimes it skips over reality in the process.

Sure, glue-on-pizza and “eat rocks” are obvious red flags. But not all bot blunders come with warning labels. Some are quieter, sneakier and way more dangerous.

Blindly trusting AI-generated content doesn’t just lead to bad dinner ideas—it can lead to real harm. It’s a reminder that AI should augment human decision-making, not replace it.

🔎 Why did this happen?

Google's early AI Overviews rely on large language models that generate responses based on patterns in data, not an understanding of truth. This means they can regurgitate misinformation if it's prevalent online.

Google acknowledged these issues and has steadily improved the feature over the past year. So much so that, according to The Verge, “Google is already preparing for a world where all search is AI search.”

🤖 Final Bot thought

AI tools are powerful but not infallible. They still lack the nuanced understanding that humans possess. As we integrate AI into more aspects of our lives, maintaining a critical eye is essential. Sometimes, common sense is the best filter.

🗣️ Bot Talk: This just in…
Your news anchor may be a bot

Move over, Ron Burgundy—there’s a new face in the newsroom. And it doesn’t blink.

AI news anchors are officially a thing. From India to Hawaii, broadcasters have rolled out digital talking heads with names like Lisa, Sana and James. They’re always on time, never flub their lines and don’t need hair and makeup. Honestly? Dream employee.

Except… they’re not great at nuance. Or tone. Or, you know, being human.

India’s award-winning Sana (India Today Group) and Lisa (Odisha TV) can read news around the clock—but sometimes they hallucinate facts, struggle with names or deliver tragic headlines like they're reading a lunch order. Hawaii's Garden Island newspaper launched its own AI anchors earlier this year—only to quietly fire them weeks later. 

AI anchors might look sharp, but when it comes to trust, empathy and not sounding like a haunted GPS… they’ve got some upgrades to make.

So yes, AI anchors can report the news. But should they? We’ll let you—an actual human—use your common sense on that one.

🚀 Coming up next week …

It’s graduation season—but are students walking across the stage or just copy-pasting their way there?

A human Bot Hurt reader wrote in with a sharp question: If students are using AI to do their lessons, homework and every group project under the sun… are they actually learning anything?