
Most of this shouldn’t ship

AI made everything publishable. That’s the problem.

Don’t get bot hurt. Get bot even.

Drop in a prompt. Get back a clean draft. Try a different angle, get another one. Within minutes there are three polished versions of an email, five headlines that all technically work, a slide deck that somehow makes sense on the first pass.

None of them are wrong.

Congratulations. You've never been more productive. You've also never been less sure about any of it.

AI raised the floor on everything — rough drafts got better, obvious mistakes got rarer, and work that used to take an hour now takes eight minutes. What it didn’t raise was the ceiling. Turns out the ceiling was doing more work than anyone noticed.

The friction is gone too. The versions that fell apart. The paragraph that needed four rewrites before it said what it actually meant.

That process was slow and occasionally miserable — but it was also the signal. It told you when something was landing and when you were just filling space with sentences that sounded good in the moment.

Now everything sounds good in the moment.

The question used to be: Is this right?

Now that almost everything is technically right, a harder question shows up uninvited: Is this worth saying at all? AI doesn't have an answer for that. It will keep generating — more options, more versions, more things that could ship — cheerfully indifferent to whether any of it deserves to exist. It has no skin in the game. You do.

Readers are already noticing, even if they can't articulate why.

The Nieman Lab, Harvard University’s journalism think tank, flagged it late last year: in 2026, AI-written content is expected to outpace human-produced work not just in the spam corners of the web, but across the mainstream channels where people actually pay attention.

According to global agency Billion Dollar Boy, consumer preference for AI-generated creator content has dropped to 26%, down from 60% just three years ago.

Authenticity, it turns out, isn’t a soft concept. It's a competitive advantage.

The truck is full. Most of it should stay on the curb.

Bot Tips: The out-loud test

Before sending anything AI helped write, read it out loud. Full sentences. No skimming.

  • If a phrase causes a stumble, it probably isn't yours.

  • If the ending trails off, there wasn't a real point yet.

  • If it sounds like something a reasonably well-prompted bot could have written at 2 a.m. on a Tuesday — it did.

Good is table stakes now. The bar is: does this need to exist, and are you the one who needed to say it?

Final Bot Thought

AI made it easy to get to good enough. Deciding what deserves to exist? Still on you.

Bot Talk: The meter is running

At networking events and in developer circles, there’s a new word showing up in AI conversations: tokens.

They’re how tools like Anthropic’s Claude and OpenAI’s ChatGPT measure usage — every prompt in, every response out. For most of the past two years, nobody paid much attention.

That's changing.

As adoption picks up, so does usage. More prompts. More drafts. More iterations. Companies are starting to track tokens like any other resource, with dashboards, quotas and internal benchmarks. Some teams even have a name for pushing it: tokenmaxxing.
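
The bookkeeping above is simple enough to sketch. Here's a minimal, hypothetical version of per-team token budgeting in Python — the class names and quota numbers are illustrative, not any vendor's actual API, and the ~4-characters-per-token figure is a rough rule of thumb for English text, not a real tokenizer:

```python
# A rough sketch of token budgeting. Everything here is illustrative:
# real providers meter with model-specific tokenizers, not character counts.

def estimate_tokens(text: str) -> int:
    """Approximate token count using the ~4 chars/token heuristic."""
    return max(1, len(text) // 4)

class TokenBudget:
    """Tracks usage against a quota, like the dashboards teams are building."""

    def __init__(self, monthly_quota: int):
        self.quota = monthly_quota
        self.used = 0

    def record(self, prompt: str, response: str) -> int:
        """Meter both directions: every prompt in, every response out."""
        cost = estimate_tokens(prompt) + estimate_tokens(response)
        self.used += cost
        return cost

    @property
    def remaining(self) -> int:
        return max(0, self.quota - self.used)

budget = TokenBudget(monthly_quota=1_000_000)
budget.record("Draft a short launch email.", "Subject: We're live!")
print(budget.remaining)  # the meter ticks down with every round trip
```

Once usage is visible like this, the behavioral shift follows on its own: that fifth variation has a number attached to it.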

The shift is simple: AI used to feel unlimited. Now it's metered. And once something is metered, behavior follows. You think twice before running that fifth variation. That instinct is worth trusting.

Tokens aren’t just a pricing model. They’re a forcing function.

The meter is running. Prompts are cheap. The pileup isn’t.

🚀 Coming up next week …

Everyone’s using it. Nobody’s saying so. AI rewrote the email, polished the résumé, and softened the rejection.

Next week, we explore why everyone quietly moved on as if the assist never happened — and what that silence is actually saying.

Don’t get bot hurt. Get bot even.