The Problem With AI Hype (From Someone Who Actually Uses It)

I use AI tools every day to run my businesses. The hype is real and overblown at the same time. Here's what's actually useful and what's noise.

ai business technology

Key Points

  • AI is genuinely useful for specific tasks (writing, coding, analysis) but most “AI-powered” products are just wrappers around existing models with minimal differentiation
  • The gap between what works in demos and what works in real production environments is massive—and nobody talks about it
  • The timeline for transformative change is longer than the hype cycle wants you to believe, but that doesn’t mean AI isn’t important

I use AI tools almost every single day. ChatGPT, Claude, some proprietary stuff at Rotate. I’m not the type to dismiss technology or pretend transformative innovations aren’t happening. But I’m also not going to pretend that everything slapped with “AI-powered” in the tagline actually works. There’s a massive gap between the demos and reality, and that gap is where most of the hype lives.

The frustration isn’t with AI itself. It’s with how we talk about it. It’s with the venture capitalists funding 50 companies to do the exact same thing. It’s with founders shipping products that don’t actually solve problems, just ride the wave. And it’s with smart people making serious business decisions based on what they read in a Twitter thread instead of actually testing the technology themselves.

Let me be specific about what I’ve found useful and what’s been disappointingly overblown.

What’s Actually Working

ChatGPT came out just over a year ago, and GPT-4 landed about nine months ago. In that time, I’ve found legitimate use cases in my businesses that move the needle.

Writing assistance is real. I use Claude to help structure essays, catch places where I’m being unclear, and sometimes generate mediocre first drafts that I rebuild from scratch. Is it saving me hundreds of hours? No. Is it making certain writing tasks 20-30% faster? Yes. The catch is that it works best when you know what you’re trying to say already. You can’t use it to figure out your thoughts—you use it to sharpen thoughts you already have. That’s a much narrower use case than “AI writes all your content now.”

Code generation is genuinely useful, especially for boilerplate and patterns you’ve written a hundred times. I use it regularly to scaffold out basic React components, write test cases, and debug weird edge cases. But here’s where the hype breaks: it’s good at writing isolated code. Put it in a system with dozens of dependencies and weird legacy decisions, and it struggles. You end up spending almost as much time fixing what it generated as you would writing it from scratch. For me, it’s maybe a 30-40% productivity boost on certain types of coding work. Not the ten-times multiplier people talk about.
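To make "isolated code" concrete, here's the kind of self-contained, pattern-heavy snippet these tools scaffold reliably — a tiny helper plus the repetitive test cases. This is an illustrative sketch, not code from any of my projects; the function name and behavior are hypothetical:

```python
# Illustrative example of isolated boilerplate an AI assistant handles well.
# No dependencies, no legacy context -- exactly the easy case.

def slugify(title: str) -> str:
    """Turn a post title into a URL slug, e.g. 'AI Hype!' -> 'ai-hype'."""
    # Keep letters, digits, and spaces; drop punctuation; collapse whitespace.
    cleaned = "".join(c if c.isalnum() or c == " " else "" for c in title.lower())
    return "-".join(cleaned.split())

# The repetitive test cases -- the part worth delegating to a tool.
assert slugify("AI Hype!") == "ai-hype"
assert slugify("  What Works   and What Does Not ") == "what-works-and-what-does-not"
assert slugify("") == ""
```

Drop the same request into a codebase with its own slug conventions, shared utilities, and legacy edge cases, and the generated version usually needs real rework.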

Data analysis and spreadsheet work is where I’ve seen the biggest wins. Dumping messy data into Claude and asking it to find patterns, suggest cleaning approaches, or write SQL queries—that actually saves time. This might be because I’m asking it to do things where I’d otherwise spend an hour googling documentation. The AI skips that research tax.
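For context, the SQL work in question is usually mundane — normalize a messy column, then aggregate. Here's a hedged sketch of the kind of query an assistant drafts in seconds, using Python's stdlib sqlite3 with made-up table and column names:

```python
import sqlite3

# Hypothetical messy export: inconsistent casing and stray whitespace
# in a category column -- typical of real spreadsheet dumps.
rows = [("  Widgets ", 120), ("widgets", 80), ("GADGETS", 100), ("gadgets ", 10)]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (category TEXT, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?)", rows)

# The query I'd otherwise spend time looking up: clean, group, aggregate.
query = """
    SELECT TRIM(LOWER(category)) AS category, SUM(amount) AS total
    FROM sales
    GROUP BY TRIM(LOWER(category))
    ORDER BY total DESC
"""
for category, total in con.execute(query):
    print(category, total)  # widgets 200, then gadgets 110
```

Nothing here is hard. The win is skipping the hour of documentation-hunting, not replacing the analyst.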

Brainstorming and ideation works when you know how to use it. Not for coming up with your core business strategy, but for rubber-ducking through problems, exploring angles you haven’t considered, or generating a bunch of mediocre ideas that spark a good one. Again, narrow use case. Powerful when you deploy it right.

The Hype That Hasn’t Materialized

“AI will replace your job” is the headline that gets clicks, but it’s not what you hear from anyone who has actually tried to use AI to replace a human doing real work. Yes, AI will displace certain jobs. But the timescale is probably longer than the panic-mongers suggest, and the transition will be messier and more human-dependent than a simple “robots took my job” narrative.

Fully autonomous agents that you just point at a problem and come back to a solved issue? I’ve tested the best tools available right now. They break constantly. They get stuck in loops. They hallucinate data. I watch demos where someone shows an agent automating an entire workflow, and my first thought is always: what happens when that workflow encounters the 5% of cases it wasn’t trained for? Demos don’t show that part. Real work is 95% edge cases.

The AGI timeline stuff is pure speculation dressed up as expertise. Sure, maybe we’re headed toward artificial general intelligence. Maybe we’re closer than we think. But the people most confidently predicting timelines are usually the people with something to gain financially from that belief. I’m skeptical of certainty here.

And the “AI will write all your marketing copy” promise? It can draft something. What it writes is generic and hits all the expected notes without hitting any of the notes that actually resonate. Good marketing copy has a voice. It has conviction. It takes a stance. Right now, AI-generated copy mostly just sounds like AI-generated copy. You can train models to be better at it, but at that point you’re putting in as much work as you would writing the thing yourself.

The Real Problem: The Demo-Reality Gap

Here’s what bothers me most. A founder spends two weeks building a slick demo where their AI tool solves some problem beautifully. They show it on stage. It works perfectly. The internet loses its mind. VCs throw money at them. Then real customers try to use it in production with real data and weird edge cases and infrastructure constraints, and suddenly it doesn’t work so well.

I’m not talking about theoretical problems. I’m talking about tools I’ve paid for and tried to integrate into actual workflows. The demo is always better than the product. Always. And the time and cost to close that gap is usually way bigger than anyone admits upfront.

There’s also a structural incentive problem. It’s better for press coverage and funding if you oversell the near-term potential of your technology. Being accurate is boring. “We can do X really well and Y is harder than we thought” doesn’t move venture dollars. “AI changes everything” does. So everyone leans into the hype, and founders and investors end up making bad bets based on inflated expectations.

What Amara’s Law Gets Right

Amara’s Law says it well: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” That’s exactly what’s happening right now with AI. The short-run overestimation is creating a bubble of bad products and bad decisions. The long-run underestimation means real transformation is coming, just probably slower and weirder than anyone currently predicts.

Bill Gates made a similar point years ago: we overestimate the change that will happen in the next two years and underestimate the change that will happen in the next ten. That feels relevant here. Everyone’s acting like AI solves everything by 2025 or 2026. The actual impact will probably take longer and look different than expected. But it will be real.

Andrew Ng said something that stuck with me: the industrialization of AI is still in its infancy. We’re not yet at the point where the average company knows how to use these tools properly. We’re in the fever dream phase where everyone’s excited and throwing resources at it without a clear strategy. That’s actually a helpful frame. It suggests that the real winners aren’t the hype-riders—they’re the people who figure out how to actually deploy this stuff in boring, profitable ways.

How I Actually Think About This

I’m not anti-AI. I use it every day. I see its potential. I also see the gap between what’s real and what’s marketing. Here’s how I’ve learned to cut through the noise:

Test it yourself with real work before buying in. Don’t watch a demo or read a review. Spend an hour trying to solve one of your actual problems with the tool. If it doesn’t make your life measurably better, it doesn’t matter how clever the marketing is.

Be very suspicious of “AI-powered” as a core differentiator. If the main thing a product can say about itself is that it uses AI, it’s probably not that good. The best products that use AI don’t really talk about it—they just work better than alternatives.

Look at what practitioners actually use, not what gets the most press. The tools that stick around are usually quieter than the ones that make headlines.

Think longer-term than the hype cycle. Yes, AI is a big deal. No, probably not in the way that sounds good in a VC pitch. The boring, practical applications will end up mattering more than the moonshot stuff. That’s how technology usually works.

And if you’re considering a business decision based primarily on AI hype, slow down. Run the numbers. Test it. What are the actual costs and benefits? How does this compare to the status quo? The hype will still be there after you’ve done your homework.

The Real Timeline

I think we’re genuinely in a moment where AI is becoming more capable and more useful. The next few years will probably see some real productivity gains in certain industries and companies. Not everywhere—the distribution will be uneven. And not for the reasons that make for good headlines.

The companies that win with AI won’t be the ones that announce the biggest, most ambitious AI plans. They’ll be the boring ones that figure out how to integrate these tools into actual workflows in ways that save time or money or create something genuinely better. That’s less exciting than “AI will replace everyone” or “we’ve cracked AGI,” but it’s probably true.

The hype will eventually cool. Some of the companies getting funded right now will disappear. We’ll look back in five or ten years and realize we were both overestimating and underestimating at the same time—overestimating what was possible immediately, underestimating what became possible by the time we got there.

For now, my advice is simple: use the tools that actually work for your specific problems, ignore the noise, and be very skeptical of anyone who sounds too certain about timelines. The future with AI is real. But it’s not here yet, and it’s going to look weirder than anyone currently predicts.

If you’re trying to figure out how to actually integrate AI into your business, I’d recommend looking at what’s worked for other practitioners rather than waiting for the perfect framework. A lot of this is still being figured out as we go, and that’s okay. That’s how you learn.

For more on how I think about building things—including how AI fits into actual work—check out my thoughts on the integrator mindset and how I approach software decisions. And if you’re worried about where this all heads, maybe start here: AI won’t replace you is about something slightly different but related.