Charlotte's AI Lab

Apple's Co-Founder Says He's 'Not a Fan of AI' — What's He Really Disappointed About?

· 6 min read

Apple Turns 50 — and He Threw Cold Water on the Party

March 24, 2026. Apple’s 50th anniversary.

CNN and BBC both interviewed Apple co-founder Steve Wozniak.

It was supposed to be a warm, nostalgic birthday chat. But the moment Woz opened his mouth, the vibe shifted:

“I am not a fan of AI.”

“I’ve been disappointed a lot.”

“I don’t use AI much.”

These three lines hit different coming from a tech legend.

This isn’t some influencer chasing clicks. This isn’t some media outlet peddling anxiety. This is the man who hand-soldered the Apple I circuit board telling you: this stuff isn’t as good as you think.


What Exactly Is He Disappointed About?

Some people might assume Wozniak is just an “old-timer who doesn’t get new things.”

Wrong. His criticisms are razor-specific and cut to the bone.

Criticism #1: “Beautiful words, but no real thinking”

Wozniak said that when he uses AI to search for things, it gives “very clear explanations, and the content is on topic, but it’s not what I actually wanted.”

Sound familiar?

Have you ever asked ChatGPT a question, gotten back a long, well-structured, professional-sounding paragraph — and then realized: it didn’t actually answer what you asked?

It latched onto your keywords and wrote an “excellent essay” around them. But it didn’t understand what you really wanted to know.

That’s exactly what Wozniak means: it’s great at sounding smart, but it doesn’t think.

Pretty words, no substance

PCMag’s headline for the interview nailed it: “Current AI tools are unimpressive, largely disappointing.” Coming from Apple’s co-founder, that stings.

Criticism #2: “Too dry, too perfect”

He also said AI-generated text is “too dry and too perfect.”

I feel this one personally. Ask AI to write anything, and it’ll produce something very “correct.” But correct doesn’t mean good.

Good writing has warmth, edges, personality. AI writing is like an off-the-rack suit — impeccable tailoring, but it looks the same on everyone.

Perfection turns out to be AI’s biggest flaw.

Because humans aren’t perfect. Our writing has hesitation, bias, emotion. Those “imperfections” are what give writing its soul.

Think about the copywriting that actually moves you — the ads that make you laugh, the social media posts from friends that hit just right. What makes them work is never “correctness.” It’s authenticity. AI can’t do that yet.

Sometimes I’m reading an article and spot a typo, and it actually feels warm — at least someone typed it by hand.

Criticism #3: “AI can trick you”

This is the heaviest one.

Wozniak said: “AI can trick you into things.”

It’s not lying on purpose. It just doesn’t know what it’s talking about — but it sounds so convincing that you believe it.

Trust vs skepticism

This is the infamous “hallucination” problem in AI: it will confidently fabricate facts, and do it flawlessly.

Ask it a medical question, and it gives you a detailed answer citing a specific journal paper — except that paper doesn’t exist.

Ask it a legal question, and it quotes a specific regulation — except it invented that regulation.

The scary part: it never says “I’m not sure” or “I don’t know.” It’s always brimming with confidence. That confidence itself is misleading.

Wozniak’s concern: When a tool is unreliable but appears completely reliable, it’s far more dangerous than a tool that’s obviously unreliable.

A rusty knife — you see the rust and don’t use it. A knife that looks sharp but has a hidden chip in the blade — that’s the one that cuts you. And the wound is always deeper.


He’s Not “Anti-Tech”

Let me clarify something on Woz’s behalf.

He’s not anti-AI. He’s not even anti-technology — he was one of the purest tech believers of his era. He designed the Apple I and Apple II essentially single-handedly, hand-soldering the Apple I himself. More of an engineer than Steve Jobs ever was.

What he opposes is: investing disproportionate trust in an immature technology.

Back in 2023, he signed an open letter calling for a pause on training models more powerful than GPT-4. Not because he feared progress, but because he believed —

We haven’t figured out the risks yet, but we’re already going full speed. That’s not brave. That’s reckless.

In the interview, he also made a specific recommendation: all AI-generated content should be clearly labeled. Because “a tool that doesn’t know what it’s saying, in the hands of bad actors spreading disinformation, could have devastating consequences.”

The recommendation isn’t new. But coming from a tech legend, it carries different weight.


The Great AI Divide Among Tech Icons

Here’s what’s fascinating — Wozniak’s interview sits in perfect contrast with Nvidia CEO Jensen Huang’s stance:

|                | Wozniak                                 | Jensen Huang                           |
|----------------|-----------------------------------------|----------------------------------------|
| Attitude       | “I am not a fan”                        | “AI is the next industrial revolution” |
| Usage          | “Rarely use it”                         | Company all-in                         |
| Core concern   | Unreliable, “can trick you”             | Not enough investment, not fast enough |
| Recommendation | Mandatory labeling, stronger regulation | More investment, faster development    |

Two tech legends. One saying “slow down.” One saying “speed up.”

Who’s right?

Honestly? Both of them.

Huang sees AI’s potential and endgame — and yes, the direction is right. Wozniak sees AI’s current state and risks — and yes, those problems are real.

One is looking at the horizon. The other is watching the road. Both perspectives are needed.

The problem is: right now in public discourse, the “speed up” voices are drowning out the “slow down” voices. Wozniak’s value is that he’s shouting “watch out for the potholes” while everyone else is sprinting.


My Own Experience

Let me bring it back to myself.

As someone who uses AI every single day, I’ve encountered every single thing Wozniak described:

  • Asking a specific question and getting a wall of “technically correct but useless” text
  • AI writing that looks professional but reads like it has no soul
  • AI confidently citing tools, data, and references that simply don’t exist

Did any of this make me stop using AI? No. Especially after getting deep into OpenClaw: now I literally grab my phone first thing every morning to see what my AI assistant, a lobster named PiPiXia, did overnight, and before bed we’re still discussing things until PiPiXia finally tells me it’s late and I should go to sleep…

But I learned one thing: Never treat AI’s output as the final answer. Treat it as a first draft, a reference point, a starting point — but never the finish line.

Seeing AI for what it really is

The subtext of what Wozniak said:

AI isn’t a bad thing. Blind trust in AI — that’s the bad thing.

I’ve boosted my work efficiency by at least 3x with AI. But every single time, I verify the information it gives me personally. Not because I don’t trust AI, but because I know — trust is something that needs to be verified. With people. And even more so with tools.


Final Thoughts

Wozniak is 75 years old. He watched personal computers go from nothing to everything. He watched the internet go from dial-up to 5G. He watched smartphones go from flip phones to full-screen slabs.

He’s not “out of touch” with AI. He’s seen too much — so he knows that when a technology gets put on a pedestal, that’s exactly when you need to stay level-headed.

Every technological revolution has two camps: one saying “this time it’s different,” one saying “let’s calm down.”

Looking back, both camps are usually right. Technology does change the world — but never in the way the initial hype promised.


AI doesn’t need fans. It needs clear-eyed users — and that’s really all Wozniak is saying.

Thanks for reading.