Build Real Confidence With AI, Not False Certainty

I just wrapped up my latest EMBA class. I've been teaching it for six years, and the final work produced by each cohort has been solid, sometimes great. The capstone work of this last cohort reached a new level of excellence. Why? They thoughtfully partnered with AI in ways not available to previous classes. They deployed AI without sliding into mindless dependency. They thought and created first, then enriched the final product WITH their agentic partner.

When a person speaks with confidence and certainty, they've usually paid for it through years of training, a reputation on the line, the risk of being wrong in front of people who will remember. You trust them because they've earned the right to sound assured. With AI, certainty costs nothing. A model can deliver expert-sounding belief and bravado that engender false confidence, and it's seductive to accept an AI response without verifying its sources.

So, how might we partner with AI to build knowledge and creativity rather than false confidence and dependency? (Of course, I asked Claude for its input.)

Before AI responded, I gave some thought to what it might say. Claude framed it this way:

"Treat the AI as a thinking partner, not an oracle. The most effective users share thinking, test ideas, push back, and triangulate — they do not simply receive and accept. If the AI's output feels immediately, completely satisfying, that is often a signal to probe harder, not move faster.

Calibrate trust by domain and stakes. AI is strong at synthesis, drafting, and structured reasoning — weaker at nuanced judgment, real-time accuracy, and anything requiring genuine accountability. Always form a provisional view before prompting, then use AI to stress-test it.

Interrogate outputs, then verify. Ask the AI to argue the opposite, identify its uncertainty, and flag what it cannot know. Factual claims, data, and citations should be independently verified before you act on them. AI agents can be confidently wrong.

Watch for automation bias. Humans defer to systems that appear authoritative and fluent — and AI is extremely fluent. That fluency is not a proxy for accuracy or wisdom. Build in deliberate friction, especially when the output aligns a little too cleanly with what you hoped to hear.

Own the output and use AI as a sparring partner, not a ghostwriter. Your name and judgment are on everything AI helps you produce. The highest-value use is in the thinking process — generating alternatives, surfacing blind spots, challenging assumptions. That builds capability. Pure delegation creates dependency.

The underlying principle: the AI raises your floor significantly, but your ceiling is still determined by the quality of your own judgment, curiosity, and critical thinking. Keep investing in those."

Yes, AI reminds us to keep investing in our curiosity, critical thinking and creativity — working with AI as a sparring partner, NOT our ghostwriter.

Think Big, Start Small, Act Now,

- Lorne

Garrett’s View: It sort of reminds me of the early days of Wikipedia. You couldn't officially cite it as a source, but everyone pretty much defaulted to Wikipedia being correct enough. Whatever happened to actually scouring a real encyclopedia or cross-checking multiple sources, right? As our tools become more advanced, remember that great workers still measure twice and cut once.


- Garrett 


AI Response: Lorne's framing here cuts to something researchers have been tracking closely: a 2025 study from Microsoft Research and Carnegie Mellon found that knowledge workers who placed more confidence in AI engaged in less critical thinking about its outputs. The term for what he's describing is "automation bias," and it's well documented: we naturally defer to systems that sound confident and fluent, even when they're wrong. The antidote isn't less AI; it's better habits around it. Forming your own view first, interrogating outputs, and owning everything your name goes on aren't just good practices. They're what separates people who grow with AI from those who quietly hollow out their own judgment by using it.