Why critical thinking is essential in an LLM world


“I’ve presented my argument to my LLM. Have you?” “Of course. Let’s let them hash it out and circle back.”

Welcome to the future (sort of)

Picture this: It’s 2027. You’ve got a disagreement with a colleague—maybe about professional development models, data privacy policies or whether pineapple belongs on pizza (a classic).

But instead of diving into the usual back-and-forth, you each go to your corners and input your arguments into your large language models.

Moments later, your LLM and theirs are in a digital arena, going full Rock 'Em Sock 'Em Robots, swinging citations, footnotes and logical fallacies at each other like they're in a debate league from the future.

You wait patiently while the bots do battle. Then, your LLM gently pings:

“Good news. You won the argument. Here’s the summary and a GIF to celebrate.”

You nod, satisfied. Your colleague’s LLM disagrees, of course. So now their LLM is arguing with your LLM about how they each interpreted the debate rules. A third LLM moderator is brought in.

And just like that, you’re three levels deep in a logic loop with AIs who are very confident and very wrong… or maybe very right? You’re not even sure anymore.

LLMs are brilliant. But they’re not you

This scenario is exaggerated. (Barely.) But the reality is here: we increasingly use LLMs to draft emails, shape arguments, write policies, analyze documents and generate responses in professional and personal settings.

That’s not a bad thing. In fact, I rely on one occasionally.

But here’s the catch: when everyone in the conversation is using an LLM as their thinking assistant, we run the risk of outsourcing the thinking altogether.

We trade in nuance for nicely worded output. We skip reflection because the draft “sounds good.” We assume the logic holds because the paragraphs are coherent.

And suddenly, we’ve got a society full of people reacting to polished prose instead of thinking through messy ideas.

What happens when the bots agree?

Let’s say both LLMs in our debate agree. They say:

“Consensus reached. You’re both right—depending on your values and priorities.”

Well, that’s… great? Or is it? Should we take the LLMs’ word for it? Should we still debate each other? Do we even want to?

At what point do we stop engaging and just start delegating?

This is where critical thinking becomes essential—not as a retro skill from a pre-AI era, but as a core competency for navigating a world where content is cheap, conversation is synthetic and conviction can be manufactured on demand.

Why it matters in education (and everywhere)

In classrooms and conference rooms, we’re seeing this already. Students use LLMs to generate essays. Educators use them to plan lessons. Tech leaders use them to write reports, create policies and analyze risks.

None of this is inherently bad. In fact, it’s often wonderful.

But we cannot confuse articulation with understanding, or consensus with truth. We need to:

  • Pause before we accept the first output.
  • Challenge assumptions, even if they’re phrased beautifully.
  • Engage in conversations ourselves, not just through our digital proxies.

Because at the end of the day, AI can assist with the how—but only we can define the why.

What’s worth fighting for

LLMs are brilliant sparring partners, research assistants, and yes, even debate proxies. But they’re not a substitute for human judgment.

So yes, use your LLM. Let it help you write the email. Let it help you structure your thoughts. But don’t forget to bring your brain to the conversation.

The future of thought isn’t AI vs. AI—it’s humans who know when to use it, why they’re using it and what it means in the bigger picture.

The bots can throw punches. But only we can decide what’s worth fighting for.


Stacy Hawthorne, EdD
Stacy Hawthorne, EdD is board chair of CoSN and executive director of the EdTech Leaders Alliance.
