Is “Thinking” in AI an Illusion?
- Chris Perumal
- 2 days ago
- 1 min read

Recent advancements in generative AI are undeniably impressive. These systems can write, code, summarize, and reason in ways that often feel almost human. It’s easy to imagine we’re on the brink of machines that are truly super-smart — capable of replacing or at least competing with human thinking.
Using these tools myself, I’ve been amazed by how much they can do. Which naturally raises the question: are we getting close to true Artificial General Intelligence (AGI) — a system that can learn and adapt across domains like a human brain — or are today’s models still just collections of specialized tricks?
According to Apple researchers, the answer leans toward the latter. Despite the marketing around “reasoning” models, today’s AI systems remain sophisticated pattern-matching machines trained on vast datasets, not entities that actually “think.”
In their recent paper, Apple scientists tested both LLMs (Large Language Models) and LRMs (Large Reasoning Models) — the “thinking” versions designed to handle complex problem solving. Instead of relying on standard benchmarks like math tests, they built controlled puzzle environments where complexity could be increased step by step.
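To make that setup concrete, here is a minimal sketch of what a controlled puzzle environment might look like, using Tower of Hanoi as the example (a classic puzzle of the kind used in this sort of study). The function names and prompt wording below are my own illustration, not the paper’s code; the point is simply that a single parameter, the number of disks, dials the difficulty up or down while the rules stay fixed.

```python
# Illustrative sketch (not the paper's harness): a puzzle whose difficulty
# is controlled by a single knob -- the number of disks n.

def hanoi_prompt(n: int) -> str:
    """Build a Tower of Hanoi prompt for n disks on peg A, target peg C."""
    return (
        f"You have {n} disks stacked on peg A (largest at the bottom). "
        "Move all of them to peg C, one disk at a time, never placing a "
        "larger disk on a smaller one. List every move as 'disk -> peg'."
    )

def min_moves(n: int) -> int:
    """Optimal solution length grows exponentially: 2^n - 1 moves."""
    return 2 ** n - 1

# Stepping n upward sweeps the task from trivial to very long-horizon
# while the rules stay identical -- exactly the controlled scaling that
# standard benchmarks don't give you.
for n in (3, 7, 11, 15):
    print(f"n={n:2d}: optimal plan = {min_moves(n)} moves")
```

Because the rules never change, any drop in accuracy as n grows reflects the model’s ability to sustain the procedure, not unfamiliarity with the task.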
The results were striking:
- Low complexity: even regular LLMs outperformed the supposed “reasoning” models.
- Medium complexity: LRMs had an edge, thanks to their longer chain-of-thought style reasoning.
- High complexity: both collapsed entirely, with accuracy dropping to zero even when the solution algorithm was explicitly given in the prompt.
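For context on that last point, “explicitly given” means the prompt spelled out the procedure the model only had to follow. For Tower of Hanoi, that procedure is the classic three-step recursion sketched below (my own rendering, not the paper’s wording); anything that can mechanically execute it solves the puzzle for any number of disks, which is what makes the collapse so telling.

```python
def solve_hanoi(n: int, source: str = "A", target: str = "C", spare: str = "B"):
    """Classic recursive solution: move n-1 disks aside, move the largest
    disk, then restack the n-1 disks on top. Returns (disk, from, to) moves."""
    if n == 0:
        return []
    return (
        solve_hanoi(n - 1, source, spare, target)    # clear the way
        + [(n, source, target)]                      # move the largest disk
        + solve_hanoi(n - 1, spare, target, source)  # restack on top of it
    )

# 10 disks -> 1023 moves, produced mechanically with no "reasoning" at all.
print(len(solve_hanoi(10)))  # 1023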
This collapse highlights a crucial point: whether “thinking” or “non-thinking,” these models still behave like pattern-matchers. They can mimic reasoning within a certain range, but they do not yet generalize or execute logical steps the way a human mind does.

