The question of whether human intelligence is actually a form of computational intelligence is top of mind for leading-edge researchers.


In today’s column, I examine the crucial debate about whether human intelligence is actually a form of computational intelligence.

The premise is this. Some fervently assert that we have already figured out how to get AI to be on par with human intelligence, as evidenced by modern-era LLMs, generative AI, and computational transformers. Furthermore, and here’s the kicker, human intelligence is claimed to be the same as computational intelligence. The brain and mind are computational mechanisms, albeit occurring in a biochemical manner versus conventional digital bits and bytes.

This heady topic was the focus of the kickoff event of Harvard’s Berkman Klein Center (BKC) Fall Speaker Series on September 24, 2025. I was recently honored to be an invited participant at a special AI Workshop at Harvard University exploring the expected advent of AGI, an outstanding get-together held on September 12-14, where I had the opportunity to learn about BKC and connect with BKC researchers, affiliates, and faculty.

The esteemed speaker who opened the invigorating Fall Series, Blaise Agüera y Arcas, serves as CTO of Technology & Society and a VP and Fellow at Google. During his engaging talk, he vigorously espoused the mind-is-computation premise. His newly published book, entitled “What Is Intelligence? Lessons from AI About Evolution, Computing, and Minds” (MIT Press), tackles this provocative stance; in the jam-packed 624-page volume, he hammers home the basis for making such claims. The memorable opening session, well worth watching, was astutely moderated by BKC Executive Director Alex Pascal. For details about BKC, see the link here, and for the recorded video of the event, see the link here.

What is the articulated case for human intelligence as computational intelligence?

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Metaphors About The Mind

A myriad of metaphors are used to describe how the human mind works. I’m sure that you’ve heard a lot of them. Consider some popular examples. The mind is a library that stores knowledge and allows you to retrieve content as needed. The mind is an orchestra, such that different parts of the brain perform like various specialized instruments, and they need to function together harmoniously to do well. Etc.

In contemporary times, the mind is metaphorically portrayed as a computer. When we want to gain a fresh perspective, we reboot our brains. The act of sharing our thoughts is akin to providing output to others around us. If your mind gets filled with too many ideas, your internal memory is overflowing. You might glitch or need to debug your thoughts.

Get ready to have your mind blown, since there is an alternative viewpoint about brain-related metaphors that might be somewhat surprising to you.

As stated by Blaise Agüera y Arcas during his BKC talk: “It is not a metaphor to say that the brain is a computer. They are not like computers; they are computers.” Yes, the claim being made is that there isn’t a distinction between the brain and the nature of computers. The brain is, in fact, a computational entity. It doesn’t use mechanical gears or machine parts; nonetheless, the brain functions on a computational basis.

Be aware that not everyone concurs with this brazen supposition.

In any case, the topic has immense value as a means of furthering our understanding of the human mind and, likewise, spurs our efforts to push AI computational intelligence toward artificial general intelligence and ultimately arrive at artificial super intelligence or ASI.

The Predictive Brain Hypothesis

Let’s briefly dive into the inner workings of LLMs and see what takes place overall.

Here’s what happens at a 30,000-foot level when you opt to use ChatGPT, Claude, Gemini, Grok, or any of the major large language models. You enter text as a prompt into the AI. The text is encoded into tokens, which are numeric representations of words and parts of words. The tokens flow through a large-scale structure that is referred to as an artificial neural network (ANN). The ANN has been data-trained on possibly trillions of words scanned from the Internet, and pattern-matched mathematically and computationally on how humans write.

The AI seeks to predict what words ought to be conveyed back to you, based on the prompt that you entered. This is done computationally with the tokens; it’s all about numbers. Step by step, by leveraging the ANN, the response is assembled as each next token is selected. The tokens are then decoded back into text as words. Voila, the LLM produces a response that might be an essay, a narrative, a poem, or some other form of writing that essentially mimics the way humans compose words.

For an in-depth explanation of this process, see my discussion at the link here.

All in all, AI researchers refer to this overarching approach as a word predictor. The AI is trying to computationally ascertain what the appropriate next word ought to be, an inch at a time, and those words hopefully gel into a coherent passage of natural language.
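To make the word-predictor notion more tangible, here is a deliberately tiny Python sketch. To be clear, this is not how any real LLM is implemented; the vocabulary, the scoring function, and the greedy token selection below are invented stand-ins, meant solely to illustrate the loop of encoding text into tokens, predicting the next token, and decoding the tokens back into words.

```python
# Toy illustration of the "word predictor" loop described above.
# NOT how any production LLM works; a minimal sketch of the cycle:
# encode text to tokens, repeatedly score candidate next tokens,
# pick one, and decode the tokens back into text.

# A tiny made-up vocabulary: each word maps to a numeric token id.
VOCAB = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, ".": 5}
INVERSE = {i: w for w, i in VOCAB.items()}

def score_next_token(tokens):
    # Stand-in for a trained neural network: given the tokens so far,
    # return a score for every possible next token. Real models compute
    # these scores with billions of learned parameters.
    last = tokens[-1]
    return [1.0 if i == (last + 1) % len(VOCAB) else 0.1 for i in range(len(VOCAB))]

def generate(prompt, max_new_tokens=4):
    tokens = [VOCAB[w] for w in prompt.split()]           # encode words into tokens
    for _ in range(max_new_tokens):
        scores = score_next_token(tokens)                  # predict the next token
        tokens.append(max(range(len(scores)), key=scores.__getitem__))  # greedy pick
    return " ".join(INVERSE[t] for t in tokens)            # decode tokens back to words

print(generate("the cat"))  # -> "the cat sat on mat ."
```

In a real system, the hand-made scoring function is replaced by an artificial neural network whose parameters were tuned during training, but the overall predict-one-token-at-a-time loop is the same basic shape.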

Those who believe the brain is a computational entity would proclaim that the mind works similarly. This is often referred to as the predictive brain hypothesis. The assumption is that when someone talks to you, your brain and mind take in the data, convert the sounds into internal biochemical tokens (as it were), which flow through your biological neural network in your noggin. Next, your internal mind-based neural network seeks to predict the suitable outputs, doing so first via biochemical tokens and then converting those into intelligible spoken words.

The crux is that the brain and mind are posited to function just as computational intelligence does. Toss out the metaphor. The brain isn’t simply analogous to a computer; it is a form of computational intelligence.

The Beauty Of The Theory

There is an inherent beauty associated with the belief that the brain smacks of being computational.

You can assert that human intelligence and cognition are explainable as information processing. The brain and our mind are essentially computational algorithms running on biological hardware. This takes us to an alluring conceptual kinship: the mind falls in line with the famous Church-Turing thesis, which holds that any effectively calculable function can be computed by a general-purpose mechanism such as a Turing machine (see my detailed discussion at the link here).

Computationalism provides a clean and straightforward framework for grasping what the brain and mind are doing. I’ve previously noted that this same mind-is-computation premise can be used to illustrate that Theory of Mind (ToM) is not solely the province of human minds. AI computational intelligence can be seen as a simulated version of ToM, as I unpack at the link here.

Another big plus is that this seems to reduce the friction between ANNs and the biochemical neural networks that make up our wetware. At this time, ANNs are a far cry from what true neural networks do. An ANN is at best a crude, highly simplified computational model of the real thing. Despite that massive gap, we presumably are on the right path, trying to emulate the brain’s neurons as computer-based information-processing units. The parlance of ANNs borrows from neuroscience: the mathematical and computational machinery is described in terms of neural firings, synaptic weights, and weighted input-output transformations akin to those observed in mappings of brain circuitry.
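To appreciate just how simplified the ANN picture is, consider the following minimal Python sketch of a single artificial “neuron” (a generic textbook formulation, not anything specific to a particular AI system). It computes a weighted sum of its inputs and pushes the result through a squashing function; the weights play the role of synaptic strengths, and the output stands in for a firing rate.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias: the "synaptic weighting" step.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid squashing function: a crude stand-in for a neuron's firing strength.
    return 1.0 / (1.0 + math.exp(-activation))

# Example: three input signals and three learned weights.
print(neuron([0.5, 0.2, 0.9], [0.8, -0.4, 0.3], bias=0.1))
```

That is the entire computational unit of an ANN; real biological neurons involve vastly richer biochemical dynamics.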

Is that merely a convenient way to phrase things, or have we hit the nail on the head?

It would be grandly reassuring to believe that our present-day approach to AI is fully aligned with human intelligence. The premises are a huge relief. Computational intelligence is doing the same things that human intelligence does. Human intelligence is doing the same things as computational intelligence.

We have struck gold.

The Scaling Gets Us There

If we have perchance already landed on the right kind of architecture and design for AI, namely that it is akin to how human intelligence arises, the question then comes to the fore as to why current AI isn’t performing fully at the level of human intelligence. We do not yet have AGI. Shouldn’t we have AGI among us right now?

Aha, some say, the reason is due to scale.

The idea is that we need to scale up existing AI. We’ve got to build massive data centers stocked with a tremendous volume of high-end servers and computing power. The logic underlying this approach goes like this. So far, it seems that by allocating greater amounts of computer processing, faster GPUs, and the like, LLMs and generative AI keep getting better and better. Ergo, let’s keep doing the same.

AI insiders are intimately familiar with the now-classic argument that by adding more computing, the AI field has historically seemed to make progress.

In a famous short essay, “The Bitter Lesson,” posted on March 13, 2019, the renowned AI pioneer Richard Sutton gave abundant credence to the belief that more computation would be the wisest path to advancing AI:

  • “The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore’s law, or rather its generalization of continued exponentially falling cost per unit of computation.”
  • “Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance), but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available.”
  • “Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation.”
  • “We have to learn the bitter lesson that building in how we think we think does not work in the long run.”

Worries About Hitting The Wall

Wait a second, some AI insiders exhort, it could be that scale isn’t going to be the vital differentiator that you think it will be. Maybe we are on the wrong course. It could be that the architecture and design of LLMs and generative AI are a dead end when it comes to achieving AGI.

Perhaps we are foolishly being led down a primrose path right now. Billions of dollars of computing are being aimed at scaling a design and architecture that will slam into an unyielding wall. Sadly, by the time we realize this is occurring, we will have put all our eggs in one unsuccessful basket.

Instead of mindlessly following the Pied Piper of contemporary LLMs, we must diversify and actively explore worthwhile alternatives. Unfortunately, airtime and dollars are flowing almost exclusively to the existing AI approaches. Little incentive and only marginal funding remain for thinking outside the box. I highlight various voiced alternatives that take AI architecture and design in a different direction at the link here.

Interestingly, Richard Sutton seems to be singing a similar tune, indicating that we are walking down an avenue that isn’t going to fruitfully get us to AGI. In a recent podcast aired on September 26, 2025, he stated that we need new architecture for AI that goes far beyond LLMs and computational transformers. His comments included that LLMs fail to encompass ground truths and that the act of predicting the next token is not a proper goal for attaining AGI.

His expectation, shared by a growing contingent of others in the AI field, is that a new paradigm is needed. LLMs, as we know them today, will inevitably become obsolete. That is a gut-wrenching punch for those who have gone whole hog and tied their research, fame, and fortune to generative AI and computational transformers.

Interpretability And Explainability

Assume for the sake of discussion that the brain is based on computational intelligence. If that is the case, there is an exciting prospect at hand. The possibilities for gleaning how the brain and mind get things done are presumably within our grasp.

Allow me to elaborate.

We are faced with two mighty unknowns that have yet to yield to our Sherlock Holmes endeavors to demystify them.

First, in terms of the brain, it is still a tremendous mystery how the approximately 86 billion neurons and 100 trillion synapses give rise to human thinking. Intense neuroscience research continues to measure brain activity and seeks to showcase the connection to our ability to form thoughts and be conscious beings. I am especially excited about some recent AI-powered foundation models for mapping brain circuitry that might give us a leg up on this unexplained enigma (I’ll be covering this in an upcoming posting).

Second, in terms of LLMs and generative AI, there is a tremendous mystery concerning how these large ANNs give rise to seemingly human thinking, or at least the appearance of it. Sure, you can laboriously trace the flow of tokens and numbers from here to there within an artificial neural network, but we are still in the dark ages about where and how the elements within an ANN logically give rise to such impressive results.

Avid readers of my column are well aware that I am an impassioned advocate for being able to crack the code and demystify the inner workings of AI models. We must decipher AI and figure out how to interpret what is happening internally, and make AI transparent and explainable. Our future and the future of AI depend on this.

State-of-the-Art on AI Interpretability

The interpretability and explainability of AI still constitute an emerging field of inquiry.

I have deeply analyzed ways to move the needle on AI interpretability, for example:

  • AI interpretability using the IRT method and a Thurstonian utility model, see my coverage at the link here.
  • Building explainability into AI at the get-go, known as XAI (explainable AI), see my depiction at the link here.
  • Doing conceptual mapping of features via computational intermediaries of ANNs and leveraging monosemanticity, see my exploration at the link here.
  • Leaning into the identification of linear directions in activation space as represented by persona vectors (a brief sketch follows this list), see my assessment at the link here.
  • And other postings.
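As promised in the list above, here is a brief, hypothetical sketch of the linear-direction idea. It uses random numbers as stand-ins for model activations; genuine persona-vector research extracts activations from an actual model’s hidden layers and validates the resulting direction far more rigorously.

```python
import numpy as np

# Hypothetical, simplified illustration of finding a "linear direction" in
# activation space: gather activations recorded while the AI exhibits a trait
# and while it does not, then use the difference of the mean activation
# vectors as the trait's direction. Random numbers stand in for real
# hidden-layer activations here.
rng = np.random.default_rng(0)
dim = 16                                                   # size of a toy hidden layer
acts_with_trait = rng.normal(0.5, 1.0, size=(100, dim))    # stand-in activations (trait present)
acts_without_trait = rng.normal(0.0, 1.0, size=(100, dim)) # stand-in activations (trait absent)

direction = acts_with_trait.mean(axis=0) - acts_without_trait.mean(axis=0)
direction /= np.linalg.norm(direction)                     # unit-length "persona" direction

# Score how strongly a new activation vector expresses the trait.
new_activation = rng.normal(0.3, 1.0, size=dim)
print("trait score:", float(new_activation @ direction))
```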

The outside-the-box thinking here is that whatever we learn about deciphering AI might be applied to deciphering the human mind. Plus, this goes both ways, namely that whatever we learn about deciphering the human mind might be applied to uncovering the inner mechanisms of contemporary AI.

That would be especially the case if you adopt the premise that the human brain is ingrained in computational intelligence.

Another quick point worth noting is that many do not realize that the field of psychology and the field of AI have a historically intertwined, collaborative bond. Psychological theories and mind-probing methods can synergistically aid progress in AI. Similarly, AI theories and practices can synergistically boost progress in psychology and our understanding of the nature of the human mind.

If that duality topic interests you, consider these readings:

  • Ways that AI and psychology continue to boost each other, such as the merits of performing psychoanalysis of AI, see my discussion at the link here.
  • Latest progress in psychology toward pursuing a unified theory of cognition and how this reveals insights about interpreting and explaining AI, see my analysis at the link here.

The Future Is Ours To Decide

For those of you who fervently believe that human intelligence is indeed computational intelligence, I wish you the best in your efforts to prove this contention. Keep going. Let us know what you have to say.

Meanwhile, for those of you who adamantly believe that human intelligence is not computational intelligence, I equally urge you to proceed and make your case known. Identify where the mind-is-computation side has gone awry. Maybe the metaphor is only that, namely that we can sensibly think of the mind as computation, but the mind is decidedly not actually computational intelligence. It’s just a metaphor. Stick with your position and share your insights.

A final thought for now comes from John F. Kennedy: “Change is the law of life. And those who look only to the past or present are certain to miss the future.” The same can be rightfully said about the future of humankind, particularly when it comes to revealing and understanding the nature of human intelligence and the nature of computational intelligence.

Look to the future, learn from the past, but don’t get mired in the past. An open mind on these pressing matters is worth its weight in gold.

