LLM vs AGI | Limiting Reality of Language Models in AGI

The following remarks by Sam Altman, former CEO of OpenAI, highlight a fundamental limitation in the current approach to developing Artificial General Intelligence (AGI) through the advancement of large language models (LLMs) like ChatGPT.

“We need another breakthrough. We can still push on large language models quite a lot, and we will do that. We can take the hill that we’re on and keep climbing it, and the peak of that is still pretty far away. But, within reason, I don’t think that doing that will (get us to) AGI. If superintelligence can’t discover novel physics, I don’t think it’s a superintelligence. And teaching it to clone the behavior of humans and human text – I don’t think that’s going to get there. And so there’s this question which has been debated in the field for a long time: what do we have to do in addition to a language model to make a system that can go discover new physics?”

Sam Altman – former CEO of OpenAI (Creators of ChatGPT)

There was a lot of speculation recently that he was ousted from OpenAI because the company had made a breakthrough in AGI, which simply isn’t true.

In this post I want to discuss why simply scaling up LLMs isn’t enough to create AGI.

  1. Prediction vs. Understanding
    LLMs, including ChatGPT, are designed to predict and generate text based on patterns learned from vast datasets. While they can mimic human-like responses, their capabilities are fundamentally different from true understanding or reasoning. They don’t possess an internal model of the world or genuine comprehension. LLMs fundamentally work more like the predictive text that autocompletes your words on Google (see the sketch after this list).
  2. Lack of Novel Discovery
    As Altman points out, a key characteristic of AGI is the ability to discover novel concepts or create new knowledge, such as breakthroughs in physics. Current LLMs are limited to reiterating, remixing, or extrapolating from their training data. They lack the capability to innovate or discover something truly new outside of their training scope.
  3. Emulation vs. True Intelligence
    LLMs are proficient at generating human-like text responses, but this is not equivalent to possessing intelligence. AGI would entail a broader spectrum of cognitive abilities, including self-awareness, intuition, and the capacity to understand abstract concepts in a way that goes beyond mimicking human text. It would also need persistent memory and parallel thought processes to explore ideas and concepts.
  4. Additional Breakthroughs
    Achieving AGI likely requires fundamental breakthroughs beyond refining language models. It might involve integrating other forms of AI, such as spatial, causal, and logical reasoning, or developing entirely new approaches to machine intelligence outside the scope of current LLMs.
  5. Safety Considerations
    There’s also the aspect of ensuring that AGI, if achieved, aligns with human values and ethics. This is a complex challenge that goes beyond technical advancements and involves defining long-term goals and ethical constraints, then ensuring models don’t deviate from them.
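
To make the “predictive text” comparison in point 1 concrete, here is a minimal, purely illustrative sketch of next-word prediction built from bigram counts. Real LLMs replace this frequency table with a transformer network over billions of parameters and a much longer context, but the generation loop, repeatedly emitting the most likely continuation, is the same basic idea.

```python
# Toy "autocomplete" predictor: counts which word follows which in a tiny
# corpus, then generates text by greedily picking the most common successor.
# Illustrative only; real LLMs use neural networks, not frequency tables.
from collections import Counter, defaultdict

training_text = (
    "the model predicts the next word the model generates text "
    "the model predicts the next token"
)

# Count word -> next-word frequencies (a bigram table).
bigrams = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    bigrams[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    candidates = bigrams.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

# Generate a short continuation, one greedy prediction at a time.
word = "the"
sequence = [word]
for _ in range(5):
    word = predict_next(word)
    sequence.append(word)

print(" ".join(sequence))  # -> "the model predicts the model predicts"
```

Notice that the output can only ever recombine patterns present in the training text, which is the point of items 1 and 2 above: scaling the table (or the network) up improves the mimicry, not the model’s ability to step outside its data.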

While LLMs like ChatGPT represent significant advancements in AI, their nature as text prediction models limits their potential to evolve into AGI.

Achieving true AGI would require multiple breakthroughs that enable models to genuinely understand, reason, and innovate, going beyond the capabilities of current language models.

The LLMs we see today might act like interfaces on top of the processor, translating complex patterns into human-readable input and output, but in their current state they can’t be considered anything like AGI.


Get The Blockchain Sector Newsletter, binge the YouTube channel and connect with me on Twitter

The Blockchain Sector newsletter goes out a few times a month when there is breaking news or interesting developments to discuss. All the content I produce is free; if you’d like to help, please share this content on social media.

Thank you.

James Bachini

Disclaimer: Not a financial advisor, not financial advice. The content I create is to document my journey and for educational and entertainment purposes only. It is not under any circumstances investment advice. I am not an investment or trading professional and am learning myself while still making plenty of mistakes along the way. Any code published is experimental and not production ready to be used for financial transactions. Do your own research and do not play with funds you do not want to lose.

