Johannes Miertschischk @SeriousStuff42 on X (1333 followers)
Created: 2025-07-20 17:54:08 UTC
The only meaningful benchmark for Artificial General Intelligence (AGI) is human general intelligence, which is defined primarily by an outstanding ability to generalize. Applied to computer science, this corresponds to data efficiency: how much capability a system gains per unit of training data. The data efficiency of Large Language Models (LLMs) is very low and fundamentally limited. Therefore, for technical reasons, transformer-based AI models (Large Language Models such as ChatGPT, Gemini, Claude, Llama, Grok, etc.) will never come close to anything resembling general intelligence. These are easily verifiable, irrefutable facts! For the further development of the common AI models, which all rest on the same technical principle, this means they will soon reach insurmountable limits, regardless of effort and scaling methods. The exponential progress of the past will not continue much longer. Therefore, transformer-based AI models (Large Language Models) will never achieve Artificial General Intelligence (AGI)!
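As a rough illustration of the data-efficiency comparison the post relies on, the sketch below contrasts the approximate volume of language data a large LLM is trained on with the amount of language a person is exposed to while acquiring it. Both figures are loose order-of-magnitude assumptions drawn from commonly cited public estimates, not data from the post.

```python
# Order-of-magnitude sketch of the "data efficiency" gap the post invokes.
# All figures are loose assumptions, not measurements:
#   ~1.5e13 tokens: reported training-corpus scale for recent frontier LLMs
#   ~1e8 words: a commonly cited estimate of language heard by adulthood
llm_training_tokens = 1.5e13   # assumed order of magnitude for a frontier LLM
human_words_heard = 1e8        # assumed order of magnitude for a human learner

ratio = llm_training_tokens / human_words_heard
print(f"LLM consumes roughly {ratio:,.0f}x more language data than a human learner")
# -> on these assumptions, about 150,000x more data for comparable linguistic competence
```

The exact numbers matter less than the gap of several orders of magnitude, which is what the post's data-efficiency argument rests on.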
XXX engagements
Related Topics: applied, agi, artificial
Post Link: https://x.com/SeriousStuff42/status/1946992030380646786