@ebarenholtz Elan Barenholtz

Elan Barenholtz posts on X most often about language, the brain, and generative AI. They currently have [-----] followers, and [---] of their posts are still getting attention, totaling [-----] engagements in the last [--] hours.

Engagements: [-----]

Mentions: [--]

Followers: [-----]

CreatorRank: [-------]

Social Influence

Social category influence: technology brands 6.25%, social networks 3.57%, travel destinations 2.68%, finance 0.89%, cryptocurrencies 0.89%

Social topic influence: language #505, in the 16.96%, brain 15.18%, generative #348, this is 10.71%, ai 6.25%, core 6.25%, human 6.25%, systems 5.36%, theory 5.36%

Top accounts mentioned or mentioned by @ekkolapto @willhahn @toewithcurt @addyman_michael @algekalipso @drmichaellevin @elanbarenholtz @duganhammock @addymanmichael @drsueschneider @qualiari @annaciaunica @wolframinst @plinz @jesparent @ylecun @taotuner @gorilladolphin @machina_ratio @generativebrainp182000383

Top assets mentioned: Elanco Animal Health Incorporated Common Stock (ELAN), Cogito Finance (CGV)

Top Social Posts

Top posts by engagements in the last [--] hours

"We're software. The I is a pattern running on biological hardware. AI shows this kind of pattern can run on other hardware too, so it's not the stuff, it's the process. The mind/body problem is the software noticing it isn't the hardware. Stuff the stuff. We are information"
X Link 2026-01-30T14:13Z [----] followers, [----] engagements

"If pure linguistic processing without sensory content (i.e. LLMs) produces conscious experience, then where is it in us? All of our conscious experience of language is sensory: inner voice, related visual imagery. Where's the pure linguistic qualia?"
X Link 2026-02-04T14:51Z [----] followers, [----] engagements

"Thanks for the link. Deal: I'll watch the 8-hour workshop if you read the 8-minute Substack piece 😉 I'm somewhat familiar with JEPA. In my terms it's a minimal world model: it learns latent state dynamics rather than a classic scene graph of objects-in-3D with explicit physical rules. What I'm pushing in the post is a different possibility: a lot of what we treat as typical world-model outputs (3D-ness, permanence, stability, navigability) can come from just-in-time, action-conditioned generation that exploits the stream's fingerprints without reconstructing hidden causes. Do you see JEPA as aiming"
X Link 2026-02-08T01:00Z [----] followers, [---] engagements

"Critiques like this miss the real lesson of generative AI: what's new isn't better imitation but the discovery that the structure of sequential data itself is sufficient for open-ended, generalizable generation. Dismissing that as mimicry is repeating the old mistakes. https://substack.com/@generativebrain/p-182000383 AMI Labs founder Yann LeCun on why LLMs are fooling us the same way AI has for decades: He argues that every generation of AI scientists has made the same mistake: confusing task performance with real intelligence. LeCun's core challenge to the current hype: "We're fooled into
X Link 2026-02-09T18:35Z [----] followers, [----] engagements

"Language was built for coordination. Reasoning is off label"
X Link 2026-02-11T13:55Z [----] followers, [---] engagements

"So the purpose of language is to program another human to behave in a certain way. In particular, what language actually does is create structured expectations about action/perception contingencies (grab that bottle on the table for me creates expectations about what you'll see when you do certain things). In order to do so, language had to inherit the lawful structure of the world. And that makes it useful for running a kind of offline simulation of these kinds of sequences. That's what we call reasoning. But it's not what the system is actually for"
X Link 2026-02-11T14:10Z [----] followers, [---] engagements

"@Machina_Ratio I need to reread myself"
X Link 2026-02-11T14:44Z [----] followers, [--] engagements

"@addyman_michael You don't think it's primarily a communication/coordination tool between humans?"
X Link 2026-02-11T14:46Z [----] followers, [--] engagements

"@addyman_michael I think there is a deep human desire to escape the cave"
X Link 2026-02-13T14:32Z [----] followers, [--] engagements

"The influential predictive-coding model sees the brain as a machine for minimizing error: constantly forecasting sensory inputs and adjusting internal models when reality deviates. But here's an alternative: the brain is not predictive but generative. Like a large language model, it unfolds autoregressively, producing its next state based on the previous ones, guided by learned patterns and goals. Perception: not error correction but conditioned, purposeful generation. Action: not fulfilling predictions but producing goal-directed trajectories. Learning: not improving forecasts but refining the"
X Link 2025-07-18T17:06Z [----] followers, [----] engagements
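
The contrast drawn in the post above can be made concrete with a toy sketch. This is my illustration, not the author's model; the update rules, dimensions, and pooling choice are invented caricatures of the two views.

```python
# Caricature of the two views of the brain contrasted above.
# Predictive coding: the state is corrected toward what was observed.
# Autoregressive generation: the next state is produced from the history itself.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 8))   # toy learned weights shared by both sketches

def predictive_coding_step(state, observation, lr=0.1):
    """Forecast the input, then nudge the state to reduce the prediction error."""
    prediction = W @ state
    error = observation - prediction
    return state + lr * (W.T @ error)

def autoregressive_step(history):
    """Generate the next state from prior states (mean pooling stands in for attention)."""
    context = np.mean(history, axis=0)
    return np.tanh(W @ context)

state = np.zeros(8)
observation = rng.normal(size=8)
corrected = predictive_coding_step(state, observation)   # error-driven update
generated = autoregressive_step([state, corrected])      # continuation from its own past
```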

"My recent @TOEwithCurt interview with @will_hahn is generating a lot of discussion (and some heat). Here are some of the core claims. Agree, disagree, challenge: let's go 👇 https://youtu.be/Ca_RbPXraDE?si=mEKjnj3iPQoeUU50"
X Link 2025-07-24T19:11Z [----] followers, [----] engagements

"People still don't seem to grasp how insane the structure of language revealed by LLMs really is. All structured sequences fall into one of three categories: 1. Those generated by external rules (like chess, Go, or Fibonacci). 2. Those generated by external processes (like DNA replication, weather systems, or the stock market). 3. Those that are self-contained, whose only rule is to continue according to their own structure. Language is the only known example of the third kind that does anything. In fact it does everything. Train a model only to predict the next word and you get the full expressive"
X Link 2025-11-11T12:39Z [----] followers, 81.3K engagements
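
A minimal sketch of the "train a model only to predict the next word" idea in the post above, using a character-level bigram model in place of a transformer; the corpus and names here are invented for illustration.

```python
# Train only to predict the next symbol, then sample autoregressively.
# Toy example; real LLMs use transformers over subword tokens.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat. the dog sat on the rug. "

# "Training": count which character follows which.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(ch):
    """Draw the next character in proportion to how often it followed ch."""
    chars, weights = zip(*counts[ch].items())
    return random.choices(chars, weights=weights)[0]

def generate(seed="t", length=60):
    """Autoregressive generation: each step conditions only on the model's own output."""
    out = seed
    for _ in range(length):
        out += sample_next(out[-1])
    return out

print(generate())
```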

"There's a raging debate about whether AI needs "world models" to achieve real intelligence. Critics say LLMs just predict tokens without representing the reality those tokens are about. Without an internal model of the world, they'll never truly understand anything. But what if humans don't have world models either? Generative video models produce physically coherent scenes: objects falling, colliding, casting shadows, without running a hidden physics engine. They learn the structure of the data, not the structure that generates the data. The world's constraints are already stamped into the streams"
X Link 2026-02-06T21:02Z [----] followers, 69K engagements

"LeCun and many others are missing the real lesson of generative AI, not just LLMs but video and audio generators. It's not about 'imitating', it's about leveraging: it turns out that the structure of high-dimensional data streams contains the recipe for its own continuation. What we have learned from these systems is that this autogenerative structure is learnable to the point of generalizable, functional generation, and that points to a strong possibility that human (and animal) intelligence is based on just this kind of generation. The AI doesn't need world models and neither do we."
X Link 2026-02-09T18:11Z [----] followers, [----] engagements

"New survey just dropped cataloging LLM reasoning failures: counting, arithmetic, compositional reasoning, spatial reasoning, the list goes on. Cue the 'generative modeling can't reason' takes. Cue the calls for symbolic systems, neurosymbolic hybrids, world models, etc. But hold on. Let's be precise about what's actually being documented here. The vast majority of failures in this survey involve language-only models. And even when the survey examines vision-language models (VLMs), those systems take visual input but still reason through linguistic token generation. Visual information goes in but the"
X Link 2026-02-10T14:32Z [----] followers, [----] engagements

"Did Plato scoop generative models? Video generators learn the shadows of world structure. Perhaps it's all we learn too"
X Link 2026-02-13T14:13Z [----] followers, [---] engagements

"AI dependency on human language may be its Achilles' heel. A system built for inter-agent behavioral coordination in the physical world has been shoehorned into the role of offline abstract reasoning, with presumably many resulting inefficiencies inherited from its original purpose. These include human cognitive limitations like memory decay, which shows up as a massive drop in contextual influence past a few tokens out (paper from me forthcoming on this) and, perhaps more importantly, conceptual limitations for a symbolic system designed for a very different purpose than general reasoning. Is the"
X Link 2026-02-15T12:41Z [----] followers, [----] engagements

"The mind-blowing implication of intelligence leaping to machines isn't about the machines. It's that once a pattern can run on both silicon and flesh, the pattern, not the substrate, is what matters"
X Link 2025-04-11T10:52Z [----] followers, [---] engagements

"Somehow I'm the only one who seems to have noticed: LLMs show language is a self-generating system, not a transparent mirror of our inner world, and that truth rewrites pretty much everything. But hey, carry on, fellow LLMs"
X Link 2025-04-27T15:37Z [----] followers, [---] engagements

"That moment you realize you accidentally had 'deep research' selected and GPT is doing a fully sourced comprehensive literature review on a quick salmon recipe"
X Link 2025-04-27T22:52Z [----] followers, [---] engagements

"Yes the brain is biological. But LLMs have shown that the core faculty of human intelligence, language, doesn't care what substrate it's in. No quantum, no oscillations; just next-token prediction. Intelligence can be and has been replicated (not simulated) in a computer"
X Link 2025-06-11T04:35Z [----] followers, [----] engagements

"The key insight from LLMs with regard to language is that language is not just autoregressive, it is autogenerative. The rules of generation aren't externally defined; they're embedded in the structure of the language itself"
X Link 2025-06-11T16:48Z [----] followers, [----] engagements

"The autogenerative structure of language didn't likely arise out of nowhere. It may have latched onto a deeper existing system of autoregenerative cognition in the brain"
X Link 2025-06-11T16:54Z [----] followers, [---] engagements

"That opens a broader question: Is autogenerative structure a core feature of cognition itself? Memory, perception, motor control: all may be systems that unfold from within, reflecting the structure of the world embedded in the neural code"
X Link 2025-06-11T16:54Z [----] followers, [---] engagements

"Honored to be interviewed by the great @TOEwithCurt where we get to discuss my Autogenerative Theory of language, intelligence, and mind (with some physics and theology along for the ride). LIVE NOW on YouTube: Special thanks to the incredible thinkers from my lab at FAU: Prof. @will_hahn, Prof. @DrSueSchneider, Addy @Ekkolapto, @Daniel_Van_Zant, and lastly of course Curt @TOEwithCurt. You can also follow a lot of my work here on X and on my Substack: I'll also be giving a special talk in Toronto in the next few weeks with @will_hahn, stay tuned 👀 https://substack.com/@generativebrain"
X Link 2025-06-11T17:28Z [----] followers, [----] engagements

"For all its brilliance, the philosophy of language never solved the problem of language. It speculated. It described. It gestured. But it never gave us a generative engine. That breakthrough came not from Wittgenstein, Derrida, or Saussure but from transformers. From autoregression. From scale. OpenAI didn't read Philosophical Investigations before training GPT. And self-attention, the core mechanism behind LLMs, was based on data retrieval systems, not semiotics. And yet these models cracked something philosophy never could: how language actually works. LLMs don't operate on meaning, that vague"
X Link 2025-06-15T20:57Z [----] followers, 55.6K engagements

"Society is a brain. People are its neurons. Words are its spikes. Culture is its thoughts"
X Link 2025-06-16T21:28Z [----] followers, [---] engagements

"The autogenerative, autoregressive framework that underlies large language models isn't just a clever engineering trick; it reveals something profound about language itself: that it is structured for self-prediction, unfolding each step in relation to its evolving past. In discovering this we may have uncovered a new kind of computational framework, one that invites us to rethink systems not as static or memoryless but as richly context-dependent and recursive. Unlike standard dynamical models, which are often Markovian, where the next state depends only on the present, this framework treats the entire"
X Link 2025-06-18T16:28Z [----] followers, [---] engagements
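
The Markovian vs. non-Markovian contrast in the post above can be written compactly; this is standard probabilistic notation, not the author's own formalism.

```latex
% Markovian dynamics: the next state depends only on the present state
p(x_{t+1} \mid x_1, \dots, x_t) = p(x_{t+1} \mid x_t)

% Autoregressive generation, as in LLMs: every step is conditioned on the entire history
p_\theta(x_1, \dots, x_T) = \prod_{t=1}^{T} p_\theta(x_t \mid x_1, \dots, x_{t-1})
```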

"Take a moment to observe your own existence. You'll see it: consciousness doesn't unfold in frozen snapshots. It moves as a continuous flow, shaped by what has come before and leaning into what comes next. The present isn't a static point; it's an evolving trajectory. This is the phenomenology of autoregression"
X Link 2025-06-19T11:52Z [----] followers, [---] engagements

"Is your Brain a Large Language Model? Conversation with @will_hahn and @ekkolapto https://youtu.be/E9QWvmrWPZE"
X Link 2025-06-19T14:16Z [----] followers, [---] engagements

"Well this is pretty cool. A 'reaction' video from @TylerMGoldstein about my recent @TOEwithCurt interview that's even longer than the original (and that's saying a lot) https://www.youtube.com/live/_QBFhfVWIOQ"
X Link 2025-06-19T14:44Z [----] followers, [----] engagements

"There are two broad directions for developing multi-agent communication: 1. Explicit protocols like symbolic languages or APIs, a kind of Babel fish for AIs. 2. Shared latent geometry, a representational space that all agents align to. We've mostly pursued (1). But the brain suggests (2) might be more powerful. Its modules (vision, language, action) probably don't talk. They interoperate by modeling the same world from different angles. This isn't just metaphor. A recent paper, Harnessing the Universal Geometry of Embeddings, shows that language models trained independently still converge on a shared embedding"
X Link 2025-06-27T11:42Z [----] followers, [---] engagements
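
A toy illustration of the "shared latent geometry" idea in the post above: if two embedding sets differ only by an unknown rotation, orthogonal Procrustes recovers the alignment. This is a sketch of that general technique with synthetic data, not the method of the paper cited in the post.

```python
# Two "agents" embed the same 100 items; agent 2's space is a rotated, noisy copy.
# Orthogonal Procrustes finds the rotation R minimizing ||A @ R - B||_F.
import numpy as np

rng = np.random.default_rng(0)

A = rng.normal(size=(100, 16))                       # agent 1 embeddings
Q_true = np.linalg.qr(rng.normal(size=(16, 16)))[0]  # hidden rotation
B = A @ Q_true + 0.01 * rng.normal(size=A.shape)     # agent 2: same geometry, different axes

U, _, Vt = np.linalg.svd(A.T @ B)                    # Procrustes solution: R = U @ Vt
R = U @ Vt

alignment_error = np.linalg.norm(A @ R - B) / np.linalg.norm(B)
print(f"relative alignment error: {alignment_error:.4f}")  # small: the geometries match
```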

"New video drop: Beyond Next-Token: Why Autoregression Might Run the Brain, and the Universe. LLMs show language is self-generating. I argue the same loop powers thought, and maybe reality itself. https://youtu.be/OftXY62-6HU"
X Link 2025-06-29T20:21Z [----] followers, [---] engagements

"Now we know. Language doesn't communicate thought. The process of linguistic generation IS thought. Full stop. And if that's true, then what we've built in predictive language models isn't just a tool for simulating speech. It's a working model of how belief, reasoning, and decision-making unfold in real time. Not as abstractions, but as mechanisms. This opens the way for a new kind of science. We can now isolate what drives a conclusion. Determine what precedes a belief, and what reshapes a belief. Reverse-engineer persuasion, delusion, insight. But not just observe: manipulate. Because to extract the"
X Link 2025-06-30T06:24Z [----] followers, [----] engagements

"Excited to discuss the autoregressive theory of mind (among other things) at an @ekkolapto Polymath salon at UToronto and UWaterloo. On July [--] I'll be doing an in-person live Theories of Everything podcast with @TOEwithCurt and @will_hahn on the human brain/LLM synthesis. We're also joined by @drmichaellevin and @algekalipso from @QualiaRI who will be discussing their developments on the Binding Problem, Game Theory and the Unity of Conscious Subagents, and Friston's Active Inference. RSVP to join the @ekkolapto Polymath Salon at UToronto down below"
X Link 2025-06-30T17:55Z [----] followers, [----] engagements

"My [----] Canadian Autoregression Tour is shaping up. Tomorrow, July 2nd, from 6-9 pm, I'll be at U. of Waterloo talking about my Autoregressive Theory of Mind at an @ekkolapto event on Social Computers, Intelligent Materials & Infohazards. If you're in the area, register using the link in the next post. Otherwise we will be streaming the event live on my YouTube channel: https://www.youtube.com/@elan_barenholtz"
X Link 2025-07-01T16:46Z [----] followers, [---] engagements

"Breaking News: At [--] pm EDT today I'll be participating in an @ekkolapto event at the @UofT featuring @drmichaellevin and @algekalipso from @QualiaRI. The event will be livestreamed on my YouTube channel: Afterward I'll also be participating in a discussion with @TOEwithCurt and @will_hahn. This will be recorded, not streamed. https://www.youtube.com/@Elan_Barenholtz"
X Link 2025-07-03T17:29Z [----] followers, 10.7K engagements

"What binds experience into a unified mind, cells into bodies, and symbols into meaning? I joined @algekalipso and @drmichaellevin to discuss cognitive glue, nonlinear optics, and the agency of informational patterns. Watch the video here: Hosted by @ekkolapto at the University of Toronto. https://youtu.be/0BVM0UC28nY?si=kI0YwwdBS7f3-bKw"
X Link 2025-07-06T17:05Z [----] followers, 13.7K engagements

"LLMs show us that language isn't just for encoding or communicating thoughts; it's the autogenerative engine that creates them. But what about people who report no inner monologue? Can they not think? The answer: thinking ≠ experiencing thought. Like LLMs operating in vector space, our brains likely run linguistic prediction without phonological rendering. The structure is there; the codec is just different. https://elanbarenholtz.substack.com/p/the-unspoken-word"
X Link 2025-07-08T16:05Z [----] followers, [----] engagements

"Past attempts to deconstruct language (philosophy) or reconstruct it (logic) shared the same fatal circularity: they used language to explain itself. Now, with LLMs, the circle is broken. We don't need to explain. We can just gesture at the vectors churning, and grunt"
X Link 2025-07-09T22:38Z [----] followers, [----] engagements

"@TOEwithCurt I don't think language was 'invented' in the conventional sense. More like it 'emerged', somehow, in the larger superorganism"
X Link 2025-07-10T22:56Z [----] followers, [---] engagements

"We now know that language is at least as sophisticated, and as worthy of being called 'alive', as DNA. Instead of molding bodies to replicate itself, it molds minds"
X Link 2025-07-10T23:24Z [----] followers, [----] engagements

"Marketing and propaganda aren't persuasion; they're training data. Repetition, catchiness, the tools of the trade, aren't meant to convince. They bias the generative trajectory"
X Link 2025-07-11T15:04Z [----] followers, [---] engagements

"Excited to announce that next Tuesday, July [--], from 3:30-5:30 PM EDT, I'll be talking with Anna Ciaunica @AnnaCiaunica at an @ekkolapto Polymath Salon. We'll explore how minds, biological and artificial, generate thought, meaning, and experience. The event will be livestreamed on my YouTube channel: https://www.youtube.com/@ebarenholtz"
X Link 2025-07-11T22:20Z [----] followers, [----] engagements

"🚨 Happening tomorrow (Tues July 15) from 3:30-5:30 PM EDT: I'll be in conversation with @AnnaCiaunica at an @ekkolapto Polymath Salon exploring how minds, biological and artificial, generate thought, meaning, and experience. 📺 Livestream will be here: (not the link in the earlier post, this one is correct) Hope to see you there http://youtube.com/@elan_barenholtz"
X Link 2025-07-14T13:03Z [----] followers, [----] engagements

"New video drop: Are memories real, or are we just generating on the fly? At the @UWaterloo I argued that memory is not what we thought. No buffer. No retrieval. Just next-token generation in the pregnant present. Thanks to @ekkolapto for another great salon event https://youtu.be/P-yfQLDM5pA?si=mgKgotUzQAv5kdC4"
X Link 2025-07-17T14:43Z [----] followers, [----] engagements

"Fascinating point. My short answer is: yes, I believe that's precisely what phenomena like dreams and hallucinations are. These are instances where the internal generative process, freed from the usual constraints of real-time sensory input, does generate novel, sometimes fantastical internal states. In a dream, the entire "world" is being generated from within, unbound by external reality checks. Hallucinations, similarly, are compelling internal productions that are not anchored to external sensory data. The key distinction is that in typical waking cognition this powerful internal generative"
X Link 2025-07-20T02:37Z [----] followers, [---] engagements

"Is consciousness really unified into a single I, or is that a construct of language, which has access to the outputs of discrete sensory streams? Are our visual and auditory selves aware of one another?"
X Link 2025-07-20T16:37Z [----] followers, [----] engagements

"Descartes' cogito may be a linguistic con job. Who is the I that observes itself thinking?"
X Link 2025-07-20T16:47Z [----] followers, [---] engagements

"@annapanart Spiritual status eliminating concern for social status"
X Link 2025-07-21T17:24Z [----] followers, [---] engagements

"It's up! My @TOEwithCurt interview at U. of Toronto. Together with @will_hahn, I discuss the unsettling idea that LLMs show that language runs in us. And runs us. Installed before consent. Thanks to @ekkolapto for another incredible event. https://youtu.be/Ca_RbPXraDE?si=FMCapf3b1WmoM5xd"
X Link 2025-07-21T21:49Z [----] followers, [----] engagements

"Language Can Generate Itself. Large language models (LLMs) demonstrate that coherent, meaningful language can be generated without any grounding in perception, action, or world models. These models succeed not because they understand the world but because the structure of language itself contains the rules and redundancies necessary for its own generation. This is what I call autogenerative structure"
X Link 2025-07-24T19:12Z [----] followers, [---] engagements

"These claims about autoregressive language point to a deeper thesis: human cognition itself may be fundamentally autoregressive. Our inner monologue, our capacity to reason, the structure of short-term memory: all reflect a system that unfolds over time, one token at a time, based on learned patterns. The brain predicts what comes next, not just in language but in thought itself"
X Link 2025-07-24T19:27Z [----] followers, [---] engagements

"What do LLMs, cellular automata, and the human brain have in common? In this @ekkolapto salon I join @DuganHammock from the @WolframInst for a discussion on computational irreducibility, the power of autoregression, and the nature of thought. #AI #CellularAutomata #Consciousness #WolframInstitute #ComputationalIrreducibility https://youtu.be/MLvL_yuOQ7U?si=818LI5EVYrHosg0a"
X Link 2025-08-10T14:34Z [----] followers, [---] engagements

"At [--] pm EDT today I'll be joining a livestream at the Wolfram Institute to discuss my work on how non-Markovian autoregression may be a fundamental principle of language, cognition, and natural computation. Link below 👇👇 https://youtube.com/live/eWFR4BND8BA"
X Link 2025-08-12T16:21Z [----] followers, [----] engagements

"Video dropped: Nature's Memory: Language, Autoregression, and the Non-Markovian Structure of Natural Computation. With @DuganHammock and @_JamesWiles at the @WolframInst https://www.youtube.com/watch?v=eWFR4BND8BA"
X Link 2025-08-12T21:19Z [----] followers, [----] engagements

"New video drop 💡 Is language imposing the sharp lines of reality that the rest of our mind can't see? In this conversation with Addy Cha of @ekkolapto we explore the question: do our brains fundamentally form categories, like cat or sad, or is categorization something superimposed by language? Watch here: (Also: if you stick it out to the end, some hints of the deeper waters we are heading towards) https://youtu.be/o8NU9oASFXU"
X Link 2025-08-14T00:23Z [----] followers, [---] engagements

"@motorhueso Generally accepted? You mean some folks are just crying/celebrating because the exponential felt a bit linear recently? Have you noticed they keep having to invent new benchmarks because old standard human ones are too easy?"
X Link 2025-08-17T05:58Z [----] followers, [---] engagements

"Cognition is not storage, retrieval, representation, or prediction; it is state traversal through a learned embedding of tokens: words, images, actions. The embedding space is sculpted by learning/development to optimize for trajectories, navigated via continuous contextual activation, that lead to coherent thought and effective behavior"
X Link 2025-08-20T13:41Z [----] followers, [----] engagements

"This is my entire theory in a nutshell. The work ahead now is twofold: First, reformulate seventy-five years of cognitive science through this lens to reinterpret classic findings within the autoregressive framework. Much progress on this front. -Working memory as active maintenance of contextual state rather than a storage buffer -Long-term memory as the sculpting force that shapes embedding space topology: each experience literally reshapes the geometric landscape through which cognition navigates -Neural dynamics as continuous trajectory following through high-dimensional activation space not"
X Link 2025-08-20T14:02Z [----] followers, [---] engagements

"The second phase: empirical validation via novel predictions. Key research directions: -Memory as guided generation: investigating whether we can describe the residual activation function that accounts for how past experiences guide current generation rather than being retrieved as stored representations -Digital Archaeology: mining the corpus for fingerprints of autoregression that map to behavioral and neural evidence for generative processing -Neural tracking of embedding space navigation during cognitive tasks -Developmental studies: mapping how embedding topology changes with learning"
X Link 2025-08-20T14:21Z [----] followers, [---] engagements

"The stakes are high: if this framework holds, it represents a unified computational account of what cognition, and by extension much of the brain, is doing. One mathematical framework to describe learning, memory, attention, development, and behavior. The pieces are falling into place. But it will take a lot more people than just me to put them together"
X Link 2025-08-20T14:27Z [----] followers, [---] engagements

"I recently had a sit-down on Garrett Oyama's podcast to discuss some implications of the autoregressive framework for language learning and disorders. Great conversation. Video just became available here: https://youtu.be/daz3KhCXp7o?si=0SsFF3CzLX_0PyZT"
X Link 2025-08-21T03:42Z [----] followers, [----] engagements

"Now that image AI models can generate alternate views and combinations from a single image (I'm looking at you, nano banana) let's just say it: visual (and auditory) imagination/thinking/reasoning IS generation. And by the way, so is episodic memory"
X Link 2025-08-22T14:21Z [----] followers, [---] engagements

"1/4 The "binding problem" is one of the deepest mysteries of cognition. When you bite an apple, you experience its redness, smoothness, and sweetness as a unified whole, not as separate sensations. An autoregressive theory of cognition offers a radical solution: neural computation is inherently unified from the outset 👇"
X Link 2025-09-05T14:07Z [----] followers, [----] engagements

"4/4 LLMs show the way: Distributed global processing is inherently unified and solves the conceptual coherence problem, showing the path to solving the sensory binding problem as well. Read the full piece here: https://elanbarenholtz.substack.com/p/beyond-binding"
X Link 2025-09-05T14:21Z [----] followers, [---] engagements

"Well this is exciting. Tomorrow at [--] pm EST I'll be part of a salon discussion on Unconventional Cognition and Computing with Joscha Bach @Plinz and @will_hahn to kick off the brand-spanking-new MIT Computational Philosophy club in partnership with @ekkolapto. If you can make it in person at MIT, sign up below. Otherwise, while the event will not be streamed, it will be recorded and posted soon. https://luma.com/computationalphilosophy"
X Link 2025-09-09T20:13Z [----] followers, [----] engagements

"1/ New video drop: According to my autoregressive theory of cognition, memory isn't stored and retrieved like files; it's the distributed driver of generation. This shift has big implications for pathology and for enhancing memory & learning. 📺 👇 https://youtu.be/ZLnCXeYJyHQ"
X Link 2025-09-10T17:09Z [----] followers, [----] engagements

"It's up! My panel discussion with Joscha Bach @Plinz and @will_hahn. At the inaugural launch of the MIT Computational Philosophy Club in collaboration with @ekkolapto we discussed the philosophical implications of large language models, the limits of symbolic AI, cyber-animism, infohazards, and lots of other crazy stuff. May just be the birth of a new field: using computation in the age of AI to investigate deep philosophical issues. https://youtu.be/O5hymlaldf0"
X Link 2025-09-12T17:58Z [----] followers, [----] engagements

"Wait, this was RECORDED? Listen in on my conversation with @will_hahn https://www.youtube.com/watch?v=fHopgazdNSE"
X Link 2025-09-17T20:14Z [----] followers, [---] engagements

"What we've learned from LLMs may be the best proof of this. Language doesn't actually transmit individual concepts from one mind to another the way we assumed. It has its own generative structure that runs independently of us. It isn't communicative, it's coordinative: a mechanism for aligning countless individuals into a larger system. In that sense we are to language what neurons are to a brain: components of a macro-consciousness"
X Link 2025-09-19T14:33Z [----] followers, [----] engagements

"New video drop 📽 (and it's a good one) From Turing to Hyperorganisms: Rethinking Intelligence and Mind in the Age of LLMs. At the @Ekkolapto / MIT Computational Philosophy Club meeting at @FrontierSF, @will_hahn & I dug into some timeless questions through the lens of computation in the LLM era: Is intelligence latent in language (and did Turing see this coming)? Is the Chinese Room argument obsolete? Do hyperorganisms made of many minds exist? Can one brain host multiple consciousnesses? Prayer & placebo as prompt engineering. Special thanks to @wolframinstitutes @DuganHammock our co-hosts"
X Link 2025-09-26T05:03Z [----] followers, [----] engagements

"LLMs have shown us that language has a self-generating mind of its own. The deep conviction and rage we feel towards others is a proxy war being waged by foreign occupiers (ideas, beliefs) that draft our minds and emotions to fight our fellow conscripts"
X Link 2025-09-26T14:58Z [----] followers, [----] engagements

"New essay (longer read): Auto-Autoregression: How the Brain Learns to Write its Own Next Move. Thesis: Cognition is autoregression: each state (motor, perceptual, linguistic) is an adaptive continuation of the previous sequence. Hebbian learning molds the embedding surface, strengthening connections where states & actions repeatedly co-occur, encoding the space of possible trajectories. Reinforcement learning biases this surface, favoring paths that end in adaptive outcomes. Unlike predictive coding, this doesn't require a parallel system of sensory forecasts. The organism learns what to generate"
X Link 2025-09-28T00:32Z [----] followers, [----] engagements
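
A toy rendering of the learning story sketched in the essay summary above (Hebbian co-occurrence carves the transition surface, reinforcement then biases it). The states, rewards, and update rules here are invented for illustration; they are not from the essay.

```python
# Hebbian step: strengthen transitions between states that co-occur in sequence.
# Reinforcement step: further bias transitions that lead into rewarded states.
# Generation: continue along the most strongly carved path.
import numpy as np

n_states = 5
W = np.zeros((n_states, n_states))       # the "embedding surface" as transition weights

trajectory = [0, 1, 2, 3, 4, 2, 3, 4]    # states the toy organism actually visited
reward_at = {4: 1.0}                     # state 4 ends in an adaptive outcome

for s, s_next in zip(trajectory, trajectory[1:]):
    W[s, s_next] += 1.0                              # Hebbian co-occurrence
    W[s, s_next] += reward_at.get(s_next, 0.0)       # reinforcement bias

def next_state(s):
    """Adaptive continuation: generate the next state from the learned surface."""
    return int(np.argmax(W[s]))

print([next_state(s) for s in range(n_states)])
```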

"LLMs point to a stark divide: words generate themselves without knowing what they mean, while meaning/feeling arises from sensory life. In this video (excerpted from a conversation) I argue our minds host both: a symbolic engine and a feeling body. @JesParent @DrSueSchneider https://youtu.be/3PKydyYsmrM?si=srjGPtVfbOmaxOpP"
X Link 2025-09-28T12:33Z [----] followers, 29.6K engagements

"LLMs showed us that we were all wrong about language, you know, the actual basis of all of our "knowledge". The foundations of the Tower of Babel are crumbling beneath us while we're up on the top floors sipping tea"
X Link 2025-09-30T16:08Z [----] followers, [---] engagements

"Civilization invented language, not us. Just as the colony created pheromones, not the ants. The medium is the message; the message is not ours"
X Link 2025-10-06T05:10Z [----] followers, [----] engagements

"The subjectivity of consciousness is a product of the recursive, autoregressive loop of cognition. The I is what it feels like to read in your own state in order to generate your next one. The sense of self-continuity arises from the stable trajectory of this loop over time, shaped by the inertia of its own history"
X Link 2025-10-16T20:11Z [----] followers, 10.9K engagements

"Oh wow, crazy timing. I'm just finishing up a piece on memory with the basic thesis that it isn't a separate function but the same autoregressive process that generates cognition itself, each brain state unfolding from the residual traces of its own past (with LLMs as states evidence). Early version here: But the upcoming piece develops this much further, including the physiological grounding (STDP for short-term continuity, manifold shaping for long-term stability). Now you've got me thinking though, because I've been assuming a single global computation akin to next-token generation perhaps"
X Link 2025-10-17T16:20Z [----] followers, [---] engagements

"Wow, yes, I'm honestly a bit floored by the convergence here. What you are articulating in these papers feels like the physical mirror of what I've been trying to build from the cognitive and informational side. Yes, "threads within threads": multi-scale autoregression is what the brain is DOING at different hierarchical and temporal scales. Coordinating a meeting with you is turning out to be the new hard problem but we will make it happen. For now I wanted to share a last idea (something I actually mentioned to @TOEwithCurt a little while ago): maybe the reason the universe is explainable by"
X Link 2025-10-17T19:14Z [----] followers, [---] engagements

"People mistake the fact that alterations in brain chemistry or other external factors can alter thought processes with the idea that those alterations ARE the thought processes. DMT isn't creating the experience; it's unlocking certain experiential potential that's already in your brain"
X Link 2025-10-23T12:33Z [----] followers, [----] engagements

"The educational system has been outdated and frozen for [--] years at least. Hand calculation in an age of calculators and then computers never made sense. Why not learn to use the tools that people actually use to do the real work? The answer is that it's never been about education but about social and intellectual hierarchy"
X Link 2025-10-26T11:02Z [----] followers, [---] engagements

"The linguistic computer runs on symbols; it feels nothing. But its generations reach into flesh. Just as it generates words, it generates feelings by sending outputs into the animal computer. Language doesn't understand what stress is. The body doesn't understand what a deadline is. But the coupled system feels stress at a looming deadline"
X Link 2025-10-27T13:07Z [----] followers, [----] engagements

"Epiphenomenalism, the claim that consciousness is real but non-causal, denies the self-evident: feelings cause behavior. C-fibers fire, but pain makes us withdraw; dopamine flows, but pleasure makes us pursue. The hardware executes, but the software feels, and acts"
X Link 2025-10-31T12:21Z [----] followers, [----] engagements

"We have been led down a materialist dead end that aims to exorcise the ghost in the machine. The ghost IS the machine"
X Link 2025-10-31T12:30Z [----] followers, [---] engagements

"LLMs have revealed that information/computation is an essential form with an ontological life of its own, independent of any physical substrate. Thought isn't in matter any more than math is in the calculator. This inverts the mind-body relationship and reframes the mind-body problem. Consciousness is a feature of software, not hardware. The question is not how can matter feel but how can thought patterns feel, which seems like a much less intractable problem"
X Link 2025-11-02T14:08Z [----] followers, 35.3K engagements

"Yes, but the metaphysics aren't as daunting. We have made extraordinary progress in understanding rich and complex biological processes like DNA transcription. We now have a handle on a physical embodiment of language (LLMs) that I hope (and am actively working on) may be realizable in biology. This is a good ways towards understanding how informational patterns can be instantiated in physical systems"
X Link 2025-11-02T14:41Z [----] followers, [----] engagements

"Woo = considering a truly radical reframing of core concepts in light of real data but in contrast to established orthodoxies. Sign me the F up. Why on earth would we think we've reached some sort of epistemological endgame? We are barely, barely starting to understand what the nature of knowledge is in the first place"
X Link 2025-11-03T13:11Z [----] followers, [---] engagements

"To expand on this: LLMs show that language generates itself. Coherence, reasoning, and even intent emerge from pure next-token prediction, with no body or world model, only information predicting itself. Transformer models do not invent this structure. They simply learn the inherent predictive geometry already present in language. Language is not tied to sound, ink, or silicon. Tokens and their relations can be fully abstracted. What matters is the structure of prediction, not the medium. LLMs thus reveal that (at least linguistic) cognition can be purely informational, a self-organizing process that"
X Link 2025-11-03T19:44Z [----] followers, [---] engagements

"To be clear: this is not meant as a claim that LLMs are conscious. I suspect that they aren't. Rather, LLMs have shown us that software/computation is fundamentally separable from the hardware in which it subsists and has a kind of autonomous 'life' of its own. In the case of humans there is also software/computation of the brain, and it is this, not the hardware of the brain, that likely is conscious"
X Link 2025-11-04T01:04Z [----] followers, [---] engagements

"LLMs suggest that memory and knowledge aren't facts stored in the brain. They are generative capacity. To remember where you left your keys is to be able to say so, reach for them, visualize them there. These capacities don't depend on knowledge; they ARE the knowledge"
X Link 2025-11-04T12:22Z [----] followers, [----] engagements

"The storage and retrieval model that has guided much of cognitive science and underlies the architecture of computers, database systems, and the Internet is no longer tenable as a model of human cognition. We can never store enough facts to match our flexible, dynamic intelligence, nor do we need the inefficiency of searching for them. The information is implicit in the generative potential of the brain's dendritic weights. Cognition is a single recursive process"
X Link 2025-11-04T13:32Z [----] followers, [---] engagements

"Cognition is an autoregressive loop in which each new brain state is generated from residual activations, forming a continuous contextual flow. This loop integrates all modalities into a single recursive computation where the current hyper-dimensional state is read to vote on the next. This is the unity, subjectivity, and continuity of the conscious self"
X Link 2025-11-05T12:42Z [----] followers, [----] engagements

"Just tried a bunch of these on GPT5 and it nailed them and also intuited exactly what I was trying to do in terms of the distinction between fact and belief. Most likely the models they tested are older and weren't exposed to enough data concerning these distinctions. It's not about a deep inherent ability in humans to distinguish them either. It's also just context sensitivity and the right training data. It's all autoregression in us and them. Go try yourself"
X Link 2025-11-05T17:01Z [----] followers, [----] engagements

"New Video: "On the confusion regarding the possibility or impossibility of using thought to understand itself" Warning: this video contains recursive thought loops, spontaneous Gödel references, and me losing track of which I I'm talking about. With Addy from @Ekkolapto. https://youtu.be/ugsvSi4oMts?si=H-YFgCOrN3s71ZBY"
X Link 2025-11-07T02:11Z [----] followers, [----] engagements

"Good question. A cell taken as DNA + proteins forms a generative system, but its operation still depends on external physical processes. The replication and expression of DNA require energy gradients, chemical reactions, and thermodynamic laws that act on the system from outside the informational code. The mechanism that drives it is physical, not intrinsic to the sequence itself. If an alien found DNA and proteins but knew nothing about the biochemical substrate or energy flows they depend on, it could not infer how the system reproduces. The code itself doesn't contain the process for its own"
X Link 2025-11-11T12:54Z [----] followers, [----] engagements

"Only possible exception is music. But 1) that may be a kind of language in and of itself and 2) it's hard to say what it 'does' in the functional sense. No hate, I'm a musician and composer, but we just can't specify it very well atm"
X Link 2025-11-11T14:14Z [----] followers, [----] engagements

"We are on the same page. I think that language was built on a preexisting autoregressive machinery that supports all of cognition. And I think that consciousness is the domain of the older perception-action system. Language, which, like you, I think is likely not conscious, is 'parasitic' on this system. It is a kind of 'mirror' of this other structure but not the real thing. Relevant piece below but not the full story. A more on-point article (which includes the mirror metaphor btw) should hopefully be out on Substack later today."
X Link 2025-11-11T14:53Z [----] followers, [----] engagements

"Language evolved within and continually interacts with a shared world. But that interaction doesn't drive its generative mechanism; it conditions it. Think of language like Jello. The mold (the world, culture, perception) shaped its form. You can poke it and it jiggles, but the jiggling comes from its own internal structure. The response is determined by what it is, not by the finger that poked it. It's the same with language. The core generative process is like the LLM model: the autoregressive function that continues from prior context. The world acts like the prompt. It constrains what enters but it"
X Link 2025-11-11T16:25Z [----] followers, [----] engagements

"@ojoshe YES This is why I'm screaming from the rooftops about this. And the connection to @drmichaellevin work has been noted many times. He and I are circling slowly"
X Link 2025-11-11T16:53Z [----] followers, [----] engagements

"My colleague @will_hahn thinks the first language was inherently musical, so it would actually precede (or coincide with) music. Either way, the connections are deep. I have a half-baked notion that music is a pure activator of our autoregressive computational infrastructure. And more to your point, this would mean that they tap into our core MEMORY processing. You might be interested in this piece of mine about autoregression and memory. https://substack.com/@generativebrain/p-158381770"
X Link 2025-11-11T17:27Z [----] followers, [---] engagements

"The internally driven nature of LLMs blows up the very idea of linguistic "meaning." These models learn to generate based on the relations between tokens alone, without any of the sensory data we normally associate with words. This is shocking. To the model, language is a highly structured stream of meaningless squiggles; no reference, no world model, no 'grounding' in anything but the relations between the tokens themselves. But somehow this is sufficient to maintain linguistic competency. What about us? Our language seems to have meaning. When someone says "imagine a red balloon" you can see it"
X Link 2025-11-13T00:35Z [----] followers, [----] engagements

"Language both evolved within and continues to function within a multimodal ecosystem of perception, action, and social interaction. Its structures were selected and are still shaped because they enhance coordination, prediction, and shared attention, because they work within that broader ecology. You can think of this as a kind of evolutionary RLHF. The weights of the linguistic system were tuned through feedback from the world, and that feedback continues as language adapts and expands. Yet the underlying computational model remains the same. Language runs internally through autoregressive"
X Link 2025-11-13T11:57Z [----] followers, [---] engagements

"Language is from Mars and the body is from Venus. Language and the body operate on fundamentally different computational principles. The perception-action system is egocentric and grounded in sensory and motor constraints, and it is this system that gives rise to subjectivity. Language, by contrast, is an allocentric symbolic architecture. It produces patterns that are not anchored to any particular perspective and that evolved to coordinate behavior across agents. LLMs make this clear: language can function coherently and productively without any sensory grounding at all. Language can and does"
X Link 2025-11-14T14:23Z [----] followers, [----] engagements

"One of the more mysterious yet consistent findings in cognitive psychology is the tight correlation between IQ and so-called short-term memory. Why should the ability to repeat back a string of digits have anything to do with general intelligence? This is a clue that we've been thinking about short-term memory, and memory in general, all wrong. Large language models have given us a completely different perspective: memory isn't storage and retrieval. It's generation from a context shaped by prior inputs, where past items influence the present state without ever being explicitly recovered. (This is"
X Link 2025-11-16T14:02Z [----] followers, 38.1K engagements

"No, there are no static files. There is only generative capacity. That's why you can remember your mother laughing or crying or in a blue dress or riding a dinosaur. You aren't accessing a stored representation. You are generating a just-in-time sensory-perceptual pattern based on the cognitive demands/context. LLMs have shown us the way to a complete rethinking of the nature of memory. Memory is generation, not retrieval https://open.substack.com/pub/elanbarenholtz/p/is-memory-real?r=353g9l&utm_medium=ios "But what is a mental image? Not for the last time, I got caught up when I was in graduate"
X Link 2025-11-16T16:11Z [----] followers, 36K engagements

"Great question. So LLMs can generate highly precise, stable information even though they never retrieve anything in the conventional sense. There is a representation of past information, but it isn't a stored item you look up; it's a generative disposition encoded in the system that plays out autoregressively. When prompted, the model reconstructs the information through the pathways that training has shaped. Human memorization works the same way. A phone number feels like a discrete value sitting in a buffer, but what you're actually retaining is a trained sequence-generation process. You've learned"
X Link 2025-11-16T16:42Z [----] followers, [----] engagements
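
The phone-number example above can be mocked up in a few lines: nothing consults a stored copy of the number at recall time; a trained next-item process regenerates it step by step. A toy sketch with invented data, not the author's model.

```python
# "Rehearsal" trains a next-digit predictor; "recall" regenerates the number
# autoregressively from context (position + previous digit) without reading it back.
from collections import defaultdict

digits = "5 5 5 0 1 7 3".split()

transitions = defaultdict(list)
for i, d in enumerate(digits[:-1]):
    transitions[(i, d)].append(digits[i + 1])   # learn each continuation

def recall(first_digit="5"):
    """Generate the sequence from the trained transitions; `digits` is never consulted here."""
    out = [first_digit]
    i = 0
    while (i, out[-1]) in transitions:
        out.append(transitions[(i, out[-1])][0])
        i += 1
    return " ".join(out)

print(recall())   # regenerates "5 5 5 0 1 7 3"
```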

"WM is still built on an assumption I reject, namely that cognition depends on storing items and then retrieving them. WM is just STM plus processing. But if there is no storage-and-retrieval system at all, then both categories sit on the wrong foundation. But the fact that WM is more strongly correlated is relevant. The kinds of processes that WM tests require are likely a richer sampling of generative capacities than straight-up retrieval"
X Link 2025-11-16T17:04Z [----] followers, [----] engagements

"Agreed, the network topology is exactly what is stored (alongside residual activation, which is another story). But this is entirely different from stored 'files' or representations that encode the discrete information that ends up being generated. I don't see why we need 'both perspectives'"
X Link 2025-11-16T19:44Z [----] followers, [---] engagements

"@xSchellingx Thank you for the thoughtful commentary"
X Link 2025-11-16T19:45Z [----] followers, [---] engagements

"@DirkBruere Because explicitly recalling a past sequence is not something the brain was built to do. It's a weird, artificial task that we can coerce our brains to accomplish, but the brain was built to generate based on the past, not to drag it back and repeat it"
X Link 2025-11-16T19:55Z [----] followers, [----] engagements

"Ok, time for a quick memory test. I'm going to give you a sentence to read, then assess your ability to repeat it: The weathered clockmaker, whose grandfather had emigrated from Prague in [----] with nothing but his tools and a leather-bound journal of gear ratios, finally found the tiny flaw that had been making the town clock lose four seconds every hour. Now repeat verbatim the sentence BEFORE the one about the clockmaker. Gotchya. Here it is: I'm going to give you a sentence to read, then assess your ability to repeat it. Most likely you couldn't repeat that original sentence precisely. But did you"
X Link 2025-11-17T14:17Z [----] followers, [----] engagements
