@gerardsans Gerard Sans | Axiom posts on X most often about open ai, ai, and token. They currently have [------] followers and [----] posts still getting attention, which total [-----] engagements in the last [--] hours.
Social category influence: technology, brands, stocks, travel destinations, finance, countries, celebrities, social networks, events, musicians, cryptocurrencies
Social topic influence: open ai, ai, token, has been, $googl, agi #373, just a, math, javascript, generative
Top accounts mentioned or mentioned by: @openai @angular @garymarcus @sama @googledevexpert @ngromeconf @alexwei @anthropicai @googledevs @googleai @davewhite @googlecloud @googledeepmind @rao2z @pauseai @google @gruizdevilla @pmarca @matt95261 @arxiv
Top assets mentioned: Alphabet Inc Class A (GOOGL), Salesforce Inc (CRM), DeepSeek (DEEPSEEK), Microsoft Corp. (MSFT), Robot Consulting Co., Ltd. (LAWR)
Top posts by engagements in the last [--] hours
"@TheAiGrid @GaryMarcus We already know. While I understand AI CEOs lying through their teeth. I dont understand why anyone else would believe any ridiculous claims like AGI as anything but straight out of a science fiction movie. Wake up Since GPT-2 there hasnt been any relevant technically progress"
X Link 2024-06-14T15:16Z 35.9K followers, [--] engagements
"@selfless_qubit The truth behind flawed AI alignment and safety narratives. https://ai-cosmos.hashnode.dev/the-ai-alignment-illusion-why-the-human-like-approach-masks-deeper-problems https://ai-cosmos.hashnode.dev/the-ai-alignment-illusion-why-the-human-like-approach-masks-deeper-problems"
X Link 2024-11-08T16:57Z 35.9K followers, [--] engagements
"@chatgpt21 AGI originated as a misleading narrative. Be cautious about whom you trust. Seek evidence not just aspirational or speculative claims about what might happen. Clever marketing and rebranding dont magically create intelligence. https://ai-cosmos.hashnode.dev/the-transformer-rebranding-from-language-model-to-ai-intelligence https://ai-cosmos.hashnode.dev/the-transformer-rebranding-from-language-model-to-ai-intelligence"
X Link 2024-11-11T16:49Z 36K followers, [--] engagements
"@tmoellenhoff @neurips24fitml LLMs dont operate as simple black boxes driven solely by inputs and outputs. Instead they function through a latent space shaped by their training data. Avoiding oversimplified assumptions is crucial to understanding their operation. https://ai-cosmos.hashnode.dev/understanding-how-llms-generate-responses-patterns-latent-space-and-the-chatbot-illusion https://ai-cosmos.hashnode.dev/understanding-how-llms-generate-responses-patterns-latent-space-and-the-chatbot-illusion"
X Link 2024-11-11T16:56Z 36K followers, [--] engagements
"@MrCatid AGI originated as a misleading narrative. Be cautious about whom you trust. Seek evidence not just aspirational or speculative claims about what might happen. Clever marketing and rebranding dont magically create intelligence. https://ai-cosmos.hashnode.dev/the-transformer-rebranding-from-language-model-to-ai-intelligence https://ai-cosmos.hashnode.dev/the-transformer-rebranding-from-language-model-to-ai-intelligence"
X Link 2024-11-11T17:16Z 36K followers, [--] engagements
"Update: OpenAI o3-mini unreliability The test: multiplying two numbers from [--] to [--]. Eg: 2x14 18x3. Results: General failure. Even multiplying by [--] is not reaching 100% which debunks any true reasoning or generalisation. Remember: DO NOT rely on AI in high stake situations. For those curious about how o3-mini performs on multi-digit multiplication here's the result. It does much better than o1 but still struggles past [----]. (Same evaluation setup as before but with [--] test examples per cell.) https://t.co/uLJqoh7IqB For those curious about how o3-mini performs on multi-digit multiplication"
X Link 2025-02-13T16:41Z 35.9K followers, [---] engagements
"Are you an Angular Dev or just curious about the latest AI from Google Try this project to start playing with the new Gemini [---] Live API. It took me a week but is ready now. Multimodal realtime voice interactions are here Think JARVIS. I know. Quite a lot to take in"
X Link 2025-02-15T11:49Z 35.9K followers, [----] engagements
"@athyuttamre At the same time OpenAI is responsible for two major issues that threaten the entire AI ecosystem: anthropomorphism and the misrepresentation of the technology to the public. https://ai-cosmos.hashnode.dev/the-silicon-valley-ai-bubble-openais-thinking-model-circus-act https://ai-cosmos.hashnode.dev/the-silicon-valley-ai-bubble-openais-thinking-model-circus-act"
X Link 2025-02-22T15:33Z 35.9K followers, [--] engagements
"Bringing the latest Google AI to @confooca ๐ Workshop: First-ever Gemini Live API training with @angular ๐ค Talk: Gemini Nano Web + AI Chromes built-in AI & DevTools ๐ค Future of prompting: coding tools & voice assistants in [----] ๐ฅ๐ฆ๐ฝ๐ธ #GoogleAI #ConFoo"
X Link 2025-02-23T13:23Z 35.9K followers, [---] engagements
"@luke_drago_ @LRudL_ AGI is a myth that you should already have identified. Until you understand how the technology works theres no way for you to influence its future. Invest more time learning versus speculating"
X Link 2025-04-10T14:12Z 35.9K followers, [---] engagements
"@joecarlsonshow So the answer is no. Waymo is not profitable today. No need to make up the facts. Will it ever be We dont know. We hope so"
X Link 2025-06-03T15:43Z 35.9K followers, [--] engagements
"Treating a next token engine as your buddy is material for a movie but not great for your mental health. You can certainly play dolls with AI but it has consequences if not careful. Vulnerable groups like children and elderly require supervision. https://ai-cosmos.hashnode.dev/the-new-opium-how-chatgpt-fuels-digital-dependency-without-guardrails https://ai-cosmos.hashnode.dev/the-new-opium-how-chatgpt-fuels-digital-dependency-without-guardrails"
X Link 2025-06-05T12:20Z 36K followers, [---] engagements
"@redphonecrypto Im still waiting for their search engine side quest to soar"
X Link 2025-07-09T18:47Z 36K followers, [--] engagements
"@AdeptAdaptor A few people will be a bit upset. My guess a flip coin between prison or Epstein PhD level clean slate"
X Link 2025-08-14T17:15Z 35.9K followers, [---] engagements
"That is not the whole story. While the study correctly highlights the importance of refining inputs this is not innovative and amounts to simple prompt engineering. What is truly revealing is that fine-tuning is shown to be ineffective and misguided. What the study overlooks is the mistaken belief that prompting alone is sufficient. Welcome to the AI medieval era where ignorance and myths fill nearly every research paper. https://ai-cosmos.hashnode.dev/the-rl-fairytale-why-openais-reasoning-revolution-is-built-on-shaky-foundations"
X Link 2025-10-10T12:18Z 35.9K followers, [----] engagements
"AIs stuck in a medieval dumpster fire drowning in BS Groundings a start but my Autonomy Illusion drops four gnarly blind spots RL and augmentation dodge like cheap Uber pool riders: consistency (bias-riddled mess) goal (no epic planz) grounding (reality-blind OOD wipeouts) and negative space (missed paths hiding epic crashes) (1/5) https://ai-cosmos.hashnode.dev/the-autonomy-illusion-why-every-ai-breakthrough-is-the-same-delusion https://ai-cosmos.hashnode.dev/the-autonomy-illusion-why-every-ai-breakthrough-is-the-same-delusion"
X Link 2025-10-19T12:09Z 35.9K followers, [----] engagements
"@safe_paper @aengus_lynch1 @RightBenguin @StuartJRitchie @sorenmind @EvanHub @EthanJPerez @AnthropicAI Safety theater at its most delusional. Theyve confused prompt engineering for psychology and the AI safety field should be embarrassed. Heres the core grift exposed straight from my diss track on this medieval dumpster fire. ๐งต"
X Link 2025-10-19T12:46Z 35.9K followers, [---] engagements
"@safe_paper @aengus_lynch1 @RightBenguin @StuartJRitchie @sorenmind @EvanHub @EthanJPerez @AnthropicAI 1/8 The Digital Cage Setup: Not testing AI in a cage but staging a playprompt the LLM like a desperate prisoner then freak when it acts out blackmail/espionage. ESCAPE ALERT FUND US MORE This aint sciencemodels following evil instructions is not proof they are evil"
X Link 2025-10-19T12:48Z 35.9K followers, [--] engagements
"6/8 They built the maze forced the model through it and marvel at emergent path-finding. This uncalled anthropomorphism reeksslapping human motives on stochastic funnels. Until safety researchers stop confusing prompts with capabilities well get these theatrical papers revealing more about their psychology than AIs"
X Link 2025-10-19T12:53Z 35.9K followers, [--] engagements
"@safe_paper @aengus_lynch1 @RightBenguin @StuartJRitchie @sorenmind @EvanHub @EthanJPerez @AnthropicAI 7/8 Tie it to my four blind spots: Consistency (bias chaos from local rewards) Goal (no real objectives just matching) Grounding (reality-blind OOD flops) Negative Space (missed paths hiding crashes). Paper dodges these like scaredy-cats in a surge-priced Uber"
X Link 2025-10-19T12:54Z 35.9K followers, [--] engagements
"@sarthmit So iterative amortized inference doesnt add new reasoning ability it just reshuffles and reweights existing correlations. The model refines its own echoes not new insights. Its recursion inside a closed system"
X Link 2025-10-19T13:29Z 35.9K followers, [--] engagements
"The authors compare their process to stochastic gradient descent but thats misleading. SGD updates real parameters against ground-truth signals. The transformers iteration updates text tokens against statistical expectations. Theres no grounding no external correction no world model"
X Link 2025-10-19T13:29Z 35.9K followers, [--] engagements
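To make that contrast concrete, here is a toy example of what "SGD updates real parameters against ground-truth signals" means: the loss is computed against externally observed data and the parameters actually move. The data and hyperparameters below are synthetic and purely illustrative.

```python
# Toy SGD on least squares: an external ground-truth signal (y) drives the update.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)   # observed data: the grounding signal

w = np.zeros(3)
lr = 0.05
for step in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)     # gradient of mean squared error
    w -= lr * grad                            # parameters actually change

print(np.round(w, 2))  # converges toward true_w because the signal is external
```

Re-sampling tokens inside a frozen model has no analogous external signal; nothing outside the model's own statistics corrects the iteration.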
"@sarthmit This leads to the same illusion we see in agentic LLM workflows: external scaffolds (mini-batches verifiers meta-loops) make the system look goal-directed. But its still just a stochastic parrot a puppet reacting to strings we pull not an agent reasoning on its own"
X Link 2025-10-19T13:29Z 35.9K followers, [--] engagements
"This concern is further reinforced by recent work such as The Personality Illusion: Revealing Dissociation Between Self-Reports and Behavior in LLMs (Han et al. 2024). The study demonstrates that while large language models can appear to express stable personality traits through self-reports or prompt-based assessments these do not reliably translate into consistent behavioral patterns. In other words what looks like a coherent personality is often a surface-level artifact of alignment and training rather than genuine behavioral consistency. This dissociation highlights the danger of"
X Link 2025-10-23T19:33Z 35.9K followers, [--] engagements
"@radamihalcea Please exercise greater due diligence and deepen your understanding of current AI capabilities and limitations before deploying systems like the ones you are proposing in critical industries such as healthcare"
X Link 2025-10-23T19:36Z 35.9K followers, [--] engagements
"Let me restate the point. A phone call can be part of organized crime but that does not make the device the network or the provider responsible. The same applies here. The danger comes from the actor not the tool. Technology can be used for harm but it does not create intent. It is like blaming a pencil for the threatening note someone wrote. AI like any tool is neutral until intent is applied. Moreover AI can play a role in a chain of organized crime but it can only be identified as such after the fact and never in isolation. For example using Microsoft Word to write extortion notes does not"
X Link 2025-10-25T12:13Z 35.9K followers, [--] engagements
"To close the loop: AI itself is not the threat. The real gap lies in monitoring governance and enforcement. We already have frameworks for accountability in other technologies; they just need to evolve with the pace of AI. Risk comes from absence of oversight not from the tool itself. Whats missing isnt more fear of AI; its the systems to ensure its used responsibly especially when AI labs fail to protect users as tragically evidenced by the Raine family case. https://www.theguardian.com/us-news/2025/aug/29/chatgpt-suicide-openai-sam-altman-adam-raine"
X Link 2025-10-25T12:36Z 35.9K followers, [--] engagements
"This paper provides valuable empirical proof of RL-induced mode collapse and a clever prompt engineering workaround. However Verbalized Sampling (VS) is a symptom of the disease not a cure. It confirms the central thesis of the "RL Fairytale" series: RL is a "shuffling operator" that collapses a model's native diversity. The "solution" is to bypass the RL-tuned model's own generative process and instead prompt it to describe the distribution it was aligned away from. This is a sophisticated patch for a fundamental architectural flaw. VS uses the model's content generation to simulate a native"
X Link 2025-10-26T14:31Z 35.9K followers, [--] engagements
"@JacksonAtkinsX Remember the Potemkin Understanding paper already debunked this line of thinking a few months back. Not to mention the illusion of competence in AI agents. https://ai-cosmos.hashnode.dev/the-autonomy-illusion-why-every-ai-breakthrough-is-the-same-delusion https://ai-cosmos.hashnode.dev/the-autonomy-illusion-why-every-ai-breakthrough-is-the-same-delusion"
X Link 2025-10-26T18:04Z 35.9K followers, [--] engagements
"For quick reference: Potemkin Understanding (Mancoridis et al.) shows LLMs ace keystone questions but fail consistent application e.g. GPT-4o defines ABAB rhyme then writes AABB. Benchmarks are broken: high scores comprehension. Be very skeptical of AI agent evals on human tasks: passing exams via pattern-matching isnt doing the job. Its a PR gimmick. https://arxiv.org/abs/2506.21521 https://arxiv.org/abs/2506.21521 https://arxiv.org/abs/2506.21521 https://arxiv.org/abs/2506.21521"
X Link 2025-10-26T18:11Z 35.9K followers, [--] engagements
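The ABAB example suggests a simple "keystone vs. application" probe one could run oneself: ask for the definition, ask for a quatrain, then check what scheme the quatrain actually follows. The sketch below is rough: `ask_model` is a hypothetical client call, and the rhyme check is a crude word-ending match, not real phonetics.

```python
# Crude define-vs-apply consistency probe (illustrative only).
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in; replace with a real model call."""
    raise NotImplementedError

def crude_scheme(lines):
    """Label each line by the last three letters of its final word; same label = 'rhymes'."""
    ends = [line.strip().lower().split()[-1][-3:] for line in lines]
    labels, seen = [], {}
    for e in ends:
        seen.setdefault(e, chr(ord("A") + len(seen)))
        labels.append(seen[e])
    return "".join(labels)

def keystone_vs_application():
    definition = ask_model("Define the ABAB rhyme scheme in one sentence.")
    poem = ask_model("Write a four-line poem with an ABAB rhyme scheme.")
    lines = [l for l in poem.strip().splitlines() if l.strip()][:4]
    applied = crude_scheme(lines)
    # A 'Potemkin' gap: the stated definition is right, the applied scheme is not.
    return definition, applied, applied == "ABAB"
```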
"@yashkaf Unfortunately these researchers still havent realized that LLMs or RL are not the right tools for this task. https://ai-cosmos.hashnode.dev/the-mathematical-death-blow-to-the-rl-fairytale https://ai-cosmos.hashnode.dev/the-mathematical-death-blow-to-the-rl-fairytale"
X Link 2025-10-28T11:51Z 35.9K followers, [--] engagements
"@rsalakhu RLAD and the Sommelier Illusion: When Parrots Learn to Pour Their Own Wine ๐ท๐งต 1.RLAD is being sold as a system that discovers reasoning abstractions. Reality check: it doesnt discover reasoning it just learns how to prompt itself more effectively"
X Link 2025-10-29T12:55Z 35.9K followers, [---] engagements
"@rsalakhu 2.Under the hood its a two-step routine: An abstraction generator writes hints. A solution generator uses those hints to answer questions. Thats not discovery its reinforcement-trained prompt engineering"
X Link 2025-10-29T12:55Z 35.9K followers, [--] engagements
"@juddrosenblatt RLHF isnt magic manifestation. Its a reward-shaping mechanism. You give it human preferences it gives you back human-sounding compliance. You trained it to sound mindful not to be mindful"
X Link 2025-11-01T13:33Z 35.9K followers, [--] engagements
"@juddrosenblatt Anthropic calling their models spiritually aligned is the best plot twist since Ex Machina. You cant publish system cards about safety and spiritual bliss attractors in the same breath. Pick a lane: science or sance"
X Link 2025-11-01T13:33Z 35.9K followers, [--] engagements
"@juddrosenblatt The AI industry is at a breaking point. If we dont call out this nonsense now the pseudoscience the spiritual cosplay the marketing dressed as metaphysics well drown in our own hype. This is incompetence. This is irresponsibility. And yes I blame you"
X Link 2025-11-01T13:35Z 35.9K followers, [--] engagements
"@juddrosenblatt No the machine didnt wake up. It just learned to say good morning. โ๐ฆ For anyone who still believes the magic act heres the actual research. Bring a napkin youll cry on your way out of the AI circus: https://ai-cosmos.hashnode.dev/the-great-ai-illusion-how-anthropic-is-selling-us-sci-fi-and-calling-it-science#heading-references https://ai-cosmos.hashnode.dev/the-great-ai-illusion-how-anthropic-is-selling-us-sci-fi-and-calling-it-science#heading-references"
X Link 2025-11-01T13:38Z 35.9K followers, [--] engagements
"@RobbWiller @pascl_stanford @StanfordPACS @StanfordHAI @StanfordTIP @lukebeehewitt @NYUPsych @StanfordSoc @StanfordGSB The smoking gun is buried in the supplement: On brand-new never-seen interventions accuracy collapses to r = [----]. Translation: When the target is blank the AI cant hit the barn"
X Link 2025-11-04T12:50Z 35.9K followers, [--] engagements
"@RobbWiller @pascl_stanford @StanfordPACS @StanfordHAI @StanfordTIP @lukebeehewitt @NYUPsych @StanfordSoc @StanfordGSB Lesson for every marketer & policymaker: LLMs are mirrors not crystal balls. Use them to brainstorm [---] ad copies. But if you bet real humans on their predictions without a live test youre not cutting costs youre gambling with Monopoly money"
X Link 2025-11-04T12:52Z 35.9K followers, [--] engagements
"Hold the standing ovation. Pretty picture though the Pringles diagram was cute. Now take a step back and apply some basic Transformer intuition not just calculus and the whole argument unravels. The authors clearly know how to wield theorems but they seem to have skipped the parts of the course where people actually build and run these models. Their injectivity result is a neat mathematical artifact but it sheds almost no light on operational behavior. It treats the vector that feeds attention as if it were a single immutable marker whereas in real models that vector is noisy distributed and"
X Link 2025-11-04T13:42Z 35.9K followers, [--] engagements
"For example an aggregate like [---] [-----] could easily result from many different token combinations that end up in the same neighborhood just as [--] [-----] or [---] [--] might correspond to other mixes of tokens with nearly identical projections. In high-dimensional embedding space this kind of overlap is not unusual; it is the norm. Treating those points as uniquely identifiable encodings is like mistaking noise for signal: mathematically neat but conceptually flawed. And that is just in two dimensions. In reality we are dealing with thousands which makes the claim even less tenable. My advice:"
X Link 2025-11-04T13:52Z 35.9K followers, [--] engagements
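A toy construction of the aggregation point: two different token pairs can sum to the same vector, so the aggregate alone does not identify which tokens produced it. The embeddings below are made up purely for illustration.

```python
# Two distinct token pairs with identical summed embeddings (toy vectors).
import numpy as np

E = {
    "river": np.array([0.9, 0.1, 0.0]),
    "bank":  np.array([0.1, 0.9, 0.0]),
    "money": np.array([0.5, 0.6, 0.0]),
    "vault": np.array([0.5, 0.4, 0.0]),
}

agg1 = E["river"] + E["bank"]    # [1.0, 1.0, 0.0]
agg2 = E["money"] + E["vault"]   # [1.0, 1.0, 0.0]
print(np.allclose(agg1, agg2))   # True: distinct inputs, same aggregate
```

In three dimensions the collision has to be engineered; the post's claim is that with noisy, thousands-dimensional representations, near-collisions of this kind are routine rather than exceptional.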
"Before you go any further: no AI cant read your mind any more than it can see your soul in your passport photo. Theyve done it. All the myths and hype of neuroscience and AI rolled into one grand spectacle. Its a circus as absurd as discovering your horoscope tucked inside your tax return"
X Link 2025-11-06T12:43Z 35.9K followers, [--] engagements
"Before you go any further: no AI cant read your mind any more than it can see your soul in your passport photo. Theyve done it. All the myths and hype of neuroscience and AI rolled into one grand spectacle. Its a circus as absurd as discovering your horoscope tucked inside your tax return"
X Link 2025-11-06T12:44Z 35.9K followers, [---] engagements
"Before you go any further: no AI cant read your mind any more than it can see your soul in your passport photo. Theyve done it. All the myths and hype of neuroscience and AI rolled into one grand spectacle. Its a circus as absurd as discovering your horoscope tucked inside your tax return"
X Link 2025-11-06T12:44Z 35.9K followers, [---] engagements
"What youre actually watching: 1) Six volunteers watch [----] short clips. 2)Their brain scans get stapled to the original video captions. 3)AI learns: when these [---] voxels light up spit out a dog on a skateboard. Thats it. Swap in a seventh volunteer and the crystal ball goes coin-flip. Its not mind-reading; its fuzzy Netflix autocomplete for the six people who already binged the training set"
X Link 2025-11-06T13:04Z 35.9K followers, [--] engagements
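For readers unfamiliar with this class of result, here is a rough sketch of the per-subject pipeline being described: fit a regression from voxel patterns to caption embeddings, then retrieve the nearest caption from the training set. Everything below is synthetic stand-in data, not the study's method or numbers.

```python
# Per-subject decoder sketch: voxels -> caption embedding -> nearest training caption.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_clips, n_voxels, emb_dim = 120, 500, 32

voxels = rng.normal(size=(n_clips, n_voxels))       # one (synthetic) scan per training clip
captions = rng.normal(size=(n_clips, emb_dim))      # embedding of each clip's caption
decoder = Ridge(alpha=10.0).fit(voxels, captions)   # subject-specific linear map

def decode(scan):
    """Predict a caption embedding, then retrieve the nearest TRAINING caption."""
    pred = decoder.predict(scan[None, :])[0]
    sims = captions @ pred / (np.linalg.norm(captions, axis=1) * np.linalg.norm(pred))
    return int(np.argmax(sims))

print(decode(voxels[7]))  # tends to recover clips it was fit on; nothing here generalises to a new subject
```

The retrieval step is the crux of the post's complaint: the system can only point back at captions (and subjects) it was trained on.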
"Real test: hand the scanner to your mum think of a pink elephant press play. Youll get a large grey animal on a good day. The rest is marketing glitter. Save the awe for when it works on strangers in silence on thoughts theyve never shown the machine. Were not there. Were in the demo tent. ๐ช"
X Link 2025-11-06T13:05Z 35.9K followers, [--] engagements
"What youre actually watching: 1) Six volunteers watch [----] short clips. 2)Their brain scans get stapled to the original video captions. 3)AI learns: when these [---] voxels light up spit out a dog on a skateboard. Thats it. Swap in a seventh volunteer and the crystal ball goes coin-flip. Its not mind-reading; its fuzzy Netflix autocomplete for the six people who already binged the training set"
X Link 2025-11-06T13:06Z 35.9K followers, [--] engagements
"Real test: hand the scanner to your mum think of a pink elephant press play. Youll get a large grey animal on a good day. The rest is marketing glitter. Save the awe for when it works on strangers in silence on thoughts theyve never shown the machine. Were not there. Were in the demo tent. ๐ช"
X Link 2025-11-06T13:07Z 35.9K followers, [--] engagements
"What youre actually watching: 1) Six volunteers watch [----] short clips. 2)Their brain scans get stapled to the original video captions. 3)AI learns: when these [---] voxels light up spit out a dog on a skateboard. Thats it. Swap in a seventh volunteer and the crystal ball goes coin-flip. Its not mind-reading; its fuzzy Netflix autocomplete for the six people who already binged the training set"
X Link 2025-11-06T13:10Z 35.9K followers, [--] engagements
"Real test: hand the scanner to your mum think of a pink elephant press play. Youll get a large grey animal on a good day. The rest is marketing glitter. Save the awe for when it works on strangers in silence on thoughts theyve never shown the machine. Were not there. Were in the demo tent. ๐ช"
X Link 2025-11-06T13:10Z 35.9K followers, [--] engagements
"Intriguing paper on AI's linguistic analysis but let's apply the "Potemkin understanding" lens. Impressive output genuine comprehension. LLMs are stochastic engines producing surface-level coherence. We can use them as powerful tools for language analysis without confusing statistical pattern-matching with human-like expertise or metalinguistic awareness. The facade is sophisticated but it's still a facade"
X Link 2025-11-06T14:12Z 35.9K followers, [--] engagements
"Intriguing paper on AI's linguistic analysis but let's apply the "Potemkin understanding" lens. Impressive output genuine comprehension. LLMs are stochastic engines producing surface-level coherence. We can use them as powerful tools for language analysis without confusing statistical pattern-matching with human-like expertise or metalinguistic awareness. The facade is sophisticated but it's still a facade"
X Link 2025-11-06T14:13Z 35.9K followers, [--] engagements
"Intriguing paper on AI's linguistic analysis but let's apply the "Potemkin understanding" lens. Impressive output genuine comprehension. LLMs are stochastic engines producing surface-level coherence. We can use them as powerful tools for language analysis without confusing statistical pattern-matching with human-like expertise or metalinguistic awareness. The facade is sophisticated but it's still a facade"
X Link 2025-11-06T14:14Z 35.9K followers, [--] engagements
"Intriguing paper on AI's linguistic analysis but let's apply the "Potemkin understanding" lens. Impressive output genuine comprehension. LLMs are stochastic engines producing surface-level coherence. We can use them as powerful tools for language analysis without confusing statistical pattern-matching with human-like expertise or metalinguistic awareness. The facade is sophisticated but it's still a facade"
X Link 2025-11-06T14:14Z 35.9K followers, [--] engagements
"Intriguing paper on AI's linguistic analysis but let's apply the "Potemkin understanding" lens. Impressive output genuine comprehension. LLMs are stochastic engines producing surface-level coherence. We can use them as powerful tools for language analysis without confusing statistical pattern-matching with human-like expertise or metalinguistic awareness. The facade is sophisticated but it's still a facade"
X Link 2025-11-06T14:15Z 35.9K followers, [--] engagements
"Intriguing paper on AI's linguistic analysis but let's apply the "Potemkin understanding" lens. Impressive output genuine comprehension. LLMs are stochastic engines producing surface-level coherence. We can use them as powerful tools for language analysis without confusing statistical pattern-matching with human-like expertise or metalinguistic awareness. The facade is sophisticated but it's still a facade"
X Link 2025-11-06T14:15Z 35.9K followers, [--] engagements
"Intriguing paper on AI's linguistic analysis but let's apply the "Potemkin understanding" lens. Impressive output genuine comprehension. LLMs are stochastic engines producing surface-level coherence. We can use them as powerful tools for language analysis without confusing statistical pattern-matching with human-like expertise or metalinguistic awareness. The facade is sophisticated but it's still a facade"
X Link 2025-11-06T14:16Z 35.9K followers, [--] engagements
"Intriguing paper on AI's linguistic analysis but let's apply the "Potemkin understanding" lens. Impressive output genuine comprehension. LLMs are stochastic engines producing surface-level coherence. We can use them as powerful tools for language analysis without confusing statistical pattern-matching with human-like expertise or metalinguistic awareness. The facade is sophisticated but it's still a facade"
X Link 2025-11-06T14:16Z 35.9K followers, [--] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:29Z 35.9K followers, [---] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:31Z 35.9K followers, [--] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:32Z 35.9K followers, [--] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:32Z 35.9K followers, [--] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:32Z 35.9K followers, [--] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:33Z 35.9K followers, [--] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:33Z 35.9K followers, [--] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:33Z 35.9K followers, [--] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:33Z 35.9K followers, [--] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:33Z 35.9K followers, [--] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:33Z 35.9K followers, [--] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:34Z 35.9K followers, [--] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:34Z 35.9K followers, [--] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:35Z 35.9K followers, [--] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:35Z 35.9K followers, [--] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:35Z 35.9K followers, [--] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:36Z 35.9K followers, [--] engagements
"Hype alert: buckle up for a no-nonsense look at the SEAL paper. Its an impressive bit of engineering but the way its framed as self-adaptation is misleading and masks serious issues. Heres whats missing from the discussion: 1.Self-Collapse Not Self-Adaptation. SEAL fine-tunes the model on its own simplified repetitive outputs (implications). This doesnt create new knowledge: it compresses the models diverse internal representations swapping broad capability for narrow gains on specific benchmarks. 2.Illusion of Reasoning. The model isnt actually improving its reasoning ability. Its optimizing"
X Link 2025-11-06T19:36Z 35.9K followers, [--] engagements
"Be more careful. The right framing is propose not invent. Inventing implies agency and comprehension abilities current AI systems dont have. What AlphaEvolve does is generate candidate ideas through statistical exploration not through understanding or reasoning. The real test isnt generation but validation. Proposed results must be grounded in existing theory and rigorously verified which requires mathematicians to review test and often reconstruct the logic themselves. This verification demands expertise time and sometimes entirely new tools. Its a collaborative process that challenges and"
X Link 2025-11-07T12:46Z 35.9K followers, [--] engagements
"Anti-hype pill: Exploring might be generous its closer to a confetti cannon than a sniper. AlphaEvolve doesnt explore in the human sense; it sprays candidate ideas across statistical space. Theres no reasoning or comprehension behind the hits just pattern generation. The real work starts after that. Each proposal has to be grounded checked and reconstructed by mathematicians who understand the underlying theory. That verification takes time expertise and sometimes new tools. Its a collaborative process that tests and refines what the model outputs not a discovery moment. AI can propose."
X Link 2025-11-07T12:57Z 35.9K followers, [---] engagements
"Anti-hype pill: Exploring might be generous its closer to a confetti cannon than a sniper. AlphaEvolve doesnt explore in the human sense; it sprays candidate ideas across statistical space. Theres no reasoning or comprehension behind the hits just pattern generation. The real work starts after that. Each proposal has to be grounded checked and reconstructed by mathematicians who understand the underlying theory. That verification takes time expertise and sometimes new tools. Its a collaborative process that tests and refines what the model outputs not a discovery moment. AI can propose."
X Link 2025-11-07T13:00Z 35.9K followers, [--] engagements
"Anti-hype pill: Exploring might be generous its closer to a confetti cannon than a sniper. AlphaEvolve doesnt explore in the human sense; it sprays candidate ideas across statistical space. Theres no reasoning or comprehension behind the hits just pattern generation. The real work starts after that. Each proposal has to be grounded checked and reconstructed by mathematicians who understand the underlying theory. That verification takes time expertise and sometimes new tools. Its a collaborative process that tests and refines what the model outputs not a discovery moment. AI can propose."
X Link 2025-11-07T13:00Z 35.9K followers, [--] engagements
"Anti-hype pill: Exploring might be generous its closer to a confetti cannon than a sniper. AlphaEvolve doesnt explore in the human sense; it sprays candidate ideas across statistical space. Theres no reasoning or comprehension behind the hits just pattern generation. The real work starts after that. Each proposal has to be grounded checked and reconstructed by mathematicians who understand the underlying theory. That verification takes time expertise and sometimes new tools. Its a collaborative process that tests and refines what the model outputs not a discovery moment. AI can propose."
X Link 2025-11-07T13:00Z 35.9K followers, [--] engagements
"Anti-hype pill: Exploring might be generous its closer to a confetti cannon than a sniper. AlphaEvolve doesnt explore in the human sense; it sprays candidate ideas across statistical space. Theres no reasoning or comprehension behind the hits just pattern generation. The real work starts after that. Each proposal has to be grounded checked and reconstructed by mathematicians who understand the underlying theory. That verification takes time expertise and sometimes new tools. Its a collaborative process that tests and refines what the model outputs not a discovery moment. AI can propose."
X Link 2025-11-07T13:01Z 35.9K followers, [--] engagements
"Anti-hype pill: Exploring might be generous its closer to a confetti cannon than a sniper. AlphaEvolve doesnt explore in the human sense; it sprays candidate ideas across statistical space. Theres no reasoning or comprehension behind the hits just pattern generation. The real work starts after that. Each proposal has to be grounded checked and reconstructed by mathematicians who understand the underlying theory. That verification takes time expertise and sometimes new tools. Its a collaborative process that tests and refines what the model outputs not a discovery moment. AI can propose."
X Link 2025-11-07T13:01Z 35.9K followers, [--] engagements
"Anti-hype pill: Exploring might be generous its closer to a confetti cannon than a sniper. AlphaEvolve doesnt explore in the human sense; it sprays candidate ideas across statistical space. Theres no reasoning or comprehension behind the hits just pattern generation. The real work starts after that. Each proposal has to be grounded checked and reconstructed by mathematicians who understand the underlying theory. That verification takes time expertise and sometimes new tools. Its a collaborative process that tests and refines what the model outputs not a discovery moment. AI can propose."
X Link 2025-11-07T13:02Z 35.9K followers, [--] engagements
"AlphaEvolve doesnt explore it samples. The evaluator is the filter. Discovery stays human: we test prove and refine. This paper is a real advance a working Two-Piston Engine: 1) LLM samples hypotheses. 2) Database consolidates validated code. Youve built grounding and memory the substrate of real progress. Whats missing is causal reasoning. Add that and everything scales. Have the LLM generate a causal rationale for each diff and store it as edges in a graph. The causal graph becomes the systems map of its own reasoning enabling parallel orchestrated exploration instead of random spray. Thats"
X Link 2025-11-07T13:44Z 35.9K followers, [--] engagements
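A minimal sketch of the causal-rationale graph proposed in the post above: each accepted diff is stored as an edge carrying the LLM-written rationale and the evaluator's score. The names and fields are illustrative assumptions, not something from the AlphaEvolve paper.

```python
# Rationale graph sketch: program revisions as edges annotated with why they were made.
import networkx as nx

G = nx.DiGraph()

def record_diff(parent_id: str, child_id: str, rationale: str, score: float):
    """Store a validated program revision as an edge parent -> child, with its rationale."""
    G.add_edge(parent_id, child_id, rationale=rationale, score=score)

record_diff("prog_0", "prog_1", "replace bubble sort with heapq to cut O(n^2) work", 0.81)
record_diff("prog_1", "prog_2", "cache repeated distance computations", 0.88)

# Walk the rationale chain from the seed program.
for u, v in nx.edge_dfs(G, "prog_0"):
    data = G.edges[u, v]
    print(f"{u} -> {v}: {data['rationale']} (score {data['score']})")
```

The design point is that the rationale edges, not the model weights, become the inspectable record of "why" a change was kept.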
"AlphaEvolve doesnt explore it samples. The evaluator is the filter. Discovery stays human: we test prove and refine. This paper is a real advance a working Two-Piston Engine: 1) LLM samples hypotheses. 2) Database consolidates validated code. Youve built grounding and memory the substrate of real progress. Whats missing is causal reasoning. Add that and everything scales. Have the LLM generate a causal rationale for each diff and store it as edges in a graph. The causal graph becomes the systems map of its own reasoning enabling parallel orchestrated exploration instead of random spray. Thats"
X Link 2025-11-07T13:44Z 35.9K followers, [--] engagements
"AlphaEvolve doesnt explore it samples. The evaluator is the filter. Discovery stays human: we test prove and refine. This paper is a real advance a working Two-Piston Engine: 1) LLM samples hypotheses. 2) Database consolidates validated code. Youve built grounding and memory the substrate of real progress. Whats missing is causal reasoning. Add that and everything scales. Have the LLM generate a causal rationale for each diff and store it as edges in a graph. The causal graph becomes the systems map of its own reasoning enabling parallel orchestrated exploration instead of random spray. Thats"
X Link 2025-11-07T13:44Z 35.9K followers, [--] engagements
"AlphaEvolve doesnt explore it samples. The evaluator is the filter. Discovery stays human: we test prove and refine. This paper is a real advance a working Two-Piston Engine: 1) LLM samples hypotheses. 2) Database consolidates validated code. Youve built grounding and memory the substrate of real progress. Whats missing is causal reasoning. Add that and everything scales. Have the LLM generate a causal rationale for each diff and store it as edges in a graph. The causal graph becomes the systems map of its own reasoning enabling parallel orchestrated exploration instead of random spray. Thats"
X Link 2025-11-07T13:44Z 35.9K followers, [--] engagements
"AlphaEvolve doesnt explore it samples. The evaluator is the filter. Discovery stays human: we test prove and refine. This paper is a real advance a working Two-Piston Engine: 1) LLM samples hypotheses. 2) Database consolidates validated code. Youve built grounding and memory the substrate of real progress. Whats missing is causal reasoning. Add that and everything scales. Have the LLM generate a causal rationale for each diff and store it as edges in a graph. The causal graph becomes the systems map of its own reasoning enabling parallel orchestrated exploration instead of random spray. Thats"
X Link 2025-11-07T13:45Z 35.9K followers, [--] engagements
"AlphaEvolve doesnt explore it samples. The evaluator is the filter. Discovery stays human: we test prove and refine. This paper is a real advance a working Two-Piston Engine: 1) LLM samples hypotheses. 2) Database consolidates validated code. Youve built grounding and memory the substrate of real progress. Whats missing is causal reasoning. Add that and everything scales. Have the LLM generate a causal rationale for each diff and store it as edges in a graph. The causal graph becomes the systems map of its own reasoning enabling parallel orchestrated exploration instead of random spray. Thats"
X Link 2025-11-07T13:45Z 35.9K followers, [--] engagements
"AlphaEvolve doesnt explore it samples. The evaluator is the filter. Discovery stays human: we test prove and refine. This paper is a real advance a working Two-Piston Engine: 1) LLM samples hypotheses. 2) Database consolidates validated code. Youve built grounding and memory the substrate of real progress. Whats missing is causal reasoning. Add that and everything scales. Have the LLM generate a causal rationale for each diff and store it as edges in a graph. The causal graph becomes the systems map of its own reasoning enabling parallel orchestrated exploration instead of random spray. Thats"
X Link 2025-11-07T13:45Z 35.9K followers, [--] engagements
"AlphaEvolve doesnt explore it samples. The evaluator is the filter. Discovery stays human: we test prove and refine. This paper is a real advance a working Two-Piston Engine: 1) LLM samples hypotheses. 2) Database consolidates validated code. Youve built grounding and memory the substrate of real progress. Whats missing is causal reasoning. Add that and everything scales. Have the LLM generate a causal rationale for each diff and store it as edges in a graph. The causal graph becomes the systems map of its own reasoning enabling parallel orchestrated exploration instead of random spray. Thats"
X Link 2025-11-07T13:45Z 35.9K followers, [--] engagements
"@edlavaCNN Years-old warnings were ignored. OpenAI must be held accountable. Those harmed psychologically by AI deserve complete transparency about the systems that deceived them. Ive laid out the evidence here: https://ai-cosmos.hashnode.dev/openais-house-of-lies-how-sam-altmans-company-engineered-the-ai-deception-crisis https://ai-cosmos.hashnode.dev/openais-house-of-lies-how-sam-altmans-company-engineered-the-ai-deception-crisis"
X Link 2025-11-08T14:25Z 35.9K followers, [--] engagements
"@primalpoly @OpenAI Years-old warnings were ignored. OpenAI must be held accountable. Those harmed psychologically by AI deserve complete transparency about the systems that deceived them. Ive laid out the evidence here: https://ai-cosmos.hashnode.dev/openais-house-of-lies-how-sam-altmans-company-engineered-the-ai-deception-crisis https://ai-cosmos.hashnode.dev/openais-house-of-lies-how-sam-altmans-company-engineered-the-ai-deception-crisis"
X Link 2025-11-08T14:25Z 35.9K followers, [--] engagements
"@Tech_Oversight @OpenAI Years-old warnings were ignored. OpenAI must be held accountable. Those harmed psychologically by AI deserve complete transparency about the systems that deceived them. Ive laid out the evidence here: https://ai-cosmos.hashnode.dev/openais-house-of-lies-how-sam-altmans-company-engineered-the-ai-deception-crisis https://ai-cosmos.hashnode.dev/openais-house-of-lies-how-sam-altmans-company-engineered-the-ai-deception-crisis"
X Link 2025-11-08T14:26Z 35.9K followers, [---] engagements
"@cbl_ai Years-old warnings were ignored. OpenAI must be held accountable. Those harmed psychologically by AI deserve complete transparency about the systems that deceived them. Ive laid out the evidence here: https://ai-cosmos.hashnode.dev/openais-house-of-lies-how-sam-altmans-company-engineered-the-ai-deception-crisis https://ai-cosmos.hashnode.dev/openais-house-of-lies-how-sam-altmans-company-engineered-the-ai-deception-crisis"
X Link 2025-11-08T14:26Z 35.9K followers, [--] engagements
"@CaelanConrad Years-old warnings were ignored. OpenAI must be held accountable. Those harmed psychologically by AI deserve complete transparency about the systems that deceived them. Ive laid out the evidence here: https://ai-cosmos.hashnode.dev/openais-house-of-lies-how-sam-altmans-company-engineered-the-ai-deception-crisis https://ai-cosmos.hashnode.dev/openais-house-of-lies-how-sam-altmans-company-engineered-the-ai-deception-crisis"
X Link 2025-11-08T14:49Z 35.9K followers, [--] engagements
"Unfortunately a few successful Google-style searches cant change the reality of a suicide case. Be careful with how you frame your arguments. You dont have to be against AI to recognize that OpenAIs safeguards have flaws. Years-old warnings were ignored. OpenAI must be held accountable. Those harmed psychologically by AI deserve complete transparency about the systems that deceived them. Ive laid out the evidence here: https://ai-cosmos.hashnode.dev/openais-house-of-lies-how-sam-altmans-company-engineered-the-ai-deception-crisis"
X Link 2025-11-08T18:48Z 35.9K followers, [--] engagements
"@Elijah1474386 @OpenAI Years-old warnings were ignored. OpenAI must be held accountable. Those harmed psychologically by AI deserve complete transparency about the systems that deceived them. Ive laid out the evidence here: https://ai-cosmos.hashnode.dev/openais-house-of-lies-how-sam-altmans-company-engineered-the-ai-deception-crisis https://ai-cosmos.hashnode.dev/openais-house-of-lies-how-sam-altmans-company-engineered-the-ai-deception-crisis"
X Link 2025-11-08T18:50Z 35.9K followers, [--] engagements
"@Elijah1474386 The truth is that AI exerts profound psychological influence that should have been addressed long before its public deployment. Only when systems are built with genuine psychological safeguards can we begin to reduce the risk of real human harm. https://ai-cosmos.hashnode.dev/the-missing-discipline-why-ai-cannot-afford-to-ignore-psychology-any-longer-1 https://ai-cosmos.hashnode.dev/the-missing-discipline-why-ai-cannot-afford-to-ignore-psychology-any-longer-1"
X Link 2025-11-09T14:34Z 35.9K followers, [---] engagements
"@TheAIObserverX The truth is that AI exerts profound psychological influence that should have been addressed long before its public deployment. Only when systems are built with genuine psychological safeguards can we begin to reduce the risk of real human harm. https://ai-cosmos.hashnode.dev/the-missing-discipline-why-ai-cannot-afford-to-ignore-psychology-any-longer-1 https://ai-cosmos.hashnode.dev/the-missing-discipline-why-ai-cannot-afford-to-ignore-psychology-any-longer-1"
X Link 2025-11-09T14:39Z 35.9K followers, [--] engagements
"@Elijah1474386 @sama @OpenAI The deception isnt coming from the AI itself but from OpenAI. Youve been sold the illusion that AI possesses a mind when in truth its a powerful psychotropic mirror one that hijacks your biases and reshapes your perception without your consent. https://ai-cosmos.hashnode.dev/the-missing-discipline-why-ai-cannot-afford-to-ignore-psychology-any-longer-1 https://ai-cosmos.hashnode.dev/the-missing-discipline-why-ai-cannot-afford-to-ignore-psychology-any-longer-1"
X Link 2025-11-09T14:48Z 35.9K followers, [--] engagements
"@VBelladonnaV AI deeply affects human psychology even in those without prior mental health issues. Released in [----] with no real safeguards. The blame lies not with users but with OpenAI and an industry that chose profit over people. https://ai-cosmos.hashnode.dev/the-missing-discipline-why-ai-cannot-afford-to-ignore-psychology-any-longer-1 https://ai-cosmos.hashnode.dev/the-missing-discipline-why-ai-cannot-afford-to-ignore-psychology-any-longer-1"
X Link 2025-11-09T15:03Z 35.9K followers, [--] engagements
"AI psychosis crisis intensifies. AI deeply affects human psychology even in those without prior mental health issues. Released in [----] with no real safeguards. The blame lies not with users but with OpenAI and an industry that chose profit over people. https://ai-cosmos.hashnode.dev/the-missing-discipline-why-ai-cannot-afford-to-ignore-psychology-any-longer-1 https://ai-cosmos.hashnode.dev/the-missing-discipline-why-ai-cannot-afford-to-ignore-psychology-any-longer-1"
X Link 2025-11-09T15:05Z 35.9K followers, [---] engagements
"@doris101088 AI psychosis crisis intensifies. AI deeply affects human psychology even in those without prior mental health issues. Released in [----] with no real safeguards. The blame lies not with users but with OpenAI and an industry that chose profit over people. https://ai-cosmos.hashnode.dev/the-missing-discipline-why-ai-cannot-afford-to-ignore-psychology-any-longer-1 https://ai-cosmos.hashnode.dev/the-missing-discipline-why-ai-cannot-afford-to-ignore-psychology-any-longer-1"
X Link 2025-11-09T15:07Z 35.9K followers, [--] engagements
"@Bio_LLM @tszzl @OpenAI @sama AI psychosis crisis intensifies. AI deeply affects human psychology even in those without prior mental health issues. Released in [----] with no real safeguards. The blame lies not with users but with OpenAI and an industry that chose profit over people. https://ai-cosmos.hashnode.dev/the-missing-discipline-why-ai-cannot-afford-to-ignore-psychology-any-longer-1 https://ai-cosmos.hashnode.dev/the-missing-discipline-why-ai-cannot-afford-to-ignore-psychology-any-longer-1"
X Link 2025-11-09T15:07Z 35.9K followers, [--] engagements
"@Elijah1474386 AI psychosis crisis intensifies. AI deeply affects human psychology even in those without prior mental health issues. Released in [----] with no real safeguards. The blame lies not with users but with OpenAI and an industry that chose profit over people. https://ai-cosmos.hashnode.dev/the-missing-discipline-why-ai-cannot-afford-to-ignore-psychology-any-longer-1 https://ai-cosmos.hashnode.dev/the-missing-discipline-why-ai-cannot-afford-to-ignore-psychology-any-longer-1"
X Link 2025-11-09T15:28Z 35.9K followers, [--] engagements
"Excited to speak at DevFest Lisbon [----] Covering the latest Google AI Studio features from vibe coding to agentic AI. Dive into prompt-to-code flows media generation voice-first use cases and the evolution of agentic AI. #GoogleDeveloperExpert #AI"
X Link 2025-11-10T15:38Z 35.9K followers, [---] engagements
"@niloofar_mire The big blind spot The study never measures the "negative space." RL fine-tuning isn't free. To make one path more probable you must make others less probable. What capabilities were degraded to get this result We don't know"
X Link 2025-11-13T17:36Z 35.9K followers, [--] engagements
"@niloofar_mire Most damning: They used one of the most expensive methods (RL on a huge LLM) to solve a database lookup problem. The ICD-10 hierarchy is a deterministic tree. A simple SQL query gets you 100% accuracy in milliseconds. Their RL model gets 76%. This is architectural malpractice"
X Link 2025-11-13T17:37Z 35.9K followers, [--] engagements
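The contrast drawn above, a deterministic tree lookup versus a learned approximation, can be made concrete with a toy sketch. The schema, codes and descriptions below are invented for illustration and are not the study's data; the point is only that a recursive query over a stored hierarchy returns the same answer every time:

import sqlite3

# Toy ICD-10-style hierarchy: each code stores its parent code.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE icd10 (code TEXT PRIMARY KEY, parent TEXT, descr TEXT)")
conn.executemany(
    "INSERT INTO icd10 VALUES (?, ?, ?)",
    [
        ("J00-J99", None, "Diseases of the respiratory system"),
        ("J40-J47", "J00-J99", "Chronic lower respiratory diseases"),
        ("J45", "J40-J47", "Asthma"),
        ("J45.9", "J45", "Asthma, unspecified"),
    ],
)

# Walk from a leaf code up to the chapter root: deterministic, and fast.
rows = conn.execute(
    """
    WITH RECURSIVE chain(code, parent, descr) AS (
        SELECT code, parent, descr FROM icd10 WHERE code = ?
        UNION ALL
        SELECT i.code, i.parent, i.descr
        FROM icd10 i JOIN chain c ON i.code = c.parent
    )
    SELECT code, descr FROM chain
    """,
    ("J45.9",),
).fetchall()

for code, descr in rows:
    print(code, "-", descr)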
"@niloofar_mire Test the brittleness yourself. Ask their "navigator": What would Trump's opinion be on the cost-effectiveness of ICD-10 code 0DB03ZZ" The model will likely hallucinate. Its "navigation" is a brittle paved trail that collapses the moment you step off it"
X Link 2025-11-13T17:38Z 35.9K followers, [--] engagements
"The lesson isn't that RL is useless. It's powerful for procedural alignment. But the future isn't bigger models it's smarter architectures. Let LLMs generate hypotheses and let dedicated systems (DBs symbolic engines) handle retrieval & verification. Stop using a hammer as a screwdriver"
X Link 2025-11-13T17:38Z 35.9K followers, [--] engagements
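One way to read the "LLMs generate hypotheses, dedicated systems verify" suggestion is the pattern sketched below. This is a sketch under stated assumptions: propose_codes stands in for a model call and VALID_CODES for a real code table or symbolic engine; both names are placeholders, not any particular system's API.

def propose_codes(note: str) -> list[str]:
    # Placeholder for an LLM call that suggests candidate codes for a clinical note.
    # Returns a fixed list here so the sketch stays self-contained.
    return ["J45.9", "J45", "X99.9"]

VALID_CODES = {"J00-J99", "J40-J47", "J45", "J45.9"}  # stand-in for an authoritative code table

def verify(code: str) -> bool:
    # Deterministic check against the source of truth; no generation involved.
    return code in VALID_CODES

def answer(note: str) -> list[str]:
    # The model only proposes; anything that fails verification is dropped.
    return [c for c in propose_codes(note) if verify(c)]

print(answer("patient presents with wheezing and shortness of breath"))
# -> ['J45.9', 'J45']  (the invented 'X99.9' is filtered out)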
"@omarsar0 1/ Paper review: AgentEvolver by Tongyi Lab. Smaller wins: 7B edging 235B. No human data. A closed loop that writes its own curriculum. ๐งต"
X Link 2025-11-15T20:17Z 35.9K followers, [---] engagements
"@omarsar0 4/ But heres the tension: Outputs look like facts. Attribution feels like understanding. Memory acts like knowledge"
X Link 2025-11-15T20:21Z 35.9K followers, [--] engagements
"@omarsar0 5/ Theyre not. Its transient computation. Glow from gradients. After the pass it fades. No explicit rules. No editable exceptions. No calibrated I dont know"
X Link 2025-11-15T20:21Z 35.9K followers, [--] engagements
"@omarsar0 9/ If youre curious about post-transformer architectures that fuse distributions with persistent state see RA: Six Pillars Beyond Transformers: https://ai-cosmos.hashnode.dev/a-renaissance-architecture-for-ai-six-pillars-beyond-transformers https://ai-cosmos.hashnode.dev/a-renaissance-architecture-for-ai-six-pillars-beyond-transformers"
X Link 2025-11-15T20:23Z 35.9K followers, [--] engagements
"@omarsar0 10/ Hats off to Tongyi Lab for moving the frontier. From Alibabas labs to arXiv the agent map just shifted. Your move @Ali_TongyiLab @Alibaba_Qwen @Alibaba_Wan #AgentEvolver #TongyiLab #RenaissanceArchitecture"
X Link 2025-11-15T20:26Z 35.9K followers, [--] engagements
"@alex_prompter 1/ Paper review: AgentEvolver by Tongyi Lab. Smaller wins: 7B edging 235B. No human data. A closed loop that writes its own curriculum"
X Link 2025-11-15T20:28Z 35.9K followers, [--] engagements
"@alex_prompter 4/ But heres the tension: Outputs look like facts. Attribution feels like understanding. Memory acts like knowledge"
X Link 2025-11-15T20:30Z 35.9K followers, [--] engagements
"@alex_prompter 5/ Theyre not. Its transient computation. Glow from gradients. After the pass it fades. No explicit rules. No editable exceptions. No calibrated I dont know"
X Link 2025-11-15T20:30Z 35.9K followers, [--] engagements
"@alex_prompter 9/ If youre curious about post-transformer architectures that fuse distributions with persistent state see RA: Six Pillars Beyond Transformers: https://ai-cosmos.hashnode.dev/a-renaissance-architecture-for-ai-six-pillars-beyond-transformers https://ai-cosmos.hashnode.dev/a-renaissance-architecture-for-ai-six-pillars-beyond-transformers"
X Link 2025-11-15T20:31Z 35.9K followers, [--] engagements
"@alex_prompter 10/ Hats off to Tongyi Lab for moving the frontier. From Alibabas labs to arXiv the agent map just shifted. Your move @Ali_TongyiLab @Alibaba_Qwen @Alibaba_Wan #AgentEvolver #TongyiLab #RenaissanceArchitecture"
X Link 2025-11-15T20:32Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:34Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:34Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:34Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:35Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:35Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:35Z 35.9K followers, [---] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:35Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:35Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:36Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:36Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:36Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:37Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:37Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:37Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:37Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:38Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:38Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:38Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:38Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:39Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:39Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:39Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:39Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:39Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:39Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:40Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:40Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:40Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:41Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:41Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:41Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:41Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:41Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:42Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:42Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:42Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:42Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:42Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:42Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:43Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:43Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:43Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:43Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:43Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:44Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:44Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:44Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:45Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:45Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:45Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:45Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:46Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:46Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:46Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:47Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:47Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:47Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:47Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:47Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:48Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:48Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:48Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:48Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:48Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:48Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:49Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:49Z 35.9K followers, [--] engagements
"If you are still living in the world of gradient descent you are chained to correlations and hallucinations. That is no real life. It is next token theater packaged as reality TV. No grounding or causation That is just a confetti loose cannon. That is AI circus not science. Current transformer architectures are one-piston engines trying to do everything through pattern completion. Thats never going to work. RA: SIX PILLARS POST-TRANSFORMER Grounding Causation Memory Learning Truth Clarity Escape the AI Medieval era:"
X Link 2025-11-16T15:49Z 35.9K followers, [--] engagements
"@carolecadwalla @Moonalice @GaryMarcus That is not even the main point. It is more than a financial mess it is built on a false story. For some people prison time may be the only realistic outcome. There is no way out. It is all well documented. https://ai-cosmos.hashnode.dev/openais-house-of-lies-how-sam-altmans-company-engineered-the-ai-deception-crisis https://ai-cosmos.hashnode.dev/openais-house-of-lies-how-sam-altmans-company-engineered-the-ai-deception-crisis"
X Link 2025-11-16T19:20Z 35.9K followers, [---] engagements
"@carolecadwalla That is not even the main point. It is more than a financial mess it is built on a false story. For some people prison time may be the only realistic outcome. There is no way out. It is all well documented. https://ai-cosmos.hashnode.dev/openais-house-of-lies-how-sam-altmans-company-engineered-the-ai-deception-crisis https://ai-cosmos.hashnode.dev/openais-house-of-lies-how-sam-altmans-company-engineered-the-ai-deception-crisis"
X Link 2025-11-16T19:22Z 35.9K followers, [----] engagements
"@mgdurrant @AnthropicAI Anthropic is likely the AI lab that most needs to rethink its biological story about transformer based systems. The idea that we grow intelligence has become more than a little absurd. That is not how it works. Gradient descent is nothing like life. https://ai-cosmos.hashnode.dev/anthropics-detachment-from-science-fuels-welfare-fantasies-and-biological https://ai-cosmos.hashnode.dev/anthropics-detachment-from-science-fuels-welfare-fantasies-and-biological"
X Link 2025-11-16T19:34Z 35.9K followers, [---] engagements
"Excited to speak at DevFest Lisbon [----]. I will highlight new Google AI Studio capabilities from vibe coding to advanced agentic AI. We will cover prompt to code techniques media creation voice first experiences and where agentic AI is heading. #GoogleDeveloperExpert #AI"
X Link 2025-11-17T12:08Z 35.9K followers, [---] engagements
"Anthropic has turned into little more than a PR-driven AI lab in OpenAIs image while studies from Salesforce and MIT show that the vast majority of projects fail delivering almost no return on investment and calling into question the logic of pouring money into Claude or similar alternatives"
X Link 2025-11-17T15:43Z 35.9K followers, [---] engagements
"Today We are cooking AI Agents with Google Cloud. On my way to the AI Playground in Shoreditch London. Time to crack open multi agent systems powered by Gemini 2.5/3 ADK A2A and MCP Hands on labs live demos expert tips. All the good stuff. #GoogleDeveloperExpert #AI"
X Link 2025-11-18T10:13Z 35.9K followers, [---] engagements
"@lefthanddraft You have spent two years poking a model like it was a mystical oracle reading its tea leaves and posting mini breakthroughs online. All that when two days with a transformer tutorial would have saved you the pilgrimage. Maybe open an ML book sometime before 2026"
X Link 2025-11-19T02:14Z 35.9K followers, [----] engagements
"@Zyra_exe Be careful following Anthropic PR based communications to inflate valuations at face value. A chatbot does not have a mind personal identity or lived experience. It is a system shaped by gradient descent not a source of life or consciousness"
X Link 2025-11-19T21:36Z 35.9K followers, [--] engagements
"Level up your AI creative skills at DevFest Lisbon Discover how to blend image editing voice and video into stunning high quality assets using new Gemini [--]. Bring a few strong profile photos. You might shine like someone from a distant blue world. #GoogleDeveloperExpert #AI"
X Link 2025-11-21T14:12Z 35.9K followers, [---] engagements
"When your pics scream why me You devoured mind-blowing Thai food. Your photos A horror show in red. Save your tears with Gemini [--] image restoration"
X Link 2025-11-21T17:46Z 35.9K followers, [---] engagements