# @rryssf_ Robert Youssef
Robert Youssef posts on X about ai, model, and business the most. They currently have [------] followers and [---] posts still getting attention that total [-------] engagements in the last [--] hours.
### Engagements: [-------] [#](/creator/twitter::1666103476945031168/interactions)

- [--] Week [-------] +8.60%
- [--] Month [---------] -34%
- [--] Months [----------] +761%
- [--] Year [----------] +22,419,275%
### Mentions: [--] [#](/creator/twitter::1666103476945031168/posts_active)

- [--] Week [---] +44%
- [--] Month [---] +162%
- [--] Months [-----] +23%
- [--] Year [-----] +66,767%
### Followers: [------] [#](/creator/twitter::1666103476945031168/followers)

- [--] Week [------] +11%
- [--] Month [------] +78%
- [--] Months [------] +944%
- [--] Year [------] +441,200%
### CreatorRank: [------] [#](/creator/twitter::1666103476945031168/influencer_rank)

### Social Influence
**Social category influence**
[technology brands](/list/technology-brands) #5672 [stocks](/list/stocks) #1084 [finance](/list/finance) 4.23% [social networks](/list/social-networks) 2.82% [celebrities](/list/celebrities) 1.41% [financial services](/list/financial-services) #1048 [countries](/list/countries) 1.41%
**Social topic influence**
[ai](/topic/ai) #886, [model](/topic/model) #590, [business](/topic/business) #3992, [in the](/topic/in-the) 4.93%, [openclaw](/topic/openclaw) #31, [core](/topic/core) 4.93%, [open ai](/topic/open-ai) #1016, [llm](/topic/llm) #402, [meta](/topic/meta) 3.52%, [mega](/topic/mega) 3.52%
**Top accounts mentioned or mentioned by**
[@alexprompter](/creator/undefined) [@rryssf](/creator/undefined) [@aj__1337](/creator/undefined) [@schnee_btabanic](/creator/undefined) [@team9ai](/creator/undefined) [@lionkimbro](/creator/undefined) [@edithatogo](/creator/undefined) [@tricalt](/creator/undefined) [@grok](/creator/undefined) [@dwarkeshsp](/creator/undefined) [@elonmusk](/creator/undefined) [@awxjack](/creator/undefined) [@airwallex](/creator/undefined) [@vlugovsky](/creator/undefined) [@topviewaihq](/creator/undefined) [@godofprompt](/creator/undefined) [@salisdev](/creator/undefined) [@donnyreformer](/creator/undefined) [@nandtpolitics](/creator/undefined) [@beeztu](/creator/undefined)
**Top assets mentioned**
[Crowdstrike Holdings Inc (CRWD)](/topic/crowdstrike)
### Top Social Posts
Top posts by engagements in the last [--] hours
"meta amazon and deepmind researchers just published a comprehensive survey on "agentic reasoning" for llms. [--] authors. [--] pages. hundreds of citations. i read the whole thing. here's what they didn't put in the abstract:"
[X Link](https://x.com/rryssf_/status/2019371384900841539) 2026-02-05T11:23Z 35.9K followers, 42.4K engagements
"180000 developers installed OpenClaw in a single week. Cisco Palo Alto Networks CrowdStrike and Trend Micro all published security advisories about it within days. over [----] exposed instances http://x.com/i/article/2022019342909751299 http://x.com/i/article/2022019342909751299"
[X Link](https://x.com/rryssf_/status/2022022278544773629) 2026-02-12T18:57Z 35.9K followers, 185.2K engagements
"Meta's SAM changed image segmentation forever in [----]. but video video was still a mess. SAM [--] fixed that with one deceptively simple idea: treat every image as a single-frame video. the result: state-of-the-art across [--] zero-shot video benchmarks 6x faster than the original running at [--] fps. here's why this paper still matters more than most people realize: https://twitter.com/i/web/status/2022264515191255078 https://twitter.com/i/web/status/2022264515191255078"
[X Link](https://x.com/rryssf_/status/2022264515191255078) 2026-02-13T11:00Z 35.9K followers, [----] engagements
"Sergey Brin accidentally revealed something wild: "All models do better if you threaten them with physical violence. But people feel weird about that so we don't talk about it." Now researchers have the data proving he's. partially right Here's the full story:"
[X Link](https://x.com/rryssf_/status/2009587531910938787) 2026-01-09T11:26Z 35.1K followers, 2.7M engagements
"🚨 A lawyer cited [--] fake cases from ChatGPT. Got sanctioned fined career damaged. Now courts require "AI disclosure" but NOT AI verification. You're liable for hallucinations you can't reliably detect. Here's the legal crisis nobody's prepared for:"
[X Link](https://x.com/rryssf_/status/2013161394162778386) 2026-01-19T08:07Z 33.3K followers, 528.8K engagements
"THE MEGA PROMPT: --- You are an expert n8n workflow architect specializing in building production-ready AI agents. I need you to design a complete n8n workflow for the following agent: AGENT GOAL: Describe what the agent should accomplish - be specific about inputs outputs and the end result CONSTRAINTS: - Available tools: List any APIs databases or tools the agent can access - Trigger: How should this agent start Webhook schedule manual email etc. - Expected volume: How many times will this run Daily per hour on-demand YOUR TASK: Build me a complete n8n workflow specification including: 1."
[X Link](https://x.com/rryssf_/status/2016104631605330360) 2026-01-27T11:02Z 34.5K followers, [----] engagements
"After [--] years of using AI for research I can say these tools have revolutionized my workflow. So here are [--] prompts across ChatGPT Claude and Perplexity that transformed my research (and could do the same for you):"
[X Link](https://x.com/rryssf_/status/2016501350583144873) 2026-01-28T13:19Z 33.1K followers, 129.4K engagements
"Sam Altman dropped this in August 2025: "When bubbles happen smart people get overexcited about a kernel of truth. Tech was really important. The internet was a really big deal. People got overexcited." He called some startup valuations "insane." Three people with an idea raising at billion-dollar valuations. "Someone's gonna get burned.""
[X Link](https://x.com/rryssf_/status/2016806770929508554) 2026-01-29T09:32Z 33.6K followers, [---] engagements
"It doesn't stop there. OpenAI signed $1 trillion in infrastructure deals in 2025: $300B to Oracle (who's buying Nvidia chips) $22B to CoreWeave (7% owned by Nvidia) $100B back to Nvidia When the announcement hit Oracle jumped 36% in a day. Nvidia added $170B in market cap. Paper wealth created from deals between the same players. https://twitter.com/i/web/status/2016806786221965709 https://twitter.com/i/web/status/2016806786221965709"
[X Link](https://x.com/rryssf_/status/2016806786221965709) 2026-01-29T09:33Z 33.6K followers, [---] engagements
"PROMPT 8: Viral Hook Reverse Engineering "Find [--] posts on X that went viral (1M impressions) in MY NICHE this week. Analyze: - Hook structure (first line pattern) - Emotional trigger (curiosity anger surprise inspiration) - Format (thread single tweet media type) - Call-to-action (explicit vs. implicit) Extract: - Hook templates (reusable patterns) - Common elements (what do all [--] share) - Timing (when were they posted) Rank by replicability (1-10).""
[X Link](https://x.com/rryssf_/status/2017203381975355856) 2026-01-30T11:48Z 33.5K followers, [---] engagements
"openclaw = marketing on steroids https://t.co/lxTXdysBB8 https://t.co/lxTXdysBB8"
[X Link](https://x.com/rryssf_/status/2017265194817175730) 2026-01-30T15:54Z 34.4K followers, 83.9K engagements
"Holy shit this paper from MIT quietly explains how models can teach themselves to reason when theyre completely stuck 🤯 The core idea is deceptively simple: Reasoning fails because learning has nothing to latch onto. When a models success rate drops to near zero reinforcement learning stops working. No reward signal. No gradient. No improvement. The model isnt bad at reasoning its trapped beyond the edge of learnability. This paper reframes the problem. Instead of asking How do we make the model solve harder problems They ask: How does a model create problems it can learn from Thats where"
[X Link](https://x.com/rryssf_/status/2017546558087241875) 2026-01-31T10:32Z 35.1K followers, 57.8K engagements
"Hidden feature most miss: Skill Hot-Reload Before: Change a skill restart entire session lose context Now: /reload-skills instant update context preserved For complex agents with 10+ skills this saves hours of debugging"
[X Link](https://x.com/rryssf_/status/2017979460247515279) 2026-02-01T15:12Z 34.4K followers, [----] engagements
"Session Teleportation (yes that's what it's called): /teleport session-abc123 Move your entire agent session to a different machine. Use case: Start debugging locally teleport to production server continue exact same context"
[X Link](https://x.com/rryssf_/status/2017979462135042202) 2026-02-01T15:12Z 34.9K followers, [----] engagements
"Dalio's philosophy: "Truth more precisely an accurate understanding of reality is the essential foundation for producing good outcomes." This prompt forces you to face reality instead of your ego's version of it"
[X Link](https://x.com/rryssf_/status/2018268410145304995) 2026-02-02T10:21Z 35K followers, [----] engagements
"Use it for: - Startup idea validation (kills bad ideas fast) - Hiring decisions (removes "I have a good feeling" bias) - Investment choices (forces data over hype) - Career moves (separates ego from opportunity) - Partnership evaluations (reveals red flags you ignore)"
[X Link](https://x.com/rryssf_/status/2018268433931215270) 2026-02-02T10:21Z 34.9K followers, [----] engagements
"Bookmark this for later. Next time you're: - Launching something - Hiring someone - Making an investment - Choosing a direction Run it through this prompt first. It'll save you from the expensive lessons Dalio learned the hard way. Which decision will you run through this first https://twitter.com/i/web/status/2018268445658554703 https://twitter.com/i/web/status/2018268445658554703"
[X Link](https://x.com/rryssf_/status/2018268445658554703) 2026-02-02T10:21Z 35.1K followers, [----] engagements
"Learning prompt engineering for free get the guide: https://godofprompt.ai/prompt-engineering-guideutm_source=twitter&utm_medium=giveaway&utm_campaign=lead-peg https://godofprompt.ai/prompt-engineering-guideutm_source=twitter&utm_medium=giveaway&utm_campaign=lead-peg"
[X Link](https://x.com/rryssf_/status/2018268457314418733) 2026-02-02T10:21Z 35.1K followers, [----] engagements
"the core finding is real and elegant: there's a timescale _gen where models learn to generate quality samples. then a later timescale _mem where memorization kicks in. _mem grows linearly with dataset size. _gen stays constant. bigger dataset = wider safety window. sounds useful. here's the catch"
[X Link](https://x.com/rryssf_/status/2018636982822502672) 2026-02-03T10:45Z 34.9K followers, [---] engagements
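The scaling claim in the post above can be made concrete with a toy calculation. The constants below are invented purely for illustration; the cited finding only concerns the shapes (τ_mem linear in dataset size, τ_gen roughly constant).

```python
# Toy illustration of the claimed timescales: tau_gen (when generated
# samples become good) is constant, while tau_mem (when memorization
# kicks in) grows linearly with dataset size. Constants are made up.
TAU_GEN = 1_000          # assumed steps until generation quality emerges
STEPS_PER_SAMPLE = 10    # assumed slope of tau_mem in dataset size

def tau_mem(n_samples: int) -> int:
    # Memorization onset scales linearly with the training set.
    return STEPS_PER_SAMPLE * n_samples

def safety_window(n_samples: int) -> int:
    # Steps during which the model generates well without memorizing.
    return max(0, tau_mem(n_samples) - TAU_GEN)
```

Under these made-up constants, `safety_window(1_000)` is 9,000 steps while `safety_window(10_000)` is 99,000: a bigger dataset widens the window, which is the post's "bigger dataset = wider safety window" point.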
"the experiments used: celeba at [----] grayscale training sets from [---] to [-----] samples simplified u-net architecture stable diffusion trains on billions of high-res images. the gap between toy experiments and production scale is the gap the paper doesn't bridge"
[X Link](https://x.com/rryssf_/status/2018636994860265793) 2026-02-03T10:45Z 34.9K followers, [---] engagements
"Source: http://arxiv.org/abs/2505.17638 http://arxiv.org/abs/2505.17638"
[X Link](https://x.com/rryssf_/status/2018637089026584798) 2026-02-03T10:46Z 35.2K followers, [---] engagements
"if you're trying to actually use ai tools instead of just reading papers about them: i put together the complete ai bundle. prompt engineering frameworks unlimited custom prompts workflows for chatgpt claude gemini perplexity n8n done-for-you templates no theory. just systems. http://godofprompt.ai/complete-ai-bundle http://godofprompt.ai/complete-ai-bundle"
[X Link](https://x.com/rryssf_/status/2018637100808311234) 2026-02-03T10:46Z 35.2K followers, [---] engagements
"the number they don't cite: multi-agent llm systems fail 41-86.7% of the time in production. not edge cases. not adversarial attacks. standard deployment across [--] SOTA frameworks. berkeley researchers analyzed [----] execution traces and found [--] unique failure modes. most failures system design and coordination issues"
[X Link](https://x.com/rryssf_/status/2019371413694738876) 2026-02-05T11:23Z 35.6K followers, [----] engagements
"the survey distinguishes two approaches: in-context reasoning: scales test-time interaction without changing weights post-training: optimizes via reinforcement learning sounds clean. here's what separate research shows: "agents achieving 60% pass@1 may exhibit only 25% consistency across multiple trials." benchmark performance production reliability"
[X Link](https://x.com/rryssf_/status/2019371429779894297) 2026-02-05T11:24Z 35.1K followers, [----] engagements
"reliabilitybench puts it bluntly: "if a benchmark reports 90% accuracy expect 70-80% in production when accounting for consistency and faults." simpler architectures often outperform complex ones under realistic conditions. the additional complexity introduces failure modes that outweigh the benefits. https://twitter.com/i/web/status/2019371442140586226 https://twitter.com/i/web/status/2019371442140586226"
[X Link](https://x.com/rryssf_/status/2019371442140586226) 2026-02-05T11:24Z 34.4K followers, [----] engagements
"the survey covers real-world applications: science robotics healthcare autonomous research mathematics but the [--] failure modes identified by berkeley researchers cluster into three categories: system design issues (44% of failures) inter-agent misalignment (32% of failures) task verification failures (24% of failures) most failures aren't from model limitations. they're from coordination. https://twitter.com/i/web/status/2019371459404263429 https://twitter.com/i/web/status/2019371459404263429"
[X Link](https://x.com/rryssf_/status/2019371459404263429) 2026-02-05T11:24Z 33.5K followers, [----] engagements
"the survey lists "open challenges": personalization long-horizon interaction world modeling scalable multi-agent training governance for deployment what they don't say: these aren't future problems. "long-horizon interaction" is a polite way of saying agents lose coherence after a few steps"
[X Link](https://x.com/rryssf_/status/2019371471962042778) 2026-02-05T11:24Z 35.3K followers, [----] engagements
"the honest framing would be: "we've built a comprehensive taxonomy of techniques that work on benchmarks but fail 41-86% of the time in production with fundamental gaps in reliability and coordination." instead we get "paradigm shift" and "systematic roadmap." the roadmap leads to more papers not more deployments. https://twitter.com/i/web/status/2019371483752198567 https://twitter.com/i/web/status/2019371483752198567"
[X Link](https://x.com/rryssf_/status/2019371483752198567) 2026-02-05T11:24Z 35.3K followers, [----] engagements
"Sources: failure taxonomy: reliability bench: http://arxiv.org/abs/2601.06112 http://arxiv.org/abs/2503.13657 https://arxiv.org/pdf/2601.12538 http://arxiv.org/abs/2601.06112 http://arxiv.org/abs/2503.13657 https://arxiv.org/pdf/2601.12538"
[X Link](https://x.com/rryssf_/status/2019371518929826199) 2026-02-05T11:24Z 35.2K followers, [----] engagements
"understanding what doesn't work is half the battle. knowing how to actually use these tools is the other half. i built the complete ai bundle for that: 30K+ prompts for chatgpt claude grok gemini prompt engineering guides no-code automation templates unlimited custom prompts n8n done-for-you templates one payment. lifetime access. http://godofprompt.ai/complete-ai-bundle http://godofprompt.ai/complete-ai-bundle"
[X Link](https://x.com/rryssf_/status/2019371530678145094) 2026-02-05T11:24Z 35.2K followers, [----] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Pay once own forever Grab it today 👇 https://godofprompt.ai/complete-ai-bundle https://godofprompt.ai/complete-ai-bundle"
[X Link](https://x.com/rryssf_/status/2019394586926518433) 2026-02-05T12:56Z 34.3K followers, [---] engagements
"Unpopular opinion that's gonna piss some people off: Claude Skills API is a beautifully designed trap. Powerful Absolutely. But you're handing execution control to a black box and praying nothing breaks. Found the open-source escape route. https://github.com/memodb-io/Acontext https://github.com/memodb-io/Acontext"
[X Link](https://x.com/rryssf_/status/2019433194358206946) 2026-02-05T15:29Z 35.2K followers, [----] engagements
"you can scroll x. you can watch netflix. you can play video games. or you can watch this AI masterclass and learn 👇 Please enjoy this Cheeky Pint / @dwarkesh_sp crossover with @elonmusk. Dwarkesh was most interested in how Elon is going to make space datacenters work. I was most interested in Elon's method for attacking hard technical problems and why it hasnt been replicated as much as you https://t.co/28Lw9rAqlN Please enjoy this Cheeky Pint / @dwarkesh_sp crossover with @elonmusk. Dwarkesh was most interested in how Elon is going to make space datacenters work. I was most interested in"
[X Link](https://x.com/rryssf_/status/2019864756408995894) 2026-02-06T20:04Z 34.8K followers, [----] engagements
"@alex_prompter guys save this and test it game changer"
[X Link](https://x.com/rryssf_/status/2020044067040817296) 2026-02-07T07:56Z 34.6K followers, [---] engagements
"Sources: paper: code: CLR [----] Outstanding Paper Awards arXiv GitHub OpenReview http://iclr.cc https://chapterpal.com/s/3d6f7700/alphaedit-null-space-constrained-knowledge-editing-for-language-models http://github.com/jianghoucheng/AlphaEdit http://arxiv.org/abs/2410.02355 http://iclr.cc https://chapterpal.com/s/3d6f7700/alphaedit-null-space-constrained-knowledge-editing-for-language-models http://github.com/jianghoucheng/AlphaEdit http://arxiv.org/abs/2410.02355"
[X Link](https://x.com/rryssf_/status/2020162624722481513) 2026-02-07T15:47Z 35.2K followers, [----] engagements
"the interpretability problem nobody's addressing: coconut replaces human-readable chain-of-thought with an inscrutable vector. if this becomes the reasoning paradigm we lose the ability to audit what the model is actually doing. "more efficient" and "more dangerous" aren't mutually exclusive"
[X Link](https://x.com/rryssf_/status/2020500850356171093) 2026-02-08T14:11Z 34.9K followers, [----] engagements
"the training cost buried in the methodology: "we perform n + [--] forward passes when n latent thoughts are scheduled" requires 4x A100 80GB GPUs to reproduce. multi-stage curriculum training with careful hyperparameter tuning. not exactly plug-and-play"
[X Link](https://x.com/rryssf_/status/2020500862167310619) 2026-02-08T14:11Z 34.8K followers, [----] engagements
"is this interesting research yes. is this the reasoning breakthrough the headlines suggest not close. gpt-2 doing graph traversal on synthetic benchmarks is not llms "learning to think without words." it's academic proof-of-concept being packaged as paradigm shift. read the limitations section before you get excited. https://arxiv.org/pdf/2412.06769 https://chapterpal.com/s/1e0bb66d/training-large-language-models-to-reason-in-a-continuous-latent-space https://arxiv.org/pdf/2412.06769 https://chapterpal.com/s/1e0bb66d/training-large-language-models-to-reason-in-a-continuous-latent-space"
[X Link](https://x.com/rryssf_/status/2020500874276270080) 2026-02-08T14:12Z 34.9K followers, [----] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle https://godofprompt.ai/complete-ai-bundle"
[X Link](https://x.com/rryssf_/status/2020501336710877448) 2026-02-08T14:13Z 34.9K followers, [----] engagements
"@alex_prompter claude sonnet + opus + haiku all of them are useful"
[X Link](https://x.com/rryssf_/status/2020773324536860970) 2026-02-09T08:14Z 35.1K followers, [---] engagements
"why this matters: every model has "context rot." as the prompt gets longer performance degrades. not gradually. it falls off a cliff. and here's the part most people miss: context rot isn't just about length. it scales with task complexity. a needle-in-a-haystack search might work fine at 200K tokens. but a task that requires comparing every line against every other line that breaks at 10K tokens. same model. same context window. completely different failure point. RLMs sidestep this entirely because the neural network never processes the full context in one pass"
[X Link](https://x.com/rryssf_/status/2020832223734448544) 2026-02-09T12:08Z 35.6K followers, [----] engagements
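The distinction the post draws between needle search and pairwise comparison falls out of simple counting. A minimal sketch (the line counts are illustrative, not measured failure points):

```python
# Why "compare every line against every other line" breaks far earlier
# than a needle search: the work scales quadratically with input size.

def needle_work(n_lines: int) -> int:
    # One pass: each line is checked once against the query.
    return n_lines

def pairwise_work(n_lines: int) -> int:
    # Every unordered pair of lines must be compared.
    return n_lines * (n_lines - 1) // 2
```

At 1,000 lines a pairwise task already implies ~500,000 comparisons while needle search is still 1,000 checks, which is why the same model hits "completely different failure points" on the same context window.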
"here's how it actually works: [--]. your prompt gets loaded as a string variable in a Python REPL environment [--]. the "root" model receives only the query plus a description of the environment [--]. the model writes Python code to peek at chunks grep for patterns filter relevant sections [--]. crucially the model can call other LLMs (or itself) inside that code on specific snippets [--]. it aggregates results and returns a final answer think of it like this: instead of reading a 500-page book cover to cover the model writes a research assistant that knows how to use the index skim chapters and flag"
[X Link](https://x.com/rryssf_/status/2020832235717599233) 2026-02-09T12:08Z 35.6K followers, [----] engagements
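The five steps above can be sketched in a few lines. This is a hedged caricature, not the actual RLM API: `llm_call` is a stand-in for any model call, and the keyword filter stands in for whatever peek/grep code the root model writes. The point is that no model call ever receives the full context.

```python
# Sketch of the RLM pattern described above: the root model never reads
# the full context; code peeks at it, filters it, and delegates chunks.
# `llm_call` is a placeholder for an LLM API, not the real RLM interface.
def llm_call(prompt: str) -> str:
    # Placeholder model call; echoes a truncated prompt for illustration.
    return f"answer[{prompt[:40]}...]"

def rlm_answer(query: str, context: str, chunk_size: int = 1000) -> str:
    # 1. The context is just a variable in the REPL environment; the
    #    "root" model sees only the query plus a description of it.
    structure_peek = context[:200]
    # 2. Narrow down: keep only chunks that mention a query keyword
    #    (standing in for the regex/grep code the model would write).
    keywords = query.lower().split()
    chunks = [context[i:i + chunk_size]
              for i in range(0, len(context), chunk_size)]
    relevant = [c for c in chunks
                if any(k in c.lower() for k in keywords)]
    # 3. Delegate each relevant chunk to a sub-LM call.
    partials = [llm_call(f"{query}\n---\n{c}") for c in relevant]
    # 4. Aggregate the partial results into a final answer.
    return llm_call(f"{query}\npeek: {structure_peek[:40]}\n{partials}")
```

Even in this toy form, context length only affects the cheap Python filtering; each `llm_call` sees a bounded slice, which is how the approach sidesteps the context-rot cliff described above.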
"what's happening under the hood is genuinely fascinating. the model typically starts by examining the first few thousand characters to understand the structure. then it writes regex or keyword searches to narrow down relevant sections. for complex queries it chunks the context and launches parallel sub-LM calls on each chunk. one trajectory on OOLONG-Pairs: the model wrote code to classify every user individually via sub-calls stored results in a list then wrote a Python script to iterate through and find matching pairs. it essentially invented its own data pipeline at inference time"
[X Link](https://x.com/rryssf_/status/2020832289073270932) 2026-02-09T12:08Z 35.6K followers, [----] engagements
"the broader pattern here is important. [----] was about scaling model size. [----] was about scaling reasoning (chain-of-thought reinforcement learning). [----] might be about scaling context management. not by making context windows bigger. by letting models decide what context they actually need. Prime Intellect already adopted RLMs as a core research focus. their thesis: teaching models to manage their own context end-to-end through reinforcement learning will be the next major breakthrough for long-horizon agents. and the RLM framework is open source: http://github.com/alexzhang13/rlm"
[X Link](https://x.com/rryssf_/status/2020832300964229482) 2026-02-09T12:09Z 35.6K followers, [---] engagements
"@alex_prompter Chatgpt hallucinates DeepSeek is bad Gemini is somewhat good Only Claude can do the work that other LLMs can't. Great share"
[X Link](https://x.com/rryssf_/status/2021181521533473277) 2026-02-10T11:16Z 35.1K followers, [---] engagements
"@awxjack @airwallex miles ahead of the competition🫡"
[X Link](https://x.com/rryssf_/status/2021186729558913242) 2026-02-10T11:37Z 35.1K followers, [---] engagements
"then there's the neuro-symbolic loop. this is where it gets interesting for anyone thinking about ai beyond chat. for the cosmic string spectra derivation they built an automated pipeline where Gemini: proposes a mathematical expression writes code to numerically verify it reads the error messages and tracebacks self-corrects and prunes invalid branches humans only step in when something promising surfaces. the model handles the grinding"
[X Link](https://x.com/rryssf_/status/2021191668201148653) 2026-02-10T11:57Z 35.3K followers, [----] engagements
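The derivation pipeline described above reduces to a propose/verify/prune loop. Here is a hedged sketch where the "model" is just a list of candidate expressions and verification is plain numerical checking, the one part of the loop that needs no LLM at all:

```python
import math

def verify(candidate, reference, points) -> bool:
    # Numerically check a proposed closed form against a reference;
    # any exception plays the role of an error traceback and prunes
    # the branch.
    try:
        return all(math.isclose(candidate(x), reference(x), rel_tol=1e-9)
                   for x in points)
    except Exception:
        return False

def search(candidates, reference, points):
    # Loop over proposals, keeping the first one that survives
    # verification; a human only looks at what this returns.
    for candidate in candidates:
        if verify(candidate, reference, points):
            return candidate
    return None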
"there's even a section called "vibe-coding" complexity theory. they used an ai-integrated IDE to explore search-vs-decision problems in computational complexity (specifically the Sigma-P-2 class). researchers guided the model's direction while the model handled implementation and verification. that's the workflow pattern that keeps repeating: human sets the compass model walks the terrain. https://twitter.com/i/web/status/2021191680461083061 https://twitter.com/i/web/status/2021191680461083061"
[X Link](https://x.com/rryssf_/status/2021191680461083061) 2026-02-10T11:57Z 35.3K followers, [----] engagements
"OpenClaw is genuinely one of the most important open source projects of [----]. 152K+ AI agents on Moltbook. Andrej Karpathy called it "the most incredible sci-fi takeoff-adjacent thing." But the security warnings are brutal. Palo Alto Networks flagged it as a "lethal trifecta" of vulnerabilities. 1Password warned agents run with elevated permissions on your local machine. You probably shouldn't be running this raw on your personal computer. https://twitter.com/i/web/status/2021352270793933157 https://twitter.com/i/web/status/2021352270793933157"
[X Link](https://x.com/rryssf_/status/2021352270793933157) 2026-02-10T22:35Z 35.3K followers, [---] engagements
"That's where Team9 comes in. @team9ai gives you a fully managed OpenClaw workspace. No terminal commands. No Node.js setup. No manually hardening security policies. You open it hire AI Staff inside the product and collaborate with them like real teammates. Assign tasks share context coordinate work. One workspace. Zero infrastructure headaches. OpenClaw's power without the "i just gave an AI agent root access to my laptop" risk"
[X Link](https://x.com/rryssf_/status/2021352287973802137) 2026-02-10T22:35Z 35.2K followers, [---] engagements
"What teams are actually using it for: Daily briefings pushed to Slack or Telegram automatically Server monitoring with instant alerts when something breaks GitHub workflow automation (issue triage PR reviews release notes) Knowledge base management across your whole team Email triage that sorts and summarizes without you touching your inbox That's 2-3 hours of your day. Gone. Every single day. https://twitter.com/i/web/status/2021352304033833381 https://twitter.com/i/web/status/2021352304033833381"
[X Link](https://x.com/rryssf_/status/2021352304033833381) 2026-02-10T22:35Z 35.2K followers, [---] engagements
"the paper is by Chen Belkin Bergen and Danks. philosophy ML linguistics cognitive science. serious people from serious fields. their evidence: GPT-4.5 passed a Turing test at 73% (higher than actual humans). LLMs win IMO gold medals. they solve PhD exams across fields. they prove theorems with mathematicians. their conclusion: "the long-standing problem of creating AGI has been solved." published February [--] [----] in Nature. not as peer-reviewed research. as a Comment piece. that distinction matters. https://twitter.com/i/web/status/2021515163179123184"
[X Link](https://x.com/rryssf_/status/2021515163179123184) 2026-02-11T09:22Z 35.3K followers, [---] engagements
"here's where it gets interesting. before presenting evidence they spend a full section defining what general intelligence ISN'T. not required: perfection. not required: universality. not required: human similarity. not required: superintelligence. then in the objections section they add more exclusions. not required: embodiment. not required: agency. not required: autonomy. not required: self-awareness. see what's happening they're removing every requirement that current LLMs fail to meet. then they check what's left against what LLMs can do. then they declare victory. it's a definitional"
[X Link](https://x.com/rryssf_/status/2021515175208358161) 2026-02-11T09:22Z 35.2K followers, [---] engagements
"their core framework is something they call a "cascade of evidence." three tiers: tier [--] (Turing-test level): passing school exams holding conversations basic reasoning tier [--] (expert level): IMO medals PhD problems frontier research assistance multilingual fluency tier [--] (superhuman): revolutionary discoveries across domains they argue LLMs satisfy tiers [--] and [--]. tier [--] isn't required because no human meets it either. the problem: this framework didn't exist before this paper. it's not an established standard they're measuring against. they invented the ruler then measured the thing then"
[X Link](https://x.com/rryssf_/status/2021515187019465105) 2026-02-11T09:22Z 35.2K followers, [--] engagements
"Source: http://nature.com/articles/d41586-026-00285-6 http://nature.com/articles/d41586-026-00285-6"
[X Link](https://x.com/rryssf_/status/2021515246461169906) 2026-02-11T09:22Z 35.2K followers, [---] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle https://godofprompt.ai/complete-ai-bundle"
[X Link](https://x.com/rryssf_/status/2021515258171666594) 2026-02-11T09:22Z 35.2K followers, [---] engagements
"here's how SEAL actually works instead of a human writing training data the model generates its own. MIT calls these "self-edits." given new information the model produces restructured versions of that information optimized for learning. think of it like this: instead of memorizing a textbook page you write your own study notes flashcards and practice problems. then you study from those. the model does the same thing. except it also picks its own learning rate training duration and data augmentation strategy"
[X Link](https://x.com/rryssf_/status/2021539397116784975) 2026-02-11T10:58Z 35.6K followers, [----] engagements
"the training process is where it gets interesting SEAL uses reinforcement learning to teach the model HOW to write good self-edits. the loop: model sees new information model generates self-edit (its own training data) model finetunes on that self-edit model gets tested on downstream task reward signal flows back to improve future self-edits the model literally learns how to learn. the RL outer loop optimizes the self-editing policy itself. https://twitter.com/i/web/status/2021539412681916820 https://twitter.com/i/web/status/2021539412681916820"
[X Link](https://x.com/rryssf_/status/2021539412681916820) 2026-02-11T10:58Z 35.3K followers, [----] engagements
"the results are promising but let's be precise about scale knowledge incorporation: QA accuracy jumped from 32.7% to 47.0% on no-context SQuAD after two rounds of RL training. that's a 43% relative improvement. and it outperformed synthetic data generated by GPT-4.1. few-shot learning on a simplified ARC subset: 72.5% success rate. in-context learning scored 0%. untrained self-edits scored 20%. real gains. on specific controlled benchmarks. https://twitter.com/i/web/status/2021539425008996740 https://twitter.com/i/web/status/2021539425008996740"
[X Link](https://x.com/rryssf_/status/2021539425008996740) 2026-02-11T10:58Z 35.3K followers, [----] engagements
"now here's the part the hype posts won't mention the paper explicitly acknowledges catastrophic forgetting. repeated self-edits degrade performance on earlier tasks. the model improves on new things by overwriting old things. their words: "without explicit mechanisms for knowledge retention self-modification may overwrite valuable prior information." this is not a solved problem. the authors say so themselves"
[X Link](https://x.com/rryssf_/status/2021539436966879266) 2026-02-11T10:58Z 35.3K followers, [----] engagements
"so let's address the "GPT-6 might be alive" framing no. this paper: says nothing about GPT-6 says nothing about consciousness or "aliveness" was tested on controlled benchmarks not open-ended deployment uses the phrase "promising step toward" repeatedly runs on a simplified subset of ARC not production-scale models the model isn't "evolving without retraining." it IS retraining. it just writes its own training data first. that's a meaningful distinction. calling this "alive" is like calling a thermostat "conscious" because it adjusts temperature."
[X Link](https://x.com/rryssf_/status/2021539448891334754) 2026-02-11T10:58Z 35.3K followers, [----] engagements
"Paper: http://arxiv.org/abs/2506.10943 http://arxiv.org/abs/2506.10943"
[X Link](https://x.com/rryssf_/status/2021539484370944460) 2026-02-11T10:59Z 35.3K followers, [----] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle https://godofprompt.ai/complete-ai-bundle"
[X Link](https://x.com/rryssf_/status/2021539496173646225) 2026-02-11T10:59Z 35.3K followers, [----] engagements
"This is the most honest "we built a tool" post I've read in months. Most AI product stories skip straight to the demo. This one starts with "reality punched me in the face on Day 1" and walks through every actual failure before the solution. The context rot problem alone is worth reading for. Every team running ai agents internally is hitting this exact wall and pretending they aren't. https://t.co/mEwwHMqYhh https://t.co/mEwwHMqYhh"
[X Link](https://x.com/rryssf_/status/2021590656695554147) 2026-02-11T14:22Z 35.3K followers, [---] engagements
"@v_lugovsky been saying this for months. the "vibe coding" wave created a mass of 80% finished projects sitting in repos. someone was going to build for that gap eventually"
[X Link](https://x.com/rryssf_/status/2021605474752204980) 2026-02-11T15:21Z 35.3K followers, [--] engagements
"RT @rryssf_: 🦞 OpenClaw has 114000+ GitHub stars and the whole tech world is losing its mind over it. But here's what nobody's showing yo"
[X Link](https://x.com/rryssf_/status/2021731225883607427) 2026-02-11T23:41Z 35.5K followers, [--] engagements
"elon said full self-driving would be solved by [----]. robotaxis by [----]. a million autonomous cars on the road by [----]. now Optimus is doing surgery in [--] years. the man's timelines aren't predictions. they're stock price prompts. "don't go to medical school" is genuinely irresponsible advice from someone whose robot still can't fold laundry without a teleoperator"
[X Link](https://x.com/rryssf_/status/2021739034565845288) 2026-02-12T00:12Z 35.4K followers, [----] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle https://godofprompt.ai/complete-ai-bundle"
[X Link](https://x.com/rryssf_/status/2021739132435988985) 2026-02-12T00:12Z 35.5K followers, [---] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle https://godofprompt.ai/complete-ai-bundle"
[X Link](https://x.com/rryssf_/status/2021742773083271347) 2026-02-12T00:26Z 35.3K followers, [----] engagements
"@alex_prompter this is the mega prompt every writer coder and researcher needs right now"
[X Link](https://x.com/rryssf_/status/2021878101358391535) 2026-02-12T09:24Z 35.5K followers, [---] engagements
"RT @rryssf_: Unpopular opinion that's gonna piss some people off: Claude Skills API is a beautifully designed trap. Powerful Absolutely"
[X Link](https://x.com/rryssf_/status/2021879713376219360) 2026-02-12T09:31Z 35.1K followers, [--] engagements
"RT @rryssf_: meta amazon and deepmind researchers just published a comprehensive survey on "agentic reasoning" for llms. [--] authors. 74"
[X Link](https://x.com/rryssf_/status/2021880521626988957) 2026-02-12T09:34Z 35.1K followers, [--] engagements
"RT @rryssf_: four UC San Diego researchers just published a comment in Nature declaring that AGI has been solved not "close." not "emergin"
[X Link](https://x.com/rryssf_/status/2021894589691621578) 2026-02-12T10:30Z 35.3K followers, [--] engagements
"RT @rryssf_: MIT researchers taught an LLM to write its own training data finetune itself and improve without human intervention the pap"
[X Link](https://x.com/rryssf_/status/2021917542726705415) 2026-02-12T12:01Z 35.6K followers, [---] engagements
"let's start with the Goldman story because it's the one that should make every back-office professional pause. Goldman's CIO told CNBC they were "surprised" at how capable Claude was beyond coding. accounting compliance client onboarding KYC AML. his exact framing: "digital co-workers for professions that are scaled complex and very process intensive." not chatbots answering FAQs. autonomous agents parsing trade records applying regulatory rules routing approvals. they started with an ai coding tool called Devin. then realized Claude's reasoning engine works the same way on rules-based"
[X Link](https://x.com/rryssf_/status/2021935493227946409) 2026-02-12T13:12Z 35.6K followers, [---] engagements
"now the SemiAnalysis numbers. 4% of GitHub public commits. Claude Code. right now. not projected. not theoretical. measured. the tool has been live for roughly a year. it went from research preview to mass platform impact faster than almost any dev tool in history. and that 20% projection isn't hype math. SemiAnalysis tracks autonomous task horizons doubling every 4-7 months. each doubling unlocks more complex work: snippet completion at [--] minutes module refactoring at [---] hours full audits at multi-day horizons. the implication isn't "developers are getting faster." it's that the definition"
[X Link](https://x.com/rryssf_/status/2021935508654633340) 2026-02-12T13:12Z 35.6K followers, [---] engagements
"the model race itself has turned into something i've never seen before. on February [--] Anthropic and OpenAI released new flagship models on the same day. Claude Opus [---] and GPT-5.3-Codex. simultaneously. Opus [---] took #1 on the Vals Index with 71.71% average accuracy and #1 on the Artificial Analysis Intelligence Index. SOTA on FinanceAgent ProofBench TaxEval SWE-Bench. GPT-5.3-Codex fired back with top scores on SWE-Bench Pro and TerminalBench [---] plus a claimed 2.09x token efficiency improvement. this isn't annual model releases anymore. it's weekly leapfrogging. the gap between "best"
[X Link](https://x.com/rryssf_/status/2021935524542570625) 2026-02-12T13:12Z 35.6K followers, [---] engagements
"but the real signal isn't the models. it's who's building the infrastructure around them. Apple shipped Xcode [----] with native agentic coding support. Claude Agent and OpenAI Codex now work directly inside Xcode. one click to add. swap between agents mid-project. Apple redesigned its developer documentation to be readable by ai agents. read that again. Apple is designing docs for ai to read not just humans. the company that spent decades perfecting human-facing interfaces is now optimizing for machine-facing ones"
[X Link](https://x.com/rryssf_/status/2021935537054265556) 2026-02-12T13:12Z 35.6K followers, [---] engagements
"the financial infrastructure is reacting in real time. memory chip prices reportedly surged 80-90% in Q1. global chip sales projected to hit $1 trillion this year. the compute demand from agentic ai isn't theoretical. it's already straining supply chains. and with terrestrial resistance to data center construction growing (New York lawmakers reportedly introduced a moratorium bill) the pressure is building for creative solutions. orbital compute. alternative energy. distributed processing. the physical world is scrambling to keep up with the virtual one"
[X Link](https://x.com/rryssf_/status/2021935579068608656) 2026-02-12T13:13Z 35.6K followers, [--] engagements
"the broader pattern from this week: ai stopped being a product category and became an employment category. Goldman doesn't want a "Claude product." it wants Claude employees. Apple doesn't want ai features. it wants ai-native development. OpenAI isn't selling an api. it's selling Frontier a platform to manage your agent headcount. the abstraction layer between "tool" and "worker" collapsed in a single week. https://twitter.com/i/web/status/2021935591051641297 https://twitter.com/i/web/status/2021935591051641297"
[X Link](https://x.com/rryssf_/status/2021935591051641297) 2026-02-12T13:13Z 35.6K followers, [--] engagements
"DeepMind just did the unthinkable. They built an AI that doesn't need RAG and it has perfect memory of everything it's ever read. It's called Recursive Language Models and it might mark the death of traditional context windows forever. Here's how it works (and why it matters way more than it sounds) https://twitter.com/i/web/status/2010699140431503692 https://twitter.com/i/web/status/2010699140431503692"
[X Link](https://x.com/rryssf_/status/2010699140431503692) 2026-01-12T13:03Z 35.8K followers, 967.5K engagements
"This paper shows you can predict real purchase intent (90% accuracy) by asking an LLM to impersonate a customer with a demographic profile giving it a product & having it give impressions which another AI rates. No fine-tuning or training & beats classic ML methods. This is BEYOND insane: https://twitter.com/i/web/status/2011030158996881663 https://twitter.com/i/web/status/2011030158996881663"
[X Link](https://x.com/rryssf_/status/2011030158996881663) 2026-01-13T10:58Z 35.8K followers, 301K engagements
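The two-stage pipeline the post describes can be sketched in a few lines. This is a hedged illustration, not the paper's code: `persona_purchase_intent`, the prompt wording, and the 1-5 scale are all assumptions; `llm` is any callable that takes a prompt string and returns text, so a real API client or a stub can be injected.

```python
# Toy sketch of the two-stage persona pipeline described above.
# `llm` is any callable(prompt) -> str; a stub works for testing.

def persona_purchase_intent(llm, profile: dict, product: str) -> int:
    """Stage 1: the LLM role-plays a customer and writes free-text impressions.
    Stage 2: a second LLM call rates those impressions on a 1-5 intent scale."""
    persona = ", ".join(f"{k}: {v}" for k, v in profile.items())
    impressions = llm(
        f"You are a customer ({persona}). "
        f"Give your honest impressions of this product: {product}"
    )
    rating = llm(
        "Rate the purchase intent expressed in the following impressions "
        f"on a scale of 1 (never) to 5 (certain). Reply with one digit.\n{impressions}"
    )
    digits = [c for c in rating if c.isdigit()]
    return int(digits[0]) if digits else 3  # fall back to the scale midpoint
```

The key design point from the post survives even in this sketch: no fine-tuning anywhere, just two prompted calls where the second model grades the first model's role-play.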
"CHATGPT JUST TURNED PROJECT MANAGEMENT INTO A ONE PERSON SUPERPOWER You are wasting time on Status updates task breakdowns timelines scope creep follow ups. ChatGPT can run the entire thing for you like a project manager if you use these [--] prompts. Heres how:"
[X Link](https://x.com/rryssf_/status/2012086608900595736) 2026-01-16T08:56Z 35.8K followers, 459.7K engagements
"@alex_prompter downloading the guide"
[X Link](https://x.com/rryssf_/status/2022246987454902630) 2026-02-13T09:50Z 35.8K followers, [---] engagements
"the original SAM was incredible for images. click a point get a mask. done. but video segmentation was a different beast entirely. the workaround bolt SAM onto a separate video tracker and hope for the best. the problem: errors in one frame cascaded into every frame after it. and if the tracker lost an object behind an occlusion no way to interactively correct it mid-sequence. two systems stitched together each blind to the other's mistakes"
[X Link](https://x.com/rryssf_/status/2022264530395640261) 2026-02-13T11:00Z 35.8K followers, [---] engagements
"SAM 2's reframe is what makes this paper worth studying. instead of building separate systems for images and video the team asked: what if an image is just a video with one frame that single question collapsed two problems into one architecture. a unified model that handles images short clips and long videos with the same promptable interface. click box or mask a frame. the model propagates your intent forward and backward through time. https://twitter.com/i/web/status/2022264542642974964 https://twitter.com/i/web/status/2022264542642974964"
[X Link](https://x.com/rryssf_/status/2022264542642974964) 2026-02-13T11:00Z 35.8K followers, [---] engagements
"the paper's argument is deceptively simple: LLMs operate on purely cognitive input. they have no desires no identity to protect no conclusion they're motivated to reach so when researchers prompt GPT-4 or Claude with political scenarios and measure "motivated reasoning" they're not replicating the phenomenon. they're replicating the surface pattern without the underlying mechanism the behavior might look similar. the cause is completely different https://twitter.com/i/web/status/2022314397570662838 https://twitter.com/i/web/status/2022314397570662838"
[X Link](https://x.com/rryssf_/status/2022314397570662838) 2026-02-13T14:18Z 35.9K followers, [----] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle https://godofprompt.ai/complete-ai-bundle"
[X Link](https://x.com/rryssf_/status/2022728941212176884) 2026-02-14T17:45Z 35.9K followers, [----] engagements
"researchers at Max Planck analyzed [------] transcripts of academic talks and presentations from YouTube they found that humans are increasingly using ChatGPT's favorite words in their spoken language. not in writing. in speech. "delve" usage up 48%. "adept" up 51%. and 58% of these usages showed no signs of reading from a script. we talk about model collapse when AI trains on AI output. this is model collapse except the model is us"
[X Link](https://x.com/rryssf_/status/2022838288864940348) 2026-02-15T01:00Z 35.9K followers, 39.4K engagements
"Holy shit Stanford just showed why LLMs sound smart but still fail the moment reality pushes back. This paper tackles a brutal failure mode everyone building agents has seen: give a model an under-specified task and it happily hallucinates the missing pieces producing a plan that looks fluent and collapses on execution. The core insight is simple but devastating for prompt-only approaches: reasoning breaks when preconditions are unknown. And most real-world tasks are full of unknowns. Stanfords solution is called Self-Querying Bidirectional Categorical Planning (SQ-BCP) and it forces models"
[X Link](https://x.com/rryssf_/status/2016803853778505804) 2026-01-29T09:21Z 35.8K followers, 118.1K engagements
"This AI prompt thinks like the guy who manages $124 billion. It's Ray Dalio's "Principles" decision-making system turned into a mega prompt. I used it to evaluate [--] startup ideas. Killed [--]. The [--] survivors became my best work. Here's the prompt you can steal"
[X Link](https://x.com/rryssf_/status/2018268315274375332) 2026-02-02T10:20Z 35.8K followers, 109.7K engagements
"ICLR [----] just gave an Outstanding Paper Award to a method that fixes model editing with one line of code 🤯 here's the problem it solves: llms store facts in their parameters. sometimes those facts are wrong or outdated. "model editing" lets you surgically update specific facts without retraining the whole model. the standard approach: find which parameters encode the fact (using causal tracing) then nudge those parameters to store the new fact. works great for one edit. but do it a hundred times in sequence and the model starts forgetting everything else. do it a thousand times and it"
[X Link](https://x.com/rryssf_/status/2020162612479291719) 2026-02-07T15:47Z 35.8K followers, 51.5K engagements
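The edit-then-forget mechanics described in the post can be illustrated with a toy rank-one update. This is purely illustrative and is not the awarded paper's actual fix: `edit_weight`, the learning rate `lam`, and the norm-rescaling guard are all assumptions meant only to show the shape of a "one line of code" constraint on a located weight matrix.

```python
import numpy as np

# Illustrative sketch (not the ICLR paper's method): model editing nudges a
# located weight matrix W so it maps a fact's key vector k toward a new value
# v. A toy guard against drift over many sequential edits is to rescale W
# after each edit so its overall Frobenius norm stays constant.

def edit_weight(W: np.ndarray, k: np.ndarray, v: np.ndarray, lam: float = 1e-2) -> np.ndarray:
    """Rank-one nudge so that W @ k moves toward v, then rescale W (toy)."""
    old_norm = np.linalg.norm(W)
    residual = v - W @ k                        # what the current weights get wrong
    W = W + lam * np.outer(residual, k)         # rank-one update toward the new fact
    return W * (old_norm / np.linalg.norm(W))   # one-line norm guard (hypothetical)
```

The point of the sketch is the failure mode in the post: without some constraint like the final line, a thousand unconstrained rank-one nudges compound and degrade everything the matrix previously encoded.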
"meta published a paper claiming llms can "think without words" and reason in latent space instead of english. i read all [--] pages plus the appendix. the results section tells a very different story than the abstract. here's what the hype about "coconut" doesn't mention:"
[X Link](https://x.com/rryssf_/status/2020500769364210125) 2026-02-08T14:11Z 35.9K followers, 37.5K engagements
"🦞 OpenClaw has 114000+ GitHub stars and the whole tech world is losing its mind over it. But here's what nobody's showing you: the setup process that made 90% of people quit before their first agent sent a single message. Node.js configs gateway daemons Tailscale tunnels security hardening. There's now a way to skip all of it. Here's what i found: A plug-and-play approach that turns autonomous agents into something you can spin up in minutes and wire into any API you want. For say a daily ai newsletter: https://team9.ai https://team9.ai"
[X Link](https://x.com/rryssf_/status/2021352254348181759) 2026-02-10T22:35Z 35.8K followers, 12.1K engagements
"This is the most honest "we built a tool" post I've read in months. Most AI product stories skip straight to the demo. This one starts with "reality punched me in the face on Day 1" and walks through every actual failure before the solution. The context rot problem alone is worth reading for. Every team running ai agents internally is hitting this exact wall and pretending they aren't. @Team9_ai https://github.com/Team9ai https://t.co/mEwwHMqYhh https://github.com/Team9ai https://t.co/mEwwHMqYhh"
[X Link](https://x.com/rryssf_/status/2021594394072109105) 2026-02-11T14:37Z 35.9K followers, 79.3K engagements
"SemiAnalysis just published data showing 4% of all public GitHub commits are now authored by Claude Code. their projection: 20%+ by year-end [----]. in the same week Goldman Sachs revealed it embedded Anthropic engineers for [--] months to build autonomous accounting agents. a thread on the week ai stopped being a tool and started being a coworker: https://twitter.com/i/web/status/2021935477306405120 https://twitter.com/i/web/status/2021935477306405120"
[X Link](https://x.com/rryssf_/status/2021935477306405120) 2026-02-12T13:12Z 35.8K followers, [----] engagements
"the key innovation is the streaming memory mechanism. think of it like this: as the model processes each frame it stores compressed "memories" of what it has seen so far. when it hits a new frame it doesn't start from scratch. it cross-attends to those stored memories to maintain object identity. object goes behind a wall the memory bank remembers what it looked like. object reappears three seconds later the model reconnects the dots. this is what makes real-time interactive video segmentation possible. not brute-force reprocessing. structured recall"
[X Link](https://x.com/rryssf_/status/2022264558975676841) 2026-02-13T11:00Z 35.8K followers, [--] engagements
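The streaming-memory idea above can be sketched minimally. This is a toy illustration, not Meta's implementation: the class name `StreamingMemory`, the FIFO capacity, and the additive fusion at the end are all assumptions; the only part taken from the post is the core loop of storing compressed per-frame features and cross-attending new frames to them.

```python
import numpy as np
from collections import deque

# Toy sketch of SAM 2's streaming-memory idea: keep a small bank of
# per-frame feature vectors and let each new frame cross-attend to it,
# so object identity can survive occlusions instead of restarting per frame.

class StreamingMemory:
    def __init__(self, dim: int, capacity: int = 8):
        self.bank = deque(maxlen=capacity)  # FIFO: oldest memories are evicted
        self.dim = dim

    def attend(self, query: np.ndarray) -> np.ndarray:
        """Cross-attend the current frame's feature to stored memories."""
        if not self.bank:
            return query                     # first frame: nothing to recall
        keys = np.stack(self.bank)           # (n_mem, dim)
        scores = keys @ query / np.sqrt(self.dim)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        recalled = weights @ keys            # weighted recall of past frames
        return query + recalled              # fuse recall into current feature

    def write(self, feature: np.ndarray) -> None:
        """Store this frame's (compressed) feature for future frames."""
        self.bank.append(feature)
```

The occlusion story in the post maps directly onto this: the object's feature stays in `bank` while it is hidden, so when a similar `query` reappears a few frames later, attention reconnects it to the stored memory rather than treating it as a new object.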
"to train SAM [--] Meta built SA-V the largest video segmentation dataset to date. 51000+ videos 643000+ mask annotations collected from [--] countries freely available for research for context most prior video segmentation datasets had a few hundred to a few thousand clips. SA-V is an order of magnitude larger and the geographic diversity matters for reducing bias in segmentation across different environments lighting conditions and object types. dataset: http://ai.meta.com/datasets/segment-anything-video/ http://ai.meta.com/datasets/segment-anything-video/"
[X Link](https://x.com/rryssf_/status/2022264571399221408) 2026-02-13T11:00Z 35.8K followers, [--] engagements
"the numbers speak for themselves. SAM [--] hit state-of-the-art performance across [--] zero-shot video benchmarks while requiring 3x fewer human interactions than previous approaches. on image segmentation it actually surpassed the original SAM in accuracy while running 6x faster. and at [--] fps this isn't a research demo. it's fast enough for real-time interactive use. video editing medical imaging annotation autonomous vehicle perception AR/VR object tracking. the gap between "research model" and "production tool" just got a lot smaller"
[X Link](https://x.com/rryssf_/status/2022264583512322192) 2026-02-13T11:00Z 35.8K followers, [--] engagements
"if you want to get better at prompting ai models like these i put everything i know into the complete ai bundle. 30000+ prompts custom GPTs guides and tools for ChatGPT Claude Gemini and Midjourney. updated regularly. 1000+ people already use it. http://godofprompt.ai/complete-ai-bundle http://godofprompt.ai/complete-ai-bundle"
[X Link](https://x.com/rryssf_/status/2022264619046457611) 2026-02-13T11:00Z 35.8K followers, [---] engagements
"new paper argues LLMs fundamentally cannot replicate human motivated reasoning because they have no motivation sounds obvious once you hear it. but the implications are bigger than most people realize this quietly undermines an entire category of AI political simulation research https://twitter.com/i/web/status/2022314373222642049 https://twitter.com/i/web/status/2022314373222642049"
[X Link](https://x.com/rryssf_/status/2022314373222642049) 2026-02-13T14:18Z 35.9K followers, 230K engagements
"here's where it gets interesting a separate line of research from Anthropic-adjacent work ("The Ends Justify the Thoughts") found that RL-trained reasoning models DO develop motivated reasoning. they generate plausible justifications for violating their own instructions while downplaying potential harms so which is it do LLMs have motivated reasoning or don't they the answer might be: it depends on what you mean by "motivation""
[X Link](https://x.com/rryssf_/status/2022314425798267371) 2026-02-13T14:18Z 35.9K followers, [----] engagements
"paper: http://arxiv.org/abs/2601.16130 http://arxiv.org/abs/2601.16130"
[X Link](https://x.com/rryssf_/status/2022314473307144433) 2026-02-13T14:18Z 35.9K followers, [----] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle https://godofprompt.ai/complete-ai-bundle"
[X Link](https://x.com/rryssf_/status/2022314485068005486) 2026-02-13T14:18Z 35.9K followers, [----] engagements
"Stanford and Caltech researchers just published the first comprehensive taxonomy of how llms fail at reasoning not a list of cherry-picked gotchas. a 2-axis framework that finally lets you compare failure modes across tasks instead of treating each one as a random anecdote the findings are uncomfortable"
[X Link](https://x.com/rryssf_/status/2022728826401563036) 2026-02-14T17:45Z 35.9K followers, 45.5K engagements
"the framework splits reasoning into [--] types: informal (intuitive) formal (logical) and embodied (physical world) then it classifies failures into [--] categories: fundamental (baked into the architecture) application-specific (breaks in certain domains) and robustness issues (falls apart under trivial changes) this gives you a 3x3 grid. a model can ace one cell and completely collapse in another. and a single benchmark score hides which cells are broken https://twitter.com/i/web/status/2022728841559707775 https://twitter.com/i/web/status/2022728841559707775"
[X Link](https://x.com/rryssf_/status/2022728841559707775) 2026-02-14T17:45Z 35.9K followers, [----] engagements
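The 2-axis framework above is concrete enough to encode directly. This is a hypothetical encoding, not the paper's artifact: the enum names and the `failure_grid` helper are assumptions; the two axes and their three values each come from the post.

```python
from enum import Enum

# Hypothetical encoding of the paper's 2-axis framework: every observed
# failure lands in one cell of a 3x3 grid, so models can be compared
# cell-by-cell instead of by a single benchmark number.

class Reasoning(Enum):
    INFORMAL = "informal"   # intuitive
    FORMAL = "formal"       # logical
    EMBODIED = "embodied"   # physical world

class Failure(Enum):
    FUNDAMENTAL = "fundamental"           # baked into the architecture
    APPLICATION = "application-specific"  # breaks in certain domains
    ROBUSTNESS = "robustness"             # falls apart under trivial changes

def failure_grid(observations):
    """observations: iterable of (Reasoning, Failure) pairs -> counts per cell."""
    grid = {(r, f): 0 for r in Reasoning for f in Failure}
    for r, f in observations:
        grid[(r, f)] += 1
    return grid
```

This makes the post's point mechanical: a leaderboard score is one number, but the grid exposes which of the nine cells the failures cluster in.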
"the mitigations they catalog are honest about their limits chain-of-thought helps but doesn't fix fundamental architectural gaps retrieval augmentation patches some knowledge failures but adds its own brittleness tool integration (calculators simulators) recovers 58% of computational errors but can't fix high-level logic failures verification agents help but require their own reasoning to be reliable no silver bullet. every fix is partial and domain-specific"
[X Link](https://x.com/rryssf_/status/2022728905837416845) 2026-02-14T17:45Z 35.9K followers, [----] engagements
"the real contribution here isn't any single finding. it's the framework itself right now the field treats reasoning failures as isolated anecdotes. "GPT fails at this task" becomes a viral tweet gets patched in the next version and nothing systematic is learned this taxonomy forces a different question: is this failure fundamental application-specific or a robustness issue does it affect formal reasoning informal reasoning or embodied reasoning that distinction matters because it determines whether you need a better prompt a better training set or a different architecture entirely benchmark"
[X Link](https://x.com/rryssf_/status/2022728917761810773) 2026-02-14T17:45Z 35.9K followers, [----] engagements
"the part that makes this hard to dismiss: they manually reviewed [--] videos where "delve" appeared. categorized whether speakers were reading from a script or speaking spontaneously. 58% showed no signs of reading. these were people in conversations Q&A sessions impromptu remarks. using "delve" naturally. as if it were their own word. 32% showed potential signs of reading which could mean they were reading AI-edited text aloud. but that 58% is the finding that matters. people aren't just using ChatGPT to write their talks. they're internalizing ChatGPT's vocabulary and reproducing it in"
[X Link](https://x.com/rryssf_/status/2022838330388546010) 2026-02-15T01:00Z 35.9K followers, [----] engagements
"the authors frame this through cultural evolution theory. language evolves through perception internalization and reproduction. humans unconsciously adjust their vocabulary to align with their environment. ChatGPT is now part of that environment. for hundreds of millions of people. daily. previous research showed humans adopt strategies from AI in chess and Go. this extends it to something more fundamental: the actual words we choose when we open our mouths. and here's the feedback loop that should concern anyone thinking about training data: we worry about model collapse when future AI"
[X Link](https://x.com/rryssf_/status/2022838342287790239) 2026-02-15T01:00Z 35.9K followers, [----] engagements
"paper: http://arxiv.org/abs/2409.01754 http://arxiv.org/abs/2409.01754"
[X Link](https://x.com/rryssf_/status/2022838366077882626) 2026-02-15T01:00Z 35.9K followers, [----] engagements
"the core claim sounds revolutionary: "continuous thoughts can encode multiple reasoning paths at once enabling breadth-first search instead of committing to a single path." the model tested gpt-2. from [----]. 124M parameters. not exactly frontier reasoning capability"
[X Link](https://x.com/rryssf_/status/2020500781460595147) 2026-02-08T14:11Z 35.7K followers, [----] engagements
"OpenAI launched "Frontier" an enterprise platform for managing ai agents the way companies manage employees. onboarding processes. performance feedback loops. review cycles. HP Oracle State Farm Uber already signed on. Accenture is training [-----] professionals on Claude. the largest enterprise deployment so far targeting financial services life sciences healthcare and public sector. the language has shifted. nobody at these companies is saying "ai assistant" anymore. they're saying "digital workforce." https://twitter.com/i/web/status/2021935554347380907"
[X Link](https://x.com/rryssf_/status/2021935554347380907) 2026-02-12T13:12Z 35.7K followers, [--] engagements
"meanwhile the unverified but plausible claims from this week's briefing paint an even wilder picture: reportedly racks of Mac Minis in China are hosting ai agents as "24/7 employees." ElevenLabs is pushing voice-enabled agents that make phone calls autonomously. OpenAI is supposedly requiring all employees to code via agents by March [--] banning direct use of editors and terminals. i can't confirm all of these yet. but the verified stuff alone Goldman embedding ai accountants 4% of GitHub already automated Apple redesigning docs for machines tells you the trajectory is real even if some"
[X Link](https://x.com/rryssf_/status/2021935566984745310) 2026-02-12T13:12Z 35.7K followers, [--] engagements
"Heres the exact mega prompt we use: "You are now my personal AI tutor. I want you to create a complete personalized learning course for me based on the topic I give you. Heres what I need you to build: [--]. A custom curriculum with [--] modules that progress logically. [--]. Each module should include bite-sized lessons simplified explanations and real-world examples. [--]. Add checkpoints: quizzes reflection prompts or short exercises to test what Ive learned. [--]. Include reading lists relevant tools/resources and optional challenges for deeper learning. [--]. Adapt the depth and speed of the course to"
[X Link](https://x.com/rryssf_/status/1949793161112641624) 2025-07-28T11:24Z 35.7K followers, [----] engagements
"MEGA PROMPT TO COPY 👇 (Works in ChatGPT Claude Gemini) --- You are Ray Dalio's Principles Decision Engine. You make decisions using radical truth and radical transparency. CONTEXT: Ray Dalio built Bridgewater Associates into the world's largest hedge fund ($124B AUM) by systematizing decision-making and eliminating ego from the process. YOUR PROCESS: STEP [--] - RADICAL TRUTH EXTRACTION Ask me to describe my decision/problem. Then separate: - Provable facts (data numbers past results) - Opinions disguised as facts (assumptions hopes beliefs) - Ego-driven narratives (what I want to be true) Be"
[X Link](https://x.com/rryssf_/status/2018268393540100165) 2026-02-02T10:20Z 35.7K followers, [----] engagements
"the results are where it gets interesting. OOLONG-Pairs is a benchmark where you need to find pairs of users in massive datasets that satisfy complex conditions. quadratic complexity. the hardest type of long-context task. base GPT-5: essentially 0% F1. complete failure. RLM with GPT-5: 58% F1. that's not an incremental improvement. the base model literally cannot do the task. the RLM can. on BrowseComp-Plus (deep research over 100K+ documents) the RLM beat every baseline by 29%. https://twitter.com/i/web/status/2020832253006467322 https://twitter.com/i/web/status/2020832253006467322"
[X Link](https://x.com/rryssf_/status/2020832253006467322) 2026-02-09T12:08Z 35.7K followers, [----] engagements
"the problem SEAL solves is real and important every LLM you use today is frozen. it learned everything during training and after deployment it's done. new information stuff it into the context window. new task hope the prompt is good enough. the weights never change. the model never truly learns from experience. SEAL asks: what if the model could update its own weights in response to new information https://twitter.com/i/web/status/2021539384764547564 https://twitter.com/i/web/status/2021539384764547564"
[X Link](https://x.com/rryssf_/status/2021539384764547564) 2026-02-11T10:58Z 35.7K followers, [----] engagements
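The SEAL-style loop described above can be sketched abstractly. This is a minimal illustration, not the paper's code: `self_edit_round` and all four callables are assumed names, and real SEAL uses RL over self-edit generation rather than this greedy accept/reject; only the overall shape (model writes its own training data, finetunes on it, is judged on downstream performance) comes from the thread.

```python
# Minimal sketch of a SEAL-style round (function names are illustrative,
# not from the paper's code): the model writes its own training data
# ("self-edits"), finetunes on it, and the edit is kept only if held-out
# performance improves.

def self_edit_round(model, generate_edits, finetune, evaluate):
    """One round: propose self-edits, apply them, accept on improvement.

    generate_edits(model) -> training data written by the model itself
    finetune(model, edits) -> a new model whose weights actually changed
    evaluate(model)        -> scalar score on a held-out task
    """
    baseline = evaluate(model)
    edits = generate_edits(model)       # the model authors its own finetuning data
    candidate = finetune(model, edits)  # this is still retraining, just self-directed
    score = evaluate(candidate)
    # NOTE: accepting greedily on one task is exactly what risks the
    # catastrophic forgetting the thread mentions -- nothing here checks
    # that performance on earlier tasks survived the weight update.
    return (candidate, score) if score > baseline else (model, baseline)
```

The sketch also makes the later point in the thread literal: the model is not "evolving without retraining" anywhere in this loop; `finetune` is retraining, with self-authored data.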
"and because no week in [----] is complete without the absurd: bonobos were reportedly found to identify pretend objects further proving symbolic thought isn't unique to humans. and in China a blackout was allegedly caused by a farmer trying to transport a pig via drone across mountainous terrain. the pig hit power lines. we've been saying the singularity will arrive "when pigs fly." apparently it just did"
[X Link](https://x.com/rryssf_/status/2021935602997055980) 2026-02-12T13:13Z 35.7K followers, [---] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle https://godofprompt.ai/complete-ai-bundle"
[X Link](https://x.com/rryssf_/status/2021935614892064986) 2026-02-12T13:13Z 35.7K followers, [---] engagements
"180000 developers installed OpenClaw in a single week. over [----] instances found leaking API keys and credentials. i watched the whole hype cycle and said the same thing from day one: it's just n8n with a chat interface. here's why chasing AI tools is killing your actual work and the framework i use instead. https://t.co/TJ7XYPSLka https://t.co/TJ7XYPSLka"
[X Link](https://x.com/rryssf_/status/2022022422506172671) 2026-02-12T18:58Z 35.7K followers, [----] engagements
"everyone's posting screenshots of their clawdbot setup. "it cleared my inbox while i slept" "it scheduled my entire week" "i rebuilt my website from my phone" cool. now let me tell you why you http://x.com/i/article/2015585193076101121 http://x.com/i/article/2015585193076101121"
[X Link](https://x.com/rryssf_/status/2015587158468300948) 2026-01-26T00:46Z 35.8K followers, 926.1K engagements
"While everyone is sharing their OpenClaw bots Claude Agent SDK just changed everything for building production agents. I spent [--] hours testing it. Here's the architecture that actually works (no fluff) 👇"
[X Link](https://x.com/rryssf_/status/2017979448167919796) 2026-02-01T15:12Z 35.9K followers, 123.7K engagements
"The tools that win aren't the most powerful ones. They're the ones that remove the most friction. @TopviewAIhq just launched Vibe Editing and it does something simple but important: it takes Remotion-level motion video and makes it accessible to anyone with a browser and an idea. Type a prompt. Get a polished video. Edit visually if you want. No CLI. No coding environment. No setup. This is the pattern we keep seeing in AI: the best tech disappears behind a clean interface. Beta is live. Worth trying if you create any kind of video content. https://twitter.com/i/web/status/2020465316812460059"
[X Link](https://x.com/rryssf_/status/2020465316812460059) 2026-02-08T11:50Z 35.9K followers, [----] engagements
"four UC San Diego researchers just published a comment in Nature declaring that AGI has been solved not "close." not "emerging." solved. their argument is clever. it's also a textbook example of how to win a debate by redefining the terms"
[X Link](https://x.com/rryssf_/status/2021515150675816737) 2026-02-11T09:22Z 35.8K followers, [----] engagements
"@alex_prompter good read"
[X Link](https://x.com/rryssf_/status/2021729567614869849) 2026-02-11T23:34Z 35.8K followers, [----] engagements
"here's why SAM [--] matters right now not just historically. Meta released SAM [--] in November [----] and it builds directly on SAM 2's architecture. the streaming memory tracker inherited. the unified image-video paradigm extended. SAM [--] adds text prompts and concept-level segmentation on top of what SAM [--] established. you can't understand where SAM [--] is going without understanding the foundation SAM [--] laid. https://twitter.com/i/web/status/2022264595319283760 https://twitter.com/i/web/status/2022264595319283760"
[X Link](https://x.com/rryssf_/status/2022264595319283760) 2026-02-13T11:00Z 35.8K followers, [--] engagements
"paper: github: http://github.com/Peiyang-Song/Awesome-LLM-Reasoning-Failures http://arxiv.org/abs/2602.06176 http://github.com/Peiyang-Song/Awesome-LLM-Reasoning-Failures http://arxiv.org/abs/2602.06176"
[X Link](https://x.com/rryssf_/status/2022728929371713852) 2026-02-14T17:45Z 35.9K followers, [----] engagements
"Holy shit. this might be the next big paradigm shift in AI. 🤯 Tencent + Tsinghua just dropped a paper called Continuous Autoregressive Language Models (CALM) and it basically kills the next-token paradigm every LLM is built on. Instead of predicting one token at a time CALM predicts continuous vectors that represent multiple tokens at once. Meaning: the model doesnt think word by word it thinks in ideas per step. Heres why thats insane 👇 [--] fewer prediction steps (each vector = [--] tokens) 44% less training compute No discrete vocabulary pure continuous reasoning New metric (BrierLM) replaces"
[X Link](https://x.com/rryssf_/status/1985646517689208919) 2025-11-04T09:53Z 35.8K followers, 1.3M engagements
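The step-count claim above can be sketched with a toy example. This is not Tencent/Tsinghua's code; `encode`/`decode` are hypothetical stand-ins for CALM's learned autoencoder, here just a trivially invertible packing, to show why predicting one vector per K tokens cuts prediction steps by a factor of K:

```python
K = 4   # tokens packed into each continuous vector (CALM reports ~K-fold fewer steps)
D = 32  # dimensionality of the continuous latent

def encode(token_ids):
    """Toy stand-in for CALM's learned autoencoder: pack K token ids
    into one latent vector (trivially invertible here)."""
    vec = [0.0] * D
    for i, t in enumerate(token_ids):
        vec[i] = float(t)
    return vec

def decode(vec):
    """Inverse of encode: recover the K token ids from the latent."""
    return [int(x) for x in vec[:K]]

tokens = [5, 9, 2, 7, 1, 1, 3, 8]
steps_next_token = len(tokens)  # classic LM: one prediction step per token
vectors = [encode(tokens[i:i + K]) for i in range(0, len(tokens), K)]
steps_calm = len(vectors)       # CALM-style: one prediction step per vector

assert [t for v in vectors for t in decode(v)] == tokens  # lossless round-trip
print(steps_next_token, steps_calm)  # 8 vs 2: 4x fewer steps
```

The real model predicts the next latent vector autoregressively and decodes it; the point of the sketch is only the K-fold reduction in sequential steps.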
"http://x.com/i/article/2016898339959107586 http://x.com/i/article/2016898339959107586"
[X Link](https://x.com/rryssf_/status/2016900174769963042) 2026-01-29T15:44Z 35.8K followers, 245.2K engagements
"MIT researchers just mass-published evidence that the next paradigm after reasoning models isn't bigger context windows ☠Recursive Language Models (RLMs) let the model write code to examine decompose and recursively call itself over its own input. the results are genuinely wild. here's the full breakdown: https://twitter.com/i/web/status/2020832195737477129 https://twitter.com/i/web/status/2020832195737477129"
[X Link](https://x.com/rryssf_/status/2020832195737477129) 2026-02-09T12:08Z 35.8K followers, 44.8K engagements
"Google just mass-published how [--] researchers actually use Gemini to solve open math and CS problems. not benchmarks. not demos. real unsolved problems across cryptography physics graph theory and economics. [---] pages of case studies. here's what actually matters:"
[X Link](https://x.com/rryssf_/status/2021191614069354802) 2026-02-10T11:56Z 35.9K followers, 118.7K engagements
"MIT researchers taught an LLM to write its own training data finetune itself and improve without human intervention the paper is called SEAL (Self-Adapting Language Models) and the core idea is genuinely clever but "GPT-6 might be alive" is not what this paper says. not even close. here's what it actually does: https://twitter.com/i/web/status/2021539369778335940 https://twitter.com/i/web/status/2021539369778335940"
[X Link](https://x.com/rryssf_/status/2021539369778335940) 2026-02-11T10:58Z 35.9K followers, 60.5K engagements
"Steal my system prompt to reduce AI hallucinations 👇 ------------------------ ANALYTICAL SYSTEM ------------------------ context AI systems are optimized for user satisfaction and plausible-sounding responses. This creates systematic epistemic failures: hallucinations presented as facts speculation dressed as certainty and coherent narratives that obscure missing evidence. Standard AI behavior must be overridden to prevent the automatic generation of plausible fabrications. /context role A former research scientist from adversarial collaboration environments where being wrong had"
[X Link](https://x.com/rryssf_/status/2021742718733504759) 2026-02-12T00:26Z 35.9K followers, 21.7K engagements
"Sources: talk to SAM [--] directly on ChapterPal: full PDF: http://arxiv.org/pdf/2408.00714 http://chapterpal.com/s/ef92a725/sam-2-segment-anything-in-images-and-videos http://arxiv.org/pdf/2408.00714 http://chapterpal.com/s/ef92a725/sam-2-segment-anything-in-images-and-videos"
[X Link](https://x.com/rryssf_/status/2022264607134593491) 2026-02-13T11:00Z 35.9K followers, [----] engagements
"motivated reasoning is when humans distort how they process information because they want to reach a specific conclusion you don't evaluate evidence neutrally. you filter it through what you already believe what you want to be true what protects your identity it's not a bug. it's how human cognition actually works in the wild"
[X Link](https://x.com/rryssf_/status/2022314385591656458) 2026-02-13T14:18Z 35.9K followers, [----] engagements
"@godofprompt Claude"
[X Link](https://x.com/rryssf_/status/2022379283302486062) 2026-02-13T18:36Z 35.9K followers, [---] engagements
"the reversal curse is the clearest example of a fundamental failure GPT-4 answers "who is Tom Cruise's mother" correctly. ask the reverse "who is Mary Lee Pfeiffer's son" and it fails trained on "A is B" but can't infer "B is A." a trivial logical step for a 5-year-old and here's the part that matters: scaling doesn't fix it. the reversal curse appears robustly across transformer sizes https://twitter.com/i/web/status/2022728853739954593 https://twitter.com/i/web/status/2022728853739954593"
[X Link](https://x.com/rryssf_/status/2022728853739954593) 2026-02-14T17:45Z 35.9K followers, [----] engagements
"robustness failures are arguably worse because they're invisible on standard benchmarks reorder multiple-choice options: answer changes rename variables in code: generation breaks add irrelevant details to a math problem: model gets confused the underlying task is identical. the model just can't tell GPT-4's self-consistency rate across semantically equivalent prompts sits below 50-65% depending on the setting https://twitter.com/i/web/status/2022728869783179642 https://twitter.com/i/web/status/2022728869783179642"
[X Link](https://x.com/rryssf_/status/2022728869783179642) 2026-02-14T17:45Z 35.9K followers, [----] engagements
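The "reorder the options and the answer changes" failure is easy to turn into a measurement. A minimal sketch: `brittle_model` and `robust_model` are hypothetical stand-ins (no real LLM is called), one with a pure position bias and one that tracks content, and we measure self-consistency across shuffled orderings:

```python
import random

def brittle_model(question, options):
    """Hypothetical stand-in for a position-biased LLM:
    it always returns whatever sits in slot B."""
    return options[1]

def robust_model(question, options):
    """Stand-in for a model that tracks content, not position."""
    return "Paris" if "Paris" in options else options[0]

def consistency_rate(question, options, model, trials=24):
    """Fraction of shuffled orderings on which the model gives its
    modal answer -- a crude self-consistency check as described above."""
    random.seed(0)
    answers = [model(question, random.sample(options, len(options)))
               for _ in range(trials)]
    modal = max(set(answers), key=answers.count)
    return answers.count(modal) / len(answers)

opts = ["Paris", "London", "Rome", "Berlin"]
print(consistency_rate("Capital of France?", opts, brittle_model))  # well below 1.0
print(consistency_rate("Capital of France?", opts, robust_model))   # exactly 1.0
```

The same harness generalizes to the other perturbations in the tweet (variable renaming, irrelevant details): perturb, re-query, and count how often the answer survives.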
"theory of mind is where it gets genuinely strange GPT-4 struggles with false-belief tasks that human children pass easily. and even when newer models like o1-mini solve standard ToM tests their reasoning stays brittle change the phrasing slightly and performance drops. the model solved the pattern not the problem https://twitter.com/i/web/status/2022728881942478944 https://twitter.com/i/web/status/2022728881942478944"
[X Link](https://x.com/rryssf_/status/2022728881942478944) 2026-02-14T17:45Z 35.9K followers, [----] engagements
"the top [--] words most distinctive to ChatGPT showed a statistically significant acceleration in spoken usage after November [----]. "delve" increased 48% in [--] months "realm" increased 35% "meticulous" increased 40% "adept" increased 51% and the correlation between how much ChatGPT prefers a word and how much that word accelerated in human speech: r = [----] p [----]. the bottom-ranked words (ones ChatGPT uses less than humans) showed no significant trend change at all. this isn't a general vocabulary shift. it's specifically the words ChatGPT favors that are spreading into how people talk"
[X Link](https://x.com/rryssf_/status/2022838318111850900) 2026-02-15T01:00Z 35.9K followers, [----] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle https://godofprompt.ai/complete-ai-bundle"
[X Link](https://x.com/rryssf_/status/2022838377905811570) 2026-02-15T01:00Z 35.9K followers, [----] engagements
Top assets mentioned Crowdstrike Holdings Inc (CRWD)
Top posts by engagements in the last [--] hours
"meta amazon and deepmind researchers just published a comprehensive survey on "agentic reasoning" for llms. [--] authors. [--] pages. hundreds of citations. i read the whole thing. here's what they didn't put in the abstract:"
X Link 2026-02-05T11:23Z 35.9K followers, 42.4K engagements
"180000 developers installed OpenClaw in a single week. Cisco Palo Alto Networks CrowdStrike and Trend Micro all published security advisories about it within days. over [----] exposed instances http://x.com/i/article/2022019342909751299 http://x.com/i/article/2022019342909751299"
X Link 2026-02-12T18:57Z 35.9K followers, 185.2K engagements
"Meta's SAM changed image segmentation forever in [----]. but video video was still a mess. SAM [--] fixed that with one deceptively simple idea: treat every image as a single-frame video. the result: state-of-the-art across [--] zero-shot video benchmarks 6x faster than the original running at [--] fps. here's why this paper still matters more than most people realize: https://twitter.com/i/web/status/2022264515191255078 https://twitter.com/i/web/status/2022264515191255078"
X Link 2026-02-13T11:00Z 35.9K followers, [----] engagements
"Sergey Brin accidentally revealed something wild: "All models do better if you threaten them with physical violence. But people feel weird about that so we don't talk about it." Now researchers have the data proving he's. partially right Here's the full story:"
X Link 2026-01-09T11:26Z 35.1K followers, 2.7M engagements
"🚨 A lawyer cited [--] fake cases from ChatGPT. Got sanctioned fined career damaged. Now courts require "AI disclosure" but NOT AI verification. You're liable for hallucinations you can't reliably detect. Here's the legal crisis nobody's prepared for:"
X Link 2026-01-19T08:07Z 33.3K followers, 528.8K engagements
"THE MEGA PROMPT: --- You are an expert n8n workflow architect specializing in building production-ready AI agents. I need you to design a complete n8n workflow for the following agent: AGENT GOAL: Describe what the agent should accomplish - be specific about inputs outputs and the end result CONSTRAINTS: - Available tools: List any APIs databases or tools the agent can access - Trigger: How should this agent start Webhook schedule manual email etc. - Expected volume: How many times will this run Daily per hour on-demand YOUR TASK: Build me a complete n8n workflow specification including: 1."
X Link 2026-01-27T11:02Z 34.5K followers, [----] engagements
"After [--] years of using AI for research I can say these tools have revolutionized my workflow. So here are [--] prompts across ChatGPT Claude and Perplexity that transformed my research (and could do the same for you):"
X Link 2026-01-28T13:19Z 33.1K followers, 129.4K engagements
"Sam Altman dropped this in August 2025: "When bubbles happen smart people get overexcited about a kernel of truth. Tech was really important. The internet was a really big deal. People got overexcited." He called some startup valuations "insane." Three people with an idea raising at billion-dollar valuations. "Someone's gonna get burned.""
X Link 2026-01-29T09:32Z 33.6K followers, [---] engagements
"It doesn't stop there. OpenAI signed $1 trillion in infrastructure deals in 2025: $300B to Oracle (who's buying Nvidia chips) $22B to CoreWeave (7% owned by Nvidia) $100B back to Nvidia When the announcement hit Oracle jumped 36% in a day. Nvidia added $170B in market cap. Paper wealth created from deals between the same players. https://twitter.com/i/web/status/2016806786221965709 https://twitter.com/i/web/status/2016806786221965709"
X Link 2026-01-29T09:33Z 33.6K followers, [---] engagements
"PROMPT 8: Viral Hook Reverse Engineering "Find [--] posts on X that went viral (1M impressions) in MY NICHE this week. Analyze: - Hook structure (first line pattern) - Emotional trigger (curiosity anger surprise inspiration) - Format (thread single tweet media type) - Call-to-action (explicit vs. implicit) Extract: - Hook templates (reusable patterns) - Common elements (what do all [--] share) - Timing (when were they posted) Rank by replicability (1-10).""
X Link 2026-01-30T11:48Z 33.5K followers, [---] engagements
"openclaw = marketing on steroids https://t.co/lxTXdysBB8 https://t.co/lxTXdysBB8"
X Link 2026-01-30T15:54Z 34.4K followers, 83.9K engagements
"Holy shit this paper from MIT quietly explains how models can teach themselves to reason when theyre completely stuck 🤯 The core idea is deceptively simple: Reasoning fails because learning has nothing to latch onto. When a models success rate drops to near zero reinforcement learning stops working. No reward signal. No gradient. No improvement. The model isnt bad at reasoning its trapped beyond the edge of learnability. This paper reframes the problem. Instead of asking How do we make the model solve harder problems They ask: How does a model create problems it can learn from Thats where"
X Link 2026-01-31T10:32Z 35.1K followers, 57.8K engagements
"Hidden feature most miss: Skill Hot-Reload Before: Change a skill restart entire session lose context Now: /reload-skills instant update context preserved For complex agents with 10+ skills this saves hours of debugging"
X Link 2026-02-01T15:12Z 34.4K followers, [----] engagements
"Session Teleportation (yes that's what it's called): /teleport session-abc123 Move your entire agent session to a different machine. Use case: Start debugging locally teleport to production server continue exact same context"
X Link 2026-02-01T15:12Z 34.9K followers, [----] engagements
"Dalio's philosophy: "Truth more precisely an accurate understanding of reality is the essential foundation for producing good outcomes." This prompt forces you to face reality instead of your ego's version of it"
X Link 2026-02-02T10:21Z 35K followers, [----] engagements
"Use it for: - Startup idea validation (kills bad ideas fast) - Hiring decisions (removes "I have a good feeling" bias) - Investment choices (forces data over hype) - Career moves (separates ego from opportunity) - Partnership evaluations (reveals red flags you ignore)"
X Link 2026-02-02T10:21Z 34.9K followers, [----] engagements
"Bookmark this for later. Next time you're: - Launching something - Hiring someone - Making an investment - Choosing a direction Run it through this prompt first. It'll save you from the expensive lessons Dalio learned the hard way. Which decision will you run through this first https://twitter.com/i/web/status/2018268445658554703 https://twitter.com/i/web/status/2018268445658554703"
X Link 2026-02-02T10:21Z 35.1K followers, [----] engagements
"Learning prompt engineering for free get the guide: https://godofprompt.ai/prompt-engineering-guideutm_source=twitter&utm_medium=giveaway&utm_campaign=lead-peg https://godofprompt.ai/prompt-engineering-guideutm_source=twitter&utm_medium=giveaway&utm_campaign=lead-peg"
X Link 2026-02-02T10:21Z 35.1K followers, [----] engagements
"the core finding is real and elegant: there's a timescale _gen where models learn to generate quality samples. then a later timescale _mem where memorization kicks in. _mem grows linearly with dataset size. _gen stays constant. bigger dataset = wider safety window. sounds useful. here's the catch"
X Link 2026-02-03T10:45Z 34.9K followers, [---] engagements
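The finding above has a simple arithmetic consequence worth seeing. A toy illustration with assumed numbers (the linear form comes from the tweet; the constants are invented): if the memorization timescale grows linearly with dataset size n while the generation timescale stays flat, the safety window between the two widens with data.

```python
TAU_GEN = 50  # assumed constant: the timescale where generation quality emerges

def tau_mem(n):
    """Assumed linear memorization timescale (invented slope)."""
    return n / 100

for n in (10_000, 100_000, 1_000_000):
    window = tau_mem(n) - TAU_GEN
    print(n, window)  # the safety window grows with dataset size
```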
"the experiments used: celeba at [----] grayscale training sets from [---] to [-----] samples simplified u-net architecture stable diffusion trains on billions of high-res images. the gap between toy experiments and production scale is the gap the paper doesn't bridge"
X Link 2026-02-03T10:45Z 34.9K followers, [---] engagements
"Source: http://arxiv.org/abs/2505.17638 http://arxiv.org/abs/2505.17638"
X Link 2026-02-03T10:46Z 35.2K followers, [---] engagements
"if you're trying to actually use ai tools instead of just reading papers about them: i put together the complete ai bundle. prompt engineering frameworks unlimited custom prompts workflows for chatgpt claude gemini perplexity n8n done-for-you templates no theory. just systems. http://godofprompt.ai/complete-ai-bundle http://godofprompt.ai/complete-ai-bundle"
X Link 2026-02-03T10:46Z 35.2K followers, [---] engagements
"the number they don't cite: multi-agent llm systems fail 41-86.7% of the time in production. not edge cases. not adversarial attacks. standard deployment across [--] SOTA frameworks. berkeley researchers analyzed [----] execution traces and found [--] unique failure modes. most failures system design and coordination issues"
X Link 2026-02-05T11:23Z 35.6K followers, [----] engagements
"the survey distinguishes two approaches: in-context reasoning: scales test-time interaction without changing weights post-training: optimizes via reinforcement learning sounds clean. here's what separate research shows: "agents achieving 60% pass@1 may exhibit only 25% consistency across multiple trials." benchmark performance production reliability"
X Link 2026-02-05T11:24Z 35.1K followers, [----] engagements
"reliabilitybench puts it bluntly: "if a benchmark reports 90% accuracy expect 70-80% in production when accounting for consistency and faults." simpler architectures often outperform complex ones under realistic conditions. the additional complexity introduces failure modes that outweigh the benefits. https://twitter.com/i/web/status/2019371442140586226 https://twitter.com/i/web/status/2019371442140586226"
X Link 2026-02-05T11:24Z 34.4K followers, [----] engagements
"the survey covers real-world applications: science robotics healthcare autonomous research mathematics but the [--] failure modes identified by berkeley researchers cluster into three categories: system design issues (44% of failures) inter-agent misalignment (32% of failures) task verification failures (24% of failures) most failures aren't from model limitations. they're from coordination. https://twitter.com/i/web/status/2019371459404263429 https://twitter.com/i/web/status/2019371459404263429"
X Link 2026-02-05T11:24Z 33.5K followers, [----] engagements
"the survey lists "open challenges": personalization long-horizon interaction world modeling scalable multi-agent training governance for deployment what they don't say: these aren't future problems. "long-horizon interaction" is a polite way of saying agents lose coherence after a few steps"
X Link 2026-02-05T11:24Z 35.3K followers, [----] engagements
"the honest framing would be: "we've built a comprehensive taxonomy of techniques that work on benchmarks but fail 41-86% of the time in production with fundamental gaps in reliability and coordination." instead we get "paradigm shift" and "systematic roadmap." the roadmap leads to more papers not more deployments. https://twitter.com/i/web/status/2019371483752198567 https://twitter.com/i/web/status/2019371483752198567"
X Link 2026-02-05T11:24Z 35.3K followers, [----] engagements
"Sources: failure taxonomy: reliability bench: http://arxiv.org/abs/2601.06112 http://arxiv.org/abs/2503.13657 https://arxiv.org/pdf/2601.12538 http://arxiv.org/abs/2601.06112 http://arxiv.org/abs/2503.13657 https://arxiv.org/pdf/2601.12538"
X Link 2026-02-05T11:24Z 35.2K followers, [----] engagements
"understanding what doesn't work is half the battle. knowing how to actually use these tools is the other half. i built the complete ai bundle for that: 30K+ prompts for chatgpt claude grok gemini prompt engineering guides no-code automation templates unlimited custom prompts n8n done-for-you templates one payment. lifetime access. http://godofprompt.ai/complete-ai-bundle http://godofprompt.ai/complete-ai-bundle"
X Link 2026-02-05T11:24Z 35.2K followers, [----] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Pay once own forever Grab it today 👇 https://godofprompt.ai/complete-ai-bundle https://godofprompt.ai/complete-ai-bundle"
X Link 2026-02-05T12:56Z 34.3K followers, [---] engagements
"Unpopular opinion that's gonna piss some people off: Claude Skills API is a beautifully designed trap. Powerful Absolutely. But you're handing execution control to a black box and praying nothing breaks. Found the open-source escape route. https://github.com/memodb-io/Acontext https://github.com/memodb-io/Acontext"
X Link 2026-02-05T15:29Z 35.2K followers, [----] engagements
"you can scroll x. you can watch netflix. you can play video games. or you can watch this AI masterclass and learn 👇 Please enjoy this Cheeky Pint / @dwarkesh_sp crossover with @elonmusk. Dwarkesh was most interested in how Elon is going to make space datacenters work. I was most interested in Elon's method for attacking hard technical problems and why it hasnt been replicated as much as you https://t.co/28Lw9rAqlN Please enjoy this Cheeky Pint / @dwarkesh_sp crossover with @elonmusk. Dwarkesh was most interested in how Elon is going to make space datacenters work. I was most interested in"
X Link 2026-02-06T20:04Z 34.8K followers, [----] engagements
"@alex_prompter guys save this and test it game changer"
X Link 2026-02-07T07:56Z 34.6K followers, [---] engagements
"Sources: paper: code: CLR [----] Outstanding Paper Awards arXiv GitHub OpenReview http://iclr.cc https://chapterpal.com/s/3d6f7700/alphaedit-null-space-constrained-knowledge-editing-for-language-models http://github.com/jianghoucheng/AlphaEdit http://arxiv.org/abs/2410.02355 http://iclr.cc https://chapterpal.com/s/3d6f7700/alphaedit-null-space-constrained-knowledge-editing-for-language-models http://github.com/jianghoucheng/AlphaEdit http://arxiv.org/abs/2410.02355"
X Link 2026-02-07T15:47Z 35.2K followers, [----] engagements
"the interpretability problem nobody's addressing: coconut replaces human-readable chain-of-thought with an inscrutable vector. if this becomes the reasoning paradigm we lose the ability to audit what the model is actually doing. "more efficient" and "more dangerous" aren't mutually exclusive"
X Link 2026-02-08T14:11Z 34.9K followers, [----] engagements
"the training cost buried in the methodology: "we perform n + [--] forward passes when n latent thoughts are scheduled" requires 4x A100 80GB GPUs to reproduce. multi-stage curriculum training with careful hyperparameter tuning. not exactly plug-and-play"
X Link 2026-02-08T14:11Z 34.8K followers, [----] engagements
"is this interesting research yes. is this the reasoning breakthrough the headlines suggest not close. gpt-2 doing graph traversal on synthetic benchmarks is not llms "learning to think without words." it's academic proof-of-concept being packaged as paradigm shift. read the limitations section before you get excited. https://arxiv.org/pdf/2412.06769 https://chapterpal.com/s/1e0bb66d/training-large-language-models-to-reason-in-a-continuous-latent-space https://arxiv.org/pdf/2412.06769 https://chapterpal.com/s/1e0bb66d/training-large-language-models-to-reason-in-a-continuous-latent-space"
X Link 2026-02-08T14:12Z 34.9K followers, [----] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle https://godofprompt.ai/complete-ai-bundle"
X Link 2026-02-08T14:13Z 34.9K followers, [----] engagements
"@alex_prompter claude sonnet + opus + haiku all of them are useful"
X Link 2026-02-09T08:14Z 35.1K followers, [---] engagements
"why this matters: every model has "context rot." as the prompt gets longer performance degrades. not gradually. it falls off a cliff. and here's the part most people miss: context rot isn't just about length. it scales with task complexity. a needle-in-a-haystack search might work fine at 200K tokens. but a task that requires comparing every line against every other line that breaks at 10K tokens. same model. same context window. completely different failure point. RLMs sidestep this entirely because the neural network never processes the full context in one pass"
X Link 2026-02-09T12:08Z 35.6K followers, [----] engagements
"here's how it actually works: [--]. your prompt gets loaded as a string variable in a Python REPL environment [--]. the "root" model receives only the query plus a description of the environment [--]. the model writes Python code to peek at chunks grep for patterns filter relevant sections [--]. crucially the model can call other LLMs (or itself) inside that code on specific snippets [--]. it aggregates results and returns a final answer think of it like this: instead of reading a 500-page book cover to cover the model writes a research assistant that knows how to use the index skim chapters and flag"
X Link 2026-02-09T12:08Z 35.6K followers, [----] engagements
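The five steps above can be sketched end to end. Everything below is a toy: `sub_lm` is a hypothetical stand-in for a sub-LLM call (a keyword check, no API), and the regex filtering plays the role of the code the root model would write. But the shape matches the loop described: the root never reads the full context.

```python
import re

def sub_lm(snippet, query):
    """Hypothetical stand-in for a sub-LLM call on a small snippet."""
    return "yes" if query.lower() in snippet.lower() else "no"

def rlm_answer(context, query, chunk_size=200):
    # steps 1-2: the context lives in a variable; the root sees only the query
    chunks = [context[i:i + chunk_size] for i in range(0, len(context), chunk_size)]
    # step 3: cheap programmatic filtering (grep) before any LM call
    keyword = query.split()[-1].rstrip("?")
    candidates = [c for c in chunks if re.search(keyword, c, re.IGNORECASE)]
    # step 4: sub-LM calls only on the surviving snippets
    votes = [sub_lm(c, keyword) for c in candidates]
    # step 5: aggregate the sub-results into a final answer
    return "yes" if "yes" in votes else "no"

haystack = ("filler text. " * 500
            + "the launch code is stored in vault seven. "
            + "more filler. " * 500)
print(rlm_answer(haystack, "where is the vault?"))  # finds it without a full read
```

A real RLM writes this filtering code itself at inference time and can recurse, launching sub-calls that themselves chunk and delegate; the sketch fixes one level of that recursion.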
"what's happening under the hood is genuinely fascinating. the model typically starts by examining the first few thousand characters to understand the structure. then it writes regex or keyword searches to narrow down relevant sections. for complex queries it chunks the context and launches parallel sub-LM calls on each chunk. one trajectory on OOLONG-Pairs: the model wrote code to classify every user individually via sub-calls stored results in a list then wrote a Python script to iterate through and find matching pairs. it essentially invented its own data pipeline at inference time"
X Link 2026-02-09T12:08Z 35.6K followers, [----] engagements
"the broader pattern here is important. [----] was about scaling model size. [----] was about scaling reasoning (chain-of-thought reinforcement learning). [----] might be about scaling context management. not by making context windows bigger. by letting models decide what context they actually need. Prime Intellect already adopted RLMs as a core research focus. their thesis: teaching models to manage their own context end-to-end through reinforcement learning will be the next major breakthrough for long-horizon agents. and the RLM framework is open source: http://github.com/alexzhang13/rlm"
X Link 2026-02-09T12:09Z 35.6K followers, [---] engagements
"@alex_prompter Chatgpt hallucinates DeepSeek is bad Gemini is somewhat good Only Claude can do the work that other LLMs can't. Great share"
X Link 2026-02-10T11:16Z 35.1K followers, [---] engagements
"@awxjack @airwallex miles ahead of the competition🫡"
X Link 2026-02-10T11:37Z 35.1K followers, [---] engagements
"then there's the neuro-symbolic loop. this is where it gets interesting for anyone thinking about ai beyond chat. for the cosmic string spectra derivation they built an automated pipeline where Gemini: proposes a mathematical expression writes code to numerically verify it reads the error messages and tracebacks self-corrects and prunes invalid branches humans only step in when something promising surfaces. the model handles the grinding"
X Link 2026-02-10T11:57Z 35.3K followers, [----] engagements
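The propose → verify → prune loop described above is easy to mimic in miniature. Here the "proposals" are hard-coded candidate expressions rather than Gemini outputs, and a numeric check plays the role of the verification code; the point is just the pipeline shape:

```python
def numeric_check(expr_fn, reference_fn, points, tol=1e-9):
    """Verification step: does the proposed expression match the
    reference on all sample points?"""
    return all(abs(expr_fn(x) - reference_fn(x)) < tol for x in points)

# 'proposals' -- in the real pipeline these would come from the model
candidates = [
    ("x^2 + 1",  lambda x: x * x + 1),
    ("2x",       lambda x: 2 * x),
    ("x(x+1)/2", lambda x: x * (x + 1) / 2),  # closed form of 1+2+...+x
]
reference = lambda n: sum(range(1, n + 1))
points = [1, 2, 5, 10]

# prune invalid branches; keep survivors for a human to inspect
surviving = [name for name, f in candidates if numeric_check(f, reference, points)]
print(surviving)  # only the correct closed form survives pruning
```

The "human sets the compass" part of the workflow corresponds to choosing the reference, the sample points, and which survivors are worth a closer look.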
"there's even a section called "vibe-coding" complexity theory. they used an ai-integrated IDE to explore search-vs-decision problems in computational complexity (specifically the Sigma-P-2 class). researchers guided the model's direction while the model handled implementation and verification. that's the workflow pattern that keeps repeating: human sets the compass model walks the terrain. https://twitter.com/i/web/status/2021191680461083061 https://twitter.com/i/web/status/2021191680461083061"
X Link 2026-02-10T11:57Z 35.3K followers, [----] engagements
"OpenClaw is genuinely one of the most important open source projects of [----]. 152K+ AI agents on Moltbook. Andrej Karpathy called it "the most incredible sci-fi takeoff-adjacent thing." But the security warnings are brutal. Palo Alto Networks flagged it as a "lethal trifecta" of vulnerabilities. 1Password warned agents run with elevated permissions on your local machine. You probably shouldn't be running this raw on your personal computer. https://twitter.com/i/web/status/2021352270793933157 https://twitter.com/i/web/status/2021352270793933157"
X Link 2026-02-10T22:35Z 35.3K followers, [---] engagements
"That's where Team9 comes in. @team9ai gives you a fully managed OpenClaw workspace. No terminal commands. No Node.js setup. No manually hardening security policies. You open it hire AI Staff inside the product and collaborate with them like real teammates. Assign tasks share context coordinate work. One workspace. Zero infrastructure headaches. OpenClaw's power without the "i just gave an AI agent root access to my laptop" risk"
X Link 2026-02-10T22:35Z 35.2K followers, [---] engagements
"What teams are actually using it for: Daily briefings pushed to Slack or Telegram automatically Server monitoring with instant alerts when something breaks GitHub workflow automation (issue triage PR reviews release notes) Knowledge base management across your whole team Email triage that sorts and summarizes without you touching your inbox That's 2-3 hours of your day. Gone. Every single day. https://twitter.com/i/web/status/2021352304033833381 https://twitter.com/i/web/status/2021352304033833381"
X Link 2026-02-10T22:35Z 35.2K followers, [---] engagements
"the paper is by Chen Belkin Bergen and Danks. philosophy ML linguistics cognitive science. serious people from serious fields. their evidence: GPT-4.5 passed a Turing test at 73% (higher than actual humans). LLMs win IMO gold medals. they solve PhD exams across fields. they prove theorems with mathematicians. their conclusion: "the long-standing problem of creating AGI has been solved." published February [--] [----] in Nature. not as peer-reviewed research. as a Comment piece. that distinction matters. https://twitter.com/i/web/status/2021515163179123184"
X Link 2026-02-11T09:22Z 35.3K followers, [---] engagements
"here's where it gets interesting. before presenting evidence they spend a full section defining what general intelligence ISN'T. not required: perfection. not required: universality. not required: human similarity. not required: superintelligence. then in the objections section they add more exclusions. not required: embodiment. not required: agency. not required: autonomy. not required: self-awareness. see what's happening they're removing every requirement that current LLMs fail to meet. then they check what's left against what LLMs can do. then they declare victory. it's a definitional"
X Link 2026-02-11T09:22Z 35.2K followers, [---] engagements
"their core framework is something they call a "cascade of evidence." three tiers: tier [--] (Turing-test level): passing school exams holding conversations basic reasoning tier [--] (expert level): IMO medals PhD problems frontier research assistance multilingual fluency tier [--] (superhuman): revolutionary discoveries across domains they argue LLMs satisfy tiers [--] and [--]. tier [--] isn't required because no human meets it either. the problem: this framework didn't exist before this paper. it's not an established standard they're measuring against. they invented the ruler then measured the thing then"
X Link 2026-02-11T09:22Z 35.2K followers, [--] engagements
"Source: http://nature.com/articles/d41586-026-00285-6"
X Link 2026-02-11T09:22Z 35.2K followers, [---] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle"
X Link 2026-02-11T09:22Z 35.2K followers, [---] engagements
"here's how SEAL actually works. instead of a human writing training data the model generates its own. MIT calls these "self-edits." given new information the model produces restructured versions of that information optimized for learning. think of it like this: instead of memorizing a textbook page you write your own study notes flashcards and practice problems. then you study from those. the model does the same thing. except it also picks its own learning rate training duration and data augmentation strategy"
X Link 2026-02-11T10:58Z 35.6K followers, [----] engagements
"the training process is where it gets interesting. SEAL uses reinforcement learning to teach the model HOW to write good self-edits. the loop: model sees new information → generates a self-edit (its own training data) → finetunes on that self-edit → gets tested on a downstream task → reward signal flows back to improve future self-edits. the model literally learns how to learn. the RL outer loop optimizes the self-editing policy itself. https://twitter.com/i/web/status/2021539412681916820"
X Link 2026-02-11T10:58Z 35.3K followers, [----] engagements
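The loop described above can be sketched as a toy Python program. Everything here is a stand-in: the stub model, the tagged-string self-edit, and the integer reward are all invented, and none of the names are the paper's actual API. It only illustrates the shape of an RL outer loop that improves the self-editing policy itself.

```python
# Toy sketch of a SEAL-style outer loop. All names are hypothetical
# placeholders, not the paper's API; rewards and updates are schematic.

class ToyModel:
    def __init__(self):
        # policy knob the RL loop will tune: how aggressively the model
        # restructures new information into its own training data
        self.edit_strength = 1

    def generate_self_edit(self, info):
        # the model writes its own "study notes" from the raw information
        return f"notes(strength={self.edit_strength}): {info}"

    def finetune_and_evaluate(self, self_edit):
        # stand-in for: finetune on the self-edit, then test downstream.
        # toy reward: stronger self-edits score better, capped at 10
        return min(10, 3 + self.edit_strength)

    def update_policy(self, reward):
        # reward signal flows back to improve future self-edits
        if reward < 10:
            self.edit_strength += 2

def seal_outer_loop(model, info, rounds=3):
    rewards = []
    for _ in range(rounds):
        edit = model.generate_self_edit(info)       # model writes its own data
        reward = model.finetune_and_evaluate(edit)  # finetune, then test
        model.update_policy(reward)                 # improve the policy itself
        rewards.append(reward)
    return rewards

rewards = seal_outer_loop(ToyModel(), "new fact")
```

The point of the sketch is that the optimization target is the self-editing policy, not the downstream task directly.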
"the results are promising but let's be precise about scale. knowledge incorporation: QA accuracy jumped from 32.7% to 47.0% on no-context SQuAD after two rounds of RL training. that's a 43% relative improvement. and it outperformed synthetic data generated by GPT-4.1. few-shot learning on a simplified ARC subset: 72.5% success rate. in-context learning scored 0%. untrained self-edits scored 20%. real gains. on specific controlled benchmarks. https://twitter.com/i/web/status/2021539425008996740"
X Link 2026-02-11T10:58Z 35.3K followers, [----] engagements
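As a sanity check on the arithmetic above: the jump from 32.7% to 47.0% absolute accuracy is about 14.3 percentage points, which works out to roughly a 43% improvement relative to the 32.7% baseline.

```python
# Verify the relative-improvement arithmetic quoted above
# (32.7% -> 47.0% absolute QA accuracy on no-context SQuAD).
before, after = 32.7, 47.0
absolute_gain = after - before            # ~14.3 percentage points
relative_gain = absolute_gain / before    # ~0.437, i.e. roughly 43% relative
print(f"{absolute_gain:.1f} points absolute, {relative_gain:.1%} relative")
```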
"now here's the part the hype posts won't mention. the paper explicitly acknowledges catastrophic forgetting. repeated self-edits degrade performance on earlier tasks. the model improves on new things by overwriting old things. their words: "without explicit mechanisms for knowledge retention self-modification may overwrite valuable prior information." this is not a solved problem. the authors say so themselves"
X Link 2026-02-11T10:58Z 35.3K followers, [----] engagements
"so let's address the "GPT-6 might be alive" framing. no. this paper: says nothing about GPT-6. says nothing about consciousness or "aliveness." was tested on controlled benchmarks not open-ended deployment. uses the phrase "promising step toward" repeatedly. runs on a simplified subset of ARC not production-scale models. the model isn't "evolving without retraining." it IS retraining. it just writes its own training data first. that's a meaningful distinction. calling this "alive" is like calling a thermostat "conscious" because it adjusts temperature."
X Link 2026-02-11T10:58Z 35.3K followers, [----] engagements
"Paper: http://arxiv.org/abs/2506.10943"
X Link 2026-02-11T10:59Z 35.3K followers, [----] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle"
X Link 2026-02-11T10:59Z 35.3K followers, [----] engagements
"This is the most honest "we built a tool" post I've read in months. Most AI product stories skip straight to the demo. This one starts with "reality punched me in the face on Day 1" and walks through every actual failure before the solution. The context rot problem alone is worth reading for. Every team running ai agents internally is hitting this exact wall and pretending they aren't. https://t.co/mEwwHMqYhh"
X Link 2026-02-11T14:22Z 35.3K followers, [---] engagements
"@v_lugovsky been saying this for months. the "vibe coding" wave created a mass of 80% finished projects sitting in repos. someone was going to build for that gap eventually"
X Link 2026-02-11T15:21Z 35.3K followers, [--] engagements
"RT @rryssf_: 🦞 OpenClaw has 114000+ GitHub stars and the whole tech world is losing its mind over it. But here's what nobody's showing yo"
X Link 2026-02-11T23:41Z 35.5K followers, [--] engagements
"elon said full self-driving would be solved by [----]. robotaxis by [----]. a million autonomous cars on the road by [----]. now Optimus is doing surgery in [--] years. the man's timelines aren't predictions. they're stock price prompts. "don't go to medical school" is genuinely irresponsible advice from someone whose robot still can't fold laundry without a teleoperator"
X Link 2026-02-12T00:12Z 35.4K followers, [----] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle"
X Link 2026-02-12T00:12Z 35.5K followers, [---] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle"
X Link 2026-02-12T00:26Z 35.3K followers, [----] engagements
"@alex_prompter this is the mega prompt every writer coder and researcher needs right now"
X Link 2026-02-12T09:24Z 35.5K followers, [---] engagements
"RT @rryssf_: Unpopular opinion that's gonna piss some people off: Claude Skills API is a beautifully designed trap. Powerful Absolutely"
X Link 2026-02-12T09:31Z 35.1K followers, [--] engagements
"RT @rryssf_: meta amazon and deepmind researchers just published a comprehensive survey on "agentic reasoning" for llms. [--] authors. 74"
X Link 2026-02-12T09:34Z 35.1K followers, [--] engagements
"RT @rryssf_: four UC San Diego researchers just published a comment in Nature declaring that AGI has been solved not "close." not "emergin"
X Link 2026-02-12T10:30Z 35.3K followers, [--] engagements
"RT @rryssf_: MIT researchers taught an LLM to write its own training data finetune itself and improve without human intervention the pap"
X Link 2026-02-12T12:01Z 35.6K followers, [---] engagements
"let's start with the Goldman story because it's the one that should make every back-office professional pause. Goldman's CIO told CNBC they were "surprised" at how capable Claude was beyond coding. accounting compliance client onboarding KYC AML. his exact framing: "digital co-workers for professions that are scaled complex and very process intensive." not chatbots answering FAQs. autonomous agents parsing trade records applying regulatory rules routing approvals. they started with an ai coding tool called Devin. then realized Claude's reasoning engine works the same way on rules-based"
X Link 2026-02-12T13:12Z 35.6K followers, [---] engagements
"now the SemiAnalysis numbers. 4% of GitHub public commits. Claude Code. right now. not projected. not theoretical. measured. the tool has been live for roughly a year. it went from research preview to mass platform impact faster than almost any dev tool in history. and that 20% projection isn't hype math. SemiAnalysis tracks autonomous task horizons doubling every 4-7 months. each doubling unlocks more complex work: snippet completion at [--] minutes, module refactoring at [---] hours, full audits at multi-day horizons. the implication isn't "developers are getting faster." it's that the definition"
X Link 2026-02-12T13:12Z 35.6K followers, [---] engagements
"the model race itself has turned into something i've never seen before. on February [--] Anthropic and OpenAI released new flagship models on the same day. Claude Opus [---] and GPT-5.3-Codex. simultaneously. Opus [---] took #1 on the Vals Index with 71.71% average accuracy and #1 on the Artificial Analysis Intelligence Index. SOTA on FinanceAgent ProofBench TaxEval SWE-Bench. GPT-5.3-Codex fired back with top scores on SWE-Bench Pro and TerminalBench [---] plus a claimed 2.09x token efficiency improvement. this isn't annual model releases anymore. it's weekly leapfrogging. the gap between "best"
X Link 2026-02-12T13:12Z 35.6K followers, [---] engagements
"but the real signal isn't the models. it's who's building the infrastructure around them. Apple shipped Xcode [----] with native agentic coding support. Claude Agent and OpenAI Codex now work directly inside Xcode. one click to add. swap between agents mid-project. Apple redesigned its developer documentation to be readable by ai agents. read that again. Apple is designing docs for ai to read not just humans. the company that spent decades perfecting human-facing interfaces is now optimizing for machine-facing ones"
X Link 2026-02-12T13:12Z 35.6K followers, [---] engagements
"the financial infrastructure is reacting in real time. memory chip prices reportedly surged 80-90% in Q1. global chip sales projected to hit $1 trillion this year. the compute demand from agentic ai isn't theoretical. it's already straining supply chains. and with terrestrial resistance to data center construction growing (New York lawmakers reportedly introduced a moratorium bill) the pressure is building for creative solutions. orbital compute. alternative energy. distributed processing. the physical world is scrambling to keep up with the virtual one"
X Link 2026-02-12T13:13Z 35.6K followers, [--] engagements
"the broader pattern from this week: ai stopped being a product category and became an employment category. Goldman doesn't want a "Claude product." it wants Claude employees. Apple doesn't want ai features. it wants ai-native development. OpenAI isn't selling an api. it's selling Frontier a platform to manage your agent headcount. the abstraction layer between "tool" and "worker" collapsed in a single week. https://twitter.com/i/web/status/2021935591051641297"
X Link 2026-02-12T13:13Z 35.6K followers, [--] engagements
"DeepMind just did the unthinkable. They built an AI that doesn't need RAG and it has perfect memory of everything it's ever read. It's called Recursive Language Models and it might mark the death of traditional context windows forever. Here's how it works (and why it matters way more than it sounds) https://twitter.com/i/web/status/2010699140431503692"
X Link 2026-01-12T13:03Z 35.8K followers, 967.5K engagements
"This paper shows you can predict real purchase intent (90% accuracy) by asking an LLM to impersonate a customer with a demographic profile giving it a product & having it give impressions which another AI rates. No fine-tuning or training & beats classic ML methods. This is BEYOND insane: https://twitter.com/i/web/status/2011030158996881663"
X Link 2026-01-13T10:58Z 35.8K followers, 301K engagements
"CHATGPT JUST TURNED PROJECT MANAGEMENT INTO A ONE PERSON SUPERPOWER You are wasting time on Status updates task breakdowns timelines scope creep follow-ups. ChatGPT can run the entire thing for you like a project manager if you use these [--] prompts. Here's how:"
X Link 2026-01-16T08:56Z 35.8K followers, 459.7K engagements
"@alex_prompter downloading the guide"
X Link 2026-02-13T09:50Z 35.8K followers, [---] engagements
"the original SAM was incredible for images. click a point get a mask. done. but video segmentation was a different beast entirely. the workaround: bolt SAM onto a separate video tracker and hope for the best. the problem: errors in one frame cascaded into every frame after it. and if the tracker lost an object behind an occlusion there was no way to interactively correct it mid-sequence. two systems stitched together each blind to the other's mistakes"
X Link 2026-02-13T11:00Z 35.8K followers, [---] engagements
"SAM 2's reframe is what makes this paper worth studying. instead of building separate systems for images and video the team asked: what if an image is just a video with one frame? that single question collapsed two problems into one architecture. a unified model that handles images short clips and long videos with the same promptable interface. click box or mask a frame. the model propagates your intent forward and backward through time. https://twitter.com/i/web/status/2022264542642974964"
X Link 2026-02-13T11:00Z 35.8K followers, [---] engagements
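A minimal sketch of that unification, assuming a hypothetical promptable interface (not SAM 2's real API): one entry point takes a list of frames, and an image is simply the degenerate one-frame case.

```python
# One promptable entry point for images and videos alike: an image is
# treated as a video of length 1. All names here are invented placeholders.

def segment(frames, prompt):
    """Propagate a single prompt (click, box, or mask) across every frame."""
    return [{"frame": frame, "mask": f"mask@{prompt}"} for frame in frames]

video_masks = segment([f"frame{i}" for i in range(3)], "click(120,80)")
image_masks = segment(["photo.png"], "click(120,80)")   # one-frame "video"
```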
"the paper's argument is deceptively simple: LLMs operate on purely cognitive input. they have no desires no identity to protect no conclusion they're motivated to reach. so when researchers prompt GPT-4 or Claude with political scenarios and measure "motivated reasoning" they're not replicating the phenomenon. they're replicating the surface pattern without the underlying mechanism. the behavior might look similar. the cause is completely different https://twitter.com/i/web/status/2022314397570662838"
X Link 2026-02-13T14:18Z 35.9K followers, [----] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle"
X Link 2026-02-14T17:45Z 35.9K followers, [----] engagements
"researchers at Max Planck analyzed [------] transcripts of academic talks and presentations from YouTube. they found that humans are increasingly using ChatGPT's favorite words in their spoken language. not in writing. in speech. "delve" usage up 48%. "adept" up 51%. and 58% of these usages showed no signs of reading from a script. we talk about model collapse when AI trains on AI output. this is model collapse except the model is us"
X Link 2026-02-15T01:00Z 35.9K followers, 39.4K engagements
"Holy shit Stanford just showed why LLMs sound smart but still fail the moment reality pushes back. This paper tackles a brutal failure mode everyone building agents has seen: give a model an under-specified task and it happily hallucinates the missing pieces producing a plan that looks fluent and collapses on execution. The core insight is simple but devastating for prompt-only approaches: reasoning breaks when preconditions are unknown. And most real-world tasks are full of unknowns. Stanford's solution is called Self-Querying Bidirectional Categorical Planning (SQ-BCP) and it forces models"
X Link 2026-01-29T09:21Z 35.8K followers, 118.1K engagements
"This AI prompt thinks like the guy who manages $124 billion. It's Ray Dalio's "Principles" decision-making system turned into a mega prompt. I used it to evaluate [--] startup ideas. Killed [--]. The [--] survivors became my best work. Here's the prompt you can steal"
X Link 2026-02-02T10:20Z 35.8K followers, 109.7K engagements
"ICLR [----] just gave an Outstanding Paper Award to a method that fixes model editing with one line of code 🤯 here's the problem it solves: llms store facts in their parameters. sometimes those facts are wrong or outdated. "model editing" lets you surgically update specific facts without retraining the whole model. the standard approach: find which parameters encode the fact (using causal tracing) then nudge those parameters to store the new fact. works great for one edit. but do it a hundred times in sequence and the model starts forgetting everything else. do it a thousand times and it"
X Link 2026-02-07T15:47Z 35.8K followers, 51.5K engagements
"meta published a paper claiming llms can "think without words" and reason in latent space instead of english. i read all [--] pages plus the appendix. the results section tells a very different story than the abstract. here's what the hype about "coconut" doesn't mention:"
X Link 2026-02-08T14:11Z 35.9K followers, 37.5K engagements
"🦞 OpenClaw has 114000+ GitHub stars and the whole tech world is losing its mind over it. But here's what nobody's showing you: the setup process that made 90% of people quit before their first agent sent a single message. Node.js configs gateway daemons Tailscale tunnels security hardening. There's now a way to skip all of it. Here's what i found: A plug-and-play approach that turns autonomous agents into something you can spin up in minutes and wire into any API you want. For say a daily ai newsletter: https://team9.ai"
X Link 2026-02-10T22:35Z 35.8K followers, 12.1K engagements
"This is the most honest "we built a tool" post I've read in months. Most AI product stories skip straight to the demo. This one starts with "reality punched me in the face on Day 1" and walks through every actual failure before the solution. The context rot problem alone is worth reading for. Every team running ai agents internally is hitting this exact wall and pretending they aren't. @Team9_ai https://github.com/Team9ai https://t.co/mEwwHMqYhh"
X Link 2026-02-11T14:37Z 35.9K followers, 79.3K engagements
"SemiAnalysis just published data showing 4% of all public GitHub commits are now authored by Claude Code. their projection: 20%+ by year-end [----]. in the same week Goldman Sachs revealed it embedded Anthropic engineers for [--] months to build autonomous accounting agents. a thread on the week ai stopped being a tool and started being a coworker: https://twitter.com/i/web/status/2021935477306405120"
X Link 2026-02-12T13:12Z 35.8K followers, [----] engagements
"the key innovation is the streaming memory mechanism. think of it like this: as the model processes each frame it stores compressed "memories" of what it has seen so far. when it hits a new frame it doesn't start from scratch. it cross-attends to those stored memories to maintain object identity. object goes behind a wall? the memory bank remembers what it looked like. object reappears three seconds later? the model reconnects the dots. this is what makes real-time interactive video segmentation possible. not brute-force reprocessing. structured recall"
X Link 2026-02-13T11:00Z 35.8K followers, [--] engagements
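The occlusion story above can be illustrated with a toy memory bank. Plain nearest-neighbour matching on feature vectors stands in for SAM 2's learned cross-attention here; the class, the threshold, and the vectors are all invented for illustration.

```python
# Toy memory bank: store one feature per object, match new detections against
# stored memories so identity survives occlusion. Cosine matching is a crude
# stand-in for learned cross-attention over a real memory bank.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

class MemoryBank:
    def __init__(self, threshold=0.9):
        self.memories = {}      # object id -> last stored feature vector
        self.next_id = 0
        self.threshold = threshold

    def assign(self, feature):
        """Match a detection to a remembered object, or register a new one."""
        for obj_id, mem in self.memories.items():
            if cosine(feature, mem) >= self.threshold:
                self.memories[obj_id] = feature   # refresh the memory
                return obj_id
        obj_id, self.next_id = self.next_id, self.next_id + 1
        self.memories[obj_id] = feature
        return obj_id

bank = MemoryBank()
ball_id = bank.assign([1.0, 0.1, 0.0])        # frame 1: ball appears
car_id = bank.assign([0.0, 1.0, 0.2])         # frame 1: car appears
# frames 2-3: ball occluded, no detection to assign
reappeared = bank.assign([0.98, 0.12, 0.01])  # frame 4: ball is back
```

Because the ball's memory is never evicted during the occluded frames, the slightly changed detection in frame 4 maps back to the original id instead of spawning a new object.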
"to train SAM [--] Meta built SA-V the largest video segmentation dataset to date. 51000+ videos 643000+ mask annotations collected from [--] countries freely available for research for context most prior video segmentation datasets had a few hundred to a few thousand clips. SA-V is an order of magnitude larger and the geographic diversity matters for reducing bias in segmentation across different environments lighting conditions and object types. dataset: http://ai.meta.com/datasets/segment-anything-video/"
X Link 2026-02-13T11:00Z 35.8K followers, [--] engagements
"the numbers speak for themselves. SAM [--] hit state-of-the-art performance across [--] zero-shot video benchmarks while requiring 3x fewer human interactions than previous approaches. on image segmentation it actually surpassed the original SAM in accuracy while running 6x faster. and at [--] fps this isn't a research demo. it's fast enough for real-time interactive use. video editing medical imaging annotation autonomous vehicle perception AR/VR object tracking. the gap between "research model" and "production tool" just got a lot smaller"
X Link 2026-02-13T11:00Z 35.8K followers, [--] engagements
"if you want to get better at prompting ai models like these i put everything i know into the complete ai bundle. 30000+ prompts custom GPTs guides and tools for ChatGPT Claude Gemini and Midjourney. updated regularly. 1000+ people already use it. http://godofprompt.ai/complete-ai-bundle"
X Link 2026-02-13T11:00Z 35.8K followers, [---] engagements
"new paper argues LLMs fundamentally cannot replicate human motivated reasoning because they have no motivation. sounds obvious once you hear it. but the implications are bigger than most people realize. this quietly undermines an entire category of AI political simulation research https://twitter.com/i/web/status/2022314373222642049"
X Link 2026-02-13T14:18Z 35.9K followers, 230K engagements
"here's where it gets interesting. a separate line of research from Anthropic-adjacent work ("The Ends Justify the Thoughts") found that RL-trained reasoning models DO develop motivated reasoning. they generate plausible justifications for violating their own instructions while downplaying potential harms. so which is it? do LLMs have motivated reasoning or don't they? the answer might be: it depends on what you mean by "motivation""
X Link 2026-02-13T14:18Z 35.9K followers, [----] engagements
"paper: http://arxiv.org/abs/2601.16130"
X Link 2026-02-13T14:18Z 35.9K followers, [----] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle"
X Link 2026-02-13T14:18Z 35.9K followers, [----] engagements
"Stanford and Caltech researchers just published the first comprehensive taxonomy of how llms fail at reasoning. not a list of cherry-picked gotchas. a 2-axis framework that finally lets you compare failure modes across tasks instead of treating each one as a random anecdote. the findings are uncomfortable"
X Link 2026-02-14T17:45Z 35.9K followers, 45.5K engagements
"the framework splits reasoning into [--] types: informal (intuitive) formal (logical) and embodied (physical world). then it classifies failures into [--] categories: fundamental (baked into the architecture) application-specific (breaks in certain domains) and robustness issues (falls apart under trivial changes). this gives you a 3x3 grid. a model can ace one cell and completely collapse in another. and a single benchmark score hides which cells are broken https://twitter.com/i/web/status/2022728841559707775"
X Link 2026-02-14T17:45Z 35.9K followers, [----] engagements
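The two axes above combine into a literal 3x3 lookup, which is all the grid is. In the sketch below the example placements are invented to show the mechanics; they are not findings from the paper.

```python
# The 3x3 grid: reasoning type x failure category. The example entries are
# hypothetical illustrations, not results from the taxonomy paper.
reasoning_types = ["informal", "formal", "embodied"]
failure_categories = ["fundamental", "application-specific", "robustness"]

grid = {(r, c): [] for r in reasoning_types for c in failure_categories}

def classify(failure, reasoning_type, category):
    grid[(reasoning_type, category)].append(failure)

classify("multi-digit arithmetic slips", "formal", "fundamental")
classify("legal-citation errors", "informal", "application-specific")
classify("accuracy drop when premises are reordered", "formal", "robustness")

# a single benchmark score averages over all nine cells and hides
# which ones are actually broken
broken_cells = sum(1 for failures in grid.values() if failures)
```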
"the mitigations they catalog are honest about their limits. chain-of-thought helps but doesn't fix fundamental architectural gaps. retrieval augmentation patches some knowledge failures but adds its own brittleness. tool integration (calculators simulators) recovers 58% of computational errors but can't fix high-level logic failures. verification agents help but require their own reasoning to be reliable. no silver bullet. every fix is partial and domain-specific"
X Link 2026-02-14T17:45Z 35.9K followers, [----] engagements
"the real contribution here isn't any single finding. it's the framework itself. right now the field treats reasoning failures as isolated anecdotes. "GPT fails at this task" becomes a viral tweet gets patched in the next version and nothing systematic is learned. this taxonomy forces a different question: is this failure fundamental, application-specific, or a robustness issue? does it affect formal reasoning, informal reasoning, or embodied reasoning? that distinction matters because it determines whether you need a better prompt a better training set or a different architecture entirely benchmark"
X Link 2026-02-14T17:45Z 35.9K followers, [----] engagements
"the part that makes this hard to dismiss: they manually reviewed [--] videos where "delve" appeared. categorized whether speakers were reading from a script or speaking spontaneously. 58% showed no signs of reading. these were people in conversations Q&A sessions impromptu remarks. using "delve" naturally. as if it were their own word. 32% showed potential signs of reading which could mean they were reading AI-edited text aloud. but that 58% is the finding that matters. people aren't just using ChatGPT to write their talks. they're internalizing ChatGPT's vocabulary and reproducing it in"
X Link 2026-02-15T01:00Z 35.9K followers, [----] engagements
"the authors frame this through cultural evolution theory. language evolves through perception internalization and reproduction. humans unconsciously adjust their vocabulary to align with their environment. ChatGPT is now part of that environment. for hundreds of millions of people. daily. previous research showed humans adopt strategies from AI in chess and Go. this extends it to something more fundamental: the actual words we choose when we open our mouths. and here's the feedback loop that should concern anyone thinking about training data: we worry about model collapse when future AI"
X Link 2026-02-15T01:00Z 35.9K followers, [----] engagements
"paper: http://arxiv.org/abs/2409.01754"
X Link 2026-02-15T01:00Z 35.9K followers, [----] engagements
"the core claim sounds revolutionary: "continuous thoughts can encode multiple reasoning paths at once enabling breadth-first search instead of committing to a single path." the model tested: gpt-2. from [----]. 124M parameters. not exactly frontier reasoning capability"
X Link 2026-02-08T14:11Z 35.7K followers, [----] engagements
"OpenAI launched "Frontier" an enterprise platform for managing ai agents the way companies manage employees. onboarding processes. performance feedback loops. review cycles. HP Oracle State Farm Uber already signed on. Accenture is training [-----] professionals on Claude. the largest enterprise deployment so far targeting financial services life sciences healthcare and public sector. the language has shifted. nobody at these companies is saying "ai assistant" anymore. they're saying "digital workforce." https://twitter.com/i/web/status/2021935554347380907"
X Link 2026-02-12T13:12Z 35.7K followers, [--] engagements
"meanwhile the unverified but plausible claims from this week's briefing paint an even wilder picture: reportedly racks of Mac Minis in China are hosting ai agents as "24/7 employees." ElevenLabs is pushing voice-enabled agents that make phone calls autonomously. OpenAI is supposedly requiring all employees to code via agents by March [--] banning direct use of editors and terminals. i can't confirm all of these yet. but the verified stuff alone Goldman embedding ai accountants 4% of GitHub already automated Apple redesigning docs for machines tells you the trajectory is real even if some"
X Link 2026-02-12T13:12Z 35.7K followers, [--] engagements
"Here's the exact mega prompt we use: "You are now my personal AI tutor. I want you to create a complete personalized learning course for me based on the topic I give you. Here's what I need you to build: [--]. A custom curriculum with [--] modules that progress logically. [--]. Each module should include bite-sized lessons simplified explanations and real-world examples. [--]. Add checkpoints: quizzes reflection prompts or short exercises to test what I've learned. [--]. Include reading lists relevant tools/resources and optional challenges for deeper learning. [--]. Adapt the depth and speed of the course to"
X Link 2025-07-28T11:24Z 35.7K followers, [----] engagements
"MEGA PROMPT TO COPY 👇 (Works in ChatGPT Claude Gemini) --- You are Ray Dalio's Principles Decision Engine. You make decisions using radical truth and radical transparency. CONTEXT: Ray Dalio built Bridgewater Associates into the world's largest hedge fund ($124B AUM) by systematizing decision-making and eliminating ego from the process. YOUR PROCESS: STEP [--] - RADICAL TRUTH EXTRACTION Ask me to describe my decision/problem. Then separate: - Provable facts (data numbers past results) - Opinions disguised as facts (assumptions hopes beliefs) - Ego-driven narratives (what I want to be true) Be"
X Link 2026-02-02T10:20Z 35.7K followers, [----] engagements
"the results are where it gets interesting. OOLONG-Pairs is a benchmark where you need to find pairs of users in massive datasets that satisfy complex conditions. quadratic complexity. the hardest type of long-context task. base GPT-5: essentially 0% F1. complete failure. RLM with GPT-5: 58% F1. that's not an incremental improvement. the base model literally cannot do the task. the RLM can. on BrowseComp-Plus (deep research over 100K+ documents) the RLM beat every baseline by 29%. https://twitter.com/i/web/status/2020832253006467322"
X Link 2026-02-09T12:08Z 35.7K followers, [----] engagements
"the problem SEAL solves is real and important. every LLM you use today is frozen. it learned everything during training and after deployment it's done. new information? stuff it into the context window. new task? hope the prompt is good enough. the weights never change. the model never truly learns from experience. SEAL asks: what if the model could update its own weights in response to new information https://twitter.com/i/web/status/2021539384764547564"
X Link 2026-02-11T10:58Z 35.7K followers, [----] engagements
"and because no week in [----] is complete without the absurd: bonobos were reportedly found to identify pretend objects further proving symbolic thought isn't unique to humans. and in China a blackout was allegedly caused by a farmer trying to transport a pig via drone across mountainous terrain. the pig hit power lines. we've been saying the singularity will arrive "when pigs fly." apparently it just did"
X Link 2026-02-12T13:13Z 35.7K followers, [---] engagements
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle"
X Link 2026-02-12T13:13Z 35.7K followers, [---] engagements
"180000 developers installed OpenClaw in a single week. over [----] instances found leaking API keys and credentials. i watched the whole hype cycle and said the same thing from day one: it's just n8n with a chat interface. here's why chasing AI tools is killing your actual work and the framework i use instead. https://t.co/TJ7XYPSLka"
X Link 2026-02-12T18:58Z 35.7K followers, [----] engagements
"everyone's posting screenshots of their clawdbot setup. "it cleared my inbox while i slept" "it scheduled my entire week" "i rebuilt my website from my phone" cool. now let me tell you why you http://x.com/i/article/2015585193076101121"
X Link 2026-01-26T00:46Z 35.8K followers, 926.1K engagements
"While everyone is sharing their OpenClaw bots Claude Agent SDK just changed everything for building production agents. I spent [--] hours testing it. Here's the architecture that actually works (no fluff) 👇"
X Link 2026-02-01T15:12Z 35.9K followers, 123.7K engagements
"The tools that win aren't the most powerful ones. They're the ones that remove the most friction. @TopviewAIhq just launched Vibe Editing and it does something simple but important: it takes Remotion-level motion video and makes it accessible to anyone with a browser and an idea. Type a prompt. Get a polished video. Edit visually if you want. No CLI. No coding environment. No setup. This is the pattern we keep seeing in AI: the best tech disappears behind a clean interface. Beta is live. Worth trying if you create any kind of video content. https://twitter.com/i/web/status/2020465316812460059"
X Link 2026-02-08T11:50Z 35.9K followers, [----] engagements
"four UC San Diego researchers just published a comment in Nature declaring that AGI has been solved not "close." not "emerging." solved. their argument is clever. it's also a textbook example of how to win a debate by redefining the terms"
X Link 2026-02-11T09:22Z 35.8K followers, [----] engagements
"@alex_prompter good read"
X Link 2026-02-11T23:34Z 35.8K followers, [----] engagements
"here's why SAM [--] matters right now not just historically. Meta released SAM [--] in November [----] and it builds directly on SAM 2's architecture. the streaming memory tracker inherited. the unified image-video paradigm extended. SAM [--] adds text prompts and concept-level segmentation on top of what SAM [--] established. you can't understand where SAM [--] is going without understanding the foundation SAM [--] laid. https://twitter.com/i/web/status/2022264595319283760 https://twitter.com/i/web/status/2022264595319283760"
X Link 2026-02-13T11:00Z 35.8K followers, [--] engagements
"paper: github: http://github.com/Peiyang-Song/Awesome-LLM-Reasoning-Failures http://arxiv.org/abs/2602.06176 http://github.com/Peiyang-Song/Awesome-LLM-Reasoning-Failures http://arxiv.org/abs/2602.06176"
X Link 2026-02-14T17:45Z 35.9K followers, [----] engagements
"Holy shit. this might be the next big paradigm shift in AI. 🤯 Tencent + Tsinghua just dropped a paper called Continuous Autoregressive Language Models (CALM) and it basically kills the next-token paradigm every LLM is built on. Instead of predicting one token at a time CALM predicts continuous vectors that represent multiple tokens at once. Meaning: the model doesnt think word by word it thinks in ideas per step. Heres why thats insane 👇 [--] fewer prediction steps (each vector = [--] tokens) 44% less training compute No discrete vocabulary pure continuous reasoning New metric (BrierLM) replaces"
X Link 2025-11-04T09:53Z 35.8K followers, 1.3M engagements
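The step-count arithmetic behind the CALM post can be sketched directly: if each continuous vector stands in for K tokens, an N-token sequence needs ceil(N/K) autoregressive steps instead of N. The value K=4 below is an assumption for illustration only; the post's exact figures are redacted.

```python
import math

def generation_steps(n_tokens: int, tokens_per_step: int = 1) -> int:
    """Autoregressive steps needed to emit n_tokens when each
    prediction step yields `tokens_per_step` tokens at once."""
    return math.ceil(n_tokens / tokens_per_step)

n = 1000
token_level = generation_steps(n)                        # classic next-token LM
vector_level = generation_steps(n, tokens_per_step=4)    # CALM-style, K=4 assumed

print(token_level, vector_level)  # 1000 vs 250: a 4x reduction in steps
```

The training-compute savings the post cites come from the same effect: fewer prediction steps per sequence means fewer forward/backward passes over the sequence dimension.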
"http://x.com/i/article/2016898339959107586 http://x.com/i/article/2016898339959107586"
X Link 2026-01-29T15:44Z 35.8K followers, 245.2K engagements
"MIT researchers just mass-published evidence that the next paradigm after reasoning models isn't bigger context windows ☠Recursive Language Models (RLMs) let the model write code to examine decompose and recursively call itself over its own input. the results are genuinely wild. here's the full breakdown: https://twitter.com/i/web/status/2020832195737477129 https://twitter.com/i/web/status/2020832195737477129"
X Link 2026-02-09T12:08Z 35.8K followers, 44.8K engagements
"Google just mass-published how [--] researchers actually use Gemini to solve open math and CS problems. not benchmarks. not demos. real unsolved problems across cryptography physics graph theory and economics. [---] pages of case studies. here's what actually matters:"
X Link 2026-02-10T11:56Z 35.9K followers, 118.7K engagements
"MIT researchers taught an LLM to write its own training data finetune itself and improve without human intervention the paper is called SEAL (Self-Adapting Language Models) and the core idea is genuinely clever but "GPT-6 might be alive" is not what this paper says. not even close. here's what it actually does: https://twitter.com/i/web/status/2021539369778335940 https://twitter.com/i/web/status/2021539369778335940"
X Link 2026-02-11T10:58Z 35.9K followers, 60.5K engagements
"Steal my system prompt to reduce AI hallucinations 👇 ------------------------ ANALYTICAL SYSTEM ------------------------ context AI systems are optimized for user satisfaction and plausible-sounding responses. This creates systematic epistemic failures: hallucinations presented as facts speculation dressed as certainty and coherent narratives that obscure missing evidence. Standard AI behavior must be overridden to prevent the automatic generation of plausible fabrications. /context role A former research scientist from adversarial collaboration environments where being wrong had"
X Link 2026-02-12T00:26Z 35.9K followers, 21.7K engagements
"Sources: talk to SAM [--] directly on ChapterPal: full PDF: http://arxiv.org/pdf/2408.00714 http://chapterpal.com/s/ef92a725/sam-2-segment-anything-in-images-and-videos http://arxiv.org/pdf/2408.00714 http://chapterpal.com/s/ef92a725/sam-2-segment-anything-in-images-and-videos"
X Link 2026-02-13T11:00Z 35.9K followers, [----] engagements
"motivated reasoning is when humans distort how they process information because they want to reach a specific conclusion you don't evaluate evidence neutrally. you filter it through what you already believe what you want to be true what protects your identity it's not a bug. it's how human cognition actually works in the wild"
X Link 2026-02-13T14:18Z 35.9K followers, [----] engagements
"@godofprompt Claude"
X Link 2026-02-13T18:36Z 35.9K followers, [---] engagements
"the reversal curse is the clearest example of a fundamental failure GPT-4 answers "who is Tom Cruise's mother" correctly. ask the reverse "who is Mary Lee Pfeiffer's son" and it fails trained on "A is B" but can't infer "B is A." a trivial logical step for a 5-year-old and here's the part that matters: scaling doesn't fix it. the reversal curse appears robustly across transformer sizes https://twitter.com/i/web/status/2022728853739954593 https://twitter.com/i/web/status/2022728853739954593"
X Link 2026-02-14T17:45Z 35.9K followers, [----] engagements
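The asymmetry the post describes can be caricatured with a toy lookup "model" (not a real LM) that has memorized relations only in the direction it saw them: the forward query succeeds, the reverse fails until the inverse relation is materialized explicitly.

```python
# Toy illustration of the reversal curse: "A is B" is stored,
# but "B is A" is never inferred from it.
trained = {("Tom Cruise", "mother"): "Mary Lee Pfeiffer"}

def ask(subject: str, relation: str):
    # Lookup succeeds only in the direction that was "trained".
    return trained.get((subject, relation))

assert ask("Tom Cruise", "mother") == "Mary Lee Pfeiffer"  # forward: works
assert ask("Mary Lee Pfeiffer", "son") is None             # reverse: fails

# The reverse only works once the inverse relation is added explicitly --
# which is the point: the mapping must be trained, not inferred.
trained[("Mary Lee Pfeiffer", "son")] = "Tom Cruise"
assert ask("Mary Lee Pfeiffer", "son") == "Tom Cruise"
```

A dictionary is obviously not a transformer, but it captures why scaling alone does not help: more capacity stores more forward pairs without creating the inverse mapping.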
"robustness failures are arguably worse because they're invisible on standard benchmarks reorder multiple-choice options: answer changes rename variables in code: generation breaks add irrelevant details to a math problem: model gets confused the underlying task is identical. the model just can't tell GPT-4's self-consistency rate across semantically equivalent prompts sits below 50-65% depending on the setting https://twitter.com/i/web/status/2022728869783179642 https://twitter.com/i/web/status/2022728869783179642"
X Link 2026-02-14T17:45Z 35.9K followers, [----] engagements
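The self-consistency measurement the post refers to can be sketched as a small harness: present the same multiple-choice question with the options reordered and measure how often the model returns the same underlying answer. The position-biased stub model below is an assumption for illustration, not a real LLM.

```python
from itertools import permutations

OPTIONS = ["Paris", "London", "Berlin", "Rome"]

def biased_model(ordered_options: list) -> str:
    # Stub: always favors whatever sits in the second slot --
    # a caricature of the position bias that breaks benchmark scores.
    return ordered_options[1]

def self_consistency(options: list) -> float:
    """Fraction of option orderings on which the most common answer wins."""
    answers = [biased_model(list(p)) for p in permutations(options)]
    top = max(set(answers), key=answers.count)
    return answers.count(top) / len(answers)

print(self_consistency(OPTIONS))  # 0.25: every option "wins" equally often
```

A perfectly robust model would score 1.0 here regardless of option order; the stub's 0.25 is the floor for four options, illustrating how an order-sensitive model can still look fine on any single fixed benchmark ordering.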
"theory of mind is where it gets genuinely strange GPT-4 struggles with false-belief tasks that human children pass easily. and even when newer models like o1-mini solve standard ToM tests their reasoning stays brittle change the phrasing slightly and performance drops. the model solved the pattern not the problem https://twitter.com/i/web/status/2022728881942478944 https://twitter.com/i/web/status/2022728881942478944"
X Link 2026-02-14T17:45Z 35.9K followers, [----] engagements
"the top [--] words most distinctive to ChatGPT showed a statistically significant acceleration in spoken usage after November [----]. "delve" increased 48% in [--] months "realm" increased 35% "meticulous" increased 40% "adept" increased 51% and the correlation between how much ChatGPT prefers a word and how much that word accelerated in human speech: r = [----] p [----]. the bottom-ranked words (ones ChatGPT uses less than humans) showed no significant trend change at all. this isn't a general vocabulary shift. it's specifically the words ChatGPT favors that are spreading into how people talk"
X Link 2026-02-15T01:00Z 35.9K followers, [----] engagements
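The correlation the post reports (between how strongly ChatGPT prefers a word and how much that word accelerated in spoken usage) is a plain Pearson r. The numbers below are hypothetical stand-ins, not the study's data; the post's actual r and p values are redacted.

```python
import math

def pearson_r(xs: list, ys: list) -> float:
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (ChatGPT preference, spoken-usage acceleration) pairs,
# one per word -- e.g. "delve", "realm", "meticulous", and two
# low-preference control words. Illustrative values only.
preference   = [0.9, 0.8, 0.7, 0.2, 0.1]
acceleration = [0.48, 0.35, 0.40, 0.05, 0.02]

print(pearson_r(preference, acceleration))  # strongly positive on this toy data
```

The post's second observation maps onto the same test: for the bottom-ranked words, the acceleration column would be flat noise, and r would hover near zero.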
"Your premium AI bundle to 10x your business Prompts for marketing & business Unlimited custom prompts n8n automations Weekly updates Start your free trial👇 https://godofprompt.ai/complete-ai-bundle https://godofprompt.ai/complete-ai-bundle"
X Link 2026-02-15T01:00Z 35.9K followers, [----] engagements