[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# @omarsar0 (elvis)

elvis posts on X most about meta, banger, $googl, and instead of. They currently have XXXXXXX followers and XXX posts still getting attention, totaling XXXXXXX engagements in the last XX hours.

### Engagements: XXXXXXX [#](/creator/twitter::3448284313/interactions)

- X Week XXXXXXX +73%
- X Month XXXXXXXXX -XX%
- X Months XXXXXXXXXX -XXXX%
- X Year XXXXXXXXXX +137%

### Mentions: XX [#](/creator/twitter::3448284313/posts_active)

- X Week XX +46%
- X Month XXX -XX%
- X Months XXX -XXXX%
- X Year XXXXX +151%

### Followers: XXXXXXX [#](/creator/twitter::3448284313/followers)

- X Week XXXXXXX +0.50%
- X Month XXXXXXX +1.80%
- X Months XXXXXXX +13%
- X Year XXXXXXX +29%

### CreatorRank: XXXXXXX [#](/creator/twitter::3448284313/influencer_rank)

### Social Influence [#](/creator/twitter::3448284313/influence)

---

**Social category influence:** [musicians](/list/musicians) #1610, [technology brands](/list/technology-brands) XXX%, [stocks](/list/stocks) XXXX%

**Social topic influence:** [meta](/topic/meta) 0.48%, [banger](/topic/banger) 0.48%, [$googl](/topic/$googl) 0.48%, [instead of](/topic/instead-of) 0.48%, [deep agents](/topic/deep-agents) 0.48%, [accuracy](/topic/accuracy) #596, [gpu](/topic/gpu) #554, [scales](/topic/scales) #108, [agentic](/topic/agentic) 0.24%, [llama](/topic/llama) XXXX%

**Top accounts mentioned or mentioned by:** [@grok](/creator/undefined) [@dairai](/creator/undefined) [@officiallogank](/creator/undefined) [@coral_protocol](/creator/undefined) [@codewithimanshu](/creator/undefined) [@kybervaul](/creator/undefined) [@kuviacm](/creator/undefined) [@markojak_](/creator/undefined) [@brandgrowthos](/creator/undefined) [@communicating](/creator/undefined) [@adamdittrichone](/creator/undefined) [@trakintelai](/creator/undefined) [@flkxyz](/creator/undefined) [@karpathy](/creator/undefined) [@hardmaru](/creator/undefined) [@mycreditjourney](/creator/undefined) [@iantimotheos](/creator/undefined) [@skirano](/creator/undefined) [@rungalileo](/creator/undefined) [@deshrajdry](/creator/undefined)

**Top assets mentioned:** [Alphabet Inc Class A (GOOGL)](/topic/$googl)

### Top Social Posts [#](/creator/twitter::3448284313/posts)

---

Top posts by engagements in the last XX hours:

"How do you apply effective context engineering for AI agents? Read this if you are an AI dev building AI agents today. Context is king, and it must be engineered, not just prompted. I wrote a few notes after reading through the awesome new context engineering guide from Anthropic: Context Engineering vs. Prompt Engineering - Prompt Engineering = writing and organizing instructions - Context Engineering = curating and maintaining prompts, tools, history, and external data - Context Engineering is iterative, and context is curated regularly Why Context Engineering Matters - Finite attention budget -" [X Link](https://x.com/omarsar0/status/1973848472576274562) [@omarsar0](/creator/x/omarsar0) 2025-10-02T20:32Z 269.1K followers, 50.1K engagements

"It doesn't matter what tools you use for AI Agents. I've put together the ultimate curriculum to learn how to build AI agents. (bookmark it) From context engineering to evaluating, optimizing, and shipping agentic applications" [X Link](https://x.com/omarsar0/status/1973053220462244141) [@omarsar0](/creator/x/omarsar0) 2025-09-30T15:51Z 269.2K followers, 92.6K engagements

"Very cool work from Meta Superintelligence Lab. They are open-sourcing Meta Agents Research Environments (ARE), the platform they use to create and scale agent environments. Great resource to stress-test agents in environments closer to real apps. Read on for more:" [X Link](https://x.com/omarsar0/status/1970147840245879116) [@omarsar0](/creator/x/omarsar0) 2025-09-22T15:27Z 269.3K followers, 151.2K engagements

"@TikhiyVo Yep, I still prefer to use it via the terminal as opposed to the fancy UI" [X Link](https://x.com/omarsar0/status/1978236002100506946) [@omarsar0](/creator/x/omarsar0) 2025-10-14T23:06Z 269.4K followers, 1003 engagements

"LLM News: AI agents continue to advance by combining ideas like self-play, self-improvement, self-evaluation, and search. Other developments: building efficient RAG systems, LMSYS rankings, automating paper writing and reviewing, Claude's prompt caching, and distilling & pruning Llama XXX models. Here's all the latest in LLMs:" [X Link](https://x.com/omarsar0/status/1825618642517766554) [@omarsar0](/creator/x/omarsar0) 2024-08-19T19:39Z 269.5K followers, 26.7K engagements

"Everyone is talking about this new OpenAI paper. It's about why LLMs hallucinate. You might want to bookmark this one. Let's break down the technical details:" [X Link](https://x.com/omarsar0/status/1964443249642447263) [@omarsar0](/creator/x/omarsar0) 2025-09-06T21:39Z 269.5K followers, 460.3K engagements

"As usual, Anthropic just published another banger. This one is on context engineering. Great section on how it is different from prompt engineering. A must-read for AI devs" [X Link](https://x.com/omarsar0/status/1973101118990254366) [@omarsar0](/creator/x/omarsar0) 2025-09-30T19:02Z 269.5K followers, 369.6K engagements

"We are living in the most insane timeline. I just asked Claude Code (with Claude Sonnet 4.5) to develop an MCP Server (end-to-end) that allows me to programmatically create n8n workflows from within Claude Code itself. Took about XX mins" [X Link](https://x.com/omarsar0/status/1973151361975136755) [@omarsar0](/creator/x/omarsar0) 2025-09-30T22:21Z 269.5K followers, 223.2K engagements

"How do you build effective AI Agents? This is a problem I think deeply about with other AI devs and students. Simplicity works well here. I think we can all learn a lot from how Claude Code works. The Claude Agent SDK Loop generalizes the approach to build all kinds of AI agents. I wrote a few notes from Anthropic's recent guide. The loop involves three steps: Gathering Context: Use subagents (parallelize them for task efficiency), compact/maintain context, and leverage agentic/semantic search for retrieving relevant context for the AI agent. Taking Action: Leverage tools, prebuilt MCP servers" [X Link](https://x.com/omarsar0/status/1973488599283929210) [@omarsar0](/creator/x/omarsar0) 2025-10-01T20:42Z 269.5K followers, 145.2K engagements

"Cool research paper from Google. This is what clever context engineering looks like. It proposes Tool-Use-Mixture (TUMIX), leveraging diverse tool-use strategies to improve reasoning. This work shows how to get better reasoning from LLMs by running a bunch of diverse agents (text-only, code, search, etc.) in parallel and letting them share notes across a few rounds. Instead of brute-forcing more samples, it mixes strategies, stops when confident, and ends up both more accurate and cheaper. Mix different agents, not just more of one: They ran XX different agent styles (CoT, code execution, web search" [X Link](https://x.com/omarsar0/status/1974106927287447725) [@omarsar0](/creator/x/omarsar0) 2025-10-03T13:39Z 269.5K followers, 82.2K engagements

"Agentic Context Engineering Great paper on agentic context engineering. The recipe: Treat your system prompts and agent memory as a living playbook. Log trajectories, reflect to extract actionable bullets (strategies, tool schemas, failure modes), then merge as append-only deltas with periodic semantic de-dupe. Use execution signals and unit tests as supervision. Start offline to warm up a seed playbook, then continue online to self-improve. On AppWorld, ACE consistently beats strong baselines in both offline and online adaptation. Example: ReAct+ACE (offline) lifts average score to XXXX% vs" [X Link](https://x.com/omarsar0/status/1976746822204113072) [@omarsar0](/creator/x/omarsar0) 2025-10-10T20:29Z 269.5K followers, 82.7K engagements

"Great recap of security risks associated with LLM-based agents. The literature keeps growing, but these are key papers worth reading. Analysis of 150+ papers finds that there is a shift from monolithic to planner-executor and multi-agent architectures. Multi-agent security is a largely underexplored space for devs. Issues range from LLM-to-LLM prompt infection to spoofing, trust delegation, and collusion" [X Link](https://x.com/omarsar0/status/1977073477309043023) [@omarsar0](/creator/x/omarsar0) 2025-10-11T18:07Z 269.5K followers, 34.3K engagements

"Memory is key to effective AI agents, but it's hard to get right. Google presents memory-aware test-time scaling for improving self-evolving agents. It outperforms other memory mechanisms by leveraging structured and adaptable memory. Technical highlights:" [X Link](https://x.com/omarsar0/status/1977404165916930181) [@omarsar0](/creator/x/omarsar0) 2025-10-12T16:01Z 269.5K followers, 27.3K engagements

"Is your LLM-based multi-agent system actually coordinating? That's the question behind this paper. They use information theory to tell the difference between a pile of chatbots and a true collective intelligence. They introduce a clean measurement loop. First, test if the group's overall output predicts future outcomes better than any single agent. If yes, there is synergy: information that only exists at the collective level. Next, decompose that information using partial information decomposition. This splits what's shared, unique, or synergistic between agents. Real emergence shows up as synergy not" [X Link](https://x.com/omarsar0/status/1977784668323008641) [@omarsar0](/creator/x/omarsar0) 2025-10-13T17:13Z 269.5K followers, 23.6K engagements

"Great to see n8n finally shipping their AI workflow builder. Now you can build AI agents and automation with natural language within n8n. This is a game-changer for n8n agent builders. I've been building my own n8n workflow builder with Claude Code. Can't wait to share more" [X Link](https://x.com/omarsar0/status/1977850452982038766) [@omarsar0](/creator/x/omarsar0) 2025-10-13T21:34Z 269.5K followers, 105.3K engagements

"Why does RL work for enhancing agentic reasoning? This paper studies what actually works when using RL to improve tool-using LLM agents across three axes: data, algorithm, and reasoning mode. Instead of chasing bigger models or fancy algorithms, the authors find that real, diverse data and a few smart RL tweaks make the biggest difference -- even for small models. My X key takeaways from the paper:" [X Link](https://x.com/omarsar0/status/1978112328974692692) [@omarsar0](/creator/x/omarsar0) 2025-10-14T14:55Z 269.5K followers, 30.8K engagements

"4. Don't kill the entropy. Too little exploration and the model stops learning; too much and it becomes unstable. Finding just the right clip range depends on model size; small models need more room to explore" [X Link](https://x.com/omarsar0/status/1978112377506996626) [@omarsar0](/creator/x/omarsar0) 2025-10-14T14:55Z 269.4K followers, 1274 engagements

"5. Slow, thoughtful agents win. Agents that plan before acting (fewer but smarter tool calls) outperform reactive ones that constantly rush to use tools. The best ones pause, think internally, then act once with high precision" [X Link](https://x.com/omarsar0/status/1978112389209026920) [@omarsar0](/creator/x/omarsar0) 2025-10-14T14:55Z 269.5K followers, 1863 engagements

"Most agents today are shallow. They easily break down on long multi-step problems (e.g. deep research or agentic coding). That's changing fast. We're entering the era of "Deep Agents": systems that strategically plan, remember, and delegate intelligently to solve very complex problems. We at @dair_ai and other folks from LangChain and Claude Code, as well as more recently individuals like Philipp Schmid, have been documenting this idea. Here's roughly the core idea behind Deep Agents (based on my own thoughts and notes that I've gathered from others): // Planning // Instead of reasoning ad-hoc inside a" [X Link](https://x.com/omarsar0/status/1978175740832284782) [@omarsar0](/creator/x/omarsar0) 2025-10-14T19:07Z 269.5K followers, 41.8K engagements

"Claude Code subagents are all you need. Some will complain about the # of tokens. However, the output this spits out will save you days. The code quality is mind-blowing. Agentic search works exceptionally well. The subagents run in parallel. ChatGPT's deep research is no match" [X Link](https://x.com/omarsar0/status/1978235329237668214) [@omarsar0](/creator/x/omarsar0) 2025-10-14T23:03Z 269.5K followers, 59K engagements

"Dr.LLM: Dynamic Layer Routing in LLMs Neat technique to reduce computation in LLMs while improving accuracy. Routers increase accuracy while reducing layers by roughly X to XX per query. My notes below:" [X Link](https://x.com/omarsar0/status/1978829550709866766) [@omarsar0](/creator/x/omarsar0) 2025-10-16T14:25Z 269.5K followers, 21.4K engagements

"Banger paper from Meta and collaborators. This paper is one of the best deep dives yet on how reinforcement learning (RL) actually scales for LLMs. The team ran over 400,000 GPU hours of experiments to find a predictable scaling pattern and a stable recipe (ScaleRL) that consistently works as you scale up compute. Think of it as a practical guide for anyone trying to train reasoning or alignment models with RL. More on why this is a big deal:" [X Link](https://x.com/omarsar0/status/1978865039529689257) [@omarsar0](/creator/x/omarsar0) 2025-10-16T16:46Z 269.5K followers, 31.8K engagements

"2. The ScaleRL recipe that just works. The authors tested dozens of RL variations and found one that scales cleanly to 100k GPU hours without blowing up: - PipelineRL (8 pipelines) with CISPO loss (a stabilized REINFORCE variant). - Prompt-level averaging and batch-level normalization to reduce variance. - FP32 logits for better stability and higher final accuracy. - No-Positive-Resampling curriculum to avoid reward hacking. - Forced interruptions (stopping long thoughts) instead of punishing long completions. - This combo, called ScaleRL, hit the best trade-off between stability sample" [X Link](https://x.com/omarsar0/status/1978865070303232267) [@omarsar0](/creator/x/omarsar0) 2025-10-16T16:46Z 269.5K followers, 1635 engagements

"3. What actually matters for better RL results. Not every trick helps equally: - Loss choice and precision matter most; CISPO + FP32 logits boosted final pass rates from XX% to 61%. - Normalization, aggregation, and curriculum mainly affect how fast you improve (efficiency), not how far you can go. - Fancy variants like GRPO, DAPO, or Magistral didn't beat ScaleRL once scaled properly" [X Link](https://x.com/omarsar0/status/1978865085939654692) [@omarsar0](/creator/x/omarsar0) 2025-10-16T16:46Z 269.5K followers, 1358 engagements

"I am not going to lie. I see a lot of potential in the Skills feature that Anthropic just dropped. Just tested with Claude Code. It leads to sharper and more precise outputs. It's structured context engineering to power CC with specialized capabilities leveraging the filesystem" [X Link](https://x.com/omarsar0/status/1978919087137804567) [@omarsar0](/creator/x/omarsar0) 2025-10-16T20:20Z 269.5K followers, 62.4K engagements

"An easy way to try Skills in Claude Code is by asking it to help you build one. I am surprised by how aware it is of Skills and how to build comprehensive ones" [X Link](https://x.com/omarsar0/status/1978923010854646142) [@omarsar0](/creator/x/omarsar0) 2025-10-16T20:36Z 269.5K followers, 3614 engagements

"This is also neat. To help deal with context rot or context collapse, Skills uses a neat tiered system (3 levels) to help Claude Code load context efficiently and only when it needs it. Don't sleep on agentic search" [X Link](https://x.com/omarsar0/status/1978925302018347057) [@omarsar0](/creator/x/omarsar0) 2025-10-16T20:45Z 269.5K followers, 3818 engagements

"LLMs can get "Brain Rot". Continual pretraining on junk, high-engagement web text causes lasting "cognitive decline" in LLMs, reducing reasoning, long-context, and safety performance. The main failure mode is thought-skipping, where models skip reasoning steps and adopt dark personality traits like narcissism and low agreeableness. Even strong mitigations such as reflection or further fine-tuning only partially reverse the damage, making data curation a critical safety concern for AI training" [X Link](https://x.com/omarsar0/status/1979217719082774873) [@omarsar0](/creator/x/omarsar0) 2025-10-17T16:07Z 269.5K followers, 152.6K engagements

"Don't sleep on Skills. Skills is easily one of the most effective ways to steer Claude Code. Impressive for optimization. I built a skill inside of Claude Code that automatically builds, tests, and optimizes MCP tools. It runs in a loop, loading context and tools (bash scripts) efficiently to test and optimize MCP tools based on best practices, implementation, and outputs. Heck, you could even run MCP tools within it if you like, but that wasn't what I needed here. One of the most impressive aspects of using Claude Code with Skills is the efficient token usage. The context tiering system is a" [X Link](https://x.com/omarsar0/status/1979242073372164306) [@omarsar0](/creator/x/omarsar0) 2025-10-17T17:44Z 269.5K followers, 77K engagements

"So I wrote down a better-formatted version of my post on Deep Agents. I added it to the AI Agents section of the Prompt Engineering Guide. If you are building with AI Agents, this is a must-read. I also added links to other useful references. promptingguide .ai/agents" [X Link](https://x.com/omarsar0/status/1979570754317352968) [@omarsar0](/creator/x/omarsar0) 2025-10-18T15:30Z 269.5K followers, 15.6K engagements
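The banner at the top notes that authenticated requests unlock the full, unscrambled data behind the masked figures above. A minimal sketch of how such a request could be built, and how the report's period-over-period deltas (e.g. the +73% one-week engagement change) are computed; the endpoint path and response shape here are assumptions modeled on this page's `/creator/twitter::<id>/...` links, not taken from official documentation:

```python
import urllib.request

# Assumed base URL; see https://lunarcrush.ai/auth for the real routes and auth details.
API_BASE = "https://lunarcrush.com/api4/public"

def creator_request(network: str, creator_id: str, api_key: str) -> urllib.request.Request:
    """Build an authenticated request for a creator's full metrics.

    The path below is a hypothetical route modeled on the /creator/... links
    in this report; consult the official API docs before using it.
    """
    url = f"{API_BASE}/creator/{network}/{creator_id}/v1"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})

def fmt_delta(current: float, previous: float) -> str:
    """Render a period-over-period change the way the report does (e.g. '+73%')."""
    pct = (current - previous) / previous * 100
    return f"{pct:+.0f}%"
```

For example, `fmt_delta(173_000, 100_000)` yields `"+73%"`, matching the one-week engagements delta format; the request object would then be passed to `urllib.request.urlopen` (or any HTTP client) to fetch the JSON.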