[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]
Elvis posts on X about $GOOGL, OpenAI, and builders, among other topics. They currently have XXXXXXX followers and 1751 posts still getting attention, totaling XXXXXX engagements in the last XX hours.
Social category influence: musicians #2274, technology, brands, stocks
Social topic influence: $googl #3839, open ai, builders, instead of #570, claude #200, banger, devs, work for, productivity, meta
Top assets mentioned: Alphabet Inc Class A (GOOGL)
Top posts by engagements in the last XX hours
"Cool research paper from Google. This is what clever context engineering looks like. It proposes Tool-Use-Mixture (TUMIX), leveraging diverse tool-use strategies to improve reasoning. This work shows how to get better reasoning from LLMs by running a bunch of diverse agents (text-only, code, search, etc.) in parallel and letting them share notes across a few rounds. Instead of brute-forcing more samples, it mixes strategies, stops when confident, and ends up both more accurate and cheaper. Mix different agents, not just more of one: They ran XX different agent styles (CoT code execution web search
X Link @omarsar0 2025-10-03T13:39Z 268.7K followers, 81.9K engagements
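The post above describes TUMIX's core mechanism: run diverse agent styles on the same question, let them share notes across a few rounds, and stop early once answers converge. A minimal sketch of that loop, assuming stub agent callables and simple majority-vote confidence (all names here are hypothetical, not the paper's actual implementation):

```python
from collections import Counter

def run_round(agents, question, notes):
    """Each agent answers, seeing the shared notes from prior rounds."""
    return {name: fn(question, notes) for name, fn in agents.items()}

def tool_use_mixture(agents, question, max_rounds=3, confidence=0.75):
    """Mix diverse agent styles: run a round, share answers as notes,
    and stop early once a large enough majority agrees."""
    notes = []
    for _ in range(max_rounds):
        answers = run_round(agents, question, notes)
        best, count = Counter(answers.values()).most_common(1)[0]
        if count / len(agents) >= confidence:
            return best  # confident enough: stop instead of sampling more
        notes = [f"{name}: {ans}" for name, ans in answers.items()]
    return best  # fall back to the final majority answer
```

Here each agent is any callable taking the question and the shared notes; the real system would dispatch LLM calls with different tool configurations in parallel.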
"How do you build effective AI Agents This is a problem I think deeply about with other AI devs and students. Simplicity works well here. I think we can all learn a lot from how Claude Code works. The Claude Agent SDK Loop generalizes the approach to build all kinds of AI agents. I wrote a few notes from Anthropic's recent guide. The loop involves three steps: Gathering Context: Use subagents (parallelize them for task efficiency) compact/maintain context and leverage agentic/semantic search for retrieving relevant context for the AI agent. Taking Action: Leverage tools prebuilt MCP servers"
X Link @omarsar0 2025-10-01T20:42Z 268.7K followers, 144.8K engagements
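The loop summarized above (gather context, then take action, repeated) can be sketched as a plain function. This is an illustrative sketch, not the Claude Agent SDK's API: the callables stand in for subagents, tools, and MCP servers, and the verification step is an assumed way to close the loop:

```python
def agent_loop(task, gather_context, act, verify, max_steps=5):
    """Minimal gather-context -> take-action -> check loop. All three
    callables are placeholders for real subagents, tools, and checks."""
    context = []
    for _ in range(max_steps):
        context = gather_context(task, context)  # e.g. agentic/semantic search, subagents
        result = act(task, context)              # e.g. tool call, MCP server
        done, feedback = verify(task, result)    # e.g. tests, rules, a judge
        if done:
            return result
        context.append(feedback)                 # compact/maintain context and retry
    return result
```

A usage sketch: pass a `gather_context` that retrieves relevant files, an `act` that edits code, and a `verify` that runs the test suite, and the loop iterates until the tests pass or the step budget runs out.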
"As usual Anthropic just published another banger. This one is on context engineering. Great section on how it is different from prompt engineering. A must-read for AI devs"
X Link @omarsar0 2025-09-30T19:02Z 268.7K followers, 368.8K engagements
"What it is: ReasoningBank distills structured, transferable memory items from past trajectories, using an LLM-as-judge to self-label success or failure. Each item has a title, description, and content with strategy-level hints. At inference the agent retrieves the top-k relevant items and injects them into the system prompt, then appends new items after each task
X Link @omarsar0 2025-10-12T16:01Z 268.7K followers, 1070 engagements
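The retrieve-inject-append cycle described above can be illustrated with a toy memory store, where keyword overlap stands in for real retrieval (the class and method names are made up for illustration; the actual system would use embedding search):

```python
class ReasoningBankSketch:
    """Toy memory of strategy items: store, retrieve top-k by keyword
    overlap, inject into the system prompt, append new items per task."""

    def __init__(self):
        self.items = []  # each: {"title", "description", "content"}

    def add(self, title, description, content):
        self.items.append(
            {"title": title, "description": description, "content": content})

    def retrieve(self, task, k=2):
        words = set(task.lower().split())
        score = lambda it: len(words & set(it["description"].lower().split()))
        return sorted(self.items, key=score, reverse=True)[:k]

    def build_system_prompt(self, base, task, k=2):
        hints = "\n".join(f"- {it['title']}: {it['content']}"
                          for it in self.retrieve(task, k))
        return f"{base}\nRelevant past strategies:\n{hints}"
```

After each task, an LLM-as-judge would label the trajectory and `add` a new item, so later tasks retrieve it.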
"Claude Code subagents are all you need. Some will complain about the # of tokens. However the output this spits out will save you days. The code quality is mindblowing. Agentic search works exceptionally well. The subagents run in parallel. ChatGPT's deep research is no match"
X Link @omarsar0 2025-10-14T23:03Z 268.7K followers, 46K engagements
"Everyone is talking about this new OpenAI paper. It's about why LLMs hallucinate. You might want to bookmark this one. Let's break down the technical details:"
X Link @omarsar0 2025-09-06T21:39Z 268.7K followers, 459.9K engagements
"Why does RL work for enhancing agentic reasoning This paper studies what actually works when using RL to improve tool-using LLM agents across three axes: data algorithm and reasoning mode. Instead of chasing bigger models or fancy algorithms the authors find that real diverse data and a few smart RL tweaks make the biggest difference -- even for small models. My X key takeaways from the paper:"
X Link @omarsar0 2025-10-14T14:55Z 268.7K followers, 26.1K engagements
"Most agents today are shallow. They easily break down on long multi-step problems (e.g. deep research or agentic coding). That's changing fast. We're entering the era of "Deep Agents": systems that strategically plan, remember, and delegate intelligently to solve very complex problems. We at @dair_ai and other folks from LangChain, Claude Code, as well as more recently individuals like Philipp Schmid, have been documenting this idea. Here's roughly the core idea behind Deep Agents (based on my own thoughts and notes that I've gathered from others): // Planning // Instead of reasoning ad-hoc inside a
X Link @omarsar0 2025-10-14T19:07Z 268.7K followers, 37.6K engagements
"2025 is the year of AI agents. But they need a lot more work. More work is needed on architecture design optimization context engineering environments observability reliability evaluations scaling and more"
X Link @omarsar0 2025-10-08T13:28Z 268.7K followers, 13.2K engagements
"Small models can also be good reasoners. Here is the issue and the proposed solution: Small models often get worse when you SFT them on teacher CoT traces. This paper pins the failure on distributional misalignment and introduces Reverse Speculative Decoding (RSD): during trace generation the teacher proposes tokens but the student only accepts tokens that are sufficiently probable under its own distribution. The result is student-friendly traces that preserve correctness while keeping step-to-step surprisal manageable. RSD uses rejection sampling to select correct aligned traces and pairs it"
X Link @omarsar0 2025-09-29T12:47Z 268.3K followers, 53.5K engagements
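The acceptance rule at the heart of RSD can be sketched at the token level. The dict-based "distributions" below are toy stand-ins for real model logits, and the threshold name `tau` is an assumption, not the paper's notation:

```python
def reverse_speculative_step(teacher_dist, student_dist, tau=0.1):
    """One RSD-style step: the teacher proposes its top token, but it is
    kept only if the student assigns it probability >= tau; otherwise
    the student's own top token is used instead."""
    proposal = max(teacher_dist, key=teacher_dist.get)
    if student_dist.get(proposal, 0.0) >= tau:
        return proposal, True   # accepted: stays student-friendly
    return max(student_dist, key=student_dist.get), False

def generate_trace(teacher_steps, student_steps, tau=0.1):
    """Build a trace token by token with the acceptance rule above."""
    return [reverse_speculative_step(t, s, tau)[0]
            for t, s in zip(teacher_steps, student_steps)]
```

The effect is the one the post describes: tokens that would be high-surprisal for the student get replaced, so the resulting trace stays close to the student's own distribution.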
"Yup. Been building around this idea for the last couple of months now. Mostly for personal note-taking and research. OpenAI's ChatGPT Pulse is a glimpse of what's coming. Proactive AI/agents are going to be everywhere. I am seeing huge productivity boosts already. But you are right: hard to get the personalization right"
X Link @omarsar0 2025-10-12T16:15Z 268.5K followers, 2620 engagements
"Very cool work from Meta Superintelligence Lab. They are open-sourcing Meta Agents Research Environments (ARE) the platform they use to create and scale agent environments. Great resource to stress-test agents in environments closer to real apps. Read on for more:"
X Link @omarsar0 2025-09-22T15:27Z 268.7K followers, 151K engagements
"4. Don't kill the entropy. Too little exploration and the model stops learning; too much and it becomes unstable. Finding just the right clip range depends on model size; small models need more room to explore"
X Link @omarsar0 2025-10-14T14:55Z 268.7K followers, XXX engagements
"Language Models that Think and Chat Better Proposes a simple RL recipe to improve small open models (e.g. 8B) that rivals GPT-4o and Claude XXX Sonnet (thinking). Pay attention to this one, AI devs. Here are my notes:"
X Link @omarsar0 2025-09-25T14:10Z 268.7K followers, 31.7K engagements
"We are excited to launch our first-ever hybrid course. Lots of building is going to happen on this one. We are going deep My goal is to certify and help train the top builders around AI agents"
X Link @omarsar0 2025-08-14T21:20Z 268.6K followers, 1917 engagements
"Is your LLM-based multi-agent system actually coordinating? That's the question behind this paper. They use information theory to tell the difference between a pile of chatbots and a true collective intelligence. They introduce a clean measurement loop. First, test if the group's overall output predicts future outcomes better than any single agent. If yes, there is synergy: information that only exists at the collective level. Next, decompose that information using partial information decomposition. This splits what's shared, unique, or synergistic between agents. Real emergence shows up as synergy not
X Link @omarsar0 2025-10-13T17:13Z 268.7K followers, 22.9K engagements
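The first test in that measurement loop (does the group's output beat its best member?) can be sketched with numeric predictions and mean squared error. This is an illustrative stand-in, not the paper's information-theoretic estimator, and the pooling-by-mean choice is an assumption:

```python
def mse(preds, truth):
    """Mean squared error of a prediction sequence against ground truth."""
    return sum((p - t) ** 2 for p, t in zip(preds, truth)) / len(truth)

def synergy_check(agent_preds, truth):
    """Compare the error of the pooled (mean) prediction against the best
    single agent. A positive gap hints at collective-level information;
    the real work follows up with partial information decomposition."""
    pooled = [sum(col) / len(col) for col in zip(*agent_preds)]
    best_single = min(mse(p, truth) for p in agent_preds)
    return best_single - mse(pooled, truth)  # > 0: the group beats any individual
```

In the test below, two agents each err in opposite directions, so neither is accurate alone but their pooled prediction is exact, which is precisely the "information only at the collective level" case.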
"Great to see n8n finally shipping their AI workflow builder. Now you can build AI agents and automation with natural language within n8n. This is a game-changer for n8n agent builders. I've been building my own n8n workflow builder with Claude Code. Can't wait to share more"
X Link @omarsar0 2025-10-13T21:34Z 268.7K followers, 103.2K engagements
"It doesn't matter what tools you use for AI Agents. I've put together the ultimate curriculum to learn how to build AI agents. (bookmark it) From context engineering to evaluating optimizing and shipping agentic applications"
X Link @omarsar0 2025-09-30T15:51Z 268.7K followers, 92.4K engagements
"Very excited about OpenAI's new AgentKit. Visual agent builders are a game changer for iterating on and shipping agents"
X Link @omarsar0 2025-10-06T18:22Z 268.7K followers, 46.9K engagements
"Agentic Context Engineering Great paper on agentic context engineering. The recipe: Treat your system prompts and agent memory as a living playbook. Log trajectories reflect to extract actionable bullets (strategies tool schemas failure modes) then merge as append-only deltas with periodic semantic de-dupe. Use execution signals and unit tests as supervision. Start offline to warm up a seed playbook then continue online to self-improve. On AppWorld ACE consistently beats strong baselines in both offline and online adaptation. Example: ReAct+ACE (offline) lifts average score to XXXX% vs"
X Link @omarsar0 2025-10-10T20:29Z 268.7K followers, 81.9K engagements
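The append-only-delta merge with periodic de-dupe from the recipe above can be sketched as follows. Here "semantic de-dupe" is approximated by normalized-text equality, which a real system would replace with embedding similarity; the function names are illustrative:

```python
def normalize(bullet):
    """Cheap stand-in for semantic comparison: lowercase, collapse spaces."""
    return " ".join(bullet.lower().split())

def merge_delta(playbook, delta):
    """Append-only merge of new bullets (strategies, tool schemas,
    failure modes) into the living playbook, skipping near-duplicates."""
    seen = {normalize(b) for b in playbook}
    for bullet in delta:
        if normalize(bullet) not in seen:
            playbook.append(bullet)  # append-only: never rewrite old bullets
            seen.add(normalize(bullet))
    return playbook
```

The deltas themselves would come from reflecting on logged trajectories, with execution signals and unit tests as the supervision deciding which bullets are worth keeping.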
"We are living in the most insane timeline. I just asked Claude Code (with Claude Sonnet 4.5) to develop an MCP Server (end-to-end) that allows me to programmatically create n8n workflows from within Claude Code itself. Took about XX mins"
X Link @omarsar0 2025-09-30T22:21Z 268.7K followers, 222.9K engagements
"How do you apply effective context engineering for AI agents? Read this if you are an AI dev building AI agents today. Context is king, and it must be engineered, not just prompted. I wrote a few notes after reading through the awesome new context engineering guide from Anthropic: Context Engineering vs. Prompt Engineering - Prompt Engineering = writing and organizing instructions - Context Engineering = curating and maintaining prompts tools history and external data - Context Engineering is iterative and context is curated regularly Why Context Engineering Matters - Finite attention budget -
X Link @omarsar0 2025-10-02T20:32Z 268.7K followers, 50K engagements
"5. Slow thoughtful agents win. Agents that plan before acting (fewer but smarter tool calls) outperform reactive ones that constantly rush to use tools. The best ones pause think internally then act once with high precision"
X Link @omarsar0 2025-10-14T14:55Z 268.7K followers, 1572 engagements
"Memory is key to effective AI agents but it's hard to get right. Google presents memory-aware test-time scaling for improving self-evolving agents. It outperforms other memory mechanisms by leveraging structured and adaptable memory. Technical highlights:"
X Link @omarsar0 2025-10-12T16:01Z 268.7K followers, 26.8K engagements
"TL;DR A memory framework that turns an agent's own successes and failures into reusable reasoning strategies, then pairs that memory with test-time scaling to compound gains over time"
X Link @omarsar0 2025-10-12T16:01Z 268.5K followers, 1109 engagements
"Great recap of security risks associated with LLM-based agents. The literature keeps growing but these are key papers worth reading. Analysis of 150+ papers finds that there is a shift from monolithic to planner-executor and multi-agent architectures. Multi-agent security is a widely underexplored space for devs. Issues range from LLM-to-LLM prompt infection spoofing trust delegation and collusion"
X Link @omarsar0 2025-10-11T18:07Z 268.7K followers, 33.8K engagements
"@TikhiyVo Yep I still prefer to use it via terminal as opposed to the fancy UI"
X Link @omarsar0 2025-10-14T23:06Z 268.7K followers, XXX engagements