[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]
Alex L Zhang posts on X about LMs. They currently have XXXXXX followers and XX posts still getting attention, totaling XXXXXXX engagements in the last XX hours.
Social topic influence: lm #123, why is
Top posts by engagements in the last XX hours
"What if scaling the context windows of frontier LLMs is much easier than it sounds Were excited to share our work on Recursive Language Models (RLMs). A new inference strategy where LLMs can decompose and recursively interact with input prompts of seemingly unbounded length as a REPL environment. On the OOLONG benchmark RLMs with GPT-5-mini outperforms GPT-5 by over XXX% gains (more than double) on 132k-token sequences and is cheaper to query on average. On the BrowseComp-Plus benchmark RLMs with GPT-5 can take in 10M+ tokens as their prompt and answer highly compositional queries without"
X Link @a1zhang 2025-10-15T14:32Z 15.8K followers, 686.3K engagements
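A minimal sketch of the inference strategy described in the post above, assuming a generic llm() call as the base case. The fixed binary split below stands in for decomposition strategies the model would choose for itself; all names and thresholds are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of the RLM idea: the full prompt is never forced into one
# context window. Instead the call can recurse on pieces of the input and combine
# partial answers with further LM calls.

def llm(prompt: str) -> str:
    """Placeholder for a single bounded-context LM call (e.g. an API request)."""
    raise NotImplementedError

def rlm(task: str, context: str, max_depth: int = 3) -> str:
    """Recursive Language Model call: text -> text over seemingly unbounded context."""
    if max_depth == 0 or len(context) < 8_000:
        # Small enough to fit: answer directly with a plain LM call.
        return llm(f"{task}\n\nContext:\n{context}")

    # Otherwise split and recurse. In the actual setup the root LM itself decides
    # how to slice, search, and combine the context via code it writes in a REPL;
    # the halving here is only a stand-in for that.
    mid = len(context) // 2
    left = rlm(task, context[:mid], max_depth - 1)
    right = rlm(task, context[mid:], max_depth - 1)
    return llm(f"{task}\n\nPartial findings:\n- {left}\n- {right}")
```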
""Why is this not just an agent with access to your file system e.g. something like SWE-Agent" Perhaps because we implemented it with a REPL environment there seems to be some link to coding but an RLM is entirely task-agnostic. Think of it as an extension of the bitter lesson -- our design of how we handle context (e.g. how we design agents) should entirely be up to an LM not a human. When you think about an agent with file system or terminal access it is generally given tools to look around some codebase execute code write tests etc. An RLM like an LM call is a function from text -- text."
X Link @a1zhang 2025-10-15T18:59Z 15.8K followers, 28.3K engagements
"So you can think of RLMs as a generalization of this idea and several others. In our post we wanted to show that code execution was the right instantiation of this more general idea. Most prior works don't really consider the context / prompt our the root LM in the way that we've been framing it (which makes sense they're very agentic in nature). In terms of why the REPL environment is different than the setup you described in some sense we don't want to manually engineer tools for the RLMs. We want full flexibility and using code is the most flexible form of this (e.g. using regex queries to"
X Link @a1zhang 2025-10-15T17:14Z 15.8K followers, 3618 engagements
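A rough sketch of the REPL-environment framing from the post above: the prompt is exposed as a variable that model-written code (for example, regex queries) can search before issuing nested LM calls. ReplEnv, llm(), and the example snippet are hypothetical illustrations, not the authors' code.

```python
import re

def llm(prompt: str) -> str:
    """Placeholder for a bounded-context LM call."""
    raise NotImplementedError

class ReplEnv:
    """Holds a long input as a variable and runs arbitrary model-written code on it."""

    def __init__(self, prompt: str):
        # Expose the full (potentially 10M+ token) input plus helpers to model code.
        self.namespace = {"prompt": prompt, "re": re, "llm": llm}

    def run(self, code: str) -> str:
        """Execute model-written code and return whatever it binds to `result`."""
        exec(code, self.namespace)
        return str(self.namespace.get("result", ""))

# Example of code the model might emit: locate mentions of a phrase with regex,
# then summarize only those snippets via a nested LM call, instead of relying on
# hand-engineered tools.
model_written_code = """
hits = [m.start() for m in re.finditer(r"Recursive Language Models", prompt)]
snippets = [prompt[i:i + 2000] for i in hits[:5]]
result = llm("Summarize these excerpts:\\n" + "\\n---\\n".join(snippets))
"""
```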
"Blogpost: There are several examples of interesting behavior that emerge from RLMs that can be found in the blogpost. We wrote a visualizer to make these examples clearer and highlight different strategies that these models can take. OOLONG: BCP: Id like to thank my wonderful advisor @lateinteraction the person Ive spammed on Slack while working on this @noahziems and the rest of my labmates @jacobli99 @dianetc_ for their support and discussion in this project"
X Link @a1zhang 2025-10-15T14:32Z 15.8K followers, 20.5K engagements