# @lateinteraction Omar Khattab

Omar Khattab posts on X most often about llms, open ai, shield, and october 2025. They currently have XXXXXX followers and XXX posts still receiving attention, totaling XXXXX engagements in the last XX hours.

### Engagements: XXXXX [#](/creator/twitter::1605274291569799168/interactions)

- X Week XXXXXXX +50%
- X Month XXXXXXX -XX%
- X Months XXXXXXXXX +277%
- X Year XXXXXXXXX +79%

### Mentions: XX [#](/creator/twitter::1605274291569799168/posts_active)

- X Week XX +2.90%
- X Month XX -XXXX%
- X Months XXX +236%
- X Year XXX +110%

### Followers: XXXXXX [#](/creator/twitter::1605274291569799168/followers)

- X Week XXXXXX +3.10%
- X Month XXXXXX +7%
- X Months XXXXXX +39%
- X Year XXXXXX +64%

### CreatorRank: XXXXXXX [#](/creator/twitter::1605274291569799168/influencer_rank)

### Social Influence [#](/creator/twitter::1605274291569799168/influence)

---

**Social category influence:** [technology brands](/list/technology-brands), [currencies](/list/currencies)

**Social topic influence:** [llms](/topic/llms) #112, [open ai](/topic/open-ai) #3070, [shield](/topic/shield), [october 2025](/topic/october-2025), [ranks](/topic/ranks), [scratch](/topic/scratch), [a very](/topic/a-very), [1b](/topic/1b), [scales](/topic/scales), [leaderboard](/topic/leaderboard)

### Top Social Posts [#](/creator/twitter::1605274291569799168/posts)

---

Top posts by engagements in the last XX hours:

- "What are the most natural/realistic VERY long context problems out there" [X Link](https://x.com/lateinteraction/status/1976964409139642716) [@lateinteraction](/creator/x/lateinteraction) 2025-10-11T10:53Z 26.2K followers, 36K engagements
- "Ankur doing so well what I should have started a long time ago. Collecting a few of the more notable DSPy use cases or other news in a given week and packaging them up for your convenience:" [X Link](https://x.com/lateinteraction/status/1979154245065589077) [@lateinteraction](/creator/x/lateinteraction) 2025-10-17T11:55Z 26.2K followers, 11.4K engagements
- "Missing nuance in the collective realization today: The non-trivial negative result is not that "RL just amplifies skills that are already there with low probability". Duh that's obvious and not an issue actually. What got questioned today is that "dumb pretraining teaches the model all these things and then RL surfaces them". Had it been like that we'd all be celebrating that. The problem is that it isn't this The non-trivial negative result is that "this form of RLVR seems (in certain cases) to ONLY work if your mid-training data mixtures deliberately encode these *specific* math and coding" [X Link](https://x.com/lateinteraction/status/1927445094002487554) [@lateinteraction](/creator/x/lateinteraction) 2025-05-27T19:21Z 26.2K followers, 9294 engagements
- "@tomaarsen They didnt do it autoregressively. Thats the goal" [X Link](https://x.com/lateinteraction/status/1976222364078834130) [@lateinteraction](/creator/x/lateinteraction) 2025-10-09T09:45Z 26.2K followers, XXX engagements
- "I can no longer see the word judge and think it refers to a anything but an LLM anymore" [X Link](https://x.com/lateinteraction/status/1976440058254328088) [@lateinteraction](/creator/x/lateinteraction) 2025-10-10T00:10Z 26.2K followers, 7164 engagements
- "I find it hilarious that there's a dozen 'experts' below responding with "yeah obviously bro training is definitely MUCH MORE expensive than inference". If these estimates are accurate it's not an inevitability. It's a deliberate strategy + many-researchers-gotta-experiment" [X Link](https://x.com/lateinteraction/status/1977510540173087054) [@lateinteraction](/creator/x/lateinteraction) 2025-10-12T23:03Z 26.2K followers, 5605 engagements
- "I choose a good handle I guess 😅 Late interaction models of absurdly small scales: the 17M variant ranks first for models under 1B parameters is a VERY weird statement" [X Link](https://x.com/lateinteraction/status/1978861817636987309) [@lateinteraction](/creator/x/lateinteraction) 2025-10-16T16:33Z 26.2K followers, 9372 engagements
- "It just hit me that sub 1B-parameter models that are way better than 175B GPT-3 are a dime a dozen today. Kinda cool" [X Link](https://x.com/lateinteraction/status/1966625219566522827) [@lateinteraction](/creator/x/lateinteraction) 2025-09-12T22:09Z 26.2K followers, 495.5K engagements
- "For a long time we and others have thought about general-purpose inference scaling axes. But only CoT reasoning and ReAct-style loops stuck around. I think Recursive Language Models may be the next one. Your current LLM can already process 10M+ prompt tokens recursively" [X Link](https://x.com/lateinteraction/status/1978471277149970824) [@lateinteraction](/creator/x/lateinteraction) 2025-10-15T14:41Z 26.2K followers, 28.3K engagements
- "BTW a different way to state this result is: Modern frontier LLMs are *really* good and are under-utilized. Better models are even *harder* to use to their fullest extent. Also openai definitely did a great job with GPT-5(-mini) imo. Excellent stuff" [X Link](https://x.com/lateinteraction/status/1978527037183631647) [@lateinteraction](/creator/x/lateinteraction) 2025-10-15T18:22Z 26.2K followers, 16.1K engagements
- "There was a weird mistake historically in machine learning where learning got overly associated with optimization and optimization with gradient-based methods. SGD works incredibly well until you get foundation models. Then learning starts to return to its more general roots" [X Link](https://x.com/lateinteraction/status/1979612318255456353) [@lateinteraction](/creator/x/lateinteraction) 2025-10-18T18:15Z 26.2K followers, 19.4K engagements
- "wait what most of that compute wasnt inference" [X Link](https://x.com/lateinteraction/status/1977007105149485153) [@lateinteraction](/creator/x/lateinteraction) 2025-10-11T13:43Z 26.2K followers, 159.4K engagements
- "Everyone talks about LLM as a judge. But what about LLM as a witness" [X Link](https://x.com/lateinteraction/status/1977573313951211790) [@lateinteraction](/creator/x/lateinteraction) 2025-10-13T03:13Z 26.2K followers, 79K engagements
- "btw Alex is a second-month PhD student; he did this work in X weeks i have my suspicions that Alex has secret recursive Alexes that do his work for him but i haven't been able to confirm that haha really fun post on recursive LMs with interesting trace examples check it out" [X Link](https://x.com/lateinteraction/status/1978476154022457404) [@lateinteraction](/creator/x/lateinteraction) 2025-10-15T15:00Z 26.2K followers, 205.7K engagements
- "@skylar_b_payne @DSPyOSS module.set_lm(obj)" [X Link](https://x.com/lateinteraction/status/1980030619393347759) [@lateinteraction](/creator/x/lateinteraction) 2025-10-19T21:57Z 26.2K followers, 1082 engagements
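The last post in the list quotes a DSPy call, `module.set_lm(obj)`. As a rough sketch of what that call does in recent DSPy versions (this example is not from the post; the model names and modules below are illustrative assumptions), `set_lm` pins a specific LM to one DSPy module while other modules keep using the globally configured default:

```python
# Hypothetical sketch of the `module.set_lm(obj)` call quoted above.
# Model names and modules are placeholders, not taken from the post.
import dspy

# Global default LM for the whole program.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# One module that should use a different model than the default.
summarize = dspy.ChainOfThought("document -> summary")
summarize.set_lm(dspy.LM("openai/gpt-4o"))  # the call referenced in the post

# Other modules continue to use the globally configured LM.
answer = dspy.Predict("question -> answer")
```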