
@lateinteraction (Omar Khattab)

Omar Khattab posts on X most often about llm, vibe, 6969, and linkedin. He currently has XXXXXX followers, and his XXX posts still receiving attention have drawn XXXXX total engagements in the last XX hours.

Engagements: XXXXX

Mentions: XX

Followers: XXXXXX

CreatorRank: XXXXXXX

(Line charts for each metric omitted.)

Social Influence

Social category influence: technology, brands, social networks, currencies

Social topic influence: llm, vibe, 6969, linkedin, all the, duh, rl, databricks, if you, context engineering

Top Social Posts


Top posts by engagements in the last XX hours

"Two LLMs are like "let's connect via DM to brainstorm further". Should have been on LinkedIn"
@lateinteraction on X 2025-07-27 18:18:06 UTC · 22.6K followers, 4461 engagements

"Yeah meta-learning of sorts. BTW my answer was more about instruction following. I agree the question is much less non-trivial for example-based ICL. For that my intuition is that the weights are very good at aggregating all knowledge without forgetting fast. But ICL is much more overriding and loud. In the sense that perhaps due to overparameterization with SGD you can teach the LLM X gazillion tricks incrementally one at a time. But ICL is a tool where you can orient the LLM towards X task at a time. Just some vague intuitions"
@lateinteraction on X 2025-07-27 04:32:01 UTC · 22.6K followers, XXX engagements

"@jchencxh I guess what you're saying is: could we do something with the forward pass / activations of the LLM on the instruction we like Can we aggregate them across a few prompts and then "burn them into" the weights"
@lateinteraction on X 2025-07-27 04:34:04 UTC · 22.6K followers, 2995 engagements
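To make the idea in this exchange concrete: "aggregate activations across a few prompts and burn them into the weights" has a simple rough analogue in activation steering. The sketch below averages hidden states and folds them into a layer bias. Everything in it is an assumption for illustration (Hugging Face LLaMA-style layer naming, the bias trick itself), not anything proposed in the thread.

```python
# Hedged sketch, NOT the thread's method: average the model's hidden
# activations on an instruction across a few prompts, then "burn" the result
# into the weights by adding it as a layer bias (activation steering).
# Assumes a Hugging Face LLaMA-style model; all names are illustrative.
import torch

def mean_instruction_activation(model, tokenizer, prompts, layer_idx):
    """Average the final-token hidden state at one layer across prompts."""
    acts = []
    for prompt in prompts:
        ids = tokenizer(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, output_hidden_states=True)
        acts.append(out.hidden_states[layer_idx][0, -1])  # last token's state
    return torch.stack(acts).mean(dim=0)

def burn_in(model, layer_idx, vector, scale=1.0):
    """Crude 'burn into the weights': fold the vector into a layer bias so
    every forward pass is nudged as if the instruction were in context."""
    proj = model.model.layers[layer_idx].mlp.down_proj  # LLaMA-style naming
    if proj.bias is None:
        proj.bias = torch.nn.Parameter(torch.zeros_like(vector))
    with torch.no_grad():
        proj.bias += scale * vector
```

Note that a static edit like this pushes one behavior into every forward pass, which is exactly the fast-catastrophic-forgetting failure mode @lateinteraction predicts later in the thread.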

"Management vs. Higher-level Programming. Engineers need control at some level of abstraction. Right now vibe coding is management. It's not so much that this is bad or something. It's just a different model of interaction. What's missing is a higher-level AI language"
@lateinteraction on X 2025-07-25 20:23:24 UTC · 22.6K followers, 4105 engagements

"Missing nuance in the collective realization today: The non-trivial negative result is not that "RL just amplifies skills that are already there with low probability". Duh that's obvious and not an issue actually. What got questioned today is that "dumb pretraining teaches the model all these things and then RL surfaces them". Had it been like that we'd all be celebrating that. The problem is that it isn't this The non-trivial negative result is that "this form of RLVR seems (in certain cases) to ONLY work if your mid-training data mixtures deliberately encode these specific math and coding"
@lateinteraction on X 2025-05-27 19:21:35 UTC · 22.6K followers, 9234 engagements

"Over the past year we received SO many requests for a @DeepLearningAI short course on DSPy. Thanks to a great effort by my colleagues @ChenMoneyQ & @xqcathyyin from the DSPy team at @Databricks here's a collaboration with @AndrewYNg's team who have been super helpful on this"
@lateinteraction on X 2025-06-04 15:09:25 UTC · 22.6K followers, 80.6K engagements

"@jchencxh My hunch is "yes there's probably a semi-straightforward answer that works but it would lead to very fast catastrophic forgetting". It's like with ICL if you have 10-shot examples for translation but you try to get the model to do summarization it'll be confused fast"
@lateinteraction on X 2025-07-27 04:37:22 UTC · 22.6K followers, XXX engagements

"What do you mean Vibe Coding (Context Engineering). We are in the Vibe Meanings era"
@lateinteraction on X 2025-07-26 23:43:29 UTC · 22.6K followers, 9291 engagements

"@MaximeRivest @DSPyOSS Modular software systems that compose specialized AI components expressed declaratively at a high level of abstraction (in terms of what they shall do not how) and "compiled" into lower-level invocations of ML models or algorithms"
@lateinteraction on X 2025-07-18 22:08:45 UTC · 22.6K followers, XX engagements
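That definition maps naturally onto DSPy itself. A minimal sketch of "declare what, not how" (the task and model name are illustrative placeholders, and API details may differ from the version shown):

```python
# Minimal sketch of a declarative AI component in DSPy, the library
# @lateinteraction authors. Task and model choice here are illustrative.
import dspy

class Summarize(dspy.Signature):
    """Summarize a document into three bullet points."""
    document: str = dspy.InputField()
    summary: str = dspy.OutputField()

# The signature declares WHAT the component should do; DSPy "compiles" it
# into concrete LM calls, and optimizers can later rewrite the prompts.
summarize = dspy.ChainOfThought(Summarize)

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # any supported model
prediction = summarize(document="DSPy separates program logic from prompts...")
print(prediction.summary)
```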

"@jchencxh So many vague intuitions but having worked so much with instruction and example learning my hunch is that SGD's most underrated amazing thing is that it just keeps accumulating knowledge and doesn't run out of capacity (on a big enough DNN). It's so amazing"
@lateinteraction on X 2025-07-27 04:39:09 UTC · 22.6K followers, XXX engagements

"@hallerite @AlexGDimakis These are multi-step tasks e.g. HoVer makes X LLM calls iirc Anyway I'd expect better performance from a well-designed reflective optimizer than from GRPO in particular for long-horizon problems. Credit assignment is rather easy for an LLM that actually sees the trajectory"
@lateinteraction on X 2025-07-28 19:11:13 UTC · 22.6K followers, XX engagements

"@hallerite @AlexGDimakis it's not like the LLM has a memory from other rollouts Why don't you give it that Just show it many trajectories multiple lessons in the Pareto tree etc. That's what I mean by "well-designed reflective optimizer". Just make sure the information flows to the LLM"
@lateinteraction on X 2025-07-28 19:22:31 UTC · 22.6K followers, XX engagements
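A hedged sketch of what a "well-designed reflective optimizer" could mean operationally: show the LLM whole trajectories with their scores and let it do credit assignment in text, instead of GRPO-style per-token gradients. The loop below is illustrative only; `llm` stands for any prompt-to-completion callable, and nothing here is from a specific library.

```python
# Illustrative reflective-optimization step, not code from the thread.
# `llm` is any callable mapping a prompt string to a completion string.
def reflective_step(llm, instruction, trajectories):
    """trajectories: list of (trace_text, score) pairs from prior rollouts."""
    shown = "\n\n".join(
        f"Trajectory (score={score:.2f}):\n{trace}"
        for trace, score in trajectories
    )
    prompt = (
        "You are optimizing the instruction below. Here are full rollouts "
        "with their scores. Diagnose what went wrong and propose a better "
        f"instruction.\n\nCurrent instruction:\n{instruction}\n\n{shown}\n\n"
        "Improved instruction:"
    )
    return llm(prompt)
```

Compared with GRPO, all of the learning signal here flows through text the model can actually read, which is the information-flow point these last two posts make.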