[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# @deredleritt3r prinz

prinz posts on X about open ai, $googl, $1b, and $5b the most. They currently have XXXXX followers and 1670 posts still getting attention, totaling XXXXXX engagements in the last XX hours.

### Engagements: XXXXXX [#](/creator/twitter::1744055410108080128/interactions)

- X Week XXXXXXX -XX%
- X Month XXXXXXXXX -XX%
- X Months XXXXXXXXXX +915%
- X Year XXXXXXXXXX +3,932%

### Mentions: XX [#](/creator/twitter::1744055410108080128/posts_active)

- X Week XX -XX%
- X Month XXX -XX%
- X Months XXX +44%
- X Year XXXXX +65%

### Followers: XXXXX [#](/creator/twitter::1744055410108080128/followers)

- X Week XXXXX +13%
- X Month XXXXX +30%
- X Months XXXXX +423%
- X Year XXXXX +1,357%

### CreatorRank: XXXXXXX [#](/creator/twitter::1744055410108080128/influencer_rank)

### Social Influence

**Social category influence:** [technology brands](/list/technology-brands), [stocks](/list/stocks)

**Social topic influence:** [open ai](/topic/open-ai) #623, [$googl](/topic/$googl), [$1b](/topic/$1b) #282, [$5b](/topic/$5b) #22, [to the](/topic/to-the), [imo](/topic/imo), [gemini 3](/topic/gemini-3), [sat](/topic/sat), [$3b](/topic/$3b), [$4b](/topic/$4b)

**Top assets mentioned:** [Alphabet Inc Class A (GOOGL)](/topic/$googl)

### Top Social Posts

Top posts by engagements in the last XX hours.

"Here is how DeepSeek-V3.2-Speciale stacks up against OpenAI and Google in IMO, IOI, and ICPC performance: X. Speciale adhered to each competition's time and attempt limits. No tools or internet access were used. X. For the IOI and ICPC, OpenAI had GPT-5 generate candidate solutions. The unreleased experimental model reviewed these solutions and decided which were to be submitted. For the hardest ICPC problem, the unreleased experimental model generated the solution on its own. DeepSeek used a much more complex 3-stage process for Speciale: (1) First solutions that exceeded "the length" [X Link](https://x.com/deredleritt3r/status/1995512202598482359) 2025-12-01T15:15Z 7736 followers, 15.4K engagements

"New interview with Mark Chen (OpenAI): - Ashlee Vance, the interviewer, has apparently been spending a lot of time at OpenAI, including sitting in on meetings (he seems to be writing a book). And he seems to think that OpenAI has made some huge advance in pre-training: "Pre-training seems like this area where it seems is you've figured something out you're excited about. You think this is going to be a major advance." Mark doesn't spill the beans, though. He does say: "We think there's a lot of room in pre-training. A lot of people say scaling is dead. We don't think so at all." And also this:" [X Link](https://x.com/deredleritt3r/status/1995699999229837570) 2025-12-02T03:42Z 7736 followers, 143.4K engagements

"Anthropic annualized revenue: - January: $1B - May: $3B - June: $4B - August: $5B - October: $7B - December: $8B to $10B" [X Link](https://x.com/deredleritt3r/status/1996294139843862618) 2025-12-03T19:03Z 7736 followers, 231.7K engagements

"Retro Bio, the AI-powered longevity biotech company backed by Sam Altman, is looking to raise $1B at a $5B valuation. - Retro Bio hopes to enter clinical trials by 2027. - It is developing X types of drugs: X. RTR-242, a small molecule designed to boost autophagic flux (cellular recycling), a mechanism that may help clear accumulated damaged proteins in brain cells of Alzheimer's disease patients. RDR-242 is currently in preclinical/IND-enabling work ahead of first-in-human studies. X. RTR-888, an induced pluripotent stem cell-derived product (iPSC), which targets Alzheimer's by" [X Link](https://x.com/deredleritt3r/status/1996980161468583958) 2025-12-05T16:29Z 7736 followers, 12.3K engagements

"This is a fantastic post. It's not just the price per share that's important to the board. Closing certainty (which hinges on regulatory approvals) is critical. Provisions around how the target will conduct its business until it's acquired, and what happens if the transaction doesn't go forward, are very important. There can also be myriad other considerations, including those related to financing" [X Link](https://x.com/deredleritt3r/status/1998467567414079972) 2025-12-09T18:59Z 7735 followers, 1009 engagements

"@Miles_Brundage Claude Code for drafting stock purchase agreements, with stupid-easy integration for us non-technical lawyers. I anticipate that this is a dam that will eventually crack and break, but it's been holding quite steady thus far" [X Link](https://x.com/deredleritt3r/status/1998526329843859801) 2025-12-09T22:53Z 7734 followers, XXX engagements

"Get ready for 2026: - Demis Hassabis says that video models (like Veo 3) and LLMs (like Gemini) will combine next year. - Anthropic is looking for models to become more proactive. "The boundary between you prompting the model and the model prompting you is going to get blurry in 2026."" [X Link](https://x.com/deredleritt3r/status/1997056259615244546) 2025-12-05T21:31Z 7740 followers, 192.1K engagements

"@karpathy Similarly, I never use the word "I" when talking to LLMs. "I think that UFOs are real; is that correct?" risks sycophancy. "Please analyze whether UFOs are real based on verified data in high-quality sources." should provide a somewhat better answer" [X Link](https://x.com/deredleritt3r/status/1997740121115054146) 2025-12-07T18:48Z 7740 followers, 50.3K engagements (a minimal sketch of this prompting idea follows the post list)

"Gemini XXX Pro generating new knowledge:" [X Link](https://x.com/deredleritt3r/status/1998062768671313927) 2025-12-08T16:11Z 7740 followers, 60.2K engagements

"It is absolutely critical to be able to construct a prompt that: - has no extraneous words BUT - has XXX% of the facts and context that the model needs to successfully complete its work. You never want to apologize to the model for a lengthy prompt because you had no time to write a short one" [X Link](https://x.com/deredleritt3r/status/1998498232230412478) 2025-12-09T21:01Z 7738 followers, XXX engagements

"I actually need my ChatGPT Enterprise subscription to deliver this to me on a silver platter because I can't use anything else with sensitive client data. Word is the key integration, but I'd also want the model to be able to access a data room, probably some research databases, etc." [X Link](https://x.com/deredleritt3r/status/1998557191574335618) 2025-12-10T00:55Z 7735 followers, XX engagements

"Understood, and thank you for the explanation. When I read "contrary to the optimism about LLMs' problem-solving abilities", the implication is that the paper covers ALL LLMs, not just the handful that you tested. If your abstract said something like: "We tested LLMs X, Y, and Z. These LLMs' problem-solving capabilities are not good enough to solve this problem. We know that the models that won gold on the IMO are more powerful and look forward to testing these and other models in the future", far fewer people would have been confused. In the "Limitations" section of your paper you anticipated that" [X Link](https://x.com/deredleritt3r/status/1998731998265819519) 2025-12-10T12:30Z 7736 followers, XX engagements
"@JasonBotterill GPT-5.1 Instant is a reasoning model (it occasionally does a "short CoT" that is not shown to the user), and I assume GPT-5.2 Instant is also" [X Link](https://x.com/deredleritt3r/status/1999370285062009237) 2025-12-12T06:46Z 7740 followers, XXX engagements

"I say "similarly" because in each case the word "you" or "I" introduces extraneous data into the prompt; this extraneous data can decrease the quality of the response. Prompting without introducing extraneous data - but with XXX% of the context and facts that the model needs to provide the best response - is the current LLM prompting superpower" [X Link](https://x.com/deredleritt3r/status/1997741023661851095) 2025-12-07T18:52Z 7740 followers, 2237 engagements

"@KMarwatov @karpathy I always assume that anything I ask the model winds up in the training data (unless I'm using my ChatGPT Enterprise subscription). I absolutely want future models to know that prinz is polite to the models" [X Link](https://x.com/deredleritt3r/status/1997748658469769697) 2025-12-07T19:22Z 7740 followers, 1778 engagements

"I use Google search for things that can be found in under X seconds. For everything else, ChatGPT is significantly better. But replacing Google search with ChatGPT has required me to unwind XX years of habitually using Google to search everything + 2-3 more years of using search engines before Google existed. Old habits die hard. I assume the same is true for many other people" [X Link](https://x.com/deredleritt3r/status/1998769126693560714) 2025-12-10T14:57Z 7740 followers, XXX engagements

"This is really great, but do note that the X axis (cost per task) also matters. You would expect that performance on any benchmark would improve with additional test-time compute. E.g. on ARC-AGI-2, GPT-5 (high) was spending $XXXX per task; GPT-5.1 Thinking (high) was spending $XXXX and GPT-5.2 (high) $1.39" [X Link](https://x.com/deredleritt3r/status/1999502865262395766) 2025-12-12T15:33Z 7740 followers, XXX engagements

"Sam Altman: "Very little discussed are the hardware implications of recursive self-improvement: robots that can build other robots, data centers that can build other data centers, chips that can design their own next generation. Maybe the problem of chip design will turn out to be a very good problem for previous generations of chips."" [X Link](https://x.com/deredleritt3r/status/1986115691150233789) 2025-11-05T16:57Z 7740 followers, 49.8K engagements

"METR (50% accuracy): GPT-5.1-Codex-Max = X hours XX minutes. This is XX minutes longer than GPT-5" [X Link](https://x.com/deredleritt3r/status/1991245055017820236) 2025-11-19T20:39Z 7740 followers, 755.5K engagements

"Agreed. Dwarkesh is just wrong here. GPT-5 Pro can now do legal research and analysis at a very high level (with limitations - it may need to run even longer for certain searches; it can't connect to proprietary databases). I use it to enhance my work all the time, with excellent results. I would REALLY miss the model if it became unavailable to me for some reason. And yet the percentage of lawyers who actually use GPT-5 Pro for these kinds of tasks is probably 1%. Why? There's a myriad of reasons - none having anything to do with the model's capabilities. Lawyers are conservative, lawyers are" [X Link](https://x.com/deredleritt3r/status/1996056522607018179) 2025-12-03T03:18Z 7740 followers, 518.3K engagements

"@Laksh_Kanojiya GPT-5.1 Thinking is already much better than Gemini X Pro for all my use cases. I assume that GPT-5.2 will be even better, but let's wait and see" [X Link](https://x.com/deredleritt3r/status/1999120093649805387) 2025-12-11T14:12Z 7740 followers, 2172 engagements

"Achieving AGI is OpenAI's primary goal. ChatGPT, Sora X, the image models, "adult mode", and all the other little things on which this timeline is constantly focused are *merely a stepping stone* towards AGI. (And these product things are important ofc - both because AGI will be distributed to the world and because they generate revenue, which turns into funding, which is converted into progress towards AGI. But achieving them is not OpenAI's primary mission.)" [X Link](https://x.com/deredleritt3r/status/1999245475770221051) 2025-12-11T22:30Z 7740 followers, 7004 engagements

"Test-time compute (the X axis) matters. The longer the model thinks and the more test-time compute it expends, the more accurate the response should generally be. ARC-AGI is now saying that o3-preview scored XX% on ARC-AGI-1 at $4500 per task. GPT-5.2 Pro is spending just $XXXXX. If a model like GPT-5.2 Pro were allowed to spend 100X more compute on these tasks, I think it's likely that you would see its score improve significantly" [X Link](https://x.com/deredleritt3r/status/1999511939974610949) 2025-12-12T16:09Z 7740 followers, XX engagements
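Several of the prompting posts above describe the same habit: strip first-person opinion framing (which invites sycophancy) and keep only the facts and context the model actually needs. Below is a minimal, self-contained Python sketch of that idea as a pre-send prompt check; the phrase list, function name, and example strings are illustrative assumptions, not taken from the posts or from any particular tool.

```python
import re

# Phrases that smuggle the author's own opinion into a prompt and can nudge
# a model toward agreeing rather than analyzing. This list is an illustrative
# assumption, not an exhaustive or authoritative set.
OPINION_MARKERS = [
    r"\bI think\b",
    r"\bI believe\b",
    r"\bI feel\b",
    r"\bin my opinion\b",
    r"\bam I right\b",
    r"\bis that correct\b",
]

def flag_opinion_framing(prompt: str) -> list[str]:
    """Return the opinion-framing patterns found in a draft prompt."""
    return [pattern for pattern in OPINION_MARKERS
            if re.search(pattern, prompt, re.IGNORECASE)]

if __name__ == "__main__":
    draft = "I think that UFOs are real; is that correct?"
    neutral = ("Please analyze whether UFOs are real, "
               "based on verified data in high-quality sources.")

    print(flag_opinion_framing(draft))    # flags "I think" and "is that correct"
    print(flag_opinion_framing(neutral))  # [] (nothing to flag)
```

A regex check like this is deliberately crude; the point is only to make the "no extraneous, opinionated words" habit mechanical before a prompt is sent, while leaving the substantive context of the prompt untouched.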