[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]
Epoch AI posts on X most often about open ai, hardware, has been, and battle royale. They currently have XXXXXX followers and XXX posts still getting attention, totaling XXXXX engagements in the last XX hours.
Social category influence: technology brands XXXX%, finance XXXX%
Social topic influence: open ai #1459, hardware 0.51%, has been 0.51%, battle royale 0.51%, did we 0.51%, overnight XXXX%
Top accounts mentioned or mentioned by: @ansonwhho @jmannhart @apotlogea @logist_bcb @letsgoelsewhere @greghburnham @0xcheeezzyyyy @jsevillamol @vblablova @robirahman @eli5_defi @grok @turquoisesound @abhi_umar62723 @openai @ardenaberg @flyingfishtrump @egeerdil2 @justjoshinyou13 @proof_steve
Top posts by engagements in the last XX hours
"Next is reinforcement learning during post-training which adds more uncertainty. In early 2025 RL compute was smallmaybe 1-10% of pre-training. But this is scaling up fast: OpenAI scaled RL by XX from o1 to o3 and xAI did the same from Grok X to 4"
X Link @EpochAIResearch 2025-10-09T20:11Z 28.5K followers, 3026 engagements
"Our best guess: GPT-5 was trained on 5e25 FLOP total including both pre-training and reinforcement learning. That would be more than twice as much as GPT-4 (2e25 FLOP) but less than GPT-4.5 (1e26 FLOP). Heres how it breaks down"
X Link @EpochAIResearch 2025-10-09T20:11Z 28.5K followers, 3325 engagements
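A quick arithmetic check of these figures, as a minimal Python sketch. The inputs are the estimates quoted in this feed; the pre-training/post-training split is inferred from the 3e25 FLOP pre-training median quoted further down, not a confirmed breakdown.

```python
# Ratio check of the quoted GPT-5 compute estimates.
# All inputs are Epoch AI's public estimates as quoted in this feed, not confirmed figures.
gpt5_total    = 5e25   # estimated GPT-5 training FLOP (pre-training + RL)
gpt4_total    = 2e25   # estimated GPT-4 training FLOP
gpt45_total   = 1e26   # estimated GPT-4.5 training FLOP
gpt5_pretrain = 3e25   # median pre-training estimate quoted later in this feed

print(f"GPT-5 vs GPT-4:   {gpt5_total / gpt4_total:.1f}x")    # 2.5x
print(f"GPT-5 vs GPT-4.5: {gpt5_total / gpt45_total:.1f}x")   # 0.5x
print(f"Implied post-training share: ~{gpt5_total - gpt5_pretrain:.0e} FLOP")
```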
"A healthy conversation about AI should be grounded in facts. Epochs datasets can help you track and understand the trajectory of AI. As a nonprofit our work is freely accessible for anyone to read replicate and build upon. Our datasets:"
X Link @EpochAIResearch 2025-10-10T15:19Z 28.5K followers, 5339 engagements
"Why did OpenAI train GPT-5 with less compute than GPT-4.5 Due to the higher returns to post-training they scaled post-training as much as possible on a smaller model And since post-training started from a much lower base this meant a decrease in total training FLOP 🧵"
X Link @EpochAIResearch 2025-09-26T20:35Z 28.5K followers, 243.8K engagements
"AI Models: weve annotated 3000+ models released since 1950 with training compute parameter count dataset size hardware & more. Browse filter or download CSV (updated daily)"
X Link @EpochAIResearch 2025-10-10T15:19Z 28.5K followers, XXX engagements
"What are current economic models missing about AGI How would we know if we were approaching explosive growth Stanford economist Phil Trammell has been rigorously thinking about the intersection of economic theory and AI (incl. AGI) for over five years long before the recent surge of interest in large language models. In this episode of Epoch After Hours @pawtrammell and Epoch AI researcher @ansonwhho discuss what economic theory really has to say about the development and impacts of AGI: what current economic models get wrong the odds of explosive economic growth what real GDP actually"
X Link @EpochAIResearch 2025-10-01T19:14Z 28.5K followers, 93.3K engagements
"We manually evaluated three compute-intensive model settings on our extremely hard math benchmark. FrontierMath Tier 4: Battle Royale GPT-5 Pro set a new record (13%) edging out Gemini XXX Deep Think by a single problem (not statistically significant). Grok X Heavy lags. 🧵"
X Link @EpochAIResearch 2025-10-10T16:26Z 28.5K followers, 256.6K engagements
"Sora X can solve questions from LLM benchmarks despite being a video model. We tested Sora X on a small subset of GPQA questions and it scored XX% compared to GPT-5s score of 72%"
X Link @EpochAIResearch 2025-10-03T18:00Z 28.5K followers, 247.7K engagements
"New data insight: How does OpenAI allocate its compute OpenAI spent $X billion on compute last year. Most of this went to R&D meaning all research experiments and training. Only a minority of this R&D compute went to the final training runs of released models"
X Link @EpochAIResearch 2025-10-10T18:19Z 28.5K followers, 428.4K engagements
"We recently wrote that GPT-5 is likely the first mainline GPT release to be trained on less compute than its predecessor. How did we reach this conclusion and what do we actually know about how GPT-5 was trained 🧵"
X Link @EpochAIResearch 2025-10-09T20:11Z 28.5K followers, 75.1K engagements
"GPT-5s pre-train token count is unconfirmed but Llama X and Qwen3s were 30-40 trillion tokens. OpenAI has invested heavily into pre-training so GPT-5 was likely trained on at least 30T tokens possibly several times more. This gives a median of 3e25 FLOP pretrain"
X Link @EpochAIResearch 2025-10-09T20:11Z 28.5K followers, 3369 engagements
"Training compute scales in proportion to a models active parameters as well as training data. Based on price speed and prevailing industry trends GPT-5 is probably a mid-sized frontier model with 100B active params akin to Grok X (115B active) GPT-4o and Claude Sonnet"
X Link @EpochAIResearch 2025-10-09T20:11Z 28.5K followers, 4824 engagements
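These two estimates (at least ~30T pre-training tokens, roughly 100B active parameters) combine through the standard dense-transformer approximation C ≈ 6·N·D. The sketch below shows how that yields the quoted order of magnitude; the token and parameter counts are the assumptions quoted above, not confirmed values.

```python
# Standard training-compute approximation for dense transformers: C ~ 6 * N * D,
# where N = active parameters and D = training tokens.
def train_flop(active_params: float, tokens: float) -> float:
    return 6 * active_params * tokens

N = 100e9                   # ~100B active parameters (assumed, Grok/GPT-4o class)
for D in (30e12, 50e12):    # 30-50 trillion tokens (assumed range)
    print(f"{D:.0e} tokens -> ~{train_flop(N, D):.1e} FLOP")
# ~1.8e25 to 3.0e25 FLOP, consistent with the 3e25 median pre-training estimate above
```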
"When unemployment spiked during COVID a several trillion dollars stimulus package passed in the US almost overnight. In @YafahEdelman modal story of how AI unfolds by 2035 we see a similarly dramatic response to AI-driven unemployment and de-skilling"
X Link @EpochAIResearch 2025-09-19T18:03Z 28.5K followers, 3497 engagements
"How many "digital workers" could OpenAI deploy We did a back-of-the-envelope estimate: restricting to tasks GPT-5 can do OpenAI has the compute to run X million digital workers. As AI automates more tasks such a digital workforce could have big implications. 🧵"
X Link @EpochAIResearch 2025-10-04T17:57Z 28.5K followers, 80.3K engagements
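For context, a back-of-the-envelope of this kind typically divides aggregate inference throughput by the token rate assigned to one "digital worker." The sketch below illustrates the structure only; every input is a hypothetical placeholder, not the numbers Epoch AI actually used (which are not given in this feed).

```python
# Illustrative structure of a "digital workers" back-of-the-envelope.
# ALL inputs are hypothetical placeholders, NOT Epoch AI's actual figures.
inference_flop_per_s = 1e21        # hypothetical aggregate FLOP/s available for inference
flop_per_token       = 2 * 100e9   # ~2 FLOP per active parameter per generated token (100B assumed)
tokens_per_s_total   = inference_flop_per_s / flop_per_token   # ~5e9 tokens/s

worker_tokens_per_s  = 100         # hypothetical sustained rate per worker, incl. reasoning tokens

digital_workers = tokens_per_s_total / worker_tokens_per_s
print(f"~{digital_workers:.1e} digital workers (illustrative only)")   # ~5.0e+07 with these placeholders
```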