[Guest access mode: data is scrambled or limited to provide examples.]

@SebastienBubeck (Sebastien Bubeck)

Sebastien Bubeck posts on X about math, science, and related topics. They currently have XXXXXX followers and XXX posts still getting attention, totaling XXXXX engagements in the last XX hours.

Engagements: XXXXX
Mentions: X
Followers: XXXXXX
CreatorRank: XXXXXXX

[Line charts: engagements, mentions, followers, and CreatorRank over time]

Social Influence

Social category influence: technology, brands

Social topic influence: math #2302, science #3599, "the first" #3159, solve, tao, "at least", deep dive, showcase, ai, status

Top accounts mentioned or mentioned by: @openai @marksellke @sama @iruletheworldmo @jxmnop @eldanronen @allieadg @satyanadella @ylecun @mojanjp @kevinscott @dimitrispapail @redmond @ns123abc @alupsasca @simonsinstitute @giffmana @tao115306424727150237 @obousquet @huggingface

Top Social Posts

Top posts by engagements in the last XX hours

"This problem fits in a broader context of understanding THE SHAPE OF LEARNING CURVES. The most basic property of such shapes is that, hopefully, they are decreasing. Specifically, from the statistical perspective: assume that you add more data; can you prove that your test loss will be lower? Surprisingly, this is quite non-obvious and there are many counterexamples. This was discussed at length in the classic book Devroye-Gyorfi-Lugosi 1996 (which I remember reading voraciously XX years ago, but that's a different story). More recently, in a 2019 COLT Open Problem, it was pointed out that some"
X Link 2025-12-11T18:34Z 63.1K followers, 47.7K engagements

"It's now on the arxiv enjoy"
X Link 2025-11-21T01:24Z 63.1K followers, 124.8K engagements

"It's becoming increasingly clear that gpt5 can solve MINOR open math problems, those that would require a day/few days of a good PhD student. Ofc it's not a XXX% guarantee, eg below gpt5 solves 3/5 optimization conjectures. Imo the full impact of this has yet to be internalized"
X Link 2025-09-24T15:36Z 63.1K followers, 411.5K engagements

"Well this time it's by Terence Tao himself:"
X Link 2025-10-03T05:04Z 63.1K followers, 839.9K engagements

"My posts last week created a lot of unnecessary confusion*, so today I would like to do a deep dive on one example to explain why I was so excited. In short, it's not about AIs discovering new results on their own, but rather how tools like GPT-5 can help researchers navigate, connect, and understand our existing body of knowledge in ways that were never possible before (or at least much, much more time consuming). Note that I did not pick the most impressive example (we will discuss that one at a later time) but rather one that illustrates many points at play that might have eluded people who see"
X Link 2025-10-20T16:35Z 63.1K followers, 614K engagements

"3 years ago we could showcase AI's frontier w. a unicorn drawing. Today we do so w. AI outputs touching the scientific frontier: Use the doc to judge for yourself the status of AI-aided science acceleration and hopefully be inspired by a couple examples"
X Link 2025-11-20T18:03Z 63.1K followers, 1.5M engagements

"GPT XXX is our best model for science yet: XXXX% GPQA, XX% Frontier Math, XXXX% ARC-AGI-2, XX% CharXiv (w. tools), HLE XX% (w. tools). Moreover, at research level the model has become a lot more reliable. It now one-shots the convex optimization problem to its optimal value"
X Link 2025-12-11T18:34Z 63.1K followers, 35.4K engagements

"If you recall, this was the first problem I used to show GPT-5's research capabilities. The goal was to determine the step-size condition under which gradient descent for smooth convex optimization admits a learning curve which is itself convex. There was a nice paper showing that η ≤ 1/L is sufficient and η ≤ 1.75/L is necessary, and a v2 of that paper closed the gap, showing η ≤ 1.75/L is the right "if and only if" condition. Back in August (4 months ago), given the v1 of the paper in context, GPT-5 was able to improve the sufficient condition from 1/L to 1.5/L (so short of the optimal 1.75/L)."
X Link 2025-12-11T18:34Z 63.1K followers, 4897 engagements
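The property discussed in the post above can be checked numerically. Below is a minimal sketch (a toy quadratic objective with illustrative dimensions, not the paper's construction) that runs gradient descent with step size 1/L, inside the sufficient regime, and verifies via second differences that the learning curve k → f(x_k) is itself decreasing and convex.

```python
import numpy as np

# Toy check of the "convex learning curve" property: gradient descent on
# an L-smooth convex quadratic with step size 1/L (within the sufficient
# regime discussed above) should produce values f(x_k) that form a
# decreasing, convex sequence.

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 10))
H = A.T @ A / 40                         # PSD Hessian, so f is convex
L = float(np.linalg.eigvalsh(H).max())   # smoothness constant of f

def f(x):
    return 0.5 * x @ H @ x               # grad f(x) = H x

x = rng.normal(size=10)
eta = 1.0 / L
curve = [f(x)]
for _ in range(100):
    x = x - eta * (H @ x)                # gradient descent step
    curve.append(f(x))

curve = np.array(curve)
second_diff = curve[2:] - 2.0 * curve[1:-1] + curve[:-2]
is_decreasing = bool((np.diff(curve) <= 1e-12).all())
is_convex = bool((second_diff >= -1e-12).all())
```

For a quadratic the curve is a sum of geometric sequences with nonnegative coefficients, so convexity holds for any η ≤ 1/L; the interesting regime between 1/L and 1.75/L requires the non-quadratic constructions from the paper.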

"I almost failed my basic duty; here is the XXX unicorn"
X Link 2025-12-12T06:00Z 63.1K followers, 248.1K engagements