[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

@saagnikkk (Sagnik)

Sagnik posts on X about llm, momentum, r1, and deep dive the most. They currently have XXX followers, and XX posts are still getting attention, totaling XXX engagements in the last XX hours.

Engagements: XXX

Mentions: X

Followers: XXX

CreatorRank: undefined

[Line charts for each metric omitted.]

Social Influence

Social category influence: currencies XX%

Social topic influence: llm 10%, momentum 10%, r1 10%, deep dive 10%, deep 10%, rl XX%

Top accounts mentioned or mentioned by: @lifanyuan @pavanjayasinha @dilekhakkanitur @haopenguiuc @haqueishfaq @johnschulman2

Top Social Posts

Top posts by engagements in the last XX hours

"🚨 New Blog Alert: Is AdamW overkill for RLVR? We found that vanilla SGD is: X. as performant as AdamW; X. 36x more parameter-efficient naturally (much more than a rank-X LoRA) 🤯 Looks like a "free lunch". Maybe it's time to rethink the optimizers for RLVR 🧵"
X Link 2025-11-30T17:40Z XXX followers, 167K engagements
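The post's contrast between vanilla SGD and AdamW can be sketched with their raw update rules. This is a minimal illustrative sketch, not the blog's implementation: AdamW carries two extra state buffers (first and second moments) per parameter, while vanilla SGD carries none. The 36x figure in the post presumably comes from the blog's own accounting, not from this buffer count.

```python
import math

def sgd_step(w, g, lr):
    # Vanilla SGD: no momentum, no per-parameter optimizer state.
    return [wi - lr * gi for wi, gi in zip(w, g)]

def adamw_step(w, g, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=0.01):
    # AdamW keeps two extra buffers (m, v) per parameter plus a step count.
    state["t"] += 1
    t = state["t"]
    new_w = []
    for i, (wi, gi) in enumerate(zip(w, g)):
        state["m"][i] = b1 * state["m"][i] + (1 - b1) * gi
        state["v"][i] = b2 * state["v"][i] + (1 - b2) * gi * gi
        m_hat = state["m"][i] / (1 - b1 ** t)  # bias correction
        v_hat = state["v"][i] / (1 - b2 ** t)
        # Decoupled weight decay, then the adaptive step.
        new_w.append(wi - lr * wd * wi - lr * m_hat / (math.sqrt(v_hat) + eps))
    return new_w

# Toy objective f(w) = sum(w_i^2); gradient is 2w. Both optimizers head
# toward the minimum, but only AdamW needs the m/v buffers.
w_sgd = [1.0, -2.0]
w_adam = [1.0, -2.0]
state = {"t": 0, "m": [0.0, 0.0], "v": [0.0, 0.0]}
for _ in range(200):
    w_sgd = sgd_step(w_sgd, [2 * wi for wi in w_sgd], lr=0.1)
    w_adam = adamw_step(w_adam, [2 * wi for wi in w_adam], state, lr=0.05)
```

Note the asymmetry in state: `w_sgd` needs nothing beyond the parameters, while `state` must persist across every `adamw_step` call.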

"The assumption: you need adaptive optimizers because the LLM loss landscape is complex. Our earlier observation: RL finetunes a small subnetwork. Maybe the loss landscape is simpler than we think. Maybe we don't need momentum."
X Link 2025-11-30T17:40Z XXX followers, 7343 engagements
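The "RL finetunes a small subnetwork" observation can be illustrated with a toy masked SGD step. This is a hedged sketch under an assumed mask, not the authors' method: `masked_sgd_step` and the 25% mask are hypothetical, chosen only to show that most parameters stay untouched.

```python
def masked_sgd_step(w, g, mask, lr):
    # Update only the parameters selected by the (hypothetical)
    # subnetwork mask; all other parameters are left exactly as-is.
    return [wi - lr * gi if mi else wi for wi, gi, mi in zip(w, g, mask)]

w = [1.0, 2.0, 3.0, 4.0]
g = [0.5, 0.5, 0.5, 0.5]
mask = [True, False, False, False]  # 25% of params form the "subnetwork"
w_new = masked_sgd_step(w, g, mask, lr=0.1)
# -> [0.95, 2.0, 3.0, 4.0]: only the masked-in parameter moved
```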

"@HaqueIshfaq @pavanjayasinha @lifan__yuan @dilekhakkanitur @haopeng_uiuc The learning rates are actually not comparable between SGD and AdamW, since AdamW uses an adaptive learning rate. SGD typically needs a much higher LR than AdamW."
X Link 2025-12-10T07:19Z XXX followers, XX engagements
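The point about non-comparable learning rates can be made concrete: at steady state on a constant gradient, Adam's first and second moments converge to g and g², so its step size is roughly lr regardless of the gradient's magnitude, while SGD's step scales directly with the gradient. A minimal sketch (illustrative constants, not numbers from the thread):

```python
import math

def sgd_update_size(g, lr):
    # SGD step magnitude scales linearly with the gradient.
    return abs(lr * g)

def adam_update_size(g, lr, eps=1e-8):
    # Steady-state Adam on a constant gradient: m -> g, v -> g^2,
    # so the step is lr * g / (sqrt(g^2) + eps) ~= lr * sign(g).
    return abs(lr * g / (math.sqrt(g * g) + eps))

# Across four orders of magnitude of gradient scale, Adam's step stays
# ~lr while SGD's step varies with g — so the two lr values mean
# different things, and SGD typically needs a much larger one.
sizes = {g: (sgd_update_size(g, lr=0.1), adam_update_size(g, lr=0.001))
         for g in (1e-4, 1e-2, 1.0)}
```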