
Shubham Arora (@shubham_arora_0)

Shubham Arora posts on X most often about $2413t and token. They currently have XXX followers, and XXX of their posts are still getting attention, totaling XXXXX engagements in the last XX hours.

Engagements: XXXXX [line chart]

Mentions: X [line chart]

Followers: XXX [line chart]

CreatorRank: XXXXXXX [line chart]

Social Influence


Social category influence: technology, brands

Social topic influence: $2413t (#192), token (#727)

Top Social Posts


Top posts by engagements in the last XX hours

"@danielmerja m3 max mxfp4 gpt-oss-20b with mlx you get about XX tok/s"
X Link @shubham_arora_0 2025-10-15T04:40Z XXX followers, 5144 engagements
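The quoted tok/s figure is the kind of number you can reproduce locally. Below is a minimal sketch using the mlx_lm Python package, which prints prompt and generation tokens-per-sec when run with verbose=True; the model path is a hypothetical placeholder (not a confirmed repo id), and keyword details may vary across mlx_lm versions.

```python
# Minimal MLX throughput check (sketch). Assumes `mlx_lm` is installed and
# an MXFP4 build of gpt-oss-20b is available locally; the path below is a
# placeholder, not a confirmed model id.
from mlx_lm import load, generate

model, tokenizer = load("path/to/gpt-oss-20b-mxfp4")  # hypothetical path

# verbose=True makes mlx_lm report prompt and generation tokens-per-sec,
# the same tok/s metric being compared in these posts.
generate(
    model,
    tokenizer,
    prompt="Explain MXFP4 quantization in one paragraph.",
    max_tokens=256,
    verbose=True,
)
```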

"so I guess the conclusion to draw is just slower memory bandwidth XXX GB/s (spark) vs XXX GB/s (m3 max)"
X Link @shubham_arora_0 2025-10-15T18:04Z XXX followers, XX engagements
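The conclusion in that post is a proportionality argument: single-stream decoding is typically memory-bandwidth bound, so tokens/sec scales roughly with how fast the active weights can be streamed from memory. A back-of-envelope sketch follows; all numbers in it are illustrative placeholders, not the masked figures from the post.

```python
# Back-of-envelope decode speed for a bandwidth-bound LLM: each generated
# token streams the active weights from memory once, so
#   tok/s ≈ memory_bandwidth / active_weight_bytes.
# Bandwidths and weight size below are made-up placeholders.

def est_decode_tok_s(bandwidth_gb_s: float, active_weights_gb: float) -> float:
    """Upper-bound tokens/sec if decoding is purely memory-bandwidth bound."""
    return bandwidth_gb_s / active_weights_gb

for name, bw_gb_s in [("box A", 200.0), ("box B", 400.0)]:
    tok_s = est_decode_tok_s(bw_gb_s, active_weights_gb=8.0)
    print(f"{name}: ~{tok_s:.0f} tok/s upper bound")

# Doubling bandwidth roughly doubles the ceiling, which is why two machines
# running the same quantized model can differ mostly by memory bandwidth.
```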

"curious if I can replicate results with using a single rtx 5070 working XX hours"
X Link @shubham_arora_0 2025-10-13T16:22Z XXX followers, XX engagements

"on my m3 max running the gpt-oss-20b using the MXFP4 quant running via MLX (via LMstudio) I get XX tok/sec chat performance 0.46s to first token"
X Link @shubham_arora_0 2025-10-15T04:39Z XXX followers, 5415 engagements
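The 0.46s time-to-first-token in that post can be approximated by timing a one-token generation, since it is dominated by the prompt-processing (prefill) phase. A rough sketch, again assuming mlx_lm with a placeholder model path:

```python
# Rough time-to-first-token measurement (sketch): generating a single token
# times prefill plus one decode step. The first call may include warmup,
# so run it twice and keep the second reading for a steadier number.
import time

from mlx_lm import load, generate

model, tokenizer = load("path/to/gpt-oss-20b-mxfp4")  # hypothetical path

prompt = "Summarize the trade-offs of 4-bit quantization."
start = time.perf_counter()
generate(model, tokenizer, prompt=prompt, max_tokens=1)
ttft = time.perf_counter() - start
print(f"approx time to first token: {ttft:.2f}s")
```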

"@DanAdvantage this is the model that comes to mind. it is actually great for learning local inference experimentation. i use mac with huge ram for basically for the same reason. wouldn't recommend for work though"
X Link @shubham_arora_0 2025-10-15T18:15Z XXX followers, X engagements