[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]
@_xjdr (xjdrxjdr) posts on X most often about jax, level, gpu, and i am. They currently have XXXXXX followers, and X posts still getting attention have totaled XXXXXX engagements in the last XX hours.
Social category influence: stocks XXXXX%, technology brands XXXXX%, finance XXXXX%
Social topic influence: jax #906, level 14.29%, gpu 14.29%, i am 14.29%, bloomberg 14.29%, $googl 14.29%, just a 14.29%, k2 XXXXX%
Top accounts mentioned or mentioned by: @stochasticchasm @pehdrew_ @vegamyhre @fujikanaeda @eddiberd @code_star @aidan_mclau @mangosweet78 @rokobasili @zectbynmo @kellogh @vega_myhre @chhillee
Top assets mentioned: Alphabet Inc Class A (GOOGL)
Top posts by engagements in the last XX hours:
"# Why Training MoEs is So Hard recently i have found myself wanting a small research focused training repo that i can do small experiments on quickly and easily. these experiments range from trying out new attention architectures (MLA SWA NSA KDA - all pluggable) to multi-precision training to most recently multi optimizer setups with 'new' optimizers. i tried the X major contenders (Nemo Megatron and Torchtitan) but for many and various reasons they very much did not fit the bill for my purposes and were all pretty painful to setup use and get running stably. I once again missed my tooling"
X Link 2025-12-07T00:15Z 25.9K followers, 186.4K engagements
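The "multi-optimizer setups" the post describes map naturally onto optax's multi_transform in JAX, which routes different parameter groups to different optimizers. A minimal sketch, not @_xjdr's repo: the parameter names, shapes, and learning rates are illustrative assumptions, and plain SGD stands in where one of the 'new' matrix optimizers would slot in.

```python
import jax
import jax.numpy as jnp
import optax

# Toy parameter tree; names and shapes are illustrative assumptions.
params = {
    "embed": jnp.zeros((1000, 64)),    # embedding table
    "attn_w": jnp.zeros((64, 64)),     # attention projection
    "router_w": jnp.zeros((64, 8)),    # MoE router weights
}

# Label each leaf with the optimizer that should handle it.
param_labels = {"embed": "adam", "attn_w": "matrix", "router_w": "adam"}

tx = optax.multi_transform(
    {
        "adam": optax.adamw(learning_rate=3e-4),
        # Placeholder slot where a 'new' matrix optimizer would go;
        # SGD with momentum is used here purely as a stand-in.
        "matrix": optax.sgd(learning_rate=1e-2, momentum=0.9),
    },
    param_labels=param_labels,
)

state = tx.init(params)
grads = jax.tree_util.tree_map(jnp.ones_like, params)  # stand-in gradients
updates, state = tx.update(grads, state, params)
params = optax.apply_updates(params, updates)
```

The point of the pattern is that swapping an optimizer for one parameter group touches a single entry in the transforms dict, which is what makes quick optimizer experiments cheap.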
"@vega_myhre nvfp4 is huge but low level control over NVSwitch domain cuteDSL and torch integration and GB300s when i need to scale up. its hard for me to find a place where TPUs win at this moment. Jax is amazing but xla is making it hard to work with and mosaic is still young"
X Link 2025-12-08T05:50Z 25.9K followers, 4761 engagements
"im curious what everyone's ideal 'tiny' moe size would be the X shapes i typically work with are 7A2B and 16B4A but those seem to still be on the 'large' end when people discuss 'small' models"
X Link 2025-12-10T23:20Z 25.9K followers, 9657 engagements
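For readers puzzling over shapes like 7A2B and 16B4A (the notation isn't standardized; read here as roughly 7B total / 2B active and 16B total / 4B active), back-of-envelope arithmetic shows how total and active counts diverge in a MoE. Every config value below is an illustrative assumption, not a config from the post.

```python
# Rough MoE sizing sketch; all numbers are illustrative assumptions.
def moe_param_counts(d_model, n_layers, d_ff, n_experts, top_k, vocab=32_000):
    embed = vocab * d_model
    attn = n_layers * 4 * d_model * d_model       # q, k, v, o projections
    expert_ffn = 3 * d_model * d_ff               # one gated-FFN expert (w1, w2, w3)
    total = embed + attn + n_layers * n_experts * expert_ffn
    active = embed + attn + n_layers * top_k * expert_ffn  # per-token path
    return total, active

total, active = moe_param_counts(
    d_model=2048, n_layers=24, d_ff=6656, n_experts=16, top_k=4
)
print(f"total ~{total / 1e9:.1f}B, active ~{active / 1e9:.1f}B")
# -> total ~16.2B, active ~4.4B: roughly the 16B4A regime
```

Since only top_k of n_experts fire per token, the total/active ratio is driven almost entirely by n_experts / top_k once the FFN dominates the parameter count.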
"this is a reasonable way to bootstrap a gpu poor lab but im surprised MSL would do this. I am not even doing this as a GPU middle class (there are much better ways)"
X Link 2025-12-11T05:06Z 25.9K followers, 29.3K engagements
"almost ready now"
X Link 2025-12-11T15:49Z 25.9K followers, 9003 engagements
"If google ever started selling TPU hardware and released internal tooling they'd MOG nvidia so bad. Just a trillion dollar company waiting to be built. most people don't realize how good JAX + TPUs + (other stuff) really is"
X Link 2024-08-16T18:42Z 25.9K followers, 220.7K engagements
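As context for the JAX praise above, the core draw is a small set of composable function transforms (grad, vmap, jit) that compile through XLA to TPUs. A minimal sketch of that style using only standard public JAX, not the internal tooling the post alludes to; the loss function and shapes are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

# One plain function, transformed three ways.
def loss(w, x, y):
    return jnp.mean((x @ w - y) ** 2)

grad_fn = jax.jit(jax.grad(loss))               # compiled gradient (XLA, incl. TPU)
batched = jax.vmap(loss, in_axes=(None, 0, 0))  # per-example losses

w = jnp.zeros((4,))
x = jnp.ones((8, 4))
y = jnp.zeros((8,))
print(grad_fn(w, x, y).shape)  # (4,)
print(batched(w, x, y).shape)  # (8,)
```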
"@stochasticchasm @fujikanaeda i tested a bunch of different models (k2 glm dsv3 qwen etc) and gpt-oss-20B was consistently the best (even more consistent than 120B)"
X Link 2025-12-11T16:01Z 25.9K followers, XXX engagements