[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]
@dudeman6790 RomboDawg posts on X most often about llamacpp, vram, native, and open ai. They currently have XXX followers, and X posts still getting attention total XXX engagements in the last XX hours.
Social category influence: technology brands XX%
Social topic influence: llamacpp 20%, vram 20%, native 20%, open ai XX%
Top accounts mentioned by or mentioning this creator: @josephsarnecki @jeethu @essentialai @teknium @unslothai @kalomaze @tszzl @pcuenq @rannisfang @mistralai @alibabaqwen @aiatmeta @anthropicai @openai @zaiorg @deepseekai @nousresearch @kimimoonshot
Top posts by engagements in the last XX hours:
"@JosephSarnecki @essential_ai @Teknium @jeethu @UnslothAI @kalomaze @tszzl I saw yea when its supported in llamacpp im gonna def compare it to the new mistral model. Id love to see which one performs better"
X Link 2025-12-07T06:33Z XXX followers, XX engagements
"@rannisfang Has to be fake ive ran through phantoms before and never got ulta"
X Link 2025-12-12T03:17Z XXX followers, 3199 engagements
"GPT-OSS-120b has the best efficiency to this day: 120B 15t/s @ 64GB RAM/8GB VRAM only takes up 60-70GB on your drive. Smart companies would train native fp4/NVfp4. @MistralAI @Alibaba_Qwen @AIatMeta @AnthropicAI @OpenAI @Zai_org @deepseek_ai @NousResearch @Kimi_Moonshot"
X Link 2025-12-13T19:51Z XXX followers, XX engagements
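The figures in the post above are easy to sanity-check: at fp4, weights cost 4 bits (0.5 bytes) per parameter, so a 120B-parameter model lands near 60 GB before embeddings, KV cache, and runtime overhead. A minimal back-of-envelope sketch (illustrative arithmetic only, not affiliated with any of the mentioned models or vendors):

```python
# Rough weight-storage math for the quoted "120B ... 60-70GB" claim.
def weight_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate size of model weights in GB (decimal)."""
    return n_params * bits_per_weight / 8 / 1e9

params = 120e9  # 120 billion parameters
for bits, label in [(16, "fp16"), (8, "fp8/int8"), (4, "fp4/NVfp4")]:
    print(f"{label:9s} ~{weight_size_gb(params, bits):.0f} GB")
# fp4 works out to ~60 GB of weights, consistent with the quoted
# 60-70 GB on-disk footprint once metadata and overhead are added.
```

This also illustrates the post's point: a natively fp4-trained 120B model fits the 64GB RAM + 8GB VRAM budget that fp16 (~240 GB of weights) could not.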