[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]
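For reference, the scrambled XXX values above are only replaced with real numbers when the request is authenticated with an API key. Below is a minimal sketch of such a request in Python; the base URL, endpoint path, and response shape are assumptions for illustration, not confirmed API details (see https://lunarcrush.ai/auth for the authoritative documentation).

```python
import requests

API_KEY = "YOUR_API_KEY"                  # obtained after authenticating
BASE_URL = "https://lunarcrush.com/api4"  # assumed base URL

def get_creator(network: str, handle: str) -> dict:
    """Fetch creator metrics (followers, engagements, rank) for a handle.

    The endpoint path below is an assumption based on the public creator
    endpoints; check the official docs for the exact route.
    """
    resp = requests.get(
        f"{BASE_URL}/public/creator/{network}/{handle}/v1",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Example: pull the unscrambled stats for the creator profiled below.
    print(get_creator("twitter", "dudeman6790"))
```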

RomboDawg (@dudeman6790)

RomboDawg posts on X most often about llamacpp, vram, native, and open ai. They currently have XXX followers and XX posts still receiving attention, totaling XXX engagements in the last XX hours.

Engagements: XXX
Mentions: X
Followers: XXX
CreatorRank: XXXXXXXXX

(Line charts for engagements, mentions, followers, and CreatorRank omitted.)

Social Influence

Social category influence: technology brands

Social topic influence: llamacpp, vram, native, open ai

Top Social Posts

Top posts by engagements in the last XX hours

"@pcuenq Qwen3-Next just got llamacpp support. And it hits way above its weight class for a 80b model"
X Link 2025-11-29T18:32Z XXX followers, XX engagements

"@jeethu @JosephSarnecki @essential_ai @Teknium @UnslothAI @kalomaze @tszzl tell that to llamacpp lol it doesnt"
X Link 2025-12-07T06:41Z XXX followers, XX engagements

"@JosephSarnecki @essential_ai @Teknium @jeethu @UnslothAI @kalomaze @tszzl I saw yea when its supported in llamacpp im gonna def compare it to the new mistral model. Id love to see which one performs better"
X Link 2025-12-07T06:33Z XXX followers, XX engagements

"@rannisfang Has to be fake ive ran through phantoms before and never got ulta"
X Link 2025-12-12T03:17Z XXX followers, 3188 engagements

"GPT-OSS-120b has the best efficiency to this day: 120B 15t/s @ 64GB RAM/8GB VRAM only takes up 60-70GB on your drive. Smart companies would train native fp4/NVfp4. @MistralAI @Alibaba_Qwen @AIatMeta @AnthropicAI @OpenAI @Zai_org @deepseek_ai @NousResearch @Kimi_Moonshot"
X Link 2025-12-13T19:51Z XXX followers, XX engagements
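The footprint figure in the last post is consistent with simple back-of-the-envelope arithmetic: a 120B-parameter model stored at roughly 4 bits per weight (the fp4/NVFP4 regime the post advocates) lands near 60-70 GB on disk. The bits-per-weight range below is an assumption for illustration, not a measured value.

```python
# Rough estimate of on-disk size for a 120B-parameter model at ~4 bits/weight.
# The 4.0-4.7 bits/weight range (quantized weights plus scales/overhead) is an
# assumption; it is not taken from the post or from any official figure.
params = 120e9  # 120B parameters

for bits_per_weight in (4.0, 4.25, 4.5, 4.7):
    gigabytes = params * bits_per_weight / 8 / 1e9
    print(f"{bits_per_weight:.2f} bits/weight -> ~{gigabytes:.0f} GB on disk")
```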