[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

[Llamacpp](/topic/llamacpp)

### Top Social Posts

*Showing only X posts for non-authenticated requests. Use your API key in requests for full results.*

"Run multiple local llama.cpp servers with FlexLLama"  
[@yazoniak](/creator/reddit/yazoniak) on [Reddit](/post/reddit-post/t3_1m36ipz) 2025-07-18 16:12:40 UTC X followers, X engagements
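FlexLLama's own interface is not shown in the post above, but the underlying idea of keeping several local llama.cpp servers alive at once can be sketched with llama.cpp's bundled `llama-server` binary. The model paths and ports below are placeholders, not taken from the post:

```python
# Sketch: launch two independent llama.cpp servers on separate ports.
# Assumes llama.cpp's `llama-server` binary is on PATH; the GGUF paths
# and ports are placeholders for illustration only.
import subprocess

servers = [
    {"model": "models/model-a.gguf", "port": 8080},
    {"model": "models/model-b.gguf", "port": 8081},
]

procs = [
    subprocess.Popen(
        ["llama-server", "-m", s["model"], "--port", str(s["port"])]
    )
    for s in servers
]

try:
    for p in procs:
        p.wait()  # block until the servers exit (Ctrl+C to stop)
except KeyboardInterrupt:
    for p in procs:
        p.terminate()
```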


"🚀 Secret AI just released some updates in X weeks 📱 ✅ Llama.cpp b5919 upgrade - Gemma 3n (text only) + latest AI models support ✅ Enhanced llama.cpp compatibility - huge crash reduction & support for more devices ✅ Lightning-fast MNN image analysis - Qwen2.5 VL analyzes images in few seconds ✅ Portrait & landscape mode support ✅ Rock-solid stability improvements Your private AI just got even better 💪 #SecretAI #Gemma3n #OfflineAI #PrivateAI #LocalLLM #AIUpdate"  
[@SecretAILabs](/creator/x/SecretAILabs) on [X](/post/tweet/1946224190040035838) 2025-07-18 15:03:00 UTC 1036 followers, XX engagements


"support for EXAONE XXX model architecture has been merged into llama.cpp"  
[@jacek2023](/creator/reddit/jacek2023) on [Reddit](/post/reddit-post/t3_1m31z4z) 2025-07-18 13:12:14 UTC X followers, XX engagements


"I have not had opportunity to use it for real but it's next on the list to try. Closed/paying ones I use daily - OpenAI's ChatGPT XXX for conversation. Gemini XXX Pro too and especially for programming that often has a reasoning component to it. Claude never warmed up to it seemed to be too flowery and flattery to my taste. Local ones - seeing llama.cpp I was mind-blown 🤯 haha 😆 so bought 2nd hand M2 mbp 96gb ram so I try everything that fits in memory. :-) Local models I have used more than try&forget over time listed below. 1) dots.llm1 # MoE localhost 75GB RAM XX tps"  
[@ljupc0](/creator/x/ljupc0) on [X](/post/tweet/1946145451050401820) 2025-07-18 09:50:07 UTC 5247 followers, XX engagements
