[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# @ljupc0 Ljubomir Josifovski

Ljubomir Josifovski posts on X most often about llamacpp, claude, open ai, and moe. They currently have XXXXX followers and XXX posts still getting attention, totaling XX engagements in the last XX hours.

### Engagements: XX [#](/creator/twitter::1223118409/interactions)

- X Week: XXX (-XX%)
- X Month: XXXXXX (+432%)
- X Months: XXXXXX (-XX%)
- X Year: XXXXXXX (+60%)

### Mentions: X [#](/creator/twitter::1223118409/posts_active)

### Followers: XXXXX [#](/creator/twitter::1223118409/followers)

- X Week: XXXXX (+1.40%)
- X Month: XXXXX (+5.10%)
- X Months: XXXXX (+26%)
- X Year: XXXXX (+43%)

### CreatorRank: undefined [#](/creator/twitter::1223118409/influencer_rank)

### Social Influence [#](/creator/twitter::1223118409/influence)

---

**Social category influence:** [technology brands](/list/technology-brands)

**Social topic influence:** [llamacpp](/topic/llamacpp), [claude](/topic/claude), [open ai](/topic/open-ai), [moe](/topic/moe), [amx](/topic/amx)

### Top Social Posts [#](/creator/twitter::1223118409/posts)

---

Top posts by engagements in the last XX hours:

> "I have not had opportunity to use it for real but it's next on the list to try. Closed/paying ones I use daily - OpenAI's ChatGPT XXX for conversation. Gemini XXX Pro too, especially for programming, which often has a reasoning component to it. Claude I never warmed up to - it seemed too flowery and flattering for my taste. Local ones - seeing llama.cpp I was mind-blown 🤯 haha 😆 so I bought a 2nd-hand M2 MBP with 96GB RAM so I can try everything that fits in memory. :-) Local models I have used more than try-and-forget over time are listed below.
> 1) dots.llm1 # MoE localhost 75GB RAM XX tps"
>
> — [@ljupc0](/creator/x/ljupc0) on [X](/post/tweet/1946145451050401820), 2025-07-18 09:50:07 UTC, 5251 followers, XX engagements

> "@awnihannun Awesome! Sorry to be nosey - but have you maybe tried the minimal-size max-quant model? Seems to be 300GB - does it run? What tps?"
>
> — [@ljupc0](/creator/x/ljupc0) on [X](/post/tweet/1945218225194557713), 2025-07-15 20:25:39 UTC, 5251 followers, XXX engagements

> "This looks hopeful - reminder to self to seek Xeons with "Intel Advanced Matrix Extensions (AMX)": achieve 6-14x speedup for TTFT and 2-4x for TPOT vs. llama.cpp; achieve XX% memory bandwidth efficiency with highly optimized MoE kernels"
>
> — [@ljupc0](/creator/x/ljupc0) on [X](/post/tweet/1945607328788570413), 2025-07-16 22:11:49 UTC, 5251 followers, XXX engagements

> "Love-hate relationship with the market while quant trading. HATE: I'm losing $$$ - it's physically painful. I'm sick in the stomach, my heart is stone, my head heavy. Not only did I lose my shirt, it's getting worse through the day to the close, as if Mr Market is screaming at me "Idiot! Imbecile! Cretin!" And I kind of can't disagree with it much. LOVE: We put our money where our mouth is. We Bet-On-It. We are true to ourselves. We do as we say; we're not hypocrites. (normies, civilians, the rest of humanity - are hypocritical) And to top all that off, we the chosen half-dozen make a sh*t ton of $$$ the size of"
>
> — [@ljupc0](/creator/x/ljupc0) on [X](/post/tweet/1945776346749354158), 2025-07-17 09:23:26 UTC, 5251 followers, XXX engagements
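The guest-access notice above says the scrambled fields unlock when requests are made with an API key. A minimal sketch of constructing such an authenticated request, assuming a Bearer-token scheme; the base URL, route, and response shape are assumptions for illustration, not documented LunarCrush API details:

```python
# Hypothetical sketch: build (but don't send) an authenticated creator-stats
# request. Only the "use your API key" instruction comes from the page; the
# endpoint path is an assumption.
import urllib.request

API_BASE = "https://lunarcrush.com/api4/public"  # assumed base URL


def build_creator_request(network: str, handle: str, api_key: str) -> urllib.request.Request:
    """Return an authenticated Request for one creator's stats (assumed route)."""
    url = f"{API_BASE}/creator/{network}/{handle}/v1"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})


req = build_creator_request("twitter", "ljupc0", "YOUR_API_KEY")
print(req.full_url)                      # the URL the request would hit
print(req.get_header("Authorization"))   # Bearer YOUR_API_KEY
```

Sending it would be `urllib.request.urlopen(req)`; the exact JSON fields returned (follower counts, engagements) would need to be checked against the provider's own API docs.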