[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]
Ljubomir Josifovski posts on X most often about llamacpp, currency, cryptos, and k2. They currently have XXXXX followers and XXX posts still getting attention, totaling XXX engagements in the last XX hours.
Social topic influence: llamacpp, currency, cryptos, k2, op, mbp, glm
Top accounts mentioned or mentioned by: @elonmusk @keirstarmer @openai @plinz @mtpennycook @geoffreyhinton @richardhanania @unslothai @garyseconomics @henriccont @anthropicai @geminicli @peterrhague @paulg @matthewberman @casperhansen @jacobin @inenkovkolev @richardjmurphy @sigh321
Top posts by engagements in the last XX hours
"They themselves don't know for sure. Still - not futile to ask the Q as it may prod them to think. Not the worst of things. 😆 This is what I think is going on with cryptos goldbugs libertarians rejecting "the MMT" completely and vociferously. Whereas I thought they will have easy time with the MMT premises. For - how can I be a libertarian and think "gov monopoly on currency is achieved via monopoly on violence and taxation asking the citizens to return back to the state the currency the state issued; and everyone is forbidden from creating any other currency" is not the right story How can" @ljupc0 on X 2025-07-23 12:01:08 UTC 5314 followers, XXX engagements
"If not coding then guessing reasoning on top of existing model. Recent massive non-reasoning is K2. Maybe Kimi release variant of K2 but with reasoning on top. But feels too early+ don't know the people if their group is doing it. GLM guy the ones with GLM-4 GLM-Z1 - we have not heard from them since April So maybe them" @ljupc0 on X 2025-07-24 06:56:21 UTC 5309 followers, XX engagements
"Lucky for us then that minimal NN forward pass in C is XXX lines or so Didn't @karpathy have some repo llm.c or some such So 20yrs is not a problem pfft. can run MS-DOS XXXX programs afaik. Nah - NN software implements a function is a platonic idea. So software + weights are immortal all right. To come to life though they need hardware any hardware. Like a soul seeking a body haha 😱 More amusing to me was the connection of low(er) energy analogue human (and animal and any living thing really) intelligence versus high(er) energy digital intelligence. Mortal-in-time analogue intelligence" @ljupc0 on X 2025-07-24 19:49:02 UTC 5309 followers, XXX engagements
"It's a head-scratcher to me how the algofeed puts on my followers screens my forwards of my own posts in a way that it doesn't put the original posts Puzzling. Related but not on algofeed. When I spend time to reply (like now) I never know whether my (this) reply was pushed to OP (your) screen or not. So IDK I'm in the dark if the OP (unless s/he explicitly scans the replies) has even gotten a non trivial chance to seeing it. The only way I can think of requires a cooperation protocol: for the OP to Like the reply and for I to turn on notifications so I'm notified of the OP Like. Than I know" @ljupc0 on X 2025-07-25 04:15:41 UTC 5314 followers, XXX engagements
"Good question 🙂 They are the majority that's why. I'd say XX% of the population is in favour for more/repression at any given time. In the UK there is omni-party consensus. This was brought in by CON-s but the then opposition (nov gov) LAB only criticism at the time was that it does not go far enough. That it should be even harsher. Smaller parties like LDM REF (nowadays REF big not small) maybe more of a mixed bag but I'd assume they'd break 80:20 too. So the pro-freedom side lacks numbers. This is at most if not all times I remember. So we better make for lack of numbers with smarts. 😬" @ljupc0 on X 2025-07-27 09:45:34 UTC 5318 followers, XX engagements
"Heh - LAB probably howled for ever harsher law while in opposition. And if one is to survey the public opinion will probably get major support for even harsher laws. Probably closer to one Mary Whitehouse of old. 😂 Us terminally online thinking "this is bad" are a tiny minority" @ljupc0 on X 2025-07-26 21:31:10 UTC 5318 followers, XXX engagements
"@DaveShapi nothing 😂 that's why Madness of Crowds Extraordinary Popular Delusions etc that sort of stuff are impossible to predict - when will they break or how or why" @ljupc0 on X 2025-07-25 14:11:16 UTC 5316 followers, XX engagements
"@jamievoynow Ha I'm trying to go the other way. Did 10yrs of ML until 2004 (HMMs based ASR Ngrams LMs) then next 20yrs quant trading. Tinkering with llama.cpp and running open source open weights models in the last year. Now on X weeks notice - decided to try go back to ML/AI. :-)" @ljupc0 on X 2025-07-20 18:23:51 UTC 5311 followers, XX engagements
"Macs are good match for MoEs as they have lots of V/RAM even if lacking in gpu counts behind nvidias. Hope GLM runs on 96GB RAM (got oldish mbp m2 96gb ram.) I liked GLM prior 0414-s. Currently dots.llm1 is the largest model that runs local for me. That's 142B-A14B MoE. Using the 48gb dots.llm1.inst-UD-TQ1_0.gguf quants by @UnslothAI from In llama.cpp this: # MoE localhost 75GB RAM XX tps access at http://127.0.0.1:8080: # /llama.cpp$ sudo sysctl iogpu.wired_limit_mb=80000; build/bin/llama-server --model models/dots.llm1.inst-UD-TQ1_0.gguf --temp X --top_p XXXX --min_p X --ctx-size 32768" @ljupc0 on X 2025-07-26 20:43:32 UTC 5313 followers, XX engagements