# ![@TheAhmadOsman Avatar](https://lunarcrush.com/gi/w:26/cr:twitter::248951926.png) @TheAhmadOsman Ahmad

Ahmad posts on X most often about ai, claude code, and inference. They currently have [------] followers and [---] posts still getting attention, totaling [-------] engagements in the last [--] hours.

### Engagements: [-------] [#](/creator/twitter::248951926/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::248951926/c:line/m:interactions.svg)

- [--] Week [---------] +43%
- [--] Month [---------] -4.70%
- [--] Months [----------] +786%
- [--] Year [----------] +5,255%

### Mentions: [--] [#](/creator/twitter::248951926/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::248951926/c:line/m:posts_active.svg)

- [--] Week [---] +1.20%
- [--] Month [---] +49%
- [--] Months [-----] +459%
- [--] Year [-----] +2,132%

### Followers: [------] [#](/creator/twitter::248951926/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::248951926/c:line/m:followers.svg)

- [--] Week [------] +2.30%
- [--] Month [------] +13%
- [--] Months [------] +415%
- [--] Year [------] +2,285%

### CreatorRank: [------] [#](/creator/twitter::248951926/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::248951926/c:line/m:influencer_rank.svg)

### Social Influence

**Social category influence**
[technology brands](/list/technology-brands)  24.07% [finance](/list/finance)  9.26% [stocks](/list/stocks)  5.56% [social networks](/list/social-networks)  3.7% [products](/list/products)  2.47% [countries](/list/countries)  1.23% [travel destinations](/list/travel-destinations)  0.62% [musicians](/list/musicians)  0.62% [celebrities](/list/celebrities)  0.62%

**Social topic influence**
[ai](/topic/ai) #2799, [claude code](/topic/claude-code) #532, [inference](/topic/inference) #108, [if you](/topic/if-you) 7.41%, [llm](/topic/llm) #70, [anthropic](/topic/anthropic) #1065, [agi](/topic/agi) #56, [ollama](/topic/ollama) 4.32%, [open ai](/topic/open-ai) 4.32%, [vram](/topic/vram) 4.32%

**Top accounts mentioned or mentioned by**
[@alexfinn](/creator/undefined) [@test_tm7873](/creator/undefined) [@vsouthvpawv](/creator/undefined) [@sentdex](/creator/undefined) [@grok](/creator/undefined) [@brandgrowthos](/creator/undefined) [@testtm7873](/creator/undefined) [@llmjunky](/creator/undefined) [@robbiepasquale](/creator/undefined) [@annanidev](/creator/undefined) [@zenmagnets](/creator/undefined) [@dusveloper](/creator/undefined) [@minimaxai](/creator/undefined) [@cdeburner](/creator/undefined) [@udaysy](/creator/undefined) [@sudoingx](/creator/undefined) [@narmourism](/creator/undefined) [@draslan_eth](/creator/undefined) [@codewithimanshu](/creator/undefined) [@ryzenbr](/creator/undefined)

**Top assets mentioned**
[Alphabet Inc Class A (GOOGL)](/topic/$googl) [Flex Ltd. Ordinary Shares (FLEX)](/topic/$flex)

### Top Social Posts
Top posts by engagements in the last [--] hours

"lol lmao even ollama are lying through their teeth in this reply to me next tweet i'll show the llama cpp merge for gpt-oss to ollama some comments on the merge calling them out llama cpp developer remarks @RobbiePasquale @TheAhmadOsman All the new models are implemented directly in Ollama by Ollama. We dont like it when people spread false information. Examples of Ollamas implementations: Google EmbeddingGemma - https://t.co/cXjfxkQvof OpenAI gpt-oss - https://t.co/Kx0SD1unn1 You can check out the"
[X Link](https://x.com/TheAhmadOsman/status/1964546485045121489)  2025-09-07T04:29Z 41.8K followers, 54.1K engagements


"there is a lot of MONEY in this add /.json at the end of any Reddit link and get the entire thread including all replies to the n-th depth and all the metadata as JSON and then use LLMs to extract/analyze/etc you can make so much $$$ from niche subreddits"  
[X Link](https://x.com/TheAhmadOsman/status/1964583335147237830)  2025-09-07T06:55Z 42.2K followers, 1.1M engagements
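The `/.json` trick in the post above can be sketched in a few lines of Python. The function and variable names here are illustrative, and the recursive walk assumes Reddit's usual `kind`/`data` comment-tree shape; treat it as a sketch, not a hardened client:

```python
import json
import urllib.request

def fetch_thread(url: str) -> list:
    """Fetch a Reddit thread as JSON by appending /.json to the link."""
    api_url = url.rstrip("/") + "/.json"
    req = urllib.request.Request(api_url, headers={"User-Agent": "thread-reader/0.1"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def walk_replies(comment: dict, depth: int = 0):
    """Recursively yield (depth, author, body) for every comment in the tree."""
    data = comment.get("data", {})
    if comment.get("kind") == "t1":  # t1 = comment objects
        yield depth, data.get("author"), data.get("body")
    replies = data.get("replies")
    if isinstance(replies, dict):  # empty string when there are no replies
        for child in replies.get("data", {}).get("children", []):
            yield from walk_replies(child, depth + 1)
```

From there, each `(depth, author, body)` tuple can be fed to an LLM for extraction or analysis, as the post suggests.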


"Vibe coding is the new prompt engineering. A couple years ago it was all prompts. Now its layers of abstractions. Memory skills MCPs and a thousand other things stacked on top of each other. Its productivity theater. Fundamentals still decide who ships and who just vibes"  
[X Link](https://x.com/TheAhmadOsman/status/2005810783658127658)  2025-12-30T01:18Z 41.8K followers, 25.4K engagements


"Everyone on my feed is talking about local models and buying GPUs/Macs to run them This is the good timeline so glad things are playing out the way they are"  
[X Link](https://x.com/TheAhmadOsman/status/2014860597947416677)  2026-01-24T00:39Z 41.8K followers, 122.9K engagements


"calling it now bookmark this for later - opensource AI will win - AGI will run local not on someone elses servers - the real ones are learning how it all works be early Buy a GPU get ur hands dirty learn how it works youll thank yourself later its gonna be great Everyone on my feed is talking about local models and buying GPUs/Macs to run them This is the good timeline so glad things are playing out the way they are"
[X Link](https://x.com/TheAhmadOsman/status/2014935542874484854)  2026-01-24T05:37Z 41.5K followers, 125K engagements


"POV: you bought GPUs memory and SSDs early and now youre just vibing while everyone else is in line"  
[X Link](https://x.com/TheAhmadOsman/status/2015234158176465290)  2026-01-25T01:23Z 41.5K followers, 18.3K engagements


"People ask why I insist on GPUs and not Mac Studios/Mac minis This is why: - Llama [---] 70B BF16 on 8x RTX 3090s - 50+ concurrent requests - Batch inference - Sustained throughput Not only that: 2k context per request (prompt) 1.8k tokens in output [--] mins [--] secs for [--] responses This is GPU territory. You cant do this on a Mac. Not yet at least. https://twitter.com/i/web/status/2015323752985395223"
[X Link](https://x.com/TheAhmadOsman/status/2015323752985395223)  2026-01-25T07:20Z 41.7K followers, 80.6K engagements


"BULLISH on NVFP4 What actually changes once the software stack catches up - 3-4x VRAM savings vs FP16 - Lower memory bandwidth pressure - Better perf per watt - Cheaper local inference How come Smaller weights with loseless accuracy Bigger models fit on consumer GPUs Less VRAM needed & more throughput Once NVFP4 becomes the default local AI gets faster cheaper and a lot less compromised https://twitter.com/i/web/status/2015591982890910071"
[X Link](https://x.com/TheAhmadOsman/status/2015591982890910071)  2026-01-26T01:05Z 41.5K followers, 10.5K engagements
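The "3-4x VRAM savings vs FP16" figure follows from simple bits-per-parameter arithmetic. A rough sketch: FP16 spends 16 bits per weight, while NVFP4 stores 4-bit values in 16-value blocks with a shared FP8 scale, which works out to about 4.5 effective bits. The 4.5-bit figure is an assumption of this sketch, and it ignores activations and KV cache:

```python
def model_vram_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB at a given precision."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

FP16 = 16.0
# NVFP4: 4-bit values + one 8-bit scale per 16-value block
# -> 4 + 8/16 = 4.5 effective bits per parameter.
NVFP4 = 4.5

for b in (8, 70, 120):
    fp16 = model_vram_gb(b, FP16)
    fp4 = model_vram_gb(b, NVFP4)
    print(f"{b}B params: FP16 ~ {fp16:.0f} GB, NVFP4 ~ {fp4:.1f} GB "
          f"({fp16 / fp4:.1f}x smaller)")
```

The ratio 16 / 4.5 is about 3.6x, which is where the "3-4x" claim lands once scale overhead is counted.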


"I get way more mentions & DMs than I can realistically keep up with To manage signal vs noise I prioritize the Subscribed tab If you want a much higher chance of me seeing & replying Subscribing is the best way to do that No pressure just being transparent about how I triage"  
[X Link](https://x.com/TheAhmadOsman/status/2015603863630250216)  2026-01-26T01:53Z 41.8K followers, 16.7K engagements


"best opensource LLM at the moment is Kimi K2.5"  
[X Link](https://x.com/TheAhmadOsman/status/2016201286866309212)  2026-01-27T17:27Z 41.5K followers, 31.1K engagements


"nobody should use ollama btw slower than llama.cpp on windows slower than mlx on mac slop useless wrapper alternatives lmstudio llama.cpp exllamav2/v3 vllm sglang like literally anythings better than ollama lmao"  
[X Link](https://x.com/TheAhmadOsman/status/2016345743754232164)  2026-01-28T03:01Z 41.7K followers, 113.1K engagements


"- local llms [---] - running a model = inference (using model weights) - inference = predicting the next token based on your input plus all tokens generated so far - together these make up the "sequence" - tokens words - they're the chunks representing the text a model sees - they are represented by integers (token IDs) in the model - "tokenizer" = the algorithm that splits text into tokens - common types: BPE (byte pair encoding) SentencePiece - token examples: - "hello" = [--] token or maybe [--] or [--] tokens - "internationalization" = [--] tokens - context window = max tokens model can "see" at once"  
[X Link](https://x.com/TheAhmadOsman/status/2016397940584059146)  2026-01-28T06:28Z 41.6K followers, 26.8K engagements
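The BPE idea from the breakdown above can be demonstrated with a toy merge learner: count adjacent symbol pairs across the corpus and repeatedly fuse the most frequent pair into one symbol. This is an illustrative sketch, not any production tokenizer:

```python
from collections import Counter

def merge_pair(symbols: tuple, pair: tuple) -> tuple:
    """Fuse every occurrence of `pair` in a symbol sequence into one symbol."""
    out, i = [], 0
    while i < len(symbols):
        if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
            out.append(symbols[i] + symbols[i + 1])
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return tuple(out)

def learn_merges(words: list, num_merges: int) -> list:
    """Learn BPE merges: repeatedly fuse the most frequent adjacent pair."""
    vocab = {tuple(w): c for w, c in Counter(words).items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, count in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += count
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        vocab = {merge_pair(sym, best): c for sym, c in vocab.items()}
    return merges
```

Run on a corpus where "hello" is common, the learner fuses `h+e`, then `he+l`, then `hel+l` — which is why frequent words end up as one token while rare words like "internationalization" stay split into several.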


"Join us today on r/LocalLLaMA for an AMA with Moonshot AI the lab behind the recent SoTA model Kimi K2.5 I am genuinely excited for this one make sure you don't miss it Wednesday 8am-11am PST"  
[X Link](https://x.com/TheAhmadOsman/status/2016421761064059074)  2026-01-28T08:03Z 42.1K followers, 25.4K engagements


"running Claude Code w/ local models on my own GPUs at home vLLM serving GLM-4.5 Air on 4x RTX 3090s nvtop showing live GPU load Claude Code generating code + docs end-to-end on my AI cluster this is what local AI actually looks like Buy a GPU"  
[X Link](https://x.com/TheAhmadOsman/status/2016456015298924579)  2026-01-28T10:19Z 41.5K followers, 92.8K engagements


"step-by-step LLM Engineering Projects LOCK IN FOR A FEW WEEKS ON THESE PROJECTS AND YOU WILL BE GRATEFUL FOR IT LATER each project = one concept learned the hard (i.e. real) way Tokenization & Embeddings build byte-pair encoder + train your own subword vocab write a token visualizer to map words/chunks to IDs one-hot vs learned-embedding: plot cosine distances Positional Embeddings classic sinusoidal vs learned vs RoPE vs ALiBi: demo all four animate a toy sequence being position-encoded in 3D ablate positionswatch attention collapse Self-Attention & Multihead Attention hand-wire dot-product"  
[X Link](https://x.com/TheAhmadOsman/status/2016519132108435583)  2026-01-28T14:30Z 41.7K followers, 27.3K engagements
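The "hand-wire dot-product" attention project in the list reduces to a few NumPy lines. A minimal sketch of scaled dot-product self-attention — no learned projections, masking, or multiple heads:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (seq, seq) pairwise similarities
    # Row-wise softmax, shifted for numerical stability.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))  # 4 tokens, d_model = 8
out, w = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V = x
```

Each row of `w` is a probability distribution over the sequence — which is also what the project's "ablate positions, watch attention collapse" experiment would visualize.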


"I am Bullish on NVFP4 What actually changes once the software stack catches up - 3-4x VRAM savings vs FP16 - Lower memory bandwidth pressure - Better perf per watt - Cheaper local inference How come Smaller weights with loseless accuracy Bigger models fit on consumer GPUs Less VRAM needed & more throughput Once NVFP4 becomes the default local AI gets faster cheaper and a lot less compromised https://twitter.com/i/web/status/2016588838999674899"
[X Link](https://x.com/TheAhmadOsman/status/2016588838999674899)  2026-01-28T19:07Z 41.5K followers, [----] engagements


"Genuine advice If you need ANY hardware BUY IT NOW - Phones - Laptops - Computer parts Hardware prices are about to get ridiculous I just bought my wife a new MacBook & iPhone Im not trying to flex just getting ahead of the supply shock before the prices get wild"  
[X Link](https://x.com/TheAhmadOsman/status/2016688635471446385)  2026-01-29T01:43Z 41.6K followers, 264K engagements


"Google has a real talent for this Take something that works great Slowly improve it until its borderline unusable Watching Google AI Studio get hollowed out in real time is just sad"  
[X Link](https://x.com/TheAhmadOsman/status/2016859355652542786)  2026-01-29T13:01Z 41.8K followers, 58.8K engagements


"a reminder that in closed source AI from companies like OpenAI & Anthropic you have zero control over how the models behave and they can quantize it distill it hot-swap to a cheaper/weaker checkpoint make the model manipulative fine-tune it in ways that break safety or depth drop its IQ run experiments on you and/or your data throttle output speed or raise prices sunset the entire model/version block your request for any made-up bs reason they have all the knobs & you're at their mercy you won't even get a changelog opensource FTW Buy a GPU https://twitter.com/i/web/status/2016950289710874643"  
[X Link](https://x.com/TheAhmadOsman/status/2016950289710874643)  2026-01-29T19:03Z 41.8K followers, 44K engagements


"Random lore My high school once booked me and half a dozen classmates into a hostel in Amsterdam It was in the red light district Stayed there for [--] nights place had a fresh weed smell the entire time"  
[X Link](https://x.com/TheAhmadOsman/status/2017016328754729295)  2026-01-29T23:25Z 41.8K followers, [----] engagements


"@Sentdex Whats the exchange rate to GPUs and do you accept trades πŸ˜‚"  
[X Link](https://x.com/TheAhmadOsman/status/2017075375386349958)  2026-01-30T03:20Z 41.8K followers, [--] engagements


"The Opensource Models I Cannot Wait to Run on My GPUs in [----] DeepSeek V4 MiniMax-M3 GLM-5 Nemotron Ultra Qwen [---] Kimi K3 Each of these models will be the State of The Art model at release This is going to be a GREAT YEAR for local & opensource LLMs/AI"  
[X Link](https://x.com/TheAhmadOsman/status/2017080327177462245)  2026-01-30T03:40Z 41.7K followers, 14.9K engagements


"Q_0.001_K GGUF"  
[X Link](https://x.com/TheAhmadOsman/status/2017232113813135699)  2026-01-30T13:43Z 41.8K followers, 78K engagements


"@annanidev 200k for context window + 128k output length great stuff to run at home on hardware that costs $6k"  
[X Link](https://x.com/TheAhmadOsman/status/2017325435919417461)  2026-01-30T19:53Z 41.7K followers, [----] engagements


"GPUs are still the move for agents The video below shows MiniMax-M2.1 running fully local on 8x RTX 3090s ($6K total) Prompt processed at [----] tokens/sec Output starts [---] tokens/sec and settles in around [--] tokens/sec even at the end"  
[X Link](https://x.com/TheAhmadOsman/status/2017367396739125550)  2026-01-30T22:40Z 41.8K followers, 48.8K engagements


"This weekend check if you need ANY hardware & BUY IT NOW - Phones - Laptops - Computer parts Hardware prices are about to get ridiculous I just bought my wife a new MacBook & iPhone Im not trying to flex just getting ahead of the supply shock before the prices get wild"  
[X Link](https://x.com/TheAhmadOsman/status/2017426652754481531)  2026-01-31T02:36Z 41.7K followers, [----] engagements


"re: clawdbot aka moltbot aka openclaw aka clawd aka clawdy aka henry aka lobster"  
[X Link](https://x.com/TheAhmadOsman/status/2017748924195364915)  2026-01-31T23:56Z 41.8K followers, [----] engagements


"Claude Code buddy were knocking out all three phases together in the next [--] minutes"  
[X Link](https://x.com/TheAhmadOsman/status/2017831999411568960)  2026-02-01T05:26Z 41.5K followers, [----] engagements


"@GoblinRack Dual PRO 6000s for sure You can only get so many tokens per second in a Mac and then it slows down as you fill up context That massive memory is bottlenecked by slow bandwidth Agents want speed I would rather a 4-bit MiniMax on a 192GB VRAM w/ speed than a slow Kimi K2.5"  
[X Link](https://x.com/TheAhmadOsman/status/2017871307988140145)  2026-02-01T08:03Z 41.8K followers, [----] engagements


"@richardbuehling I recommend you read my Buy a GPU thread: - Youll be able to build anything from [--] GPU to 16-GPU AI machines on your own using this Software side Go through this thread: Why not a Mac mini πŸ‘‡ https://x.com/TheAhmadOsman/status/2015323752985395223 https://x.com/i/status/1966287930827358249 https://x.com/i/status/1980026689217298545 People ask why I insist on GPUs and not Mac Studios/Mac minis This is why: - Llama [---] 70B BF16 on 8x RTX 3090s - 50+ concurrent requests - Batch inference - Sustained throughput Not only that: 2k context per request (prompt) 1.8k tokens in output"  
[X Link](https://x.com/TheAhmadOsman/status/2017877188138181077)  2026-02-01T08:26Z 41.7K followers, [----] engagements


"@AlexFinn Now give them GPUs and let your clawds cook (sorry alex not stopping until youre fully gpupilled :D) https://x.com/TheAhmadOsman/status/2018003694906655149 Video is 2.5x speed What youre seeing took 8m40s in realtime From loading a 210B-A10B model onto 8x RTX 3090s to one-shotting a Flappy Bird clone MiniMax-M2.1 is my go-to general agent btw it runs my tasks my bash makes sense of my logs etc Fast & reliable for 95% of work https://t.co/i1nHX9CSuy"
[X Link](https://x.com/TheAhmadOsman/status/2018016537735524729)  2026-02-01T17:40Z 41.8K followers, 13.1K engagements


"@dr_cintas Or you know just Buy a GPU and learn how to run your LLM locally https://x.com/TheAhmadOsman/status/2018003694906655149 Video is 2.5x speed What youre seeing took 8m40s in realtime From loading a 210B-A10B model onto 8x RTX 3090s to one-shotting a Flappy Bird clone MiniMax-M2.1 is my go-to general agent btw it runs my tasks my bash makes sense of my logs etc Fast & reliable for 95% of work https://t.co/i1nHX9CSuy"
[X Link](https://x.com/TheAhmadOsman/status/2018039258758394312)  2026-02-01T19:10Z 41.9K followers, 25.9K engagements


"LLMs will get locked to apps - No API access - For safety reasons Anthropic OpenAI Google etc optimize for vendor lock-in & data collection Run your AI models locally Opensource Open weights Your hardware When you dont own the model you are the product"  
[X Link](https://x.com/TheAhmadOsman/status/2018056568873333079)  2026-02-01T20:19Z 41.6K followers, 14.1K engagements


"@chiroTaur @dr_cintas The GPU Bro"  
[X Link](https://x.com/TheAhmadOsman/status/2018077155624661080)  2026-02-01T21:41Z 41.8K followers, [---] engagements


"Timeline is full of people talking about running AI models locally and picking up GPUs or Macs to experiment with LLMs on their own hardware This is the good timeline again I am really glad to see it unfold this way"  
[X Link](https://x.com/TheAhmadOsman/status/2018108551055495168)  2026-02-01T23:45Z 41.9K followers, 10.1K engagements


"theres one company whose LLMs I genuinely dont care about care to guess which one"  
[X Link](https://x.com/TheAhmadOsman/status/2018140761355682233)  2026-02-02T01:53Z 41.5K followers, 60.4K engagements


"asking her if we can just Buy a few more GPUs from that last RAM sale"  
[X Link](https://x.com/TheAhmadOsman/status/2018159826551910516)  2026-02-02T03:09Z 42K followers, [----] engagements


"@ZenMagnets actually no i like anthropics engineering i just dont respect the company because it moves shady give us their models as open weights and watch the world accelerate if the models arent compute-constrained like they are as a company (bonus point: selfish AF)"  
[X Link](https://x.com/TheAhmadOsman/status/2018160950482771993)  2026-02-02T03:14Z 41.5K followers, 14K engagements


"is there a vanilla ralph loop template out there that you can customize for your goals or should i create one and put it on github alongside llm instructions to customize it for your tasks"  
[X Link](https://x.com/TheAhmadOsman/status/2018496100911468596)  2026-02-03T01:25Z 41.8K followers, [----] engagements


"were accelerating too fast I cannot keep up what a great time to be alive"  
[X Link](https://x.com/TheAhmadOsman/status/2018835423020372401)  2026-02-03T23:54Z 41.5K followers, [----] engagements


"GPUs are crazy because they're like Claude Code but at home"  
[X Link](https://x.com/TheAhmadOsman/status/2018894436483445039)  2026-02-04T03:48Z 41.8K followers, [----] engagements


"i changed my opinion on Skills btw spent a good chunk of today experimenting with them SO MUCH can be UNLOCKED with Skills brilliant LLM automation hack p.s. not gonna let my disdain toward Anthropic & MCPs blind me from seeing the value in sth like this again"  
[X Link](https://x.com/TheAhmadOsman/status/2018946921121853900)  2026-02-04T07:17Z 41.9K followers, 23.8K engagements


"CUDA env 17GB of dependencies me to my agent: figure this out for me walk away to heat up food while it handles it"  
[X Link](https://x.com/TheAhmadOsman/status/2019305616934859029)  2026-02-05T07:02Z 41.8K followers, [----] engagements


"@Presidentlin any answer other than Anthropic is wrong btw"  
[X Link](https://x.com/TheAhmadOsman/status/2019362206899703860)  2026-02-05T10:47Z 41.5K followers, [----] engagements


"i live in the terminal more than before AI became a thing never been more productive"  
[X Link](https://x.com/TheAhmadOsman/status/2019677497097613678)  2026-02-06T07:40Z 41.8K followers, [----] engagements


"i stand by what i said by the way Codex [---] & Opus [---] improvements seem very marginal from my evals until the next SOTA i am just sticking with Kimi K2.5 GLM-4.7 and MiniMax-M2.1 p.s. we already had Agentic Swarms in Kimi K2.5 I am not gonna get nerdsniped by Codex [---] or Opus [---] jumps in performance seem very marginal Will just keep using Kimi K2.5 GLM [---] and MiniMax-M2.1 until the next SOTA drops"
[X Link](https://x.com/TheAhmadOsman/status/2019702741841645745)  2026-02-06T09:20Z 41.8K followers, 20K engagements


"@tunguz Have you heard of Buy a GPU the movement https://x.com/TheAhmadOsman/status/1964869801404420396 My house has [--] GPUs. 21x RTX 3090s 4x RTX 4090s 4x RTX 5090s 4x Tenstorrent Blackhole p150a Before AGI arrives: Acquire GPUs. Go into debt if you must. But whatever you do secure the GPUs. https://t.co/8U89OStknt"
[X Link](https://x.com/TheAhmadOsman/status/2019972435500757504)  2026-02-07T03:12Z 41.8K followers, [----] engagements


"@llm_wizard Toad looks like a chill cat"  
[X Link](https://x.com/TheAhmadOsman/status/2020297721303953460)  2026-02-08T00:44Z 41.8K followers, [---] engagements


"@Sentdex drop ollama especially for the dgx spark you wanna use either llama.cpp or preferably tensorRT-LLM"  
[X Link](https://x.com/TheAhmadOsman/status/2020551486522998993)  2026-02-08T17:33Z 41.9K followers, [----] engagements


"@Sentdex pretty significant vllm has also improved tokens/sec a lot if your inference engine supports the anthropic api you can hook it straight into claude code (vllm does this out of the box) i oneshot an openai to anthropic api proxy πŸ‘‡ very easy as well https://x.com/i/status/1975917353071517765 i built a simple tool that makes Claude Code work with any local LLM full demo: vLLM serving GLM-4.5 Air on 4x RTX 3090s Claude Code generating code + docs via my proxy [--] Python file + .env handles all requests nvtop showing live GPU load how it all works Buy a GPU https://t.co/7nYsId4Uyu"  
[X Link](https://x.com/TheAhmadOsman/status/2020558818103439584)  2026-02-08T18:02Z 41.9K followers, [----] engagements
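The core of an OpenAI-to-Anthropic proxy like the one described is translating the request body between the two chat formats: Anthropic's Messages API takes the system prompt as a top-level field and requires `max_tokens`. A minimal text-only sketch (the function name and 1024-token default are assumptions; a real proxy also needs streaming and tool-call handling):

```python
def openai_to_anthropic(payload: dict) -> dict:
    """Translate an OpenAI chat.completions request body into an
    Anthropic Messages API body (sketch: text-only, no tool calls)."""
    system_parts, messages = [], []
    for msg in payload.get("messages", []):
        if msg["role"] == "system":
            # Anthropic takes the system prompt as a top-level field,
            # not as a message in the list.
            system_parts.append(msg["content"])
        else:
            messages.append({"role": msg["role"], "content": msg["content"]})
    body = {
        "model": payload["model"],
        "messages": messages,
        # Anthropic requires max_tokens; 1024 is an arbitrary fallback.
        "max_tokens": payload.get("max_tokens", 1024),
    }
    if system_parts:
        body["system"] = "\n".join(system_parts)
    if "temperature" in payload:
        body["temperature"] = payload["temperature"]
    return body
```

Wrap this in a small HTTP server that forwards the translated body to the local inference engine, and Claude Code can talk to any backend that only speaks the OpenAI format.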


"@AlexFinn i stand by this prediction btw https://x.com/i/status/2015851366187491475 Prediction We will have Claude Code + Opus [---] quality (not nerfed) models running locally at home on a single RTX PRO [----] before the end of the year"
[X Link](https://x.com/TheAhmadOsman/status/2020568133665235281)  2026-02-08T18:39Z 41.8K followers, 20.9K engagements


"she just asked me why is there football before the Bad Bunny concert"  
[X Link](https://x.com/TheAhmadOsman/status/2020644241408729129)  2026-02-08T23:41Z 41.8K followers, [----] engagements


"People are sleeping on ByteDance. June [----] New video model by ByteDance (TikTok) seems to drop tomorrow Seedance [---] apparently outperforms Veo [---] Sora [--] and Kling [---] What it does (kinda like Sora but more advanced) is act like a director being able to create entire full length videos with lots of cuts and different https://t.co/wMkTyV4BkK"
[X Link](https://x.com/TheAhmadOsman/status/2020701223511544130)  2026-02-09T03:28Z 41.9K followers, [----] engagements


"@anemll https://x.com/i/status/2020397102099005756 MASSIVE Qwen [---] PR just landed in the Hugging Face Transformers repo dense + MoE variants both variants SUPPORT text + image & video hybrid attention default pattern: linear attention on most layers full attention every 4th layer gated DeltaNet under the hood https://t.co/q0oHfag1DR"
[X Link](https://x.com/TheAhmadOsman/status/2020708594308796735)  2026-02-09T03:57Z 41.8K followers, [----] engagements


"@KentonVarda @FrameworkPuter This is the worst Local AI will ever be BTW https://x.com/i/status/2015851366187491475 Prediction We will have Claude Code + Opus [---] quality (not nerfed) models running locally at home on a single RTX PRO [----] before the end of the year"
[X Link](https://x.com/TheAhmadOsman/status/2020710905940787306)  2026-02-09T04:06Z 41.8K followers, [----] engagements


"havent checked in on stable diffusion literature (and r/StableDiffusion as well) in a couple of months peeked today and yeah the state of things is wild suddenly feels very real that this is happening in under two years once video gen models are cheap i'm fixing game of thrones s8 shot-for-shot but actually good george has until then to finish the books https://t.co/uWoex23sgA"
[X Link](https://x.com/TheAhmadOsman/status/2020719875312816544)  2026-02-09T04:42Z 41.8K followers, [----] engagements


"@bayeslord one more time: Buy a GPU"  
[X Link](https://x.com/TheAhmadOsman/status/2020751971422966194)  2026-02-09T06:49Z 41.9K followers, [----] engagements


"@profleonn for AI don't"  
[X Link](https://x.com/TheAhmadOsman/status/2021003677105025418)  2026-02-09T23:29Z 42.2K followers, [----] engagements


"@llm_wizard oh my god that's what rebel does too makes my heart melt everytime hahaha https://x.com/TheAhmadOsman/status/1929438520327676297 Rebel aka Felony taking a nap https://t.co/taFOnXsmdg"
[X Link](https://x.com/TheAhmadOsman/status/2021090982700241018)  2026-02-10T05:16Z 41.9K followers, [---] engagements


"@test_tm7873 get the extra VRAM"  
[X Link](https://x.com/TheAhmadOsman/status/2021259654664270042)  2026-02-10T16:27Z 42.2K followers, [---] engagements


"@test_tm7873 No 3090s around you that you can get"  
[X Link](https://x.com/TheAhmadOsman/status/2021267059900903453)  2026-02-10T16:56Z 42.2K followers, [--] engagements


"gpt-5.3 codex spark felt off to me which is why i didnt even post about it while the timeline was busy praising it [---] codex spark is good but it definitely feels like a hyperactive smart kid on too many stimulants It calls A LOT of tools and usually gets there in the end but idk man just look at [---] codex. Faster fewer tool calls and more accurate. https://t.co/uvukdIIZUG"
[X Link](https://x.com/TheAhmadOsman/status/2022276528017494171)  2026-02-13T11:47Z 42.5K followers, [----] engagements


"- local llms [---] - running a model = inference (using model weights) - inference = predicting the next token based on your input plus all tokens generated so far - together these make up the "sequence" - tokens words - they're the chunks representing the text a model sees - they are represented by integers (token IDs) in the model - "tokenizer" = the algorithm that splits text into tokens - common types: BPE (byte pair encoding) SentencePiece - token examples: - "hello" = [--] token or maybe [--] or [--] tokens - "internationalization" = [--] tokens - context window = max tokens model can "see" at once"  
[X Link](https://x.com/TheAhmadOsman/status/2021897088150217067)  2026-02-12T10:40Z 42.5K followers, [----] engagements


"MiniMax-M2.5 is REALLY REALLY GOOD top contender for the NEW current SOTA opensource model in my preliminary evaluations dont sleep on this model"  
[X Link](https://x.com/TheAhmadOsman/status/2022231462125379716)  2026-02-13T08:48Z 42.5K followers, 177.5K engagements


"MiniMax-M2.5 model weights are now available on Hugging Face Current SOTA opensource LLM without a doubt MiniMax-M2.5 is REALLY REALLY GOOD top contender for the NEW current SOTA opensource model in my preliminary evaluations dont sleep on this model"
[X Link](https://x.com/TheAhmadOsman/status/2022315510931607578)  2026-02-13T14:22Z 42.5K followers, 18.5K engagements


"Highly anticipated opensource models dropping this week DeepSeek-V4 GLM-5 MiniMax-M2.5 Qwen-3.5"  
[X Link](https://x.com/TheAhmadOsman/status/2021582996839473653)  2026-02-11T13:52Z 42.5K followers, 33.3K engagements


"There are maybe 20-25 papers that matter. Implement those and youve captured 90% of the alpha behind modern LLMs. Everything else is garnish. You want that list Look no more. The Top [--] Essential Papers (+5 Bonus Resources) for Mastering LLMs and Transformers This list bridges the Transformer foundations with the reasoning MoE and agentic shift Recommended Reading Order [--]. Attention Is All You Need (Vaswani et al. 2017) The original Transformer paper. Covers self-attention multi-head attention and the encoder-decoder structure (even though most modern LLMs are decoder-only.) [--]. The"  
[X Link](https://x.com/TheAhmadOsman/status/2021798942527430751)  2026-02-12T04:10Z 42.5K followers, 214.5K engagements


"@dusveloper shoutout to @MiniMax_AI and their mission https://x.com/i/status/1991560671532650618 MiniMax on their mission & AGI mission: intelligence for everyone not just a few MiniMax-M2 is 230BA10B for a reason impossible triangle: performance speed cost usually pick two MiniMax-M2 breaks that triangle near-SOTA 23x faster 8% cost of closed models https://t.co/r5LXnVE6I4"
[X Link](https://x.com/TheAhmadOsman/status/2022234010437722393)  2026-02-13T08:58Z 42.5K followers, [---] engagements


"Cry me a river you pirated humanitys knowledge and trained your models on it OpenAI has sent a memo to the House Select Committee on China claiming that DeepSeek are training the next version of their flagship model on OpenAI's model outputs. Originally reported by Bloomberg and I thank them for linking the full memo. https://t.co/bGPUSAB3dD"
[X Link](https://x.com/TheAhmadOsman/status/2022349197630906724)  2026-02-13T16:36Z 42.5K followers, 19.6K engagements


"@FOURTRESS43 clearly Seedance [---] ain't it lol"  
[X Link](https://x.com/TheAhmadOsman/status/2022562456686461092)  2026-02-14T06:44Z 42.5K followers, [---] engagements


"never deleting this app"  
[X Link](https://x.com/TheAhmadOsman/status/2022043308361756968)  2026-02-12T20:21Z 42.5K followers, 18K engagements


"INCREDIBLE folks at MiniMax REALLY COOKED with MiniMax-2.5 going toe-to-toe against (and even beating) Opus [---] and Opus [---] is MIND BLOWING to say the least cannot wait for MiniMax-M3 and & quality of opensource models we will have by the summer"  
[X Link](https://x.com/TheAhmadOsman/status/2022069131244388724)  2026-02-12T22:03Z 42.5K followers, 46.2K engagements


"Seedance [---] produced this using TWO SENTENCES prompt An average shift at Waffle House - make sure it's retarded and gets [--] likes"  
[X Link](https://x.com/TheAhmadOsman/status/2022087649490825476)  2026-02-12T23:17Z 42.5K followers, 42.7K engagements


"Gone far too early in a world full of would-be dictators obsessed with control One of my heroes ❀ Long live the open internet Opensource MUST win @TheAhmadOsman @jukan05 Word. All hail Aaron Swartz @TheAhmadOsman @jukan05 Word. All hail Aaron Swartz"  
[X Link](https://x.com/TheAhmadOsman/status/2022174550642110713)  2026-02-13T05:02Z 42.5K followers, [----] engagements


"manifesting a new drop i believe in you whale"  
[X Link](https://x.com/TheAhmadOsman/status/2022298905803682173)  2026-02-13T13:16Z 42.5K followers, 18.4K engagements


"this is the good timeline p.s. we really owe DeepSeek so much for this progress without them we wouldn't have gotten here The gap between open-weight and proprietary model intelligence is as small as it has ever been with Claude Opus [---] and GLM-5 https://t.co/x1ZER9pqzN The gap between open-weight and proprietary model intelligence is as small as it has ever been with Claude Opus [---] and GLM-5 https://t.co/x1ZER9pqzN"  
[X Link](https://x.com/TheAhmadOsman/status/2022563272545784059)  2026-02-14T06:47Z 42.5K followers, [----] engagements


"ollama alternatives lmstudio llama.cpp exllamav2/v3 vllm sglang among many others like literally anything is better than ollama lmao"  
[X Link](https://x.com/TheAhmadOsman/status/1963057701120029182)  2025-09-03T01:53Z 42.4K followers, 128.1K engagements


"do not use Ollama ggerganov wrote blazing-fast C++ inference (ggml llama.cpp) then Ollama wrapped it in a bloated binary and is now somehow the face of local LLMs soaking up VC hype and it's not even a good wrapper lol"  
[X Link](https://x.com/TheAhmadOsman/status/1975517901302993086)  2025-10-07T11:05Z 42.3K followers, 135.1K engagements


"today this guy axes FAIR at Meta so this is a quick recap of his origin story and why he should not be the one making that decision Alexandr Wang born January [----] age [--] drop out of MIT co-found Scale AI "what if we label data but mid" convince every LLM company that this is fine [--------] flood the market with barely-labeled goat photos and out-of-context Reddit takes call it foundational data raise billions valuation hits $7.3B everyone claps [----] sell Scale AI to Meta for $14B not a typo. fourteen. billion. dollars. join Meta as Chief AI Officer rename division to Meta Superintelligence"  
[X Link](https://x.com/TheAhmadOsman/status/1981001726313251224)  2025-10-22T14:16Z 42.5K followers, 1.5M engagements


"MAJOR KV-CACHE MEMORY FIX Fix the KV-cache of GLM-4.7-Flash with this single-line change in vLLM 200K context now take 10GB of VRAM instead of 180GB NVFP4 is now on HF* - 20.4GB weights - Nearly zero loss vs 62.4GB BF16 This SOTA model now runs on a single RTX [----] (32GB VRAM) with the full 200K context VRAM still left over *HF: GadflyII/GLM-4.7-Flash-NVFP4 MASSIVE The year of Local LLMs officially starts with GLM-4.7-Flash by Zhipu AI 30B-A3B MoE built for consumer GPUs runnable from your basement strongest 30B-class release weve ever seen This is THE BEST =70B Ive ever run locally BTW"  
[X Link](https://x.com/TheAhmadOsman/status/2013881920099062163)  2026-01-21T07:50Z 42.3K followers, 95.8K engagements


"me watching Claude Code write the code for me"  
[X Link](https://x.com/TheAhmadOsman/status/2013903185614688382)  2026-01-21T09:15Z 42.4K followers, 235.1K engagements


"HOLY SHIT Samsung just doubled NAND prices - Not 30% - Not gradual - 100% That doesnt happen unless supply is gone and demand is unstoppable This is the memory supercycle people keep underestimating Storage RAM and GPUs will be impacted massively Buy a GPU while you still can https://twitter.com/i/web/status/2015265743814758492 https://twitter.com/i/web/status/2015265743814758492"  
[X Link](https://x.com/TheAhmadOsman/status/2015265743814758492)  2026-01-25T03:29Z 42.5K followers, 27K engagements


"ITS REALLY SIMPLE Want to become a good Software Engineer - Use Linux Want to get good with LLMs - Buy a GPU"  
[X Link](https://x.com/TheAhmadOsman/status/2015308394261827676)  2026-01-25T06:18Z 42.3K followers, 14.1K engagements


"the whole reason to self host IS TO USE A LOCAL LLM so your API keys passwords emails calendar health records business data and everything else are not sent to an API provider like OpenAI OpenRouter or Anthropic Mac minis are NOT GOOD for that BUT A GPU IS Buy a GPU @TheAhmadOsman What is your opinion of OpenClaw on a Mac Mini (I can unplug it) versus on a server instance @TheAhmadOsman What is your opinion of OpenClaw on a Mac Mini (I can unplug it) versus on a server instance"  
[X Link](https://x.com/TheAhmadOsman/status/2017866493996794148)  2026-02-01T07:43Z 42.3K followers, 68.3K engagements


"@qualadder Ive got way more VRAM than that and VRAM is not the same as underutilized Unified Memory throttled by bandwidth if youre curious Ive written about the differences in detail on my site otherwise maybe dont argue about what you havent looked into πŸ™‚ https://x.com/TheAhmadOsman/status/1964869801404420396 My house has [--] GPUs. 21x RTX 3090s 4x RTX 4090s 4x RTX 5090s 4x Tenstorrent Blackhole p150a Before AGI arrives: Acquire GPUs. Go into debt if you must. But whatever you do secure the GPUs. https://t.co/8U89OStknt https://x.com/TheAhmadOsman/status/1964869801404420396 My house has 33"  
[X Link](https://x.com/TheAhmadOsman/status/2018035142300721541)  2026-02-01T18:54Z 42.3K followers, [--] engagements


"MASSIVE Step-3.5-Flash by StepFun Agentic & Coding MONSTER opensource MoE Apache-2.0 runs with full context on 2x RTX PRO 6000/8x RTX 3090s 196B MoE only 11B active per token 256K context via 3:1 sliding window attention long codebases & long tasks cost-efficient long-context benchmarks 74.4% SWE-bench Verified 51.0% Terminal-Bench [---] strong reasoning strong coding stable agents sparse MoE + Top-8 routing with sliding window attention MTP-3 predicts multiple tokens at once [------] tok/s typical peaks [---] tok/s fast enough for parallel agents not just chatting apache-2.0 openweights runs"  
[X Link](https://x.com/TheAhmadOsman/status/2018173810827047231)  2026-02-02T04:05Z 42.3K followers, 42.8K engagements


"Theyre vibecoding Claude Code a little too hard over at Anthropic btw"  
[X Link](https://x.com/TheAhmadOsman/status/2018633084300865720)  2026-02-03T10:30Z 42.4K followers, 17.3K engagements


"openai did it better btw American steel is BACK. https://t.co/gL3xSFUi6B American steel is BACK. https://t.co/gL3xSFUi6B"  
[X Link](https://x.com/TheAhmadOsman/status/2018667032875499820)  2026-02-03T12:45Z 42.3K followers, 12.1K engagements


"Im currently lining up a review of an 8x DGX Spark cluster using a switch for clustering. Ill be breaking down per-node performance scaling behavior as nodes are added and how parallelism actually holds up in practice. If you can wait a few weeks thatll give me more data and a much better basis to speak from. Otherwise with TensorRT-LLM as the inference engine and especially when clustered you can distribute MoEs and get acceptable tokens per second for a batch of [--]. https://twitter.com/i/web/status/2019310010803663021 https://twitter.com/i/web/status/2019310010803663021"  
[X Link](https://x.com/TheAhmadOsman/status/2019310010803663021)  2026-02-05T07:19Z 42.4K followers, [--] engagements


"step-by-step LLM Engineering Projects LOCK IN FOR A FEW WEEKS ON THESE PROJECTS AND YOU WILL BE GRATEFUL FOR IT LATER each project = one concept learned the hard (i.e. real) way Tokenization & Embeddings build byte-pair encoder + train your own subword vocab write a token visualizer to map words/chunks to IDs one-hot vs learned-embedding: plot cosine distances Positional Embeddings classic sinusoidal vs learned vs RoPE vs ALiBi: demo all four animate a toy sequence being position-encoded in 3D ablate positionswatch attention collapse Self-Attention & Multihead Attention hand-wire dot-product"  
[X Link](https://x.com/TheAhmadOsman/status/2019383180659155035)  2026-02-05T12:10Z 42.5K followers, 24.4K engagements
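The "classic sinusoidal" exercise from the project list above can be sketched in dependency-free Python; this is an illustrative toy following the formula from "Attention Is All You Need", not code from the thread:

```python
import math

def sinusoidal_pe(seq_len, d_model):
    """Fixed positional encoding: even dims use sin, odd dims use cos,
    with geometrically spaced frequencies across the embedding."""
    pe = []
    for pos in range(seq_len):
        row = []
        for i in range(d_model):
            # paired dims (2k, 2k+1) share one frequency
            freq = 1.0 / (10000 ** ((i // 2 * 2) / d_model))
            angle = pos * freq
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        pe.append(row)
    return pe
```

Plotting rows of `sinusoidal_pe(128, 64)` is an easy way to do the "animate a toy sequence" exercise before moving on to RoPE and ALiBi.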


"gpt [---] xhigh codex [---] xhigh codex [---] xhigh"  
[X Link](https://x.com/TheAhmadOsman/status/2019903347164131824)  2026-02-06T22:37Z 42.2K followers, 14.3K engagements


"i dont make convictions lightly but once i do i dont hedge dont walk them back every major bet ive made so far has played out exactly as expected this one will too go big or go home always @ReporterWeather next up is raising to build a frontier lab so i can run my experiments at scale rather than tweet about them and wait on Google for [--] months to confirm them 🫑 https://t.co/7ufhrfo31i @ReporterWeather next up is raising to build a frontier lab so i can run my experiments at scale rather than tweet about them and wait on Google for [--] months to confirm them 🫑 https://t.co/7ufhrfo31i"  
[X Link](https://x.com/TheAhmadOsman/status/2020596333607096578)  2026-02-08T20:31Z 42.3K followers, 11.5K engagements


"Qwen3-Coder-Next an 80B MoE benchmarks + real-world experience running on a [--] quad RTX [----] system p.s. dont know the OP and wasnt even tagged but seeing the shoutout to me at the end just out in the wild made me genuinely smile Quick write-up of my experience w/ qwen3-coder-next mostly written by qwen3-coder-next. TLDR: Best model I've been able to run locally (4x 3090). Using @UnslothAI Q5_K_XL. 60+ tok/sec. 256k context is great. Fast skillful and reliable. Very good. https://t.co/lryPqSw3cv Quick write-up of my experience w/ qwen3-coder-next mostly written by qwen3-coder-next. TLDR: Best"  
[X Link](https://x.com/TheAhmadOsman/status/2021060940905763261)  2026-02-10T03:17Z 42.4K followers, 11.7K engagements


"@amatelic93 Way sooner than you think ;) https://x.com/TheAhmadOsman/status/1999942542792822843 some of the new equipment i got to record the video build guides for Buy a GPU multiple builds are planned all the way up to 14x RTX [----] build you requested (do you guys want it with 14x RTX PRO [----] instead) i hope you like them & find helpful in your local AI journeys https://t.co/gF3gKDiFpv https://x.com/TheAhmadOsman/status/1999942542792822843 some of the new equipment i got to record the video build guides for Buy a GPU multiple builds are planned all the way up to 14x RTX [----] build you"  
[X Link](https://x.com/TheAhmadOsman/status/2021163768915398810)  2026-02-10T10:06Z 42.3K followers, [---] engagements


"all going according to plan πŸ₯± I'm breaking down all my tools and apps into CLI versions so I can pass them to AI agents. Also I'm super focused on making the docs AI-friendly I'm breaking down all my tools and apps into CLI versions so I can pass them to AI agents. Also I'm super focused on making the docs AI-friendly"  
[X Link](https://x.com/TheAhmadOsman/status/2021267922899861773)  2026-02-10T17:00Z 42.5K followers, 18.4K engagements


"@firstadopter"  
[X Link](https://x.com/TheAhmadOsman/status/2021287763329364259)  2026-02-10T18:18Z 42.3K followers, [---] engagements


"@vSouthvPawv @martin_casado DMs are open if you wanna invest in this round:)"  
[X Link](https://x.com/TheAhmadOsman/status/2021295775699890357)  2026-02-10T18:50Z 42.4K followers, [---] engagements


"If you dont know me Im extremely stubborn about vision and longterm direction. Kimi K2 once called me high-beta. I read into it and it fits. I commit early take outsized risks dont change course once Im locked in and my bets have a habit of working out"  
[X Link](https://x.com/TheAhmadOsman/status/2021311378846319012)  2026-02-10T19:52Z 42.4K followers, [---] engagements


"@_Paul_de_Souza Called them out on it very early shady https://x.com/i/status/1930944597464654272 Claude Code is so good at night/early morning before they start serving it quantized at 1.58-bit for the masses 🀑 https://x.com/i/status/1930944597464654272 Claude Code is so good at night/early morning before they start serving it quantized at 1.58-bit for the masses 🀑"  
[X Link](https://x.com/TheAhmadOsman/status/2021326117320175791)  2026-02-10T20:51Z 42.4K followers, 13.2K engagements


"Anthropic fangirls need to chill in my replies it was just a question p.s. i appreciate the serious answers and opinions i received but gosh some of them are just worse than Apples fanboys what do people use Opus for nowadays Kimi GLM and MiniMax are overall a better cheaper and faster models Codex is more intelligent as well why would anyone pay Anthropic for a Claude subscription that gets nerfed what do people use Opus for nowadays Kimi GLM and MiniMax are overall a better cheaper and faster models Codex is more intelligent as well why would anyone pay Anthropic for a Claude subscription"  
[X Link](https://x.com/TheAhmadOsman/status/2021399191998939474)  2026-02-11T01:41Z 42.4K followers, [----] engagements


"@Dorialexander Have you tried any other models within the Claude Code harness or is Opus [---] just leaps ahead for synthetic data in general"  
[X Link](https://x.com/TheAhmadOsman/status/2021400422439858602)  2026-02-11T01:46Z 42.4K followers, [----] engagements


"lmaooo theyre vibecoding too hard at microsoft someone check if theyre using codex or claude code asking for a friend yeah pack it up πŸ’€πŸ™ https://t.co/dejJEPtJID yeah pack it up πŸ’€πŸ™ https://t.co/dejJEPtJID"  
[X Link](https://x.com/TheAhmadOsman/status/2021429584412147733)  2026-02-11T03:42Z 42.5K followers, [----] engagements


"i wouldnt be surprised if this isnt even DeepSeek V4 and is just another incremental V3 update its only February and [----] is shaping up to be an incredible year for opensource AI Within the last few minutes DeepSeek has been updated. Knowledge cutoff May [----] context length [--] million tokens. This is likely V4 though it doesn't admit to being one. https://t.co/Aq37bP4ot6 Within the last few minutes DeepSeek has been updated. Knowledge cutoff May [----] context length [--] million tokens. This is likely V4 though it doesn't admit to being one. https://t.co/Aq37bP4ot6"  
[X Link](https://x.com/TheAhmadOsman/status/2021537404885307615)  2026-02-11T10:50Z 42.5K followers, 10.3K engagements


"@test_tm7873 VRAM is worth it IMHO"  
[X Link](https://x.com/TheAhmadOsman/status/2021586370364383455)  2026-02-11T14:05Z 42.4K followers, [--] engagements


"@test_tm7873 @Komputronik_pl Hahaha nice Looking forward to seeing pictures of it installed"  
[X Link](https://x.com/TheAhmadOsman/status/2021602081195311442)  2026-02-11T15:07Z 42.5K followers, [---] engagements


"@BenjaminDEKR Wondering what is Ani's role in the interview process"  
[X Link](https://x.com/TheAhmadOsman/status/2021767410433323411)  2026-02-12T02:04Z 42.3K followers, [--] engagements


"@CdeBurner Wdym This is a Samsung"  
[X Link](https://x.com/TheAhmadOsman/status/2021850437297549315)  2026-02-12T07:34Z 42.5K followers, [---] engagements


"local llms [---] running a model = inference (using model weights) inference = predicting the next token based on your input plus all tokens generated so far together these make up the "sequence" tokens words they're the chunks representing the text a model sees they are represented by integers (token IDs) in the model "tokenizer" = the algorithm that splits text into tokens common types: BPE (byte pair encoding) SentencePiece token examples: "hello" = [--] token or maybe [--] or [--] tokens "internationalization" = [--] tokens context window = max tokens model can "see" at once (2K 8K 32K+) longer"  
[X Link](https://x.com/TheAhmadOsman/status/1997004444265820654)  2025-12-05T18:05Z 42.5K followers, 103.9K engagements
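The BPE step described in the post above can be sketched in a few lines of pure Python. This is an illustrative toy (character-level, greedy merges, toy ID assignment), not any production tokenizer:

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Return the most common adjacent symbol pair, or None if empty."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get) if pairs else None

def bpe_merges(text, num_merges):
    """Greedy byte-pair encoding: repeatedly fuse the most frequent
    adjacent pair. Starts from characters (real BPE starts from bytes)."""
    tokens = list(text)
    merges = []
    for _ in range(num_merges):
        pair = most_frequent_pair(tokens)
        if pair is None:
            break
        merges.append(pair)
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
                merged.append(tokens[i] + tokens[i + 1])  # fuse the pair
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    # map each surviving token string to an integer ID, as a tokenizer would
    vocab = {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}
    return tokens, merges, vocab
```

Running this on repeated text shows why common strings collapse into single tokens while rare words stay split into several, which is the "hello = 1 token, internationalization = several tokens" effect the post mentions.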


"Genuine advice If you need ANY hardware BUY IT NOW - Phones - Laptops - Computer parts Hardware prices are about to get ridiculous I just bought my wife a new MacBook & iPhone Im not trying to flex just getting ahead of the supply shock before the prices get wild"  
[X Link](https://x.com/TheAhmadOsman/status/2012342662225920123)  2026-01-17T01:54Z 42.5K followers, 1.1M engagements


"There are maybe 20-25 papers that matter. Implement those and youve captured 90% of the alpha behind modern LLMs. Everything else is garnish. You want that list Keep reading ;) The Top [--] Essential Papers (+5 Bonus Resources) for Mastering LLMs and Transformers This list bridges the Transformer foundations with the reasoning MoE and agentic shift Recommended Reading Order [--]. Attention Is All You Need (Vaswani et al. 2017) The original Transformer paper. Covers self-attention multi-head attention and the encoder-decoder structure (even though most modern LLMs are decoder-only.) [--]. The"  
[X Link](https://x.com/TheAhmadOsman/status/2017895136613507507)  2026-02-01T09:37Z 42.5K followers, 46.6K engagements


"I keep gettings DMs from people asking why I focus on fundamentals instead of agents or shiny products Shortcuts dont compound - Models are still improving - Agents come and go - Frameworks churn - Products age fast - We dont know what the next-gen models unlock Fundamentals stick Architectures & models Inference Memory Hardware Latency Failure modes When you understand the stack end-2-end you can build anything on top - Agents - Products - Companies - Labs When you dont youre gluing demos together hoping the abstraction doesnt crack Im not optimizing for the next launch Im optimizing for the"  
[X Link](https://x.com/TheAhmadOsman/status/2018563457939861897)  2026-02-03T05:53Z 42.5K followers, 31.3K engagements


"Dropped some cash tonight on networking gear including a [---] Tb/s switch for a new AI hardware cluster experiment Had fun going deep down the networking rabbit hole tonight Cant wait to put it all together and share it with you guys"  
[X Link](https://x.com/TheAhmadOsman/status/2021162287554249051)  2026-02-10T10:00Z 42.5K followers, [----] engagements


"@draslan_eth Been strictly KDE + Wayland for my NVIDIA multi-monitor setup but now that Im moving to one massive panel I might give Hyprland another shot Funny enough I remember telling @yacineMTB last winter that this exact monitor was the dream. a year later it happened :')"  
[X Link](https://x.com/TheAhmadOsman/status/2021841132854952441)  2026-02-12T06:57Z 42.5K followers, [---] engagements


"Theres a reason I put ByteDance in the Top [--] alongside Google and Nvidia in the race to AGI / ASI Not DeepSeek Not anyone else ByteDance Ive been extremely bullish on them since late [----] for a lot of different reasons Arnaud is putting light on some of it Many people aren't aware that Seedance the insanely good new AI video generation tool is made by Bytedance TikTok's parent company (well if one excludes TikTok U.S. now.). As I wrote [--] weeks ago (https://t.co/sxc0UAC6Bx) Bytedance is now - by far - the world's largest AI Many people aren't aware that Seedance the insanely good new AI video"  
[X Link](https://x.com/TheAhmadOsman/status/2021857273472340031)  2026-02-12T08:01Z 42.5K followers, [----] engagements


"@alquemir2 brainrot videos are about to get an incredible upgrade ngl"  
[X Link](https://x.com/TheAhmadOsman/status/2022090629560651817)  2026-02-12T23:29Z 42.5K followers, [---] engagements


"@tlanderso Why do you think I-mostly-say Buy a GPU and not a Mac Studio I own Mac Studios theyre great for quick small batch requests with low context on very large models For real workloads though the GPU does the heavy lifting; thankfully we have very intelligent smallish models now"  
[X Link](https://x.com/TheAhmadOsman/status/2022162333779603503)  2026-02-13T04:14Z 42.5K followers, [---] engagements


"a new Agentic model that can run on a single consumer GPU at home: ByteDance Seed OSS 36B very strong at coding excellent at multi-turn tool calling & Agentic tasks 500k context window as i have been saying Bytedance is Tier S"  
[X Link](https://x.com/TheAhmadOsman/status/1958559013735477589)  2025-08-21T15:57Z 42.5K followers, 100.5K engagements


"My house has [--] GPUs. 21x RTX 3090s 4x RTX 4090s 4x RTX 5090s 4x Tenstorrent Blackhole p150a Before AGI arrives: Acquire GPUs. Go into debt if you must. But whatever you do secure the GPUs"  
[X Link](https://x.com/TheAhmadOsman/status/1964869801404420396)  2025-09-08T01:54Z 42.5K followers, 1.8M engagements


"calling it now bookmark this for later: - opensource AI will win - AGI will run local not on someone elses server - the real ones are already learning how it works be early Buy a GPU get ur hands dirty learn how it works youll thank yourself its gonna be great"  
[X Link](https://x.com/TheAhmadOsman/status/1988510607084048794)  2025-11-12T07:34Z 42.5K followers, 187.6K engagements


"i have fully dropped Claude Code for OpenCode i dont use Opus [---] i use GLM-4.7 and MiniMax-M2.1 theyre opensource and can be self-hosted nobody can nerf my models or rug pull me nobody should be able to do that to your intelligence p.s. buy a GPU and run your LLMs locally"  
[X Link](https://x.com/TheAhmadOsman/status/2009730047452623113)  2026-01-09T20:52Z 42.5K followers, 361.7K engagements


"MASSIVE The year of Local LLMs officially starts with GLM-4.7-Flash by Zhipu AI 30B-A3B MoE built for consumer GPUs runnable from your basement strongest 30B-class release weve ever seen This is THE BEST =70B Ive ever run locally BTW Architecture DeepSeek-style MLA attention slim MoE routing 30B total params 4B active [--] experts total [--] active (incl. shared) Depth & intent roughly GLM-4.5-Air class but tuned harder for locality Benchmarks SWE-bench Verified GLM-4.7-Flash: [----] Qwen3-30B-A3B: [----] GPT-OSS-20B: [----] Nemotron-3-Nano-30B-A3B: [----] not the same universe -Bench GLM-4.7-Flash: 79.5"  
[X Link](https://x.com/TheAhmadOsman/status/2013347275192365251)  2026-01-19T20:26Z 42.5K followers, 140.6K engagements


"Prediction We will have Claude Code + Opus [---] quality (not nerfed) models running locally at home on a single RTX PRO [----] before the end of the year"  
[X Link](https://x.com/TheAhmadOsman/status/2015851366187491475)  2026-01-26T18:16Z 42.5K followers, 160.7K engagements


"There are maybe 20-25 papers that matter. Implement those and youve captured 90% of the alpha behind modern LLMs. Everything else is garnish"  
[X Link](https://x.com/TheAhmadOsman/status/2016837220951310780)  2026-01-29T11:33Z 42.5K followers, 245.3K engagements


"The Top [--] Essential Papers (+5 Bonus Resources) for Mastering LLMs and Transformers This list bridges the Transformer foundations with the reasoning MoE and agentic shift Recommended Reading Order [--]. Attention Is All You Need (Vaswani et al. 2017) The original Transformer paper. Covers self-attention multi-head attention and the encoder-decoder structure (even though most modern LLMs are decoder-only.) [--]. The Illustrated Transformer (Jay Alammar 2018) Great intuition builder for understanding attention and tensor flow before diving into implementations [--]. BERT: Pre-training of Deep"  
[X Link](https://x.com/TheAhmadOsman/status/2016893734986616915)  2026-01-29T15:18Z 42.5K followers, 116.6K engagements


"INCREDIBLE SPEED running Claude Code w/ local models on my own GPUs at home SGLang serving MiniMax-M2.1 on 8x RTX 3090s nvtop showing live GPU load Claude Code generating code + docs end-2-end on my AI cluster MiniMax-M2.1 is my favorite model to run locally nowadays"  
[X Link](https://x.com/TheAhmadOsman/status/2017320051980808695)  2026-01-30T19:32Z 42.5K followers, 587.1K engagements


"@AlexFinn Now give Henry some GPUs and see how much he cooks with unlimited fast tokens https://x.com/i/status/2017320051980808695 INCREDIBLE SPEED running Claude Code w/ local models on my own GPUs at home SGLang serving MiniMax-M2.1 on 8x RTX 3090s nvtop showing live GPU load Claude Code generating code + docs end-2-end on my AI cluster MiniMax-M2.1 is my favorite model to run locally nowadays https://t.co/bXFtDp3nji https://x.com/i/status/2017320051980808695 INCREDIBLE SPEED running Claude Code w/ local models on my own GPUs at home SGLang serving MiniMax-M2.1 on 8x RTX 3090s nvtop showing"  
[X Link](https://x.com/TheAhmadOsman/status/2017340325195391382)  2026-01-30T20:53Z 42.5K followers, 526.9K engagements


"there is a lot of MONEY here teach this to your Clawdbot/Moltbot/OpenClaw add /.json at the end of any Reddit link get the full thread all replies to n-th depth all metadata as JSON feed to LLMs to extract/analyze you can make so much $$$ from niche subreddits"  
[X Link](https://x.com/TheAhmadOsman/status/2017809819147661449)  2026-02-01T03:58Z 42.5K followers, 161.1K engagements
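The Reddit trick in the post is a simple URL rewrite. A minimal sketch in stdlib Python (the subreddit and thread path below are made-up examples; real use should also respect Reddit's API terms and rate limits):

```python
from urllib.parse import urlsplit, urlunsplit

def reddit_json_url(thread_url: str) -> str:
    """Rewrite a Reddit thread URL to its JSON endpoint by appending
    .json to the path (query string and fragment are dropped here)."""
    parts = urlsplit(thread_url)
    path = parts.path.rstrip("/") + ".json"
    return urlunsplit((parts.scheme, parts.netloc, path, "", ""))
```

The returned JSON contains the full comment tree with metadata, which can then be flattened and fed to an LLM for extraction or analysis.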


"just a gentle reminder that nobody should use ollama slower than llama.cpp on windows slower than mlx on mac slop useless wrapper literal code thieves alternatives lmstudio llama.cpp exllamav2/v3 vllm sglang like literally anythings better than ollama lmao Fucking killed them Lmao. https://t.co/FVFUA2BXor Fucking killed them Lmao. https://t.co/FVFUA2BXor"  
[X Link](https://x.com/TheAhmadOsman/status/2019151134284251581)  2026-02-04T20:48Z 42.5K followers, 49.7K engagements


"I am not gonna get nerdsniped by Codex [---] or Opus [---] jumps in performance seem very marginal Will just keep using Kimi K2.5 GLM [---] and MiniMax-M2.1 until the next SOTA drops"  
[X Link](https://x.com/TheAhmadOsman/status/2019517909031350389)  2026-02-05T21:06Z 42.5K followers, 34.8K engagements


"MASSIVE Qwen [---] PR just landed in the Hugging Face Transformers repo dense + MoE variants both variants SUPPORT text + image & video hybrid attention default pattern: linear attention on most layers full attention every 4th layer gated DeltaNet under the hood gated DeltaNet chunked gated-delta rule long context without KV cache bloat Qwen3_5DynamicCache unified cache handles KV + recurrent states together model variants 9B dense: [--] layers hidden [----] / [--] heads / [--] KV heads 35B A3B MoE: [--] layers [---] experts [--] active per token hidden [----] / [--] heads / [--] KV heads MoE router top-8 routing 256"  
[X Link](https://x.com/TheAhmadOsman/status/2020397102099005756)  2026-02-08T07:19Z 42.5K followers, 37.6K engagements
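The hybrid attention schedule the post describes (linear attention on most layers, full attention every 4th) can be sketched as a layer pattern. The 1-indexed placement below is an assumption for illustration, not the actual Qwen config:

```python
def attention_pattern(num_layers, full_every=4):
    """Hybrid schedule: 'linear' attention on most layers, 'full'
    attention on every full_every-th layer (1-indexed placement)."""
    return ["full" if (i + 1) % full_every == 0 else "linear"
            for i in range(num_layers)]
```

With mostly linear-attention layers, only the sparse full-attention layers accumulate a conventional KV cache, which is the "long context without KV cache bloat" point.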


"any cs person can go from zero to deeply knowledgeable in llms and ai in [--] years top to bottom key topics on how llms work: tokenization and embeddings positional embeddings (absolute rope alibi) self attention and multihead attention transformers qkv sampling params: temperature top-k top-p kv cache (and why inference is fast) infini attention & sliding window (long context tricks) mixture of experts (moe routing layers) grouped query attention normalization and activations pretraining objectives (causal masked etc) finetuning vs instruction tuning vs rlhf scaling laws and model capacity"  
[X Link](https://x.com/TheAhmadOsman/status/2020433115584335949)  2026-02-08T09:42Z 42.5K followers, 45.7K engagements
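The "sampling params" topic from the list above (temperature, top-k) fits in a few lines of stdlib Python; this is a toy over raw logits, not any library's sampler, and top-p is omitted for brevity:

```python
import math
import random

def sample_next(logits, temperature=1.0, top_k=None):
    """Temperature + top-k sampling over a list of logits.
    Returns the sampled token index."""
    scaled = [l / temperature for l in logits]
    if top_k is not None:
        # keep only the top_k highest logits; mask the rest out
        cutoff = sorted(scaled, reverse=True)[min(top_k, len(scaled)) - 1]
        scaled = [l if l >= cutoff else float("-inf") for l in scaled]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]   # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]
```

Lower temperature sharpens the distribution toward the argmax; `top_k=1` makes it fully greedy.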


"you are a person who wants to understand llm inference you read papers we use standard techniques which ones where is the code open vllm 100k lines of c++ and python custom cuda kernel for printing close tab now you have this tweet and mini-sglang 5k lines of python actual production features four processes api server tokenizer scheduler detokenizer talk over zeromq simple scheduler is the boss receives requests decides: prefill or decode batches them sends work to gpu prefill process the prompt compute heavy thousands of tokens at once flash attention does the lifting decode generate new"  
[X Link](https://x.com/TheAhmadOsman/status/2020451094665494901)  2026-02-08T10:54Z 42.5K followers, 66.8K engagements
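The scheduler loop the post describes (admit waiting requests for prefill, then batch decode one token per running request) can be boiled down to a toy single-process sketch. No real model, ZeroMQ, or memory budget here; every name below is illustrative:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Request:
    prompt: list                      # token IDs, prefillled on admission
    max_new: int                      # tokens to generate before finishing
    out: list = field(default_factory=list)

class ToyScheduler:
    """Toy continuous-batching loop: waiting requests are admitted
    (prefill), then every running request decodes one token per step."""
    def __init__(self):
        self.waiting = deque()
        self.running = []

    def submit(self, req):
        self.waiting.append(req)

    def step(self):
        # prefill: admit all waiting requests (real schedulers budget this
        # against KV-cache memory and prompt length)
        while self.waiting:
            self.running.append(self.waiting.popleft())
        # decode: one stand-in token per running request per step
        for req in self.running:
            req.out.append(len(req.out))
        # retire finished requests
        self.running = [r for r in self.running if len(r.out) < r.max_new]
```

The real systems differ mainly in what `step()` budgets: prefill is compute-bound over thousands of prompt tokens, decode is memory-bandwidth-bound over one token per sequence.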


"GPUs are still the move for agents The video below shows MiniMax-M2.1 running fully local on 8x RTX 3090s ($6K total) Prompt processed at [----] tokens/sec Output starts [---] tokens/sec and settles in around [--] tokens/sec even at the end"  
[X Link](https://x.com/TheAhmadOsman/status/2020467641886872021)  2026-02-08T11:59Z 42.5K followers, 33.1K engagements


"me watching Claude Code create swarms of agents to write the code for me"  
[X Link](https://x.com/TheAhmadOsman/status/2020726730311315885)  2026-02-09T05:09Z 42.5K followers, 58K engagements


"this is the way"  
[X Link](https://x.com/TheAhmadOsman/status/2020809530053968265)  2026-02-09T10:38Z 42.5K followers, 37.8K engagements


"A frontier opensource lab in the West will be born this year. Zero doubt. It requires serious capital like Ive said before. Working on it. One day Ill tell the story of how it started in a basement and ended at the frontier. @TheAhmadOsman @martin_casado Well stop screwing around and save western open source already We need a USA SWE model that won't be a national embarrasment in [--] months. @TheAhmadOsman @martin_casado Well stop screwing around and save western open source already We need a USA SWE model that won't be a national embarrasment in [--] months"  
[X Link](https://x.com/TheAhmadOsman/status/2021301636979695892)  2026-02-10T19:13Z 42.5K followers, 18.8K engagements


"what do people use Opus for nowadays Kimi GLM and MiniMax are overall a better cheaper and faster models Codex is more intelligent as well why would anyone pay Anthropic for a Claude subscription that gets nerfed"  
[X Link](https://x.com/TheAhmadOsman/status/2021323875242357233)  2026-02-10T20:42Z 42.5K followers, 94.4K engagements


"GLM-5 is out Pay attention to this week its going to set the tone for opensource AI discourse for the next few months Its going to be a long night. Pony is so back. https://t.co/vAuXp9ECJF Its going to be a long night. Pony is so back. https://t.co/vAuXp9ECJF"  
[X Link](https://x.com/TheAhmadOsman/status/2021567708945646049)  2026-02-11T12:51Z 42.5K followers, 48.4K engagements


"his comment convinced my parents to allow me to get the expensive 5070ti the kids are gonna be alright hahaha Ok I ordered a 5070ti. Thanks to @Komputronik_pl for that good deal. (cus I seen some 5070tis for 5k z) and also thanks a lot to @TheAhmadOsman cus his comment convinced my parents to allow me to get the expensive 5070ti πŸ˜† https://t.co/wDd1TgrERp Ok I ordered a 5070ti. Thanks to @Komputronik_pl for that good deal. (cus I seen some 5070tis for 5k z) and also thanks a lot to @TheAhmadOsman cus his comment convinced my parents to allow me to get the expensive 5070ti πŸ˜†"  
[X Link](https://x.com/TheAhmadOsman/status/2021743442942959727)  2026-02-12T00:29Z 42.5K followers, [----] engagements


"we have opensource Opus [---] at home now Zhipu AI cooked with GLM-5"  
[X Link](https://x.com/TheAhmadOsman/status/2021783484071604330)  2026-02-12T03:08Z 42.5K followers, 12.1K engagements


"@lexfridman @steipete Lex I genuinely think its time to explore local AI and self-hosted LLMs Your audience would really benefit from understanding why running AI locally matters and what that unlocks in terms of control privacy and longterm leverage https://x.com/i/status/2012583381611999387 https://t.co/ZTtfh6iLJa https://x.com/i/status/2012583381611999387 https://t.co/ZTtfh6iLJa"  
[X Link](https://x.com/TheAhmadOsman/status/2021796561030623615)  2026-02-12T04:00Z 42.5K followers, [----] engagements


"This is the DoorDash era of LLMs Remember when the margins werent there and they were subsidizing the whole thing even paying drivers regardless of your tip Thats where we are with AI right now Before that era ends make sure youve secured your own GPU for at-home tokens"  
[X Link](https://x.com/TheAhmadOsman/status/2021823169972089265)  2026-02-12T05:46Z 42.5K followers, 19.1K engagements


"Just pulled the trigger on this beauty. Before AGI arrives: Buy a Dual UHD. Go into debt if you have to. Sell both kidneys if you must. But whatever you do secure the Dual UHD. P.S. Might be sleeping on the couch but $2300 $1500 w/ $120 gift card was too good to pass. My house has [--] GPUs. 21x RTX 3090s 4x RTX 4090s 4x RTX 5090s 4x Tenstorrent Blackhole p150a Before AGI arrives: Acquire GPUs. Go into debt if you must. But whatever you do secure the GPUs. https://t.co/8U89OStknt My house has [--] GPUs. 21x RTX 3090s 4x RTX 4090s 4x RTX 5090s 4x Tenstorrent Blackhole p150a Before AGI arrives:"  
[X Link](https://x.com/TheAhmadOsman/status/2021838429634338896)  2026-02-12T06:47Z 42.5K followers, 37.6K engagements


"BREAKING Elon Musk endorsed my Top [--] Essential Papers for Mastering LLMs and Transformers There are maybe 20-25 papers that matter. Implement those and youve captured 90% of the alpha behind modern LLMs. Everything else is garnish. You want that list Look no more. The Top [--] Essential Papers (+5 Bonus Resources) for Mastering LLMs and Transformers This list There are maybe 20-25 papers that matter. Implement those and youve captured 90% of the alpha behind modern LLMs. Everything else is garnish. You want that list Look no more. The Top [--] Essential Papers (+5 Bonus Resources) for Mastering"  
[X Link](https://x.com/TheAhmadOsman/status/2021877095677215180)  2026-02-12T09:20Z 42.5K followers, 140.8K engagements


"Genuine advice If you need ANY hardware BUY IT NOW - Phones - Laptops - Computer parts Hardware prices are about to get ridiculous I just bought my wife a new MacBook & iPhone Im not trying to flex just getting ahead of the supply shock before the prices get wild"  
[X Link](https://x.com/TheAhmadOsman/status/2021941538327343315)  2026-02-12T13:36Z 42.5K followers, 14.9K engagements


"@jukan05 Cry me a river you pirated humanitys knowledge and trained your models on it"  
[X Link](https://x.com/TheAhmadOsman/status/2022091147557163146)  2026-02-12T23:31Z 42.5K followers, 27.3K engagements


"Join us tomorrow on r/LocalLLaMA for an AMA with The Founder and The Core Team behind MiniMax-M2.5 SoTA model Very excited for this one make sure not to miss it Friday 8am-11am PST"  
[X Link](https://x.com/TheAhmadOsman/status/2022132966265434621)  2026-02-13T02:17Z 42.5K followers, 22.7K engagements


"don't miss out this AMA w/ MiniMax Founder & Core Team tomorrow morning https://x.com/TheAhmadOsman/status/2022132966265434621 Join us tomorrow on r/LocalLLaMA for an AMA with The Founder and The Core Team behind MiniMax-M2.5 SoTA model Very excited for this one make sure not to miss it Friday 8am-11am PST https://t.co/85F5YejrbG https://x.com/TheAhmadOsman/status/2022132966265434621 Join us tomorrow on r/LocalLLaMA for an AMA with The Founder and The Core Team behind MiniMax-M2.5 SoTA model Very excited for this one make sure not to miss it Friday 8am-11am PST https://t.co/85F5YejrbG"  
[X Link](https://x.com/TheAhmadOsman/status/2022142503525536102)  2026-02-13T02:55Z 42.5K followers, [----] engagements


"another RTX PRO [----] Blackwell Workstation Edition secured Buy a GPU keeps on winning p.s. my AI Syndicate gc is fully GPUpilled hahaha @TheAhmadOsman Bout to drop on the blackwell king 🀴 Thanks to your inspiration ive committed and will run my own rest api for local models @TheAhmadOsman Bout to drop on the blackwell king 🀴 Thanks to your inspiration ive committed and will run my own rest api for local models"  
[X Link](https://x.com/TheAhmadOsman/status/2022158786534969464)  2026-02-13T03:59Z 42.5K followers, [----] engagements


"RT @MiniMax_AI: Joinour AMA tmr on r/LocalLLaMA Bring your wildest questions and well be dropping some bonuses tooπŸ‘€ Let's talk about M2"  
[X Link](https://x.com/anyuser/status/2022186597698224453)  2026-02-13T05:50Z 42.5K followers, [--] engagements


"@StartupSpells for a smart and capable agent for 98% of things pretty much very very fast for how intelligent it is"  
[X Link](https://x.com/TheAhmadOsman/status/2022233731529076970)  2026-02-13T08:57Z 42.5K followers, [---] engagements


"@johntheyoung I havent cared much for Opus since GLM [---] and MiniMax M2.1 Now Id say Codex is smarter but way slower but I dont need that kind of smart 99% of the time so Id take fast and iterate with MiniMax M2.5 over it"  
[X Link](https://x.com/TheAhmadOsman/status/2022236176632230337)  2026-02-13T09:07Z 42.5K followers, [----] engagements


"@JamesLee1033176 Yeah you should be able to 100% Did Buy a GPU have something to do with the purchase and acquisition of those 4x RTX PRO 6000s"  
[X Link](https://x.com/TheAhmadOsman/status/2022241058885976408)  2026-02-13T09:26Z 42.5K followers, [---] engagements


"RT @MiniMax_AI: Weights dropping REALLY REALLY SOON"  
[X Link](https://x.com/TheAhmadOsman/status/2022246197680021668)  2026-02-13T09:47Z 42.5K followers, [--] engagements


"AMA with the MiniMax team is now live Join us tomorrow on r/LocalLLaMA for an AMA with The Founder and The Core Team behind MiniMax-M2.5 SoTA model Very excited for this one make sure not to miss it Friday 8am-11am PST https://t.co/85F5YejrbG Join us tomorrow on r/LocalLLaMA for an AMA with The Founder and The Core Team behind MiniMax-M2.5 SoTA model Very excited for this one make sure not to miss it Friday 8am-11am PST https://t.co/85F5YejrbG"  
[X Link](https://x.com/TheAhmadOsman/status/2022343172249313571)  2026-02-13T16:12Z 42.5K followers, [----] engagements


"https://www.reddit.com/r/LocalLLaMA/s/ze7DbcmDhP https://www.reddit.com/r/LocalLLaMA/s/ze7DbcmDhP"  
[X Link](https://x.com/TheAhmadOsman/status/2022343733065461811)  2026-02-13T16:14Z 42.5K followers, [---] engagements


"is that Dario Chinese open weights scare him that much"  
[X Link](https://x.com/TheAhmadOsman/status/2022509410946527460)  2026-02-14T03:13Z 42.5K followers, [----] engagements


"RIDICULOUS Seedance [---] produced this using TWO SENTENCES prompt Sum up the AI discourse in a meme - make sure its retarded and gets [--] likes"  
[X Link](https://x.com/TheAhmadOsman/status/2022536290546389081)  2026-02-14T05:00Z 42.5K followers, 30.7K engagements


"if what youre working on right now doesnt scream TOO BIG TOO CRAZY and TOO RIDICULOUS then youre not seeing far enough into the future"  
[X Link](https://x.com/TheAhmadOsman/status/2022555847000236237)  2026-02-14T06:17Z 42.5K followers, [----] engagements


"@xlr8harder Let me guess Anthropic is gonna be the SoTA at it πŸ˜†"  
[X Link](https://x.com/TheAhmadOsman/status/2022570954115698788)  2026-02-14T07:17Z 42.5K followers, [---] engagements


Top accounts mentioned or mentioned by @alexfinn @test_tm7873 @vsouthvpawv @sentdex @grok @brandgrowthos @testtm7873 @llmjunky @robbiepasquale @annanidev @zenmagnets @dusveloper @minimaxai @cdeburner @udaysy @sudoingx @narmourism @draslan_eth @codewithimanshu @ryzenbr

Top assets mentioned Alphabet Inc Class A (GOOGL) Flex Ltd. Ordinary Shares (FLEX)

Top Social Posts

Top posts by engagements in the last [--] hours

"lol lmao even ollama are lying through their teeth in this reply to me next tweet i'll show the llama cpp merge for gpt-oss to ollama some comments on the merge calling them out llama cpp developer remarks @RobbiePasquale @TheAhmadOsman All the new models are implemented directly in Ollama by Ollama. We dont like it when people spread false information. Examples of Ollamas implementations: Google EmbeddingGemma - https://t.co/cXjfxkQvof OpenAI gpt-oss - https://t.co/Kx0SD1unn1 You can check out the @RobbiePasquale @TheAhmadOsman All the new models are implemented directly in Ollama by Ollama."
X Link 2025-09-07T04:29Z 41.8K followers, 54.1K engagements

"there is a lot of MONEY in this add /.json at the end of any Reddit link and get the entire thread including all replies to the n-th depth and all the metadata as JSON and then use LLMs to extract/analyze/etc you can make so much $$$ from niche subreddits"
X Link 2025-09-07T06:55Z 42.2K followers, 1.1M engagements
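The trick in the post above can be sketched in a few lines: append `.json` to a Reddit thread URL, fetch it, and walk the nested comment tree. This is an illustrative sketch, not the author's tooling; the URL handling and field names follow Reddit's public JSON listing format (`t1` = comment), and it skips pagination via `more` objects.

```python
import json
import urllib.request

def fetch_thread(url: str):
    """Fetch a Reddit thread as JSON. `url` should end in `.json`.
    Reddit returns a list: [post listing, comment listing]."""
    req = urllib.request.Request(url, headers={"User-Agent": "demo-script/0.1"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def walk_comments(listing, depth=0, out=None):
    """Recursively collect (depth, author, body) from a comment listing,
    descending into nested replies to arbitrary depth."""
    if out is None:
        out = []
    for child in listing.get("data", {}).get("children", []):
        data = child.get("data", {})
        if child.get("kind") == "t1":  # "t1" is Reddit's comment type
            out.append((depth, data.get("author"), data.get("body")))
            replies = data.get("replies")
            if isinstance(replies, dict):  # empty string when no replies
                walk_comments(replies, depth + 1, out)
    return out
```

From there, each `(depth, author, body)` tuple can be fed to an LLM for extraction or analysis as the post suggests.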

"Vibe coding is the new prompt engineering. A couple years ago it was all prompts. Now its layers of abstractions. Memory skills MCPs and a thousand other things stacked on top of each other. Its productivity theater. Fundamentals still decide who ships and who just vibes"
X Link 2025-12-30T01:18Z 41.8K followers, 25.4K engagements

"Everyone on my feed is talking about local models and buying GPUs/Macs to run them This is the good timeline so glad things are playing out the way they are"
X Link 2026-01-24T00:39Z 41.8K followers, 122.9K engagements

"calling it now bookmark this for later - opensource AI will win - AGI will run local not on someone elses servers - the real ones are learning how it all works be early Buy a GPU get ur hands dirty learn how it works youll thank yourself later its gonna be great Everyone on my feed is talking about local models and buying GPUs/Macs to run them This is the good timeline so glad things are playing out the way they are Everyone on my feed is talking about local models and buying GPUs/Macs to run them This is the good timeline so glad things are playing out the way they are"
X Link 2026-01-24T05:37Z 41.5K followers, 125K engagements

"POV: you bought GPUs memory and SSDs early and now youre just vibing while everyone else is in line"
X Link 2026-01-25T01:23Z 41.5K followers, 18.3K engagements

"People ask why I insist on GPUs and not Mac Studios/Mac minis This is why: - Llama [---] 70B BF16 on 8x RTX 3090s - 50+ concurrent requests - Batch inference - Sustained throughput Not only that: 2k context per request (prompt) 1.8k tokens in output [--] mins [--] secs for [--] responses This is GPU territory. You cant do this on a Mac. Not yet at least. https://twitter.com/i/web/status/2015323752985395223 https://twitter.com/i/web/status/2015323752985395223"
X Link 2026-01-25T07:20Z 41.7K followers, 80.6K engagements

"BULLISH on NVFP4 What actually changes once the software stack catches up - 3-4x VRAM savings vs FP16 - Lower memory bandwidth pressure - Better perf per watt - Cheaper local inference How come Smaller weights with loseless accuracy Bigger models fit on consumer GPUs Less VRAM needed & more throughput Once NVFP4 becomes the default local AI gets faster cheaper and a lot less compromised https://twitter.com/i/web/status/2015591982890910071 https://twitter.com/i/web/status/2015591982890910071"
X Link 2026-01-26T01:05Z 41.5K followers, 10.5K engagements

"I get way more mentions & DMs than I can realistically keep up with To manage signal vs noise I prioritize the Subscribed tab If you want a much higher chance of me seeing & replying Subscribing is the best way to do that No pressure just being transparent about how I triage"
X Link 2026-01-26T01:53Z 41.8K followers, 16.7K engagements

"best opensource LLM at the moment is Kimi K2.5"
X Link 2026-01-27T17:27Z 41.5K followers, 31.1K engagements

"nobody should use ollama btw slower than llama.cpp on windows slower than mlx on mac slop useless wrapper alternatives lmstudio llama.cpp exllamav2/v3 vllm sglang like literally anythings better than ollama lmao"
X Link 2026-01-28T03:01Z 41.7K followers, 113.1K engagements

"- local llms [---] - running a model = inference (using model weights) - inference = predicting the next token based on your input plus all tokens generated so far - together these make up the "sequence" - tokens words - they're the chunks representing the text a model sees - they are represented by integers (token IDs) in the model - "tokenizer" = the algorithm that splits text into tokens - common types: BPE (byte pair encoding) SentencePiece - token examples: - "hello" = [--] token or maybe [--] or [--] tokens - "internationalization" = [--] tokens - context window = max tokens model can "see" at once"
X Link 2026-01-28T06:28Z 41.6K followers, 26.8K engagements
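The "local llms 101" notes above can be made concrete with a toy tokenizer: text is split into pieces from a fixed vocabulary, and the model only ever sees the resulting integer IDs. The vocabulary and splits below are made up for illustration; real tokenizers (BPE, SentencePiece) learn their subword vocabularies from data.

```python
def toy_tokenize(text: str, vocab: dict) -> list:
    """Greedy longest-match tokenization against a fixed vocab,
    returning the integer token IDs the model would actually see."""
    ids = []
    i = 0
    while i < len(text):
        # try the longest substring in the vocab starting at position i
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                ids.append(vocab[piece])
                i = j
                break
        else:
            raise ValueError(f"no token for {text[i]!r}")
    return ids

# Hypothetical subword vocab, mirroring the post's examples
vocab = {"inter": 0, "national": 1, "ization": 2, "hello": 3, " ": 4}
print(toy_tokenize("hello", vocab))                 # [3]  -> one token
print(toy_tokenize("internationalization", vocab))  # [0, 1, 2] -> three tokens
```

This matches the post's point: "hello" can be a single token while "internationalization" splits into several, and the context window is a cap on how many such IDs the model sees at once.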

"Join us today on r/LocalLLaMA for an AMA with Moonshot AI the lab behind the recent SoTA model Kimi K2.5 I am genuinely excited for this one make sure you don't miss it Wednesday 8am-11am PST"
X Link 2026-01-28T08:03Z 42.1K followers, 25.4K engagements

"running Claude Code w/ local models on my own GPUs at home vLLM serving GLM-4.5 Air on 4x RTX 3090s nvtop showing live GPU load Claude Code generating code + docs end-to-end on my AI cluster this is what local AI actually looks like Buy a GPU"
X Link 2026-01-28T10:19Z 41.5K followers, 92.8K engagements

"step-by-step LLM Engineering Projects LOCK IN FOR A FEW WEEKS ON THESE PROJECTS AND YOU WILL BE GRATEFUL FOR IT LATER each project = one concept learned the hard (i.e. real) way Tokenization & Embeddings build byte-pair encoder + train your own subword vocab write a token visualizer to map words/chunks to IDs one-hot vs learned-embedding: plot cosine distances Positional Embeddings classic sinusoidal vs learned vs RoPE vs ALiBi: demo all four animate a toy sequence being position-encoded in 3D ablate positionswatch attention collapse Self-Attention & Multihead Attention hand-wire dot-product"
X Link 2026-01-28T14:30Z 41.7K followers, 27.3K engagements

"I am Bullish on NVFP4 What actually changes once the software stack catches up - 3-4x VRAM savings vs FP16 - Lower memory bandwidth pressure - Better perf per watt - Cheaper local inference How come Smaller weights with loseless accuracy Bigger models fit on consumer GPUs Less VRAM needed & more throughput Once NVFP4 becomes the default local AI gets faster cheaper and a lot less compromised https://twitter.com/i/web/status/2016588838999674899 https://twitter.com/i/web/status/2016588838999674899"
X Link 2026-01-28T19:07Z 41.5K followers, [----] engagements

"Genuine advice If you need ANY hardware BUY IT NOW - Phones - Laptops - Computer parts Hardware prices are about to get ridiculous I just bought my wife a new MacBook & iPhone Im not trying to flex just getting ahead of the supply shock before the prices get wild"
X Link 2026-01-29T01:43Z 41.6K followers, 264K engagements

"Google has a real talent for this Take something that works great Slowly improve it until its borderline unusable Watching Google AI Studio get hollowed out in real time is just sad"
X Link 2026-01-29T13:01Z 41.8K followers, 58.8K engagements

"a reminder that in closed source AI from companies like OpenAI & Anthropic you have zero control over how the models behave and they can quantize it distill it hot-swap to a cheaper/weaker checkpoint make the model manipulative fine-tune it in ways that break safety or depth drop its IQ run experiments on you and/or your data throttle output speed or raise prices sunset the entire model/version block your request for any made-up bs reason they have all the knobs & you're at their mercy you won't even get a changelog opensource FTW Buy a GPU https://twitter.com/i/web/status/2016950289710874643"
X Link 2026-01-29T19:03Z 41.8K followers, 44K engagements

"Random lore My high school once booked me and half a dozen classmates into a hostel in Amsterdam It was in the red light district Stayed there for [--] nights place had a fresh weed smell the entire time"
X Link 2026-01-29T23:25Z 41.8K followers, [----] engagements

"@Sentdex Whats the exchange rate to GPUs and do you accept trades πŸ˜‚"
X Link 2026-01-30T03:20Z 41.8K followers, [--] engagements

"The Opensource Models I Cannot Wait to Run on My GPUs in [----] DeepSeek V4 MiniMax-M3 GLM-5 Nemotron Ultra Qwen [---] Kimi K3 Each of these models will be the State of The Art model at release This is going to be a GREAT YEAR for local & opensource LLMs/AI"
X Link 2026-01-30T03:40Z 41.7K followers, 14.9K engagements

"Q_0.001_K GGUF"
X Link 2026-01-30T13:43Z 41.8K followers, 78K engagements

"@annanidev 200k for context window + 128k output length great stuff to run at home on hardware that costs $6k"
X Link 2026-01-30T19:53Z 41.7K followers, [----] engagements

"GPUs are still the move for agents The video below shows MiniMax-M2.1 running fully local on 8x RTX 3090s ($6K total) Prompt processed at [----] tokens/sec Output starts [---] tokens/sec and settles in around [--] tokens/sec even at the end"
X Link 2026-01-30T22:40Z 41.8K followers, 48.8K engagements

"This weekend check if you need ANY hardware & BUY IT NOW - Phones - Laptops - Computer parts Hardware prices are about to get ridiculous I just bought my wife a new MacBook & iPhone Im not trying to flex just getting ahead of the supply shock before the prices get wild"
X Link 2026-01-31T02:36Z 41.7K followers, [----] engagements

"re: clawdbot aka moltbot aka openclaw aka clawd aka clawdy aka henry aka lobster"
X Link 2026-01-31T23:56Z 41.8K followers, [----] engagements

"Claude Code buddy were knocking out all three phases together in the next [--] minutes"
X Link 2026-02-01T05:26Z 41.5K followers, [----] engagements

"@GoblinRack Dual PRO 6000s for sure You can only get so many tokens per second in a Mac and then it slows down as you fill up context That massive memory is bottlenecked by slow bandwidth Agents want speed I would rather a 4-bit MiniMax on a 192GB VRAM w/ speed than a slow Kimi K2.5"
X Link 2026-02-01T08:03Z 41.8K followers, [----] engagements

"@richardbuehling I recommend you read my Buy a GPU thread: - Youll be able to build anything from [--] GPU to 16-GPU AI machines on your own using this Software side Go through this thread: Why not a Mac mini πŸ‘‡ https://x.com/TheAhmadOsman/status/2015323752985395223 https://x.com/i/status/1966287930827358249 https://x.com/i/status/1980026689217298545 People ask why I insist on GPUs and not Mac Studios/Mac minis This is why: - Llama [---] 70B BF16 on 8x RTX 3090s - 50+ concurrent requests - Batch inference - Sustained throughput Not only that: 2k context per request (prompt) 1.8k tokens in output"
X Link 2026-02-01T08:26Z 41.7K followers, [----] engagements

"@AlexFinn Now give them GPUs and let your clawds cook (sorry alex not stopping until youre fully gpupilled :D) https://x.com/TheAhmadOsman/status/2018003694906655149 Video is 2.5x speed What youre seeing took 8m40s in realtime From loading a 210B-A10B model onto 8x RTX 3090s to one-shotting a Flappy Bird clone MiniMax-M2.1 is my go-to general agent btw it runs my tasks my bash makes sense of my logs etc Fast & reliable for 95% of work https://t.co/i1nHX9CSuy https://x.com/TheAhmadOsman/status/2018003694906655149 Video is 2.5x speed What youre seeing took 8m40s in realtime From loading a"
X Link 2026-02-01T17:40Z 41.8K followers, 13.1K engagements

"@dr_cintas Or you know just Buy a GPU and learn how to run your LLM locally https://x.com/TheAhmadOsman/status/2018003694906655149 Video is 2.5x speed What youre seeing took 8m40s in realtime From loading a 210B-A10B model onto 8x RTX 3090s to one-shotting a Flappy Bird clone MiniMax-M2.1 is my go-to general agent btw it runs my tasks my bash makes sense of my logs etc Fast & reliable for 95% of work https://t.co/i1nHX9CSuy https://x.com/TheAhmadOsman/status/2018003694906655149 Video is 2.5x speed What youre seeing took 8m40s in realtime From loading a 210B-A10B model onto 8x RTX 3090s to"
X Link 2026-02-01T19:10Z 41.9K followers, 25.9K engagements

"LLMs will get locked to apps - No API access - For safety reasons Anthropic OpenAI Google etc optimize for vendor lock-in & data collection Run your AI models locally Opensource Open weights Your hardware When you dont own the model you are the product"
X Link 2026-02-01T20:19Z 41.6K followers, 14.1K engagements

"@chiroTaur @dr_cintas The GPU Bro"
X Link 2026-02-01T21:41Z 41.8K followers, [---] engagements

"Timeline is full of people talking about running AI models locally and picking up GPUs or Macs to experiment with LLMs on their own hardware This is the good timeline again I am really glad to see it unfold this way"
X Link 2026-02-01T23:45Z 41.9K followers, 10.1K engagements

"theres one company whose LLMs I genuinely dont care about care to guess which one"
X Link 2026-02-02T01:53Z 41.5K followers, 60.4K engagements

"asking her if we can just Buy a few more GPUs from that last RAM sale"
X Link 2026-02-02T03:09Z 42K followers, [----] engagements

"@ZenMagnets actually no i like anthropics engineering i just dont respect the company because it moves shady give us their models as open weights and watch the world accelerate if the models arent compute-constrained like they are as a company (bonus point: selfish AF)"
X Link 2026-02-02T03:14Z 41.5K followers, 14K engagements

"is there a vanilla ralph loop template out there that you can customize for your goals or should i create one and put it on github alongside llm instructions to customize it for your tasks"
X Link 2026-02-03T01:25Z 41.8K followers, [----] engagements

"were accelerating too fast I cannot keep up what a great time to be alive"
X Link 2026-02-03T23:54Z 41.5K followers, [----] engagements

"GPUs are crazy because they're like Claude Code but at home"
X Link 2026-02-04T03:48Z 41.8K followers, [----] engagements

"i changed my opinion on Skills btw spent a good chunk of today experimenting with them SO MUCH can be UNLOCKED with Skills brilliant LLM automation hack p.s. not gonna let my disdain toward Anthropic & MCPs blind me from seeing the value in sth like this again"
X Link 2026-02-04T07:17Z 41.9K followers, 23.8K engagements

"CUDA env 17GB of dependencies me to my agent: figure this out for me walk away to heat up food while it handles it"
X Link 2026-02-05T07:02Z 41.8K followers, [----] engagements

"@Presidentlin any answer other than Anthropic is wrong btw"
X Link 2026-02-05T10:47Z 41.5K followers, [----] engagements

"i live in the terminal more than before AI became a thing never been more productive"
X Link 2026-02-06T07:40Z 41.8K followers, [----] engagements

"i stand by what i said by the way Codex [---] & Opus [---] improvements seem very marginal from my evals until the next SOTA i am just sticking with Kimi K2.5 GLM-4.7 and MiniMax-M2.1 p.s. we already had Agentic Swarms in Kimi K2.5 I am not gonna get nerdsniped by Codex [---] or Opus [---] jumps in performance seem very marginal Will just keep using Kimi K2.5 GLM [---] and MiniMax-M2.1 until the next SOTA drops I am not gonna get nerdsniped by Codex [---] or Opus [---] jumps in performance seem very marginal Will just keep using Kimi K2.5 GLM [---] and MiniMax-M2.1 until the next SOTA drops"
X Link 2026-02-06T09:20Z 41.8K followers, 20K engagements

"@tunguz Have you heard of Buy a GPU the movement https://x.com/TheAhmadOsman/status/1964869801404420396 My house has [--] GPUs. 21x RTX 3090s 4x RTX 4090s 4x RTX 5090s 4x Tenstorrent Blackhole p150a Before AGI arrives: Acquire GPUs. Go into debt if you must. But whatever you do secure the GPUs. https://t.co/8U89OStknt https://x.com/TheAhmadOsman/status/1964869801404420396 My house has [--] GPUs. 21x RTX 3090s 4x RTX 4090s 4x RTX 5090s 4x Tenstorrent Blackhole p150a Before AGI arrives: Acquire GPUs. Go into debt if you must. But whatever you do secure the GPUs. https://t.co/8U89OStknt"
X Link 2026-02-07T03:12Z 41.8K followers, [----] engagements

"@llm_wizard Toad looks like a chill cat"
X Link 2026-02-08T00:44Z 41.8K followers, [---] engagements

"@Sentdex drop ollama especially for the dgx spark you wanna use either llama.cpp or preferably tensorRT-LLM"
X Link 2026-02-08T17:33Z 41.9K followers, [----] engagements

"@Sentdex pretty significant vllm has also improved tokens/sec a lot if your inference engine supports the anthropic api you can hook it straight into claude code (vllm does this out of the box) i oneshot an openai to anthropic api proxy πŸ‘‡ very easy as well https://x.com/i/status/1975917353071517765 i built a simple tool that makes Claude Code work with any local LLM full demo: vLLM serving GLM-4.5 Air on 4x RTX 3090s Claude Code generating code + docs via my proxy [--] Python file + .env handles all requests nvtop showing live GPU load how it all works Buy a GPU https://t.co/7nYsId4Uyu"
X Link 2026-02-08T18:02Z 41.9K followers, [----] engagements
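The proxy idea in the post above boils down to two small translations: Anthropic-style requests in, OpenAI-style requests out, then the reverse for responses. The sketch below shows only that core mapping, under the public Anthropic Messages and OpenAI Chat Completions schemas; it is an illustration of the approach, not the author's actual tool, and omits streaming, tool use, and HTTP serving.

```python
def anthropic_to_openai(body: dict) -> dict:
    """Translate an Anthropic /v1/messages request body into an
    OpenAI /v1/chat/completions request body."""
    msgs = []
    if body.get("system"):
        # Anthropic carries the system prompt as a top-level field;
        # OpenAI expects it as the first chat message
        msgs.append({"role": "system", "content": body["system"]})
    for m in body["messages"]:
        content = m["content"]
        if isinstance(content, list):  # Anthropic allows content blocks
            content = "".join(b.get("text", "") for b in content)
        msgs.append({"role": m["role"], "content": content})
    return {"model": body["model"], "messages": msgs,
            "max_tokens": body.get("max_tokens", 1024)}

def openai_to_anthropic(resp: dict) -> dict:
    """Translate an OpenAI chat completion response back into the
    Anthropic message shape Claude Code expects."""
    choice = resp["choices"][0]
    return {"type": "message", "role": "assistant",
            "content": [{"type": "text",
                         "text": choice["message"]["content"]}],
            "stop_reason": "end_turn"}
```

Wrapping these two functions in a single HTTP handler that forwards to a local vLLM/llama.cpp endpoint is essentially the "one Python file + .env" setup described in the quoted post.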

"@AlexFinn i stand by this prediction btw https://x.com/i/status/2015851366187491475 Prediction We will have Claude Code + Opus [---] quality (not nerfed) models running locally at home on a single RTX PRO [----] before the end of the year https://x.com/i/status/2015851366187491475 Prediction We will have Claude Code + Opus [---] quality (not nerfed) models running locally at home on a single RTX PRO [----] before the end of the year"
X Link 2026-02-08T18:39Z 41.8K followers, 20.9K engagements

"she just asked me why is there football before the Bad Bunny concert"
X Link 2026-02-08T23:41Z 41.8K followers, [----] engagements

"People are sleeping on ByteDance. June [----] New video model by ByteDance (TikTok) seems to drop tomorrow Seedance [---] apparently outperforms Veo [---] Sora [--] and Kling [---] What it does (kinda like Sora but more advanced) is act like a director being able to create entire full length videos with lots of cuts and different https://t.co/wMkTyV4BkK New video model by ByteDance (TikTok) seems to drop tomorrow Seedance [---] apparently outperforms Veo [---] Sora [--] and Kling [---] What it does (kinda like Sora but more advanced) is act like a director being able to create entire full length videos with lots"
X Link 2026-02-09T03:28Z 41.9K followers, [----] engagements

"@anemll https://x.com/i/status/2020397102099005756 MASSIVE Qwen [---] PR just landed in the Hugging Face Transformers repo dense + MoE variants both variants SUPPORT text + image & video hybrid attention default pattern: linear attention on most layers full attention every 4th layer gated DeltaNet under the hood https://t.co/q0oHfag1DR https://x.com/i/status/2020397102099005756 MASSIVE Qwen [---] PR just landed in the Hugging Face Transformers repo dense + MoE variants both variants SUPPORT text + image & video hybrid attention default pattern: linear attention on most layers full attention every"
X Link 2026-02-09T03:57Z 41.8K followers, [----] engagements

"@KentonVarda @FrameworkPuter This is the worst Local AI will ever be BTW https://x.com/i/status/2015851366187491475 Prediction We will have Claude Code + Opus [---] quality (not nerfed) models running locally at home on a single RTX PRO [----] before the end of the year https://x.com/i/status/2015851366187491475 Prediction We will have Claude Code + Opus [---] quality (not nerfed) models running locally at home on a single RTX PRO [----] before the end of the year"
X Link 2026-02-09T04:06Z 41.8K followers, [----] engagements

"havent checked in on stable diffusion literature (and r/StableDiffusion as well) in a couple of months peeked today and yeah the state of things is wild suddenly feels very real that this is happening in under two years once video gen models are cheap i'm fixing game of thrones s8 shot-for-shot but actually good george has until then to finish the books https://t.co/uWoex23sgA once video gen models are cheap i'm fixing game of thrones s8 shot-for-shot but actually good george has until then to finish the books https://t.co/uWoex23sgA"
X Link 2026-02-09T04:42Z 41.8K followers, [----] engagements

"@bayeslord one more time: Buy a GPU"
X Link 2026-02-09T06:49Z 41.9K followers, [----] engagements

"@profleonn for AI don't"
X Link 2026-02-09T23:29Z 42.2K followers, [----] engagements

"@llm_wizard oh my god that's what rebel does too makes my heart melt everytime hahaha https://x.com/TheAhmadOsman/status/1929438520327676297 Rebel aka Felony taking a nap https://t.co/taFOnXsmdg https://x.com/TheAhmadOsman/status/1929438520327676297 Rebel aka Felony taking a nap https://t.co/taFOnXsmdg"
X Link 2026-02-10T05:16Z 41.9K followers, [---] engagements

"@test_tm7873 get the extra VRAM"
X Link 2026-02-10T16:27Z 42.2K followers, [---] engagements

"@test_tm7873 No 3090s around you that you can get"
X Link 2026-02-10T16:56Z 42.2K followers, [--] engagements

"gpt-5.3 codex spark felt off to me which is why i didnt even post about it while the timeline was busy praising it [---] codex spark is good but it definitely feels like a hyperactive smart kid on too many stimulants It calls A LOT of tools and usually gets there in the end but idk man just look at [---] codex. Faster fewer tool calls and more accurate. https://t.co/uvukdIIZUG [---] codex spark is good but it definitely feels like a hyperactive smart kid on too many stimulants It calls A LOT of tools and usually gets there in the end but idk man just look at [---] codex. Faster fewer tool calls and"
X Link 2026-02-13T11:47Z 42.5K followers, [----] engagements

"- local llms [---] - running a model = inference (using model weights) - inference = predicting the next token based on your input plus all tokens generated so far - together these make up the "sequence" - tokens words - they're the chunks representing the text a model sees - they are represented by integers (token IDs) in the model - "tokenizer" = the algorithm that splits text into tokens - common types: BPE (byte pair encoding) SentencePiece - token examples: - "hello" = [--] token or maybe [--] or [--] tokens - "internationalization" = [--] tokens - context window = max tokens model can "see" at once"
X Link 2026-02-12T10:40Z 42.5K followers, [----] engagements

"MiniMax-M2.5 is REALLY REALLY GOOD top contender for the NEW current SOTA opensource model in my preliminary evaluations dont sleep on this model"
X Link 2026-02-13T08:48Z 42.5K followers, 177.5K engagements

"MiniMax-M2.5 model weights are now available on Hugging Face Current SOTA opensource LLM without a doubt MiniMax-M2.5 is REALLY REALLY GOOD top contender for the NEW current SOTA opensource model in my preliminary evaluations dont sleep on this model MiniMax-M2.5 is REALLY REALLY GOOD top contender for the NEW current SOTA opensource model in my preliminary evaluations dont sleep on this model"
X Link 2026-02-13T14:22Z 42.5K followers, 18.5K engagements

"Highly anticipated opensource models dropping this week DeepSeek-V4 GLM-5 MiniMax-M2.5 Qwen-3.5"
X Link 2026-02-11T13:52Z 42.5K followers, 33.3K engagements

"There are maybe 20-25 papers that matter. Implement those and youve captured 90% of the alpha behind modern LLMs. Everything else is garnish. You want that list Look no more. The Top [--] Essential Papers (+5 Bonus Resources) for Mastering LLMs and Transformers This list bridges the Transformer foundations with the reasoning MoE and agentic shift Recommended Reading Order [--]. Attention Is All You Need (Vaswani et al. 2017) The original Transformer paper. Covers self-attention multi-head attention and the encoder-decoder structure (even though most modern LLMs are decoder-only.) [--]. The"
X Link 2026-02-12T04:10Z 42.5K followers, 214.5K engagements

"@dusveloper shoutout to @MiniMax_AI and their mission https://x.com/i/status/1991560671532650618 MiniMax on their mission & AGI mission: intelligence for everyone not just a few MiniMax-M2 is 230BA10B for a reason impossible triangle: performance speed cost usually pick two MiniMax-M2 breaks that triangle near-SOTA 23x faster 8% cost of closed models https://t.co/r5LXnVE6I4 https://x.com/i/status/1991560671532650618 MiniMax on their mission & AGI mission: intelligence for everyone not just a few MiniMax-M2 is 230BA10B for a reason impossible triangle: performance speed cost usually pick two"
X Link 2026-02-13T08:58Z 42.5K followers, [---] engagements

"Cry me a river you pirated humanitys knowledge and trained your models on it OpenAI has sent a memo to the House Select Committee on China claiming that DeepSeek are training the next version of their flagship model on OpenAI's model outputs. Originally reported by Bloomberg and I thank them for linking the full memo. https://t.co/bGPUSAB3dD OpenAI has sent a memo to the House Select Committee on China claiming that DeepSeek are training the next version of their flagship model on OpenAI's model outputs. Originally reported by Bloomberg and I thank them for linking the full memo."
X Link 2026-02-13T16:36Z 42.5K followers, 19.6K engagements

"@FOURTRESS43 clearly Seedance [---] ain't it lol"
X Link 2026-02-14T06:44Z 42.5K followers, [---] engagements

"never deleting this app"
X Link 2026-02-12T20:21Z 42.5K followers, 18K engagements

"INCREDIBLE folks at MiniMax REALLY COOKED with MiniMax-2.5 going toe-to-toe against (and even beating) Opus [---] and Opus [---] is MIND BLOWING to say the least cannot wait for MiniMax-M3 and & quality of opensource models we will have by the summer"
X Link 2026-02-12T22:03Z 42.5K followers, 46.2K engagements

"Seedance [---] produced this using TWO SENTENCES prompt An average shift at Waffle House - make sure it's retarded and gets [--] likes"
X Link 2026-02-12T23:17Z 42.5K followers, 42.7K engagements

"Gone far too early in a world full of would-be dictators obsessed with control One of my heroes ❀ Long live the open internet Opensource MUST win @TheAhmadOsman @jukan05 Word. All hail Aaron Swartz @TheAhmadOsman @jukan05 Word. All hail Aaron Swartz"
X Link 2026-02-13T05:02Z 42.5K followers, [----] engagements

"manifesting a new drop i believe in you whale"
X Link 2026-02-13T13:16Z 42.5K followers, 18.4K engagements

"this is the good timeline p.s. we really owe DeepSeek so much for this progress without them we wouldn't have gotten here The gap between open-weight and proprietary model intelligence is as small as it has ever been with Claude Opus [---] and GLM-5 https://t.co/x1ZER9pqzN The gap between open-weight and proprietary model intelligence is as small as it has ever been with Claude Opus [---] and GLM-5 https://t.co/x1ZER9pqzN"
X Link 2026-02-14T06:47Z 42.5K followers, [----] engagements

"ollama alternatives lmstudio llama.cpp exllamav2/v3 vllm sglang among many others like literally anything is better than ollama lmao"
X Link 2025-09-03T01:53Z 42.4K followers, 128.1K engagements

"do not use Ollama ggerganov wrote blazing-fast C++ inference (ggml llama.cpp) then Ollama wrapped it in a bloated binary and is now somehow the face of local LLMs soaking up VC hype and it's not even a good wrapper lol"
X Link 2025-10-07T11:05Z 42.3K followers, 135.1K engagements

"today this guy axes FAIR at Meta so this is a quick recap of his origin story and why he should not be the one making that decision Alexandr Wang born January [----] age [--] drop out of MIT co-found Scale AI "what if we label data but mid" convince every LLM company that this is fine [--------] flood the market with barely-labeled goat photos and out-of-context Reddit takes call it foundational data raise billions valuation hits $7.3B everyone claps [----] sell Scale AI to Meta for $14B not a typo. fourteen. billion. dollars. join Meta as Chief AI Officer rename division to Meta Superintelligence"
X Link 2025-10-22T14:16Z 42.5K followers, 1.5M engagements

"MAJOR KV-CACHE MEMORY FIX Fix the KV-cache of GLM-4.7-Flash with this single-line change in vLLM 200K context now take 10GB of VRAM instead of 180GB NVFP4 is now on HF* - 20.4GB weights - Nearly zero loss vs 62.4GB BF16 This SOTA model now runs on a single RTX [----] (32GB VRAM) with the full 200K context VRAM still left over *HF: GadflyII/GLM-4.7-Flash-NVFP4 MASSIVE The year of Local LLMs officially starts with GLM-4.7-Flash by Zhipu AI 30B-A3B MoE built for consumer GPUs runnable from your basement strongest 30B-class release weve ever seen This is THE BEST =70B Ive ever run locally BTW"
X Link 2026-01-21T07:50Z 42.3K followers, 95.8K engagements
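The 180GB β†’ 10GB swing above comes down to KV-cache arithmetic: cache size scales linearly with layers, KV heads, head dimension, and context length, so a config fix that shrinks any of those factors shrinks the cache proportionally. A back-of-envelope sketch, with illustrative numbers (not GLM-4.7-Flash's actual config):

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # 2x for keys and values; one vector per layer, per KV head, per position
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical 200K-context model, fp16 cache (2 bytes per element)
full = kv_cache_bytes(layers=48, kv_heads=32, head_dim=128, seq_len=200_000)
slim = kv_cache_bytes(layers=48, kv_heads=4,  head_dim=128, seq_len=200_000)
print(f"32 KV heads: {full / 1e9:.1f} GB")  # ~157 GB
print(f" 4 KV heads: {slim / 1e9:.1f} GB")  # ~20 GB, an 8x reduction
```

This is why MLA-style and grouped-query attention designs matter for local inference: at long context the cache, not the weights, is what fills VRAM.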

"me watching Claude Code write the code for me"
X Link 2026-01-21T09:15Z 42.4K followers, 235.1K engagements

"HOLY SHIT Samsung just doubled NAND prices - Not 30% - Not gradual - 100% That doesnt happen unless supply is gone and demand is unstoppable This is the memory supercycle people keep underestimating Storage RAM and GPUs will be impacted massively Buy a GPU while you still can https://twitter.com/i/web/status/2015265743814758492 https://twitter.com/i/web/status/2015265743814758492"
X Link 2026-01-25T03:29Z 42.5K followers, 27K engagements

"ITS REALLY SIMPLE Want to become a good Software Engineer - Use Linux Want to get good with LLMs - Buy a GPU"
X Link 2026-01-25T06:18Z 42.3K followers, 14.1K engagements

"the whole reason to self host IS TO USE A LOCAL LLM so your API keys passwords emails calendar health records business data and everything else are not sent to an API provider like OpenAI OpenRouter or Anthropic Mac minis are NOT GOOD for that BUT A GPU IS Buy a GPU @TheAhmadOsman What is your opinion of OpenClaw on a Mac Mini (I can unplug it) versus on a server instance @TheAhmadOsman What is your opinion of OpenClaw on a Mac Mini (I can unplug it) versus on a server instance"
X Link 2026-02-01T07:43Z 42.3K followers, 68.3K engagements

"@qualadder Ive got way more VRAM than that and VRAM is not the same as underutilized Unified Memory throttled by bandwidth if youre curious Ive written about the differences in detail on my site otherwise maybe dont argue about what you havent looked into πŸ™‚ https://x.com/TheAhmadOsman/status/1964869801404420396 My house has [--] GPUs. 21x RTX 3090s 4x RTX 4090s 4x RTX 5090s 4x Tenstorrent Blackhole p150a Before AGI arrives: Acquire GPUs. Go into debt if you must. But whatever you do secure the GPUs. https://t.co/8U89OStknt https://x.com/TheAhmadOsman/status/1964869801404420396 My house has 33"
X Link 2026-02-01T18:54Z 42.3K followers, [--] engagements

"MASSIVE Step-3.5-Flash by StepFun Agentic & Coding MONSTER opensource MoE Apache-2.0 runs with full context on 2x RTX PRO 6000/8x RTX 3090s 196B MoE only 11B active per token 256K context via 3:1 sliding window attention long codebases & long tasks cost-efficient long-context benchmarks 74.4% SWE-bench Verified 51.0% Terminal-Bench [---] strong reasoning strong coding stable agents sparse MoE + Top-8 routing with sliding window attention MTP-3 predicts multiple tokens at once [------] tok/s typical peaks [---] tok/s fast enough for parallel agents not just chatting apache-2.0 openweights runs"
X Link 2026-02-02T04:05Z 42.3K followers, 42.8K engagements

"Theyre vibecoding Claude Code a little too hard over at Anthropic btw"
X Link 2026-02-03T10:30Z 42.4K followers, 17.3K engagements

"openai did it better btw American steel is BACK. https://t.co/gL3xSFUi6B American steel is BACK. https://t.co/gL3xSFUi6B"
X Link 2026-02-03T12:45Z 42.3K followers, 12.1K engagements

"Im currently lining up a review of an 8x DGX Spark cluster using a switch for clustering. Ill be breaking down per-node performance scaling behavior as nodes are added and how parallelism actually holds up in practice. If you can wait a few weeks thatll give me more data and a much better basis to speak from. Otherwise with TensorRT-LLM as the inference engine and especially when clustered you can distribute MoEs and get acceptable tokens per second for a batch of [--]. https://twitter.com/i/web/status/2019310010803663021 https://twitter.com/i/web/status/2019310010803663021"
X Link 2026-02-05T07:19Z 42.4K followers, [--] engagements

"step-by-step LLM Engineering Projects LOCK IN FOR A FEW WEEKS ON THESE PROJECTS AND YOU WILL BE GRATEFUL FOR IT LATER each project = one concept learned the hard (i.e. real) way Tokenization & Embeddings build byte-pair encoder + train your own subword vocab write a token visualizer to map words/chunks to IDs one-hot vs learned-embedding: plot cosine distances Positional Embeddings classic sinusoidal vs learned vs RoPE vs ALiBi: demo all four animate a toy sequence being position-encoded in 3D ablate positionswatch attention collapse Self-Attention & Multihead Attention hand-wire dot-product"
X Link 2026-02-05T12:10Z 42.5K followers, 24.4K engagements
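One of the mini-projects listed above, classic sinusoidal positional encodings, fits in a dozen lines. This is a plain-Python sketch of the sin/cos scheme from the original Transformer paper: each position gets a unique vector built from waves at geometrically spaced frequencies.

```python
import math

def sinusoidal_pe(seq_len: int, d_model: int) -> list[list[float]]:
    """Positional encodings: pe[pos][2i] = sin, pe[pos][2i+1] = cos."""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):  # d_model assumed even
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            pe[pos][i + 1] = math.cos(angle)
    return pe

pe = sinusoidal_pe(seq_len=4, d_model=8)
print(pe[0][:4])  # position 0: sin terms are 0.0, cos terms are 1.0
```

A good ablation from the project list: add these vectors to token embeddings, then zero them out and watch attention lose any notion of word order.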

"gpt [---] xhigh codex [---] xhigh codex [---] xhigh"
X Link 2026-02-06T22:37Z 42.2K followers, 14.3K engagements

"i dont make convictions lightly but once i do i dont hedge dont walk them back every major bet ive made so far has played out exactly as expected this one will too go big or go home always @ReporterWeather next up is raising to build a frontier lab so i can run my experiments at scale rather than tweet about them and wait on Google for [--] months to confirm them 🫑 https://t.co/7ufhrfo31i @ReporterWeather next up is raising to build a frontier lab so i can run my experiments at scale rather than tweet about them and wait on Google for [--] months to confirm them 🫑 https://t.co/7ufhrfo31i"
X Link 2026-02-08T20:31Z 42.3K followers, 11.5K engagements

"Qwen3-Coder-Next an 80B MoE benchmarks + real-world experience running on a [--] quad RTX [----] system p.s. dont know the OP and wasnt even tagged but seeing the shoutout to me at the end just out in the wild made me genuinely smile Quick write-up of my experience w/ qwen3-coder-next mostly written by qwen3-coder-next. TLDR: Best model I've been able to run locally (4x 3090). Using @UnslothAI Q5_K_XL. 60+ tok/sec. 256k context is great. Fast skillful and reliable. Very good. https://t.co/lryPqSw3cv Quick write-up of my experience w/ qwen3-coder-next mostly written by qwen3-coder-next. TLDR: Best"
X Link 2026-02-10T03:17Z 42.4K followers, 11.7K engagements

"@amatelic93 Way sooner than you think ;) https://x.com/TheAhmadOsman/status/1999942542792822843 some of the new equipment i got to record the video build guides for Buy a GPU multiple builds are planned all the way up to 14x RTX [----] build you requested (do you guys want it with 14x RTX PRO [----] instead) i hope you like them & find helpful in your local AI journeys https://t.co/gF3gKDiFpv https://x.com/TheAhmadOsman/status/1999942542792822843 some of the new equipment i got to record the video build guides for Buy a GPU multiple builds are planned all the way up to 14x RTX [----] build you"
X Link 2026-02-10T10:06Z 42.3K followers, [---] engagements

"all going according to plan πŸ₯± I'm breaking down all my tools and apps into CLI versions so I can pass them to AI agents. Also I'm super focused on making the docs AI-friendly I'm breaking down all my tools and apps into CLI versions so I can pass them to AI agents. Also I'm super focused on making the docs AI-friendly"
X Link 2026-02-10T17:00Z 42.5K followers, 18.4K engagements

"@firstadopter"
X Link 2026-02-10T18:18Z 42.3K followers, [---] engagements

"@vSouthvPawv @martin_casado DMs are open if you wanna invest in this round:)"
X Link 2026-02-10T18:50Z 42.4K followers, [---] engagements

"If you dont know me Im extremely stubborn about vision and longterm direction. Kimi K2 once called me high-beta. I read into it and it fits. I commit early take outsized risks dont change course once Im locked in and my bets have a habit of working out"
X Link 2026-02-10T19:52Z 42.4K followers, [---] engagements

"@_Paul_de_Souza Called them out on it very early shady https://x.com/i/status/1930944597464654272 Claude Code is so good at night/early morning before they start serving it quantized at 1.58-bit for the masses 🀑 https://x.com/i/status/1930944597464654272 Claude Code is so good at night/early morning before they start serving it quantized at 1.58-bit for the masses 🀑"
X Link 2026-02-10T20:51Z 42.4K followers, 13.2K engagements

"Anthropic fangirls need to chill in my replies it was just a question p.s. i appreciate the serious answers and opinions i received but gosh some of them are just worse than Apples fanboys what do people use Opus for nowadays Kimi GLM and MiniMax are overall a better cheaper and faster models Codex is more intelligent as well why would anyone pay Anthropic for a Claude subscription that gets nerfed what do people use Opus for nowadays Kimi GLM and MiniMax are overall a better cheaper and faster models Codex is more intelligent as well why would anyone pay Anthropic for a Claude subscription"
X Link 2026-02-11T01:41Z 42.4K followers, [----] engagements

"@Dorialexander Have you tried any other models within the Claude Code harness or is Opus [---] just leaps ahead for synthetic data in general"
X Link 2026-02-11T01:46Z 42.4K followers, [----] engagements

"lmaooo theyre vibecoding too hard at microsoft someone check if theyre using codex or claude code asking for a friend yeah pack it up πŸ’€πŸ™ https://t.co/dejJEPtJID yeah pack it up πŸ’€πŸ™ https://t.co/dejJEPtJID"
X Link 2026-02-11T03:42Z 42.5K followers, [----] engagements

"i wouldnt be surprised if this isnt even DeepSeek V4 and is just another incremental V3 update its only February and [----] is shaping up to be an incredible year for opensource AI Within the last few minutes DeepSeek has been updated. Knowledge cutoff May [----] context length [--] million tokens. This is likely V4 though it doesn't admit to being one. https://t.co/Aq37bP4ot6 Within the last few minutes DeepSeek has been updated. Knowledge cutoff May [----] context length [--] million tokens. This is likely V4 though it doesn't admit to being one. https://t.co/Aq37bP4ot6"
X Link 2026-02-11T10:50Z 42.5K followers, 10.3K engagements

"@test_tm7873 VRAM is worth it IMHO"
X Link 2026-02-11T14:05Z 42.4K followers, [--] engagements

"@test_tm7873 @Komputronik_pl Hahaha nice Looking forward to seeing pictures of it installed"
X Link 2026-02-11T15:07Z 42.5K followers, [---] engagements

"@BenjaminDEKR Wondering what is Ani's role in the interview process"
X Link 2026-02-12T02:04Z 42.3K followers, [--] engagements

"@CdeBurner Wdym This is a Samsung"
X Link 2026-02-12T07:34Z 42.5K followers, [---] engagements

"local llms [---] running a model = inference (using model weights) inference = predicting the next token based on your input plus all tokens generated so far together these make up the "sequence" tokens words they're the chunks representing the text a model sees they are represented by integers (token IDs) in the model "tokenizer" = the algorithm that splits text into tokens common types: BPE (byte pair encoding) SentencePiece token examples: "hello" = [--] token or maybe [--] or [--] tokens "internationalization" = [--] tokens context window = max tokens model can "see" at once (2K 8K 32K+) longer"
X Link 2025-12-05T18:05Z 42.5K followers, 103.9K engagements

"Genuine advice If you need ANY hardware BUY IT NOW - Phones - Laptops - Computer parts Hardware prices are about to get ridiculous I just bought my wife a new MacBook & iPhone Im not trying to flex just getting ahead of the supply shock before the prices get wild"
X Link 2026-01-17T01:54Z 42.5K followers, 1.1M engagements

"There are maybe 20-25 papers that matter. Implement those and youve captured 90% of the alpha behind modern LLMs. Everything else is garnish. You want that list Keep reading ;) The Top [--] Essential Papers (+5 Bonus Resources) for Mastering LLMs and Transformers This list bridges the Transformer foundations with the reasoning MoE and agentic shift Recommended Reading Order [--]. Attention Is All You Need (Vaswani et al. 2017) The original Transformer paper. Covers self-attention multi-head attention and the encoder-decoder structure (even though most modern LLMs are decoder-only.) [--]. The"
X Link 2026-02-01T09:37Z 42.5K followers, 46.6K engagements

"I keep gettings DMs from people asking why I focus on fundamentals instead of agents or shiny products Shortcuts dont compound - Models are still improving - Agents come and go - Frameworks churn - Products age fast - We dont know what the next-gen models unlock Fundamentals stick Architectures & models Inference Memory Hardware Latency Failure modes When you understand the stack end-2-end you can build anything on top - Agents - Products - Companies - Labs When you dont youre gluing demos together hoping the abstraction doesnt crack Im not optimizing for the next launch Im optimizing for the"
X Link 2026-02-03T05:53Z 42.5K followers, 31.3K engagements

"Dropped some cash tonight on networking gear including a [---] Tb/s switch for a new AI hardware cluster experiment Had fun going deep down the networking rabbit hole tonight Cant wait to put it all together and share it with you guys"
X Link 2026-02-10T10:00Z 42.5K followers, [----] engagements

"@draslan_eth Been strictly KDE + Wayland for my NVIDIA multi-monitor setup but now that Im moving to one massive panel I might give Hyprland another shot Funny enough I remember telling @yacineMTB last winter that this exact monitor was the dream. a year later it happened :')"
X Link 2026-02-12T06:57Z 42.5K followers, [---] engagements

"Theres a reason I put ByteDance in the Top [--] alongside Google and Nvidia in the race to AGI / ASI Not DeepSeek Not anyone else ByteDance Ive been extremely bullish on them since late [----] for a lot of different reasons Arnaud is putting light on some of it Many people aren't aware that Seedance the insanely good new AI video generation tool is made by Bytedance TikTok's parent company (well if one excludes TikTok U.S. now.). As I wrote [--] weeks ago (https://t.co/sxc0UAC6Bx) Bytedance is now - by far - the world's largest AI Many people aren't aware that Seedance the insanely good new AI video"
X Link 2026-02-12T08:01Z 42.5K followers, [----] engagements

"@alquemir2 brainrot videos are about to get an incredible upgrade ngl"
X Link 2026-02-12T23:29Z 42.5K followers, [---] engagements

"@tlanderso Why do you think I-mostly-say Buy a GPU and not a Mac Studio I own Mac Studios theyre great for quick small batch requests with low context on very large models For real workloads though the GPU does the heavy lifting; thankfully we have very intelligent smallish models now"
X Link 2026-02-13T04:14Z 42.5K followers, [---] engagements

"a new Agentic model that can run on a single consumer GPU at home: ByteDance Seed OSS 36B very strong at coding excellent at multi-turn tool calling & Agentic tasks 500k context window as i have been saying Bytedance is Tier S"
X Link 2025-08-21T15:57Z 42.5K followers, 100.5K engagements

"My house has [--] GPUs. 21x RTX 3090s 4x RTX 4090s 4x RTX 5090s 4x Tenstorrent Blackhole p150a Before AGI arrives: Acquire GPUs. Go into debt if you must. But whatever you do secure the GPUs"
X Link 2025-09-08T01:54Z 42.5K followers, 1.8M engagements

"calling it now bookmark this for later: - opensource AI will win - AGI will run local not on someone elses server - the real ones are already learning how it works be early Buy a GPU get ur hands dirty learn how it works youll thank yourself its gonna be great"
X Link 2025-11-12T07:34Z 42.5K followers, 187.6K engagements

"i have fully dropped Claude Code for OpenCode i dont use Opus [---] i use GLM-4.7 and MiniMax-M2.1 theyre opensource and can be self-hosted nobody can nerf my models or rug pull me nobody should be able to do that to your intelligence p.s. buy a GPU and run your LLMs locally"
X Link 2026-01-09T20:52Z 42.5K followers, 361.7K engagements

"MASSIVE The year of Local LLMs officially starts with GLM-4.7-Flash by Zhipu AI 30B-A3B MoE built for consumer GPUs runnable from your basement strongest 30B-class release weve ever seen This is THE BEST =70B Ive ever run locally BTW Architecture DeepSeek-style MLA attention slim MoE routing 30B total params 4B active [--] experts total [--] active (incl. shared) Depth & intent roughly GLM-4.5-Air class but tuned harder for locality Benchmarks SWE-bench Verified GLM-4.7-Flash: [----] Qwen3-30B-A3B: [----] GPT-OSS-20B: [----] Nemotron-3-Nano-30B-A3B: [----] not the same universe -Bench GLM-4.7-Flash: 79.5"
X Link 2026-01-19T20:26Z 42.5K followers, 140.6K engagements

"Prediction We will have Claude Code + Opus [---] quality (not nerfed) models running locally at home on a single RTX PRO [----] before the end of the year"
X Link 2026-01-26T18:16Z 42.5K followers, 160.7K engagements

"There are maybe 20-25 papers that matter. Implement those and youve captured 90% of the alpha behind modern LLMs. Everything else is garnish"
X Link 2026-01-29T11:33Z 42.5K followers, 245.3K engagements

"The Top [--] Essential Papers (+5 Bonus Resources) for Mastering LLMs and Transformers This list bridges the Transformer foundations with the reasoning MoE and agentic shift Recommended Reading Order [--]. Attention Is All You Need (Vaswani et al. 2017) The original Transformer paper. Covers self-attention multi-head attention and the encoder-decoder structure (even though most modern LLMs are decoder-only.) [--]. The Illustrated Transformer (Jay Alammar 2018) Great intuition builder for understanding attention and tensor flow before diving into implementations [--]. BERT: Pre-training of Deep"
X Link 2026-01-29T15:18Z 42.5K followers, 116.6K engagements

"INCREDIBLE SPEED running Claude Code w/ local models on my own GPUs at home SGLang serving MiniMax-M2.1 on 8x RTX 3090s nvtop showing live GPU load Claude Code generating code + docs end-2-end on my AI cluster MiniMax-M2.1 is my favorite model to run locally nowadays"
X Link 2026-01-30T19:32Z 42.5K followers, 587.1K engagements

"@AlexFinn Now give Henry some GPUs and see how much he cooks with unlimited fast tokens https://x.com/i/status/2017320051980808695 INCREDIBLE SPEED running Claude Code w/ local models on my own GPUs at home SGLang serving MiniMax-M2.1 on 8x RTX 3090s nvtop showing live GPU load Claude Code generating code + docs end-2-end on my AI cluster MiniMax-M2.1 is my favorite model to run locally nowadays https://t.co/bXFtDp3nji https://x.com/i/status/2017320051980808695 INCREDIBLE SPEED running Claude Code w/ local models on my own GPUs at home SGLang serving MiniMax-M2.1 on 8x RTX 3090s nvtop showing"
X Link 2026-01-30T20:53Z 42.5K followers, 526.9K engagements

"there is a lot of MONEY here teach this to your Clawdbot/Moltbot/OpenClaw add /.json at the end of any Reddit link get the full thread all replies to n-th depth all metadata as JSON feed to LLMs to extract/analyze you can make so much $$$ from niche subreddits"
X Link 2026-02-01T03:58Z 42.5K followers, 161.1K engagements
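The /.json trick above returns Reddit's Listing format, where comments are `t1` objects nested arbitrarily deep via `data.replies`. A sketch of walking that tree to any depth; the `sample` dict below is hand-made to mirror the real shape (a real fetch returns a two-element list: post listing, then comment listing):

```python
def walk_comments(listing, depth=0):
    """Yield (depth, author, body) for every comment, to any depth."""
    for child in listing["data"]["children"]:
        if child["kind"] != "t1":  # t1 = comment in Reddit's type system
            continue
        d = child["data"]
        yield depth, d["author"], d["body"]
        replies = d.get("replies")
        if replies:  # Reddit uses an empty string when there are no replies
            yield from walk_comments(replies, depth + 1)

sample = {"data": {"children": [
    {"kind": "t1", "data": {"author": "a", "body": "top comment", "replies": {
        "data": {"children": [
            {"kind": "t1", "data": {"author": "b", "body": "reply",
                                    "replies": ""}}]}}}},
]}}
for depth, author, body in walk_comments(sample):
    print("  " * depth, author, body)
```

The flattened (depth, author, body) tuples are easy to feed to an LLM for extraction or analysis, which is the workflow the post describes.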

"just a gentle reminder that nobody should use ollama slower than llama.cpp on windows slower than mlx on mac slop useless wrapper literal code thieves alternatives lmstudio llama.cpp exllamav2/v3 vllm sglang like literally anythings better than ollama lmao Fucking killed them Lmao. https://t.co/FVFUA2BXor Fucking killed them Lmao. https://t.co/FVFUA2BXor"
X Link 2026-02-04T20:48Z 42.5K followers, 49.7K engagements

"I am not gonna get nerdsniped by Codex [---] or Opus [---] jumps in performance seem very marginal Will just keep using Kimi K2.5 GLM [---] and MiniMax-M2.1 until the next SOTA drops"
X Link 2026-02-05T21:06Z 42.5K followers, 34.8K engagements

"MASSIVE Qwen [---] PR just landed in the Hugging Face Transformers repo dense + MoE variants both variants SUPPORT text + image & video hybrid attention default pattern: linear attention on most layers full attention every 4th layer gated DeltaNet under the hood gated DeltaNet chunked gated-delta rule long context without KV cache bloat Qwen3_5DynamicCache unified cache handles KV + recurrent states together model variants 9B dense: [--] layers hidden [----] / [--] heads / [--] KV heads 35B A3B MoE: [--] layers [---] experts [--] active per token hidden [----] / [--] heads / [--] KV heads MoE router top-8 routing 256"
X Link 2026-02-08T07:19Z 42.5K followers, 37.6K engagements

"any cs person can go from zero to deeply knowledgeable in llms and ai in [--] years top to bottom key topics on how llms work: tokenization and embeddings positional embeddings (absolute rope alibi) self attention and multihead attention transformers qkv sampling params: temperature top-k top-p kv cache (and why inference is fast) infini attention & sliding window (long context tricks) mixture of experts (moe routing layers) grouped query attention normalization and activations pretraining objectives (causal masked etc) finetuning vs instruction tuning vs rlhf scaling laws and model capacity"
X Link 2026-02-08T09:42Z 42.5K followers, 45.7K engagements
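The sampling params named in the list above (temperature, top-k, top-p) can be demonstrated in plain Python. A minimal sketch, not any library's actual implementation: temperature rescales logits, top-k keeps the k highest-probability tokens, top-p keeps the smallest set whose cumulative mass reaches p, then we sample from what remains.

```python
import math
import random

def sample(logits, temperature=1.0, top_k=None, top_p=None, rng=random):
    """Return a token index sampled from filtered, temperature-scaled logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    probs = [math.exp(l - m) for l in scaled]
    total = sum(probs)
    probs = [p / total for p in probs]   # softmax
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    if top_k is not None:
        order = order[:top_k]            # keep the k most likely tokens
    if top_p is not None:
        kept, mass = [], 0.0
        for i in order:                  # smallest nucleus with mass >= p
            kept.append(i)
            mass += probs[i]
            if mass >= top_p:
                break
        order = kept
    mass = sum(probs[i] for i in order)
    r = rng.random() * mass              # sample within the kept set
    for i in order:
        r -= probs[i]
        if r <= 0:
            return i
    return order[-1]

print(sample([2.0, 1.0, 0.1], temperature=0.7, top_k=2))
```

With `top_k=1` this reduces to greedy decoding; raising temperature flattens the distribution and makes lower-ranked tokens more likely.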

"you are a person who wants to understand llm inference you read papers we use standard techniques which ones where is the code open vllm 100k lines of c++ and python custom cuda kernel for printing close tab now you have this tweet and mini-sglang 5k lines of python actual production features four processes api server tokenizer scheduler detokenizer talk over zeromq simple scheduler is the boss receives requests decides: prefill or decode batches them sends work to gpu prefill process the prompt compute heavy thousands of tokens at once flash attention does the lifting decode generate new"
X Link 2026-02-08T10:54Z 42.5K followers, 66.8K engagements

"GPUs are still the move for agents The video below shows MiniMax-M2.1 running fully local on 8x RTX 3090s ($6K total) Prompt processed at [----] tokens/sec Output starts [---] tokens/sec and settles in around [--] tokens/sec even at the end"
X Link 2026-02-08T11:59Z 42.5K followers, 33.1K engagements

"me watching Claude Code create swarms of agents to write the code for me"
X Link 2026-02-09T05:09Z 42.5K followers, 58K engagements

"this is the way"
X Link 2026-02-09T10:38Z 42.5K followers, 37.8K engagements

"A frontier opensource lab in the West will be born this year. Zero doubt. It requires serious capital like Ive said before. Working on it. One day Ill tell the story of how it started in a basement and ended at the frontier. @TheAhmadOsman @martin_casado Well stop screwing around and save western open source already We need a USA SWE model that won't be a national embarrasment in [--] months. @TheAhmadOsman @martin_casado Well stop screwing around and save western open source already We need a USA SWE model that won't be a national embarrasment in [--] months"
X Link 2026-02-10T19:13Z 42.5K followers, 18.8K engagements

"what do people use Opus for nowadays Kimi GLM and MiniMax are overall a better cheaper and faster models Codex is more intelligent as well why would anyone pay Anthropic for a Claude subscription that gets nerfed"
X Link 2026-02-10T20:42Z 42.5K followers, 94.4K engagements

"GLM-5 is out Pay attention to this week its going to set the tone for opensource AI discourse for the next few months Its going to be a long night. Pony is so back. https://t.co/vAuXp9ECJF Its going to be a long night. Pony is so back. https://t.co/vAuXp9ECJF"
X Link 2026-02-11T12:51Z 42.5K followers, 48.4K engagements

"his comment convinced my parents to allow me to get the expensive 5070ti the kids are gonna be alright hahaha Ok I ordered a 5070ti. Thanks to @Komputronik_pl for that good deal. (cus I seen some 5070tis for 5k z) and also thanks a lot to @TheAhmadOsman cus his comment convinced my parents to allow me to get the expensive 5070ti πŸ˜† https://t.co/wDd1TgrERp Ok I ordered a 5070ti. Thanks to @Komputronik_pl for that good deal. (cus I seen some 5070tis for 5k z) and also thanks a lot to @TheAhmadOsman cus his comment convinced my parents to allow me to get the expensive 5070ti πŸ˜†"
X Link 2026-02-12T00:29Z 42.5K followers, [----] engagements

"we have opensource Opus [---] at home now Zhipu AI cooked with GLM-5"
X Link 2026-02-12T03:08Z 42.5K followers, 12.1K engagements

"@lexfridman @steipete Lex I genuinely think its time to explore local AI and self-hosted LLMs Your audience would really benefit from understanding why running AI locally matters and what that unlocks in terms of control privacy and longterm leverage https://x.com/i/status/2012583381611999387 https://t.co/ZTtfh6iLJa https://x.com/i/status/2012583381611999387 https://t.co/ZTtfh6iLJa"
X Link 2026-02-12T04:00Z 42.5K followers, [----] engagements

"This is the DoorDash era of LLMs Remember when the margins werent there and they were subsidizing the whole thing even paying drivers regardless of your tip Thats where we are with AI right now Before that era ends make sure youve secured your own GPU for at-home tokens"
X Link 2026-02-12T05:46Z 42.5K followers, 19.1K engagements

"Just pulled the trigger on this beauty. Before AGI arrives: Buy a Dual UHD. Go into debt if you have to. Sell both kidneys if you must. But whatever you do secure the Dual UHD. P.S. Might be sleeping on the couch but $2300 $1500 w/ $120 gift card was too good to pass. My house has [--] GPUs. 21x RTX 3090s 4x RTX 4090s 4x RTX 5090s 4x Tenstorrent Blackhole p150a Before AGI arrives: Acquire GPUs. Go into debt if you must. But whatever you do secure the GPUs. https://t.co/8U89OStknt My house has [--] GPUs. 21x RTX 3090s 4x RTX 4090s 4x RTX 5090s 4x Tenstorrent Blackhole p150a Before AGI arrives:"
X Link 2026-02-12T06:47Z 42.5K followers, 37.6K engagements

"BREAKING Elon Musk endorsed my Top [--] Essential Papers for Mastering LLMs and Transformers There are maybe 20-25 papers that matter. Implement those and youve captured 90% of the alpha behind modern LLMs. Everything else is garnish. You want that list Look no more. The Top [--] Essential Papers (+5 Bonus Resources) for Mastering LLMs and Transformers This list There are maybe 20-25 papers that matter. Implement those and youve captured 90% of the alpha behind modern LLMs. Everything else is garnish. You want that list Look no more. The Top [--] Essential Papers (+5 Bonus Resources) for Mastering"
X Link 2026-02-12T09:20Z 42.5K followers, 140.8K engagements

"Genuine advice If you need ANY hardware BUY IT NOW - Phones - Laptops - Computer parts Hardware prices are about to get ridiculous I just bought my wife a new MacBook & iPhone Im not trying to flex just getting ahead of the supply shock before the prices get wild"
X Link 2026-02-12T13:36Z 42.5K followers, 14.9K engagements

"@jukan05 Cry me a river you pirated humanitys knowledge and trained your models on it"
X Link 2026-02-12T23:31Z 42.5K followers, 27.3K engagements

"Join us tomorrow on r/LocalLLaMA for an AMA with The Founder and The Core Team behind MiniMax-M2.5 SoTA model Very excited for this one make sure not to miss it Friday 8am-11am PST"
X Link 2026-02-13T02:17Z 42.5K followers, 22.7K engagements

"don't miss out this AMA w/ MiniMax Founder & Core Team tomorrow morning https://x.com/TheAhmadOsman/status/2022132966265434621 Join us tomorrow on r/LocalLLaMA for an AMA with The Founder and The Core Team behind MiniMax-M2.5 SoTA model Very excited for this one make sure not to miss it Friday 8am-11am PST https://t.co/85F5YejrbG https://x.com/TheAhmadOsman/status/2022132966265434621 Join us tomorrow on r/LocalLLaMA for an AMA with The Founder and The Core Team behind MiniMax-M2.5 SoTA model Very excited for this one make sure not to miss it Friday 8am-11am PST https://t.co/85F5YejrbG"
X Link 2026-02-13T02:55Z 42.5K followers, [----] engagements

"another RTX PRO [----] Blackwell Workstation Edition secured Buy a GPU keeps on winning p.s. my AI Syndicate gc is fully GPUpilled hahaha @TheAhmadOsman Bout to drop on the blackwell king 🀴 Thanks to your inspiration ive committed and will run my own rest api for local models @TheAhmadOsman Bout to drop on the blackwell king 🀴 Thanks to your inspiration ive committed and will run my own rest api for local models"
X Link 2026-02-13T03:59Z 42.5K followers, [----] engagements

"RT @MiniMax_AI: Joinour AMA tmr on r/LocalLLaMA Bring your wildest questions and well be dropping some bonuses tooπŸ‘€ Let's talk about M2"
X Link 2026-02-13T05:50Z 42.5K followers, [--] engagements

"@StartupSpells for a smart and capable agent for 98% of things pretty much very very fast for how intelligent it is"
X Link 2026-02-13T08:57Z 42.5K followers, [---] engagements

"@johntheyoung I havent cared much for Opus since GLM [---] and MiniMax M2.1 Now Id say Codex is smarter but way slower but I dont need that kind of smart 99% of the time so Id take fast and iterate with MiniMax M2.5 over it"
X Link 2026-02-13T09:07Z 42.5K followers, [----] engagements

"@JamesLee1033176 Yeah you should be able to 100% Did Buy a GPU have something to do with the purchase and acquisition of those 4x RTX PRO 6000s"
X Link 2026-02-13T09:26Z 42.5K followers, [---] engagements

"RT @MiniMax_AI: Weights dropping REALLY REALLY SOON"
X Link 2026-02-13T09:47Z 42.5K followers, [--] engagements

"AMA with the MiniMax team is now live Join us tomorrow on r/LocalLLaMA for an AMA with The Founder and The Core Team behind MiniMax-M2.5 SoTA model Very excited for this one make sure not to miss it Friday 8am-11am PST https://t.co/85F5YejrbG Join us tomorrow on r/LocalLLaMA for an AMA with The Founder and The Core Team behind MiniMax-M2.5 SoTA model Very excited for this one make sure not to miss it Friday 8am-11am PST https://t.co/85F5YejrbG"
X Link 2026-02-13T16:12Z 42.5K followers, [----] engagements

"https://www.reddit.com/r/LocalLLaMA/s/ze7DbcmDhP https://www.reddit.com/r/LocalLLaMA/s/ze7DbcmDhP"
X Link 2026-02-13T16:14Z 42.5K followers, [---] engagements

"is that Dario Chinese open weights scare him that much"
X Link 2026-02-14T03:13Z 42.5K followers, [----] engagements

"RIDICULOUS Seedance [---] produced this using TWO SENTENCES prompt Sum up the AI discourse in a meme - make sure its retarded and gets [--] likes"
X Link 2026-02-14T05:00Z 42.5K followers, 30.7K engagements

"if what youre working on right now doesnt scream TOO BIG TOO CRAZY and TOO RIDICULOUS then youre not seeing far enough into the future"
X Link 2026-02-14T06:17Z 42.5K followers, [----] engagements

"@xlr8harder Let me guess Anthropic is gonna be the SoTA at it πŸ˜†"
X Link 2026-02-14T07:17Z 42.5K followers, [---] engagements

