# ![@ivanfioravanti Avatar](https://lunarcrush.com/gi/w:26/cr:twitter::43874767.png) @ivanfioravanti Ivan Fioravanti

Ivan Fioravanti posts on X most often about ultra, $2413t, apple, and claude code. They currently have [------] followers and [---] posts still getting attention, totaling [------] engagements in the last [--] hours.

### Engagements: [------] [#](/creator/twitter::43874767/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::43874767/c:line/m:interactions.svg)

- [--] Week [-------] +220%
- [--] Month [---------] +11%
- [--] Months [---------] +54%
- [--] Year [---------] +30%

### Mentions: [--] [#](/creator/twitter::43874767/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::43874767/c:line/m:posts_active.svg)

- [--] Week [--] -45%
- [--] Month [---] +19%
- [--] Months [-----] +67%
- [--] Year [-----] +109%

### Followers: [------] [#](/creator/twitter::43874767/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::43874767/c:line/m:followers.svg)

- [--] Week [------] +3.20%
- [--] Month [------] +8.40%
- [--] Months [------] +26%
- [--] Year [------] +83%

### CreatorRank: [-------] [#](/creator/twitter::43874767/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::43874767/c:line/m:influencer_rank.svg)

### Social Influence

**Social category influence**
[technology brands](/list/technology-brands)  11.28% [stocks](/list/stocks)  3.59% [products](/list/products)  2.05% [travel destinations](/list/travel-destinations)  1.54% [countries](/list/countries)  1.54% [finance](/list/finance)  1.03% [social networks](/list/social-networks)  0.51% [automotive brands](/list/automotive-brands)  0.51% [exchanges](/list/exchanges)  0.51% [celebrities](/list/celebrities)  0.51%

**Social topic influence**
[ultra](/topic/ultra) #1145, [$2413t](/topic/$2413t) #84, [apple](/topic/apple) #1583, [claude code](/topic/claude-code) #103, [llamacpp](/topic/llamacpp) 4.1%, [inference](/topic/inference) 3.59%, [ai](/topic/ai) 3.59%, [model](/topic/model) #1023, [more than](/topic/more-than) 2.56%, [strong](/topic/strong) #1727

**Top accounts mentioned or mentioned by**
@princecanuma @kernelpool @llmjunky @clementpillette @limzba @krakowiakk @angeloskath @thezachmueller @jakitreehorne @prince_canuma @awnihannun @minimaxai @grok @rickrosstn @sudoingx @digitalix @filipstrand @thedarthsider @andrejusb @andreihasna

**Top assets mentioned**
[Alphabet Inc Class A (GOOGL)](/topic/$googl) [Microsoft Corp. (MSFT)](/topic/microsoft) [Tesla, Inc. (TSLA)](/topic/tesla)

### Top Social Posts
Top posts by engagements in the last [--] hours

"πŸ”₯ Apple M3 Ultra 512GB vs NVIDIA RTX [----] LLM Benchmark Results πŸ”₯Running Qwen3-30B-A3B (Q4_K_M) on llamacpp and 4bit on MLX pp512: πŸ₯‡ M3 w/ MLX: [----] t/s πŸ₯ˆ 3090: [----] t/s πŸ₯‰ M3 w/ Metal: [----] t/s tg128: πŸ₯‡ 3090: [---] t/s πŸ₯ˆ M3 w/ MLX: [--] t/s πŸ₯‰ M3 w/ Metal: [--] t/s"  
[X Link](https://x.com/ivanfioravanti/status/1959018028579852674)  2025-08-22T22:21Z 19.9K followers, 27.3K engagements


"Qwen3-30B-A3B-2507 Q5_K_M + M3 Ultra 512GB + llamacpp with [--] parallel requests + agno = πŸ”₯ Next step is trying with MLX Batch Inference"  
[X Link](https://x.com/ivanfioravanti/status/1974494216702025788)  2025-10-04T15:18Z 20.4K followers, 14.2K engagements


"Using the new MLX server_benchmark for continuous batching to push MiniMax M2.1 locally on M3 Ultra. 4bit: [--] request: [--] t/s [--] requests: [---] t/s πŸ”₯ 8bit: [--] request: [--] t/s [--] requests: 150t/s πŸ”₯"  
[X Link](https://x.com/ivanfioravanti/status/2011115626690179290)  2026-01-13T16:38Z 19.9K followers, [----] engagements


"GLM-4.7-Flash-8bit-gs32 is perfect for local coding and tool calling 4bit is too compressed. Here detailed benchmark with contexts from 0.5K to 128K tokens on Apple M3 Ultra [---] and a comparison with 4bit at the end. Chart 1/3"  
[X Link](https://x.com/ivanfioravanti/status/2014676267325432266)  2026-01-23T12:27Z 20.4K followers, 14K engagements


"@lucatac0 Let me update my code that uses llamacpp and give it a try"  
[X Link](https://x.com/ivanfioravanti/status/2014943852788850734)  2026-01-24T06:10Z 20.4K followers, [---] engagements


"Preparing some llamacpp benchmarks on Apple Silicon. Stay tuned"  
[X Link](https://x.com/ivanfioravanti/status/2014975040421331399)  2026-01-24T08:14Z 19.8K followers, [----] engagements


"GLM-4.7-Flash 8bit Context Royal Rumble πŸ”₯ - M3 Ultra [---] - llamacpp [----] with vs mlx 0.30.5 (from main) - UnslothAI Q8_0 vs mlx 8bit (gs64) both [---] bpw πŸ₯‡ MLX πŸ₯ˆ llamacpp Big jump in performance in both with latest version Details in 🧡 and OpenCode tests coming soon"  
[X Link](https://x.com/ivanfioravanti/status/2015045081133121570)  2026-01-24T12:52Z 20.4K followers, [----] engagements


"Oura ring membership deleted no more need to track data with it πŸ€·πŸ»β™‚ @ouraring deleting subscription is not so easy I wonder why 🀨"  
[X Link](https://x.com/ivanfioravanti/status/2015395395174531343)  2026-01-25T12:04Z 20.2K followers, [----] engagements


"Jan-v2-4B models in 4bit and 8bit are now on mlx-community I use them through LM Studio. Slightly faster than llamacpp (q4_0 and q8_0 used as GGUF to make a better comparison): πŸ₯‡ MLX vs πŸ₯ˆ llama.cpp 4bit: [---] tps vs [---] tps 8bit: 92 tps vs [--] tps Great model @jandotai πŸš€"  
[X Link](https://x.com/ivanfioravanti/status/2016060789799395669)  2026-01-27T08:08Z 20.4K followers, [----] engagements


"Kimi K2.5 - Same Prompt - OpenCode vs KIMI CLI vs Claude Code any differences πŸ€” Create a single-page website for "PHANTOM PROTOCOL" a fictional tactical shooter video game. Design capabilities of this model are out of scale πŸ₯‡ Final results at the end of the video. 🧡"  
[X Link](https://x.com/ivanfioravanti/status/2017263132498919435)  2026-01-30T15:46Z 19.8K followers, 44.8K engagements


"Ollama ❀ MLX πŸ™ The CUDA backend of MLX now builds on Windows with tests passing special thanks to @ollama for a lot of help making this happen. It still needs some efforts to provide Windows binaries for MLX but I think ollama will ship the code to users much sooner. https://t.co/kKuJpd9Wdr"  
[X Link](https://x.com/ivanfioravanti/status/2017467153234919875)  2026-01-31T05:17Z 19.8K followers, [----] engagements


"K2.5 - GLM-4.7 and MiniMax M2.1 solving a Rubiks Cube in 3D. πŸ‘€ Kimi CLI and Claude Code used here. πŸ₯‡ GLM-4.7 [--] secs and perfect πŸ₯ˆ K2.5 - Missing colors in the cube πŸ₯‰ MiniMax [---] - Missing colors and failed to autosolve"  
[X Link](https://x.com/ivanfioravanti/status/2017595857986527570)  2026-01-31T13:48Z 19.8K followers, 29.1K engagements


"MLX Phantom Protocol prompt developed locally with GLM-4.7-Flash using the new tensor parallel support in mlx-lm.server and OpenCode to drive development mactop on the right measuring the two M3 Ultra [---] used as compute power [--] minutes and great result"  
[X Link](https://x.com/ivanfioravanti/status/2017902596455796830)  2026-02-01T10:07Z 19.8K followers, 17.2K engagements


"In the opencode.json I added this provider: "provider" : "mlx" : "models" : "mlx-community/GLM-4.7-Flash-8bit" :  "npm" : "@ai-sdk/openai-compatible" "options" : "baseURL" : "http://localhost:8080/v1"  https://twitter.com/i/web/status/2017905709342495204"  
[X Link](https://x.com/ivanfioravanti/status/2017905709342495204)  2026-02-01T10:19Z 19.8K followers, [---] engagements
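The flattened provider snippet in the post above appears to be an `opencode.json` fragment with its braces stripped during extraction. A cleaned-up sketch of what it likely looked like (structure reconstructed from the post's key/value pairs; field nesting is assumed, not verified against OpenCode's schema):

```json
{
  "provider": {
    "mlx": {
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "http://localhost:8080/v1" },
      "models": { "mlx-community/GLM-4.7-Flash-8bit": {} }
    }
  }
}
```

This registers a local OpenAI-compatible endpoint (here, an mlx-lm server on port 8080) as a model provider.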


"@angeloskath @ai I was facing some errors in tool calls so I had to use this PR locally: https://github.com/ml-explore/mlx-lm/pull/792"  
[X Link](https://x.com/ivanfioravanti/status/2017905835863585229)  2026-02-01T10:20Z 19.8K followers, [----] engagements


"vibe coding paralysis is real https://t.co/RbzyeSjatz"  
[X Link](https://x.com/ivanfioravanti/status/2017910121888616883)  2026-02-01T10:37Z 19.8K followers, [----] engagements


"Direct MLX support coming soon in @jandotai πŸ”₯ MLX support coming soon @jandotai https://t.co/vjetN3OYKM"  
[X Link](https://x.com/ivanfioravanti/status/2017985015749988540)  2026-02-01T15:34Z 19.8K followers, [----] engagements


"Ready for some fun with MLX and Step-3.5-Flash"  
[X Link](https://x.com/ivanfioravanti/status/2018233425707106730)  2026-02-02T08:02Z 19.8K followers, [----] engagements


"Adding support for model type step3p5 to MLX using Codex MLX Skill and @RepoPrompt let's try πŸš€"  
[X Link](https://x.com/ivanfioravanti/status/2018233932412510250)  2026-02-02T08:04Z 19.8K followers, [----] engagements


"glm-5 in February Incredible acceleration πŸš€ @TeksEdge glm-5"  
[X Link](https://x.com/ivanfioravanti/status/2018330102694555942)  2026-02-02T14:26Z 19.9K followers, [----] engagements


"MLX: I have a first version of Step-3.5-Flash running locally on my M3 Ultra πŸ”₯πŸ”₯πŸ”₯ But I bet @kernelpool will do much better"  
[X Link](https://x.com/ivanfioravanti/status/2018335134001361381)  2026-02-02T14:46Z 19.9K followers, 21.3K engagements


"MLX Step-3.5-Flash I've reached [--] toks/s Thanks to Fast-MLX skill by @awnihannun used within Codex with GPT [---] High From [--] toks/s v0 to [--] toks/s v2 πŸš€ but again I bet @kernelpool will do even better πŸ™ŒπŸ» MLX: I have a first version of Step-3.5-Flash running locally on my M3 Ultra πŸ”₯πŸ”₯πŸ”₯ But I bet @kernelpool will do much better https://t.co/pY9QAXaElH"  
[X Link](https://x.com/ivanfioravanti/status/2018349755429003442)  2026-02-02T15:44Z 19.8K followers, 13.9K engagements


"MLX Context Benchmark for Step-3.5-Flash-4bit using the PR of @kernelpool πŸ”₯ Here on Apple M3 Ultra 512GB but it can run on 128GB"  
[X Link](https://x.com/ivanfioravanti/status/2018375149251158040)  2026-02-02T17:25Z 19.9K followers, [----] engagements


"PR is here: https://github.com/ml-explore/mlx-lm/pull/836"  
[X Link](https://x.com/ivanfioravanti/status/2018375156834423166)  2026-02-02T17:25Z 19.8K followers, [----] engagements


"If you play with Apple MLX remember to install skills from @awnihannun uvx --from mlx-skills --codex https://github.com/awni/mlx-skills.git"  
[X Link](https://x.com/ivanfioravanti/status/2018395241553002550)  2026-02-02T18:45Z 19.8K followers, [----] engagements


"Step-3.5-Flash in action on MLX with OpenCode on a single (distributed testing in progress) M3 Ultra to create a snake game πŸ”₯ 6bit quantization. Perfect tool calling. Fast & powerful coding model Recommended Inference Settings: Temperature: [---] Top-p: [----] Top-k: [--] 🧡"  
[X Link](https://x.com/ivanfioravanti/status/2018630307012915583)  2026-02-03T10:19Z 19.9K followers, 17.1K engagements


"Server started with this: mlx_lm.server --model mlx-community/Step-3.5-Flash-4bit --temp [--] --top-p [----] --top-k [--] --trust-remote-code 🧡"  
[X Link](https://x.com/ivanfioravanti/status/2018630309038924150)  2026-02-03T10:19Z 19.8K followers, [----] engagements
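`mlx_lm.server` exposes an OpenAI-compatible HTTP API, which is how clients like OpenCode talk to it. A minimal sketch of building and sending a `/v1/chat/completions` request against such a local server (the sampling defaults below are illustrative, since the post's actual values are redacted; the helper names are mine):

```python
import json
from urllib import request

def chat_completion_payload(model, prompt, temperature=0.7, top_p=0.95):
    # Standard OpenAI-style chat body; the server selects the model
    # it was started with, but clients still pass the model name.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "top_p": top_p,
    }

def post_chat(base_url, payload):
    # POST the payload to a locally running server (default port 8080).
    req = request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = chat_completion_payload("mlx-community/Step-3.5-Flash-6bit", "Hello")
# post_chat("http://localhost:8080", payload)  # requires a running server
```

Because the endpoint is OpenAI-compatible, any OpenAI SDK client pointed at `http://localhost:8080/v1` should also work.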


"@RickRossTN No I invoked 6bit where is the 4bit part OpenCode is using 6bit and mlx_lm.server started with 6bit"  
[X Link](https://x.com/ivanfioravanti/status/2018632461522264499)  2026-02-03T10:27Z 19.8K followers, [---] engagements


"@RickRossTN Here the correct one. Thanks πŸ™πŸ» mlx_lm.server --model mlx-community/Step-3.5-Flash-6bit --temp [--] --top-p [----] --top-k [--] --trust-remote-code"  
[X Link](https://x.com/ivanfioravanti/status/2018638657432027164)  2026-02-03T10:52Z 19.8K followers, [---] engagements


"Pushing Mac Studio M3 Ultra fan to max speed during inference with TG Pro πŸ”₯"  
[X Link](https://x.com/ivanfioravanti/status/2018639506057121958)  2026-02-03T10:55Z 19.8K followers, [----] engagements


"MLX Distributed inference testing with Step-3.5-Flash-6bit in progress on [--] x M3 Ultra 512GB. Space Invaders coded locally coming soon πŸ‘Ύ Vite + JavaScript + Phaser [--] as engine"  
[X Link](https://x.com/ivanfioravanti/status/2018643250769514905)  2026-02-03T11:10Z 19.9K followers, [----] engagements


"@SIGKITTEN πŸ˜‚ Imagine when some Microsoft teams will start shipping for MacOS first too"  
[X Link](https://x.com/ivanfioravanti/status/2018722398540018110)  2026-02-03T16:25Z 19.9K followers, [---] engagements


"Amazing job Kimi K2.5 full VLM distributed πŸ”₯ This is Kimi K2.5 (1T params) the full VLM mind you not just the language model describing an image. It runs in distributed mode across two Big Macs using MLX πŸ₯³ πŸ₯‡ This is a first for mlx_vlm We wrote an experimental script with @Prince_Canuma read on to learn how it works. https://t.co/H8GTMWvRFz"  
[X Link](https://x.com/ivanfioravanti/status/2018789228398936158)  2026-02-03T20:50Z 20.3K followers, [----] engagements


"Feel the power of MLX πŸš€ Latest mlx-lm is out: - New models: Kimi K2.5 Step3.5 flash LongCat Flash lite thanks to @kernelpool - Support for distributed inference with mlx_lm.server thanks to @angeloskath - Much faster and more memory efficient DeepSeek v3 (and other MLA-based models) https://t.co/lEL2KnNz6Y"  
[X Link](https://x.com/ivanfioravanti/status/2019439788961488910)  2026-02-05T15:55Z 19.8K followers, [----] engagements


"Incredible speed up More than welcome while coding with MLX as backend πŸ™ The speed up for DeepSeek v3 is especially nice for long context (more than 2.5x). Some pre / post numbers here: https://t.co/iffB4lKBE7"  
[X Link](https://x.com/ivanfioravanti/status/2019439943861288991)  2026-02-05T15:56Z 19.8K followers, [----] engagements


"And the winner is: GPT-5.3 Codex And in real life is even better than benchmarks"  
[X Link](https://x.com/ivanfioravanti/status/2019537788874469673)  2026-02-05T22:25Z 20.1K followers, [----] engagements


"What is a good notebook for Linux Tired of waiting for M5 Max πŸ€·πŸ»β™‚"  
[X Link](https://x.com/ivanfioravanti/status/2019539323373383924)  2026-02-05T22:31Z 20.2K followers, [----] engagements


"For anyone complaining that GPT-5.3 Codex was not on the official Terminal Bench leaderboard. Here it is: 75.1% πŸ”₯πŸ”₯πŸ”₯"  
[X Link](https://x.com/ivanfioravanti/status/2019699177228526054)  2026-02-06T09:06Z 20.4K followers, 30K engagements


"mlx-lm-lora wins πŸ”₯πŸ”₯πŸ”₯ Some looooong awaited new features and efficiency gains are coming to mlx-lm-lora here are some of them πŸ™‚πŸ˜ŽπŸ‘ @lmstudio plus some cool new notebooks https://t.co/Ksek9Gwe8o"  
[X Link](https://x.com/ivanfioravanti/status/2019797541798183258)  2026-02-06T15:37Z 19.8K followers, [----] engagements


"MLX context benchmark for Qwen3-Coder-Next in bf16 [--] [--] [--] and 8bit quantizations tested on M3 Ultra with latest mlx-lm 0.30.6 πŸ”₯ Its a great model fast in all tested configs. Personally I suggest 6bit+ especially with larger context. Choose based on memory availability"  
[X Link](https://x.com/ivanfioravanti/status/2019873669699543168)  2026-02-06T20:39Z 20.4K followers, [----] engagements
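The "choose based on memory availability" advice above can be made concrete: weight memory scales linearly with bits per weight, so a back-of-envelope estimate is enough to pick a quantization. The sketch below ignores KV cache and activation overhead, and real formats (e.g. Q4_K_M, grouped MLX quants) carry extra scale metadata, so effective bits per weight run slightly higher:

```python
def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    # bytes = params * bpw / 8; with params in billions this is GB directly.
    return params_billion * bits_per_weight / 8

# e.g. a 30B-parameter model at common precisions:
for bpw in (16, 8, 6, 4):
    print(f"{bpw:>2} bpw -> {weight_memory_gb(30, bpw):.1f} GB")
```

So on a machine with, say, 48 GB of unified memory, 8-bit (~30 GB of weights for a 30B model) fits with headroom for context, while bf16 (~60 GB) does not.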


"@rod_coutinho @KrakowiakK I will create it next week Its time to do it"  
[X Link](https://x.com/ivanfioravanti/status/2020069114941960497)  2026-02-07T09:36Z 19.8K followers, [--] engagements


"I was testing Qwen3-Next-Coder on llamacpp to compare with MLX but results are too different I need to double check benchmark before posting anything πŸ‘€"  
[X Link](https://x.com/ivanfioravanti/status/2020195625779052650)  2026-02-07T17:59Z 20.4K followers, [----] engagements


"@mudler_it @LocalAI_API Top"  
[X Link](https://x.com/ivanfioravanti/status/2020250924736782819)  2026-02-07T21:38Z 19.9K followers, [--] engagements


"Today Ill be back to Milan and my Mac Studios Using them through iPad Pro in ssh is not my favorite experience πŸ€·πŸ»β™‚"  
[X Link](https://x.com/ivanfioravanti/status/2020380030485033186)  2026-02-08T06:11Z 19.8K followers, [----] engagements


"@andreabalducci @antirez I was planning to do the same test this week but with Open Models Lets see what happens 🀞"  
[X Link](https://x.com/ivanfioravanti/status/2020759824909172776)  2026-02-09T07:21Z 19.8K followers, [---] engagements


"No you can't do this cheaply with NVIDIA GPUs. Still so many misconceptions about running frontier AI locally. Yes it's possible. Yes Apple Silicon is the cheapest way to do it ($20k)."  
[X Link](https://x.com/ivanfioravanti/status/2020760098906255415)  2026-02-09T07:22Z 20.4K followers, [----] engagements


"@sudoingX @AlexFinn What are you talking about It's 600GB"  
[X Link](https://x.com/ivanfioravanti/status/2020877252112621616)  2026-02-09T15:07Z 20.3K followers, [---] engagements


"MLX OpenCode + Qwen3-Coder-Next-8bit 165K context πŸ‘€ Pushing this model to the max locally. Honestly I need to steer it to the right solution too often compared to larger models from MiniMax Zai StepFun"  
[X Link](https://x.com/ivanfioravanti/status/2020910223095992785)  2026-02-09T17:18Z 20.4K followers, [----] engagements


"@francip I need a CUDA machine. I can try in the cloud"  
[X Link](https://x.com/ivanfioravanti/status/2020910566429073825)  2026-02-09T17:20Z 19.9K followers, [---] engagements


"@Jasonio DGX Try it there and keep us posted"  
[X Link](https://x.com/ivanfioravanti/status/2020966368309543282)  2026-02-09T21:01Z 20.1K followers, [---] engagements


"@digitalix M5 Ultra will enable even faster image generation to be used with local coding agents πŸš€"  
[X Link](https://x.com/ivanfioravanti/status/2021103540198871392)  2026-02-10T06:06Z 19.9K followers, [--] engagements


"@jemp_error Good suggestion I've not done a context benchmark on it I will asap"  
[X Link](https://x.com/ivanfioravanti/status/2021104084552503737)  2026-02-10T06:08Z 19.9K followers, [---] engagements


"@jemp_error This was a run I did in the past I will try with latest mlx-lm to see if things improved πŸ’ͺ🏻 https://x.com/ivanfioravanti/status/2018375149251158040?s=20 MLX Context Benchmark for Step-3.5-Flash-4bit using the PR of @kernelpool πŸ”₯ Here on Apple M3 Ultra 512GB but it can run on 128GB https://t.co/tUZQuhp53c"  
[X Link](https://x.com/ivanfioravanti/status/2021138577636167680)  2026-02-10T08:26Z 19.9K followers, [--] engagements


"On-device real time transcription MLX-Audio-Swift is the way to go Thanks @Prince_Canuma for this gem πŸ™πŸ» On-device realtime transcription on iPhone [--] Pro max πŸš€ Using MLX-Audio-Swift + Qwen3-ASR-0.6B by @Alibaba_Qwen Its much faster and more consistent with the latest adjustments. Almost ready to push to GH. https://t.co/xFDjoxiJfg"  
[X Link](https://x.com/ivanfioravanti/status/2021145297351487884)  2026-02-10T08:52Z 19.9K followers, [----] engagements


"@Prince_Canuma @Alibaba_Qwen AMAZING"  
[X Link](https://x.com/ivanfioravanti/status/2021145451542487503)  2026-02-10T08:53Z 19.9K followers, [---] engagements


"MLX Context Benchmark for Step-3.5-Flash-4bit with mlx-lm 0.30.7 big performance boost after just [--] week Look by yourself πŸš€ I've been able to test up to 128K context using M3 Ultra 512GB Here the chart: MLX Context Benchmark for Step-3.5-Flash-4bit using the PR of @kernelpool πŸ”₯ Here on Apple M3 Ultra 512GB but it can run on 128GB https://t.co/tUZQuhp53c"  
[X Link](https://x.com/ivanfioravanti/status/2021164776160632932)  2026-02-10T10:10Z 19.9K followers, [----] engagements


"@thsottiaux 100$ tier"  
[X Link](https://x.com/ivanfioravanti/status/2021215892542021868)  2026-02-10T13:33Z 19.9K followers, [---] engagements


"@ronaldmannak @LiMzba 🀣"  
[X Link](https://x.com/ivanfioravanti/status/2021257697379622993)  2026-02-10T16:19Z 19.9K followers, [--] engagements


"Quality is Incredible 🀩 πŸš€ Introducing Qwen-Image-2.0 our next-gen image generation model 🎨 Your imagination unleashed. ✨ Type a paragraph get a pro slides ✨ Describe a scene get photoreal 2K magic ✨ Add text it just works (no more glitchy letters) ✨ Key upgrades: βœ… Professional https://t.co/rigOUYy81k"  
[X Link](https://x.com/ivanfioravanti/status/2021292640537305407)  2026-02-10T18:38Z 20K followers, [----] engagements


"Codex is better πŸ€·πŸ»β™‚ nearly all of the best engineers i know are switching from claude to codex"  
[X Link](https://x.com/ivanfioravanti/status/2021341823483118039)  2026-02-10T21:53Z 19.9K followers, [----] engagements


"@109mae Totally agree codex is more than just a coder"  
[X Link](https://x.com/ivanfioravanti/status/2021465788784443774)  2026-02-11T06:06Z 19.9K followers, [--] engagements


"People keep moving to Codex 😎 Cancelled my Max subscription to Claude. Had it for [--] months. Kept a Pro subscription for now but Codex with the [---] model provides comparable coding skill and a better UX."  
[X Link](https://x.com/ivanfioravanti/status/2021466536599486858)  2026-02-11T06:09Z 19.9K followers, [----] engagements


"He's still using Opus [---]. He'll try Codex soon. I have never experienced a more dumb Claude Code than today I have to start coding myself again cause it makes so many Low IQ mistakes they must be nerfing it now that Opus [---] is out or something is up"  
[X Link](https://x.com/ivanfioravanti/status/2021466735786983599)  2026-02-11T06:10Z 19.9K followers, [----] engagements


"mflux vs flux2.c quick performance test on M3 Ultra [---]. Both projects are pushing Apple Silicon hardware to the max mflux vs flux2.c in seconds 512x512: [----] vs [----] 1024x1024: [-----] vs [-----] 1792x1792: [-----] vs [----] Amazing jobs @filipstrand and @antirez πŸš€"  
[X Link](https://x.com/ivanfioravanti/status/2021479795561988411)  2026-02-11T07:01Z 20.4K followers, [----] engagements


"Prompt used: A surreal cinematic cityscape at dusk where modern skyscrapers bend and fold onto themselves streets curling upward into the sky. Gravity feels unstable with buildings mirrored and layered in impossible geometries. In the foreground stands Adrian Hale a lone contemplative man wearing a dark tailored coat seen from behind small against the vast distorted city. He appears thoughtful and calm as if questioning reality itself. Moody realistic lighting with deep shadows and cool blue-gray tones subtle warm highlights glowing from windows. Ultra-realistic high contrast dramatic"  
[X Link](https://x.com/ivanfioravanti/status/2021479799064072586)  2026-02-11T07:01Z 19.9K followers, [---] engagements


"@Viswana34226652 Yep merge with Space X has been tough I think"  
[X Link](https://x.com/ivanfioravanti/status/2021480360647983149)  2026-02-11T07:04Z 19.9K followers, [---] engagements


"@Prince_Canuma I go directly with CC slower but steady and precise. So far so good"  
[X Link](https://x.com/ivanfioravanti/status/2021480714978672829)  2026-02-11T07:05Z 19.9K followers, [---] engagements


"@CrazyAITech Yes Codex with GPT-5.3-Codex is faster than ever"  
[X Link](https://x.com/ivanfioravanti/status/2021481319495238022)  2026-02-11T07:07Z 19.9K followers, [--] engagements


"@paulmarin90 @Prince_Canuma yes it seems I caught it hallucinate multiple times while writing docs on a codebase. Very well written docs but with fake code examples"  
[X Link](https://x.com/ivanfioravanti/status/2021485721144766702)  2026-02-11T07:25Z 19.9K followers, [--] engagements


"@thedarthsider Problem is that this is 83B params vs 7B"  
[X Link](https://x.com/ivanfioravanti/status/2021535425010291166)  2026-02-11T10:42Z 20K followers, [--] engagements


"My iPhone [--] Pro fell down and is now full of signs everywhere I went to Apple Store in Milan asking to use Apple Care+ to change it and they told me not feasible only if stolen or completely broken πŸ‘€ @Apple what should I do Use a hammer πŸ€”"  
[X Link](https://x.com/ivanfioravanti/status/2021557310938558839)  2026-02-11T12:09Z 20.4K followers, [----] engagements


"Fasten your seatbelts Mega AI Release week"  
[X Link](https://x.com/ivanfioravanti/status/2021558605435301974)  2026-02-11T12:15Z 19.9K followers, [----] engagements


"@1littlecoder @Apple I was more than angry when I left the store. I've spent so much money in Apple devices that they should put picture of myself as best customer ever for that Milan's store 🀣"  
[X Link](https://x.com/ivanfioravanti/status/2021559968374353943)  2026-02-11T12:20Z 20.4K followers, [---] engagements


""I'm not writing this to make you feel helpless. I'm writing this because I think the single biggest advantage you can have right now is simply being early. Early to understand it. Early to use it. Early to adapt." πŸ’― https://t.co/ivXRKXJvQg"  
[X Link](https://x.com/ivanfioravanti/status/2021566760814878947)  2026-02-11T12:47Z 19.9K followers, [----] engagements


"GLM-5 Boom A new model is now available on https://t.co/gocggrfb3U. https://t.co/KZGoAsN5Z0"  
[X Link](https://x.com/ivanfioravanti/status/2021567506520170719)  2026-02-11T12:50Z 20.2K followers, [----] engagements


"M2.5 is not out yet. Why people are saying it is πŸ€”"  
[X Link](https://x.com/ivanfioravanti/status/2021589226744098889)  2026-02-11T14:16Z 20K followers, [----] engagements


"Here the complete prompt that is leveraging a flux2.skill created on the fly using codex. I will now try something similar but with mflux https://gist.github.com/ivanfioravanti/7297e8f19c3d760fd80fe692cf43b176"  
[X Link](https://x.com/ivanfioravanti/status/2021599615993118949)  2026-02-11T14:58Z 20.1K followers, [---] engagements


"With the release of M5 Max and Ultra MFLUX project will become a superstar Neural Accelerators will help a lot https://github.com/filipstrand/mflux"  
[X Link](https://x.com/ivanfioravanti/status/2021605142810792211)  2026-02-11T15:20Z 20.3K followers, [----] engagements


"@filipstrand @angeloskath On flux2 great jump Can't wait to test mflux on M5 Max or Ultra"  
[X Link](https://x.com/ivanfioravanti/status/2021628450084467037)  2026-02-11T16:52Z 20.2K followers, [---] engagements


"Tools calling failing on MLX asking help to Codex πŸš€"  
[X Link](https://x.com/ivanfioravanti/status/2021647369826349372)  2026-02-11T18:07Z 20.2K followers, [----] engagements


"Next week I'll try Tesla FSD in Italy with @CPunella So exciting πŸš€"  
[X Link](https://x.com/ivanfioravanti/status/2021652958338031732)  2026-02-11T18:30Z 20.3K followers, [----] engagements


"After an incredible [--] weeks using xAI Ive decided to leave and start using other models. It pains me to go but I was tired of waiting for Grok 4.20"  
[X Link](https://x.com/ivanfioravanti/status/2021654216524140559)  2026-02-11T18:35Z 20.4K followers, [----] engagements


"@rod_coutinho What have you used to test tool calling I'm hacking mlx-lm to get it work properly"  
[X Link](https://x.com/ivanfioravanti/status/2021658162022297653)  2026-02-11T18:50Z 20.1K followers, [----] engagements


"@LLMJunky @badlogicgames I have 20$ Claude Code too πŸ˜‰ But I use Codex more than anything else at the moment"  
[X Link](https://x.com/ivanfioravanti/status/2021676331998822599)  2026-02-11T20:02Z 19.9K followers, [--] engagements


"@LLMJunky @brooks_eth WOW πŸ‘€"  
[X Link](https://x.com/ivanfioravanti/status/2021676523057496117)  2026-02-11T20:03Z 19.9K followers, [--] engagements


"@__mharrison__ Qwen Coder Next better on coding. At least from my initial tests"  
[X Link](https://x.com/ivanfioravanti/status/2021685928767414615)  2026-02-11T20:41Z 20.1K followers, [----] engagements


"@spok_vulkan @AI_Homelab 100%"  
[X Link](https://x.com/ivanfioravanti/status/2021688759620632591)  2026-02-11T20:52Z 20.4K followers, [---] engagements


"Amazon Kindle Scribe Colorsoft not available in Amazon Italy but. Coming Soon on Amazon UK πŸ”₯"  
[X Link](https://x.com/ivanfioravanti/status/2021821568372928953)  2026-02-12T05:40Z 20.4K followers, [----] engagements


"@ClementPillette @Zai_org Wait M5 Ultra. I will try a distributed inference test on [--] x M3 Ultra today. Keep you posted"  
[X Link](https://x.com/ivanfioravanti/status/2021830382371410100)  2026-02-12T06:15Z 20.2K followers, [--] engagements


"@alexcovo_eth @filipstrand @angeloskath @openclaw Try a skill for mflux too release 0.16.0 is super fast https://x.com/ivanfioravanti/status/2021626616808644940?s=20 mflux 0.16.0 released and it's truly faster for flux2 Here an updated comparison with flux2.c using the 4B distilled version of the model. Creating a skill right now to generate images locally for my AI tests πŸ”₯ Great job @filipstrand and @angeloskath https://t.co/eLDabQqEbB"  
[X Link](https://x.com/ivanfioravanti/status/2021830795204116505)  2026-02-12T06:16Z 20.2K followers, [--] engagements


"Important This new Nanbeige model is really strong Since CORE is such an out of distribution eval this is further evidence that Nanbeige really is an extremely well-trained generalist model and isn't just 'benchmaxxed'. Since CORE is such an out of distribution eval this is further evidence that Nanbeige really is an extremely well-trained generalist model and isn't just 'benchmaxxed'"  
[X Link](https://x.com/ivanfioravanti/status/2021878110665544090)  2026-02-12T09:24Z 20.2K followers, [----] engagements


"I'll try Lambda to test inference for sure Well done Zach Over the last month I've been digging into model inference; what's the best out-of-the-box tokens/s on our hardware and how do you benchmark it Our model-inference revamp is now live with model cards built to answer exactly this (in a community-focused way): https://t.co/ZwGf5wgcQS Over the last month I've been digging into model inference; what's the best out-of-the-box tokens/s on our hardware and how do you benchmark it Our model-inference revamp is now live with model cards built to answer exactly this (in a community-focused way):"  
[X Link](https://x.com/ivanfioravanti/status/2021918013520547878)  2026-02-12T12:03Z 20.4K followers, [----] engagements


"@TheZachMueller @Prince_Canuma Downloading it now"  
[X Link](https://x.com/ivanfioravanti/status/2021938381115339200)  2026-02-12T13:24Z 19.9K followers, [--] engagements


"I was lucky to be able to test early preview of M2.5 and it's fast and furious Chinese Labs are cooking like crazy Well done πŸš€ Introducing M2.5 an open-source frontier model designed for real-world productivity. - SOTA performance at coding (SWE-Bench Verified 80.2%) search (BrowseComp 76.3%) agentic tool-calling (BFCL 76.8%) & office work. - Optimized for efficient execution 37% faster at complex https://t.co/UwiKzzQNG8 Introducing M2.5 an open-source frontier model designed for real-world productivity. - SOTA performance at coding (SWE-Bench Verified 80.2%) search (BrowseComp 76.3%)"  
[X Link](https://x.com/ivanfioravanti/status/2022023388936720636)  2026-02-12T19:01Z 20.4K followers, [----] engagements


"Antirez docet The 20$ codex plan is worth more than the $200 Claude Code plan. The 20$ codex plan is worth more than the $200 Claude Code plan"  
[X Link](https://x.com/ivanfioravanti/status/2022027106189029735)  2026-02-12T19:16Z 20.4K followers, [----] engagements


"@ollama @MiniMax_AI TOP"  
[X Link](https://x.com/ivanfioravanti/status/2022030811005129087)  2026-02-12T19:31Z 20.4K followers, [---] engagements


"@bernaferrari @AntLingAGI INCREDIBLE 😱"  
[X Link](https://x.com/ivanfioravanti/status/2022035263866990712)  2026-02-12T19:49Z 20.1K followers, [--] engagements


"Don't underestimate Trinity-Large-Preview by @arcee_ai Test in progress and it's fast in text generation phase Thanks @TheZachMueller for pushing me on testing it"  
[X Link](https://x.com/ivanfioravanti/status/2022047062049927367)  2026-02-12T20:36Z 20.4K followers, [----] engagements


"@LLMJunky @OpenAI Top OpenAI Top"  
[X Link](https://x.com/ivanfioravanti/status/2022051807921680779)  2026-02-12T20:54Z 20.3K followers, [---] engagements


"@andrejusb I got rid of Jetbrains one year ago after waiting and waiting for their AI to become good enough. 😒"  
[X Link](https://x.com/ivanfioravanti/status/2022191274305233314)  2026-02-13T06:09Z 20.4K followers, [--] engagements


"@storn_max Failed at first try 😒 ImportError: cannot import name 'ALLOWED_LAYER_TYPES' from 'transformers.configuration_utils' (/Users/ifioravanti/.venv-vllm-metal/lib/python3.12/site-packages/transformers/configuration_utils.py). Did you mean: 'ALLOWED_MLP_LAYER_TYPES'"  
[X Link](https://x.com/ivanfioravanti/status/2022260356316340691)  2026-02-13T10:43Z 20.4K followers, [--] engagements


"@storn_max On it but with Codex. πŸ˜‰"  
[X Link](https://x.com/ivanfioravanti/status/2022285082636025857)  2026-02-13T12:21Z 20.4K followers, [--] engagements


"@nanbeige Top You created something magical here Its fast and good πŸ”₯"  
[X Link](https://x.com/ivanfioravanti/status/2022306368489705906)  2026-02-13T13:46Z 20.4K followers, [---] engagements


"@doublebirdcap Model architecture should be the same so it will be a matter of minutes after release πŸš€"  
[X Link](https://x.com/ivanfioravanti/status/2022309843525046519)  2026-02-13T14:00Z 20.4K followers, [---] engagements


"@awnihannun Looks a great one to test Thanks for sharing πŸ™πŸ»"  
[X Link](https://x.com/ivanfioravanti/status/2022335753909645532)  2026-02-13T15:43Z 20.4K followers, [---] engagements


"@KrakowiakK Same architecture so same speed maybe they optimize thinking process so we can get answers faster"  
[X Link](https://x.com/ivanfioravanti/status/2022341441112911999)  2026-02-13T16:05Z 20.4K followers, [----] engagements


"@thegeorge @TendiesOfWisdom Here it is https://x.com/ivanfioravanti/status/2022360835621032111s=20 https://t.co/Irt5BwmmUO https://x.com/ivanfioravanti/status/2022360835621032111s=20 https://t.co/Irt5BwmmUO"  
[X Link](https://x.com/ivanfioravanti/status/2022361695742525758)  2026-02-13T17:26Z 20.4K followers, [---] engagements


"@aayushkrm Why Have you got issue with it For me it's good enough"  
[X Link](https://x.com/ivanfioravanti/status/2022378160088596707)  2026-02-13T18:31Z 20.4K followers, [---] engagements


"@JakiTreehorne @Prince_Canuma Wait for M5 πŸ˜‰"  
[X Link](https://x.com/ivanfioravanti/status/2022405146253365751)  2026-02-13T20:18Z 20.4K followers, [--] engagements


"@Nikonenes @Ed_Randgad @andreihasna It codes clearly not GPT-5.3-Codex level or Opus [---] but it 's good enough. Surely better than Closed Models of previous versions. Open Weights are catching up"  
[X Link](https://x.com/ivanfioravanti/status/2022682495783756053)  2026-02-14T14:41Z 20.4K followers, [--] engagements


"Pushing [--] Mac Studio M3 Ultra [---] to the max One is running gpqa_diamon on Nanbeige4.1-3B and the other is running context benchmark on _Trinity-Large-Preview πŸš€πŸš€πŸš€"  
[X Link](https://x.com/ivanfioravanti/status/2022032758764449852)  2026-02-12T19:39Z 20.4K followers, [----] engagements


"MiniMax M2.5 weights are online https://huggingface.co/MiniMaxAI/MiniMax-M2.5 https://huggingface.co/MiniMaxAI/MiniMax-M2.5"  
[X Link](https://x.com/ivanfioravanti/status/2022314477694144971)  2026-02-13T14:18Z 20.4K followers, [----] engagements


"@JakiTreehorne @Prince_Canuma Nope 😒 https://x.com/ivanfioravanti/status/2022360835621032111 https://t.co/Irt5BwmmUO https://x.com/ivanfioravanti/status/2022360835621032111 https://t.co/Irt5BwmmUO"  
[X Link](https://x.com/ivanfioravanti/status/2022401791267422619)  2026-02-13T20:05Z 20.4K followers, [---] engagements


"@test_tm7873 US only 😒"  
[X Link](https://x.com/ivanfioravanti/status/2022745957184520487)  2026-02-14T18:53Z 20.4K followers, [---] engagements


"nanotexture display on iPad Pro and MacBook Pro is game changer especially when used in the open Ill never go back Yes you loose a bit of contrast but readability is another league"  
[X Link](https://x.com/ivanfioravanti/status/2019686804916699469)  2026-02-06T08:17Z 20.4K followers, [----] engagements


"Dont underestimate Transformer Lab πŸ”₯ [----] commits and we're just getting started https://t.co/HvqIZ7LMTp [----] commits and we're just getting started https://t.co/HvqIZ7LMTp"  
[X Link](https://x.com/ivanfioravanti/status/2019881695080845504)  2026-02-06T21:11Z 20.4K followers, [----] engagements


"Why everyone left xAI all together πŸ‘€"  
[X Link](https://x.com/ivanfioravanti/status/2021473464599838866)  2026-02-11T06:36Z 20.4K followers, [----] engagements


"@sudo_goreng Its not slow Ultra Mega slow"  
[X Link](https://x.com/ivanfioravanti/status/2021616201022210427)  2026-02-11T16:03Z 20.4K followers, [----] engagements


"Llamacpp has a PR with a strong optimization for Qwen3-Coder-Next I will retest MLX vs Llamacpp on this model as soon as this will be merged. https://github.com/ggml-org/llama.cpp/pull/19375 https://github.com/ggml-org/llama.cpp/pull/19375"  
[X Link](https://x.com/ivanfioravanti/status/2022187629027242397)  2026-02-13T05:54Z 20.4K followers, [----] engagements


"MiniMax M2.5 weights going live in few hours πŸ”₯"  
[X Link](https://x.com/ivanfioravanti/status/2022258227757351399)  2026-02-13T10:35Z 20.4K followers, [----] engagements


"MiniMax-M2.5 is a joy to use FAST and POWERFUL"  
[X Link](https://x.com/ivanfioravanti/status/2022293591293407719)  2026-02-13T12:55Z 20.4K followers, [----] engagements


"BOOM Download started MiniMax-M2.5 is now open source. Trained with reinforcement learning across hundreds of thousands of complex real-world environments it delivers SOTA performance in coding agentic tool use search and office workflows. Hugging Face: https://t.co/Wxksq9BB7t GitHub: MiniMax-M2.5 is now open source. Trained with reinforcement learning across hundreds of thousands of complex real-world environments it delivers SOTA performance in coding agentic tool use search and office workflows. Hugging Face: https://t.co/Wxksq9BB7t GitHub:"  
[X Link](https://x.com/ivanfioravanti/status/2022313766419882187)  2026-02-13T14:15Z 20.4K followers, [----] engagements


"@LiMzba @AI_Homelab @UnslothAI Should we try a dwq πŸ‘€"  
[X Link](https://x.com/ivanfioravanti/status/2022339476325089579)  2026-02-13T15:57Z 20.4K followers, [---] engagements


"@ai_christianson This was basic generation test so context minimal [---] tokens. I'm gonna start some context tests soon"  
[X Link](https://x.com/ivanfioravanti/status/2022339858753306766)  2026-02-13T15:59Z 20.4K followers, [----] engagements


"Both M3 Ultra [---] busy doing context benchmarks on MiniMax [---] 4bit on the left 6bit on the right πŸš€"  
[X Link](https://x.com/ivanfioravanti/status/2022344002205573212)  2026-02-13T16:15Z 20.4K followers, [----] engagements


"Forge: Scalable Agent RL Framework and Algorithm. The secret to reach Opus [---] level @MiniMax_AI is cooking https://www.minimax.io/news/forge-scalable-agent-rl-framework-and-algorithm https://www.minimax.io/news/forge-scalable-agent-rl-framework-and-algorithm"  
[X Link](https://x.com/ivanfioravanti/status/2022353972548214868)  2026-02-13T16:55Z 20.4K followers, [----] engagements


"@sleep_deprivado Neural Accelerators (matmul in hardware) is the cherry on the cake. M5 πŸš€"  
[X Link](https://x.com/ivanfioravanti/status/2022368752327651580)  2026-02-13T17:54Z 20.4K followers, [----] engagements


"GPT [---] Codex Spark needs an urgent review: "1000 tokens per second means nothing if the model can't follow a basic prompt." GPT [---] Codex Spark is fast but not smart. I gave [--] models the same prompt: Create a hot air balloon ride in HTML. Claude Opus 4.6: Beautiful night scene with colorful balloon. Nailed it. GLM 5: Vibrant sunset with detailed balloon and basket. Great. MiniMax M2.5: Dreamy https://t.co/5O9HE6tO2K GPT [---] Codex Spark is fast but not smart. I gave [--] models the same prompt: Create a hot air balloon ride in HTML. Claude Opus 4.6: Beautiful night scene with colorful balloon."  
[X Link](https://x.com/ivanfioravanti/status/2022382282254983527)  2026-02-13T18:48Z 20.4K followers, [----] engagements


"@CalimanuLoredan Currently is experimental and there is no way to know the limit as far as I can see but when it works it's a great model"  
[X Link](https://x.com/ivanfioravanti/status/2022744723782537423)  2026-02-14T18:48Z 20.4K followers, [---] engagements


"@mweinbach It's much bigger 744B params with 40B active. DeepSeek Sparse Attention helps but overall is too much for a single M3 Ultra"  
[X Link](https://x.com/ivanfioravanti/status/2022724112922079529)  2026-02-14T17:26Z 20.4K followers, [---] engagements


"@itscarlospaiva @MiniMax_AI Maybe smaller ones yes but MiniMax is a public company listed on Hong Kong stock exchange not a small startup"  
[X Link](https://x.com/ivanfioravanti/status/2023090417924018537)  2026-02-15T17:41Z 20.4K followers, [--] engagements


"Be ready for some amazing new open model releases in upcoming weeks πŸ€πŸš€"  
[X Link](https://x.com/ivanfioravanti/status/2020759579898921319)  2026-02-09T07:20Z 20.4K followers, [----] engagements


"What is this Nanbeige4.1-3B model running at - [--] toks/s in bf16 (in video) - [---] toks/s in 8bit on M3 Ultra with MLX with these benchmark scores πŸ”₯"  
[X Link](https://x.com/ivanfioravanti/status/2021645592108445951)  2026-02-11T18:00Z 20.4K followers, 85.3K engagements


"@otarkhan94 True Next architecture is top"  
[X Link](https://x.com/ivanfioravanti/status/2023102854764826825)  2026-02-15T18:31Z 20.4K followers, [--] engagements


"mflux 0.16.0 released and it's truly faster for flux2 Here an updated comparison with flux2.c using the 4B distilled version of the model. Creating a skill right now to generate images locally for my AI tests πŸ”₯ Great job @filipstrand and @angeloskath"  
[X Link](https://x.com/ivanfioravanti/status/2021626616808644940)  2026-02-11T16:45Z 20.4K followers, [----] engagements


"MLX - Many quantizations of JoyAI-LLM-Flash are now available on mlx-community on huggingface. It seems a strong model Context benchmark results on M3 Ultra coming soon and testing it now with OpenCode and mlx_lm.server https://huggingface.co/mlx-community/modelssearch=joyai https://huggingface.co/mlx-community/modelssearch=joyai"  
[X Link](https://x.com/ivanfioravanti/status/2023321847878779000)  2026-02-16T09:01Z 20.4K followers, [----] engagements


"@Alibaba_Qwen Congrats for the release Deep diving on it with MLX right now"  
[X Link](https://x.com/ivanfioravanti/status/2023342098892636612)  2026-02-16T10:22Z 20.4K followers, [----] engagements


"LTX-2 is preparing for the battle πŸš€ Faster than you think. Faster than you think"  
[X Link](https://x.com/ivanfioravanti/status/2023372492707082303)  2026-02-16T12:22Z 20.4K followers, [----] engagements


"How to use it with Claude Code My updated GIST here. https://gist.github.com/ivanfioravanti/03d7b4d6cd856e6a541edf373d9974d8 https://gist.github.com/ivanfioravanti/03d7b4d6cd856e6a541edf373d9974d8"  
[X Link](https://x.com/ivanfioravanti/status/2022293593285706133)  2026-02-13T12:55Z 20.4K followers, [---] engagements


"@swyx @deepseek_ai It did not work with Linux πŸ’ͺ"  
[X Link](https://x.com/ivanfioravanti/status/2022619806248374736)  2026-02-14T10:31Z 20.4K followers, [----] engagements


"Left Opus Center Gemini [--] Right GPT 5.3"  
[X Link](https://x.com/ivanfioravanti/status/2022773145191506325)  2026-02-14T20:41Z 20.4K followers, [----] engagements


"I had the brilliant idea of trying context benchmark test of MiniMax M2.5 with bf16 up to 128K context. This was the result. πŸ˜– https://t.co/Irt5BwmmUO https://t.co/Irt5BwmmUO"  
[X Link](https://x.com/ivanfioravanti/status/2022362032595451924)  2026-02-13T17:27Z 20.4K followers, 26.3K engagements


"I'm finally entering the Google Gemini world too I subscribed to Ultra so I could test Deep Think But I hit a wall immediately 😒"  
[X Link](https://x.com/ivanfioravanti/status/2022722758854217874)  2026-02-14T17:21Z 20.4K followers, 27.7K engagements


"@pashmerepat M5 Ultra is coming"  
[X Link](https://x.com/ivanfioravanti/status/2023008869564526952)  2026-02-15T12:17Z 20.4K followers, [----] engagements


"Adding Perplexity computation to results Using mlx_lm.preplexity by @N8Programs πŸš€"  
[X Link](https://x.com/ivanfioravanti/status/2023093904237572189)  2026-02-15T17:55Z 20.4K followers, [---] engagements


"Monitoring tool is mactop"  
[X Link](https://x.com/ivanfioravanti/status/2023131092320985586)  2026-02-15T20:23Z 20.4K followers, [---] engagements


"Another "small" LLM has been released: "JoyAI-LLM-Flash" by JD Open Source Chinese lab. Base and Instruct models have been release on HuggingFace: https://huggingface.co/jdopensource/JoyAI-LLM-Flash 48B total params with only http://x.com/i/article/2023333007361241088 http://x.com/i/article/2023333007361241088"  
[X Link](https://x.com/ivanfioravanti/status/2023341417133748350)  2026-02-16T10:19Z 20.4K followers, [----] engagements


"Ok it's Qwen [---] time now πŸ”₯ πŸš€ Qwen3.5-397B-A17B is here: The first open-weight model in the Qwen3.5 series. πŸ–ΌNative multimodal. Trained for real-world agents. ✨Poweredbyhybridlinearattention+sparseMoEandlarge-scaleRLenvironmentscaling. ⚑8.6x19.0xdecodingthroughputvsQwen3-Max 🌍201 https://t.co/Pq0qIk54MB πŸš€ Qwen3.5-397B-A17B is here: The first open-weight model in the Qwen3.5 series. πŸ–ΌNative multimodal. Trained for real-world agents. ✨Poweredbyhybridlinearattention+sparseMoEandlarge-scaleRLenvironmentscaling. ⚑8.6x19.0xdecodingthroughputvsQwen3-Max 🌍201 https://t.co/Pq0qIk54MB"  
[X Link](https://x.com/ivanfioravanti/status/2023341849369620551)  2026-02-16T10:21Z 20.4K followers, [----] engagements


"I bet @Prince_Canuma is on it already I'm still downloading this beast"  
[X Link](https://x.com/ivanfioravanti/status/2023373162348732589)  2026-02-16T12:25Z 20.4K followers, [---] engagements


"RT @Prince_Canuma: Already on MLX-VLM πŸš€ Pull from the main branch we just pushed a fix for long context"  
[X Link](https://x.com/ivanfioravanti/status/2023409408349516178)  2026-02-16T14:49Z 20.4K followers, [--] engagements


"Kimi K2.5 (Kimi CLI) vs MiniMax [---] (CC) vs GLM [---] (CC). πŸ”₯ Same prompt to create a single-page website for "PHANTOM PROTOCOL" a fictional tactical shooter video game 0-shot. Spoiler IMO: πŸ₯‡ Kimi K2.5 is another league πŸ₯ˆ MiniMax [---] πŸ₯‰ GLM 4.7"  
[X Link](https://x.com/ivanfioravanti/status/2017486165188690310)  2026-01-31T06:32Z 20.4K followers, 70K engagements


"People keep asking me how to use Claude Code with different model providers. Here a gist with Kimi MiniMax zai and kooka/mlx server. https://gist.github.com/ivanfioravanti/03d7b4d6cd856e6a541edf373d9974d8 https://gist.github.com/ivanfioravanti/03d7b4d6cd856e6a541edf373d9974d8"  
[X Link](https://x.com/ivanfioravanti/status/2017585121465971054)  2026-01-31T13:05Z 20.4K followers, 22.2K engagements
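For context on how such gists typically work: Claude Code can be pointed at an Anthropic-compatible third-party or local endpoint via environment variables. A minimal sketch, assuming a local server; the URL and token below are placeholders, not values from the gist:

```shell
# Point Claude Code at an Anthropic-compatible endpoint.
# Placeholder values -- substitute your provider's base URL and key
# (or a local mlx/kooka server address).
export ANTHROPIC_BASE_URL="http://localhost:8080"
export ANTHROPIC_AUTH_TOKEN="sk-placeholder"
claude
```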


"Buying a tinybox from Europe now is 10% cheaper than [--] year ago evaluating. πŸ‘€"  
[X Link](https://x.com/ivanfioravanti/status/2018347929870963198)  2026-02-02T15:37Z 20.4K followers, 93.4K engagements


"Qwen3-Coder-Next MLX vs llama.cpp on M3 Ultra πŸ”₯ Incredible results I know I tested multiple times but keep seeing MLX winning by a large margin. πŸ€·πŸ»β™‚ BTW This is a 80B MoE (3B active) with 256K ctx http://x.com/i/article/2020767149053108224 http://x.com/i/article/2020767149053108224"  
[X Link](https://x.com/ivanfioravanti/status/2020876939917971867)  2026-02-09T15:06Z 20.4K followers, 29.6K engagements


"In the past days I've got early access to MiniMax M2.5 and I've been able to play with it quite a lot. M2.1 was already a great model [---] is incremental on top of it and combined with Claude Code delivers amazing results All images have been generated locally using flux2.c"  
[X Link](https://x.com/ivanfioravanti/status/2021599613526987232)  2026-02-11T14:58Z 20.4K followers, 20.6K engagements


"How can a 3B parameters model reach this quality πŸ‘€ What is this Nanbeige4.1-3B model running at - [--] toks/s in bf16 (in video) - [---] toks/s in 8bit on M3 Ultra with MLX with these benchmark scores πŸ”₯ https://t.co/8RO5QiyVmq What is this Nanbeige4.1-3B model running at - [--] toks/s in bf16 (in video) - [---] toks/s in 8bit on M3 Ultra with MLX with these benchmark scores πŸ”₯ https://t.co/8RO5QiyVmq"  
[X Link](https://x.com/ivanfioravanti/status/2021648512380022861)  2026-02-11T18:12Z 20.4K followers, 54.1K engagements


"MLX: quick preview of @arcee_ai Trinity-Large-Preview context benchmark on M3 Ultra 512GB: it's fast πŸ”₯ More details and tests tomorrow"  
[X Link](https://x.com/ivanfioravanti/status/2022051529423769671)  2026-02-12T20:53Z 20.4K followers, [----] engagements


"Trinity Large Preview is really fast for its size. It's a 398B params sparse (MoE) with 13B active parameters per token. At 4bit is usable up to 64K context on M3 Ultra. Can't wait to test an M5 Ultra 🀩"  
[X Link](https://x.com/ivanfioravanti/status/2022320355273302270)  2026-02-13T14:42Z 20.4K followers, [----] engagements
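The claim that a 398B-param model fits on a 512GB machine at 4-bit checks out with back-of-envelope arithmetic; a quick sketch (it ignores per-group quantization scales and KV-cache memory, so real usage is somewhat higher):

```python
def quantized_weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight memory at a given quantization level.

    Ignores per-group quantization scales and KV-cache memory,
    so actual usage will be somewhat higher.
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

# Trinity-Large-Preview: 398B total params at 4-bit quantization
print(round(quantized_weight_gb(398, 4)))  # β†’ 199 (GB of weights)
```

At roughly 199 GB of weights, a 512GB M3 Ultra has headroom for the KV cache that a 64K context requires.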


"MLX MiniMax [---] running LOCALLY on a single M3 Ultra 512GB Writing a poem on LLMs at 6bit quantization πŸ”₯ Let's start some coding context and distributed tests Generation: [----] tokens-per-sec Peak memory: [---] GB"  
[X Link](https://x.com/ivanfioravanti/status/2022338870172684655)  2026-02-13T15:55Z 20.4K followers, 223.3K engagements


"MiniMax M2.5 is here Weights released open on Hugging Face. Let's make a quick context benchmark test using MLX with: single request (no batching VLLM style here) no caching of previous request Mac http://x.com/i/article/2021144156400209921 http://x.com/i/article/2021144156400209921"  
[X Link](https://x.com/ivanfioravanti/status/2022360835621032111)  2026-02-13T17:22Z 20.4K followers, 55.1K engagements


"@andreihasna Yes Pretty well My next test is OpenCode with M2.5"  
[X Link](https://x.com/ivanfioravanti/status/2022361349830144265)  2026-02-13T17:24Z 20.4K followers, [----] engagements


"Hey Apple my wife wants an M5 Max or M5 Ultra for Valentine's Day can you help"  
[X Link](https://x.com/ivanfioravanti/status/2022406170560876990)  2026-02-13T20:23Z 20.4K followers, [----] engagements


"I think it's time to buy Apple stocks local AI is gonna push Macs sale to the next level πŸš€"  
[X Link](https://x.com/ivanfioravanti/status/2022542569486557586)  2026-02-14T05:25Z 20.4K followers, [----] engagements


"@swyx @deepseek_ai Open source will probably never beat the best closed model but its reaching the good enough state faster than ever. K2.5 is better than GPT [---] no k3 will be better than [---] and so on. πŸ€·πŸ»β™‚"  
[X Link](https://x.com/ivanfioravanti/status/2022559433524392005)  2026-02-14T06:32Z 20.4K followers, 17.8K engagements


"@Jezmond81 More power efficient combined with a LOT of unified memory but not as fast as Nvidia"  
[X Link](https://x.com/ivanfioravanti/status/2022579234988835138)  2026-02-14T07:50Z 20.4K followers, [--] engagements


"GLM-5 can't be run locally on Apple Silicon. Even at 4bit quantization it's too slow. We need more GPU power and memory bandwidth for model of this size"  
[X Link](https://x.com/ivanfioravanti/status/2022683062509736278)  2026-02-14T14:43Z 20.4K followers, 13.3K engagements
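The bandwidth point can be made concrete with a memory-bound roofline estimate: each decoded token must stream the active weights from memory once, so decode speed is bounded by bandwidth divided by active-parameter bytes. A sketch under that assumption; the ~819 GB/s figure for M3 Ultra and the example parameter counts are illustrative, not measurements from the post:

```python
def est_decode_tps(bandwidth_gb_s: float, active_params_b: float, bits: int) -> float:
    """Roofline upper bound on decode tokens/sec for a memory-bound LLM:
    every generated token reads the active weights once from unified memory."""
    bytes_per_token = active_params_b * 1e9 * bits / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Illustrative numbers: ~819 GB/s unified memory, 40B active params, 4-bit weights
print(round(est_decode_tps(819, 40, 4)))  # β†’ 41 (tokens/sec upper bound)
```

Real throughput lands below this bound (attention, KV-cache reads, and compute all cost extra), which is why large dense-active models feel slow even when they fit in memory.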


"This is what I mean. Benchmarking 64k context on M3 Ultra: Prompt: [-----] tokens [----] tokens-per-sec Generation: [---] tokens [----] tokens-per-sec Peak memory: [------] GB Total wall time: 1492s πŸ‘€"  
[X Link](https://x.com/ivanfioravanti/status/2022688884132331786)  2026-02-14T15:06Z 20.4K followers, [----] engagements


"MLX Royal Rumble of models that can run in 4bit quantization up to 128K context on a single M3 Ultra 512GB πŸ”₯ Who's fastest πŸ€·πŸ»β™‚ Judge by yourself"  
[X Link](https://x.com/ivanfioravanti/status/2022690242042401159)  2026-02-14T15:11Z 20.4K followers, [----] engagements


""MiniMax M2.5 achieved SOTA on SWE largely because we conducted extensive training across 10+ programming languages. In particular iOS and Android received significant focus which leads to substantial improvements in client-side and mobile app development." πŸš€"  
[X Link](https://x.com/ivanfioravanti/status/2022695829887230398)  2026-02-14T15:34Z 20.4K followers, [----] engagements


"Another small issue while using Gemini CLI an infinite loop"  
[X Link](https://x.com/ivanfioravanti/status/2022727714759639463)  2026-02-14T17:40Z 20.4K followers, [----] engagements


"Eulerian Fluid simulation test Zero-shot Opus [---] vs GPT-5.3 vs Gemini [--] Deep Think My personal preference: πŸ₯‡ Gemini [--] Deep Think (really strong) πŸ₯ˆ Opus [---] πŸ₯‰ GPT [---] High"  
[X Link](https://x.com/ivanfioravanti/status/2022744459654430787)  2026-02-14T18:47Z 20.4K followers, 37.8K engagements


"Seed [--] Pro is a monster 😱 Seed [---] is finally out πŸ”₯ https://t.co/XXPqBSaE0E Seed [---] is finally out πŸ”₯ https://t.co/XXPqBSaE0E"  
[X Link](https://x.com/ivanfioravanti/status/2022792979677671848)  2026-02-14T22:00Z 20.4K followers, [----] engagements


"@QuixiAI Step-3.5-Flash has been a positive surprise Fast and powerful"  
[X Link](https://x.com/ivanfioravanti/status/2022804405716742373)  2026-02-14T22:45Z 20.4K followers, [---] engagements


"Gemini [--] preview models are all strong but when will they become officially released There are small issues (looping) and this overall sense of unfinished around models and tools. Google is improving but last mile is still missing"  
[X Link](https://x.com/ivanfioravanti/status/2022964156564164964)  2026-02-15T09:20Z 20.4K followers, [----] engagements


"@KrakowiakK As you said: starting point πŸš€ It can only get better day after day"  
[X Link](https://x.com/ivanfioravanti/status/2022970538600083846)  2026-02-15T09:45Z 20.4K followers, [---] engagements


"Everything is accelerating at an insane pace.πŸš€"  
[X Link](https://x.com/ivanfioravanti/status/2022999564769439841)  2026-02-15T11:40Z 20.4K followers, [----] engagements


"I really think Elon Musk should create two separate accounts one as entrepreneur and visionary and one for politics"  
[X Link](https://x.com/ivanfioravanti/status/2023014559536279731)  2026-02-15T12:40Z 20.4K followers, [----] engagements


"@DIY_Tardis @swyx @deepseek_ai Open Source runs the world"  
[X Link](https://x.com/ivanfioravanti/status/2023023328022437980)  2026-02-15T13:15Z 20.4K followers, [---] engagements


"As soon as M5 Max and Ultra will be released Ill buy [--] + [--]. So please @Apple release them before end of March to boost your Q1 earnings 😎"  
[X Link](https://x.com/ivanfioravanti/status/2023062345807737188)  2026-02-15T15:50Z 20.4K followers, [----] engagements


"Another Monster LLM has been released The white dog is coming Happy to share that we have released JoyAI-LLM Flash via JD OpenSourcea state-of-the-art instruction model based on the Mixture-of-Experts (MoE) architecture. Model weights are now available on @huggingface πŸ€—Huggingface (instruct model): https://t.co/WF8MCBnxDu The white dog is coming Happy to share that we have released JoyAI-LLM Flash via JD OpenSourcea state-of-the-art instruction model based on the Mixture-of-Experts (MoE) architecture. Model weights are now available on @huggingface πŸ€—Huggingface (instruct model):"  
[X Link](https://x.com/ivanfioravanti/status/2023064679191302191)  2026-02-15T15:59Z 20.4K followers, [----] engagements


"@MiniMax_AI Upgraded I'm Max High-Speed now"  
[X Link](https://x.com/ivanfioravanti/status/2023071222037147654)  2026-02-15T16:25Z 20.4K followers, [----] engagements


"Upgraded my yearly @MiniMax_AI plan to Max High-Speed I'm ready for M3 M3.1 M3.5 and M4 πŸš€ Video 20x but just to show model MiniMax-M2.5-highspeed in action in Claude Code"  
[X Link](https://x.com/ivanfioravanti/status/2023071872032940246)  2026-02-15T16:28Z 20.4K followers, 18.5K engagements


"MLX context benchmark repo optimized Model now loads once and stays in memory across all context sizes warmup pass added too. Previously it spawned a new mlx_lm subprocess for every context reloading each time. Now it uses the mlx_lm Python API. πŸš€ https://github.com/ivanfioravanti/llm_context_benchmarks https://github.com/ivanfioravanti/llm_context_benchmarks"  
[X Link](https://x.com/ivanfioravanti/status/2023082186719850802)  2026-02-15T17:09Z 20.4K followers, [----] engagements
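The optimization described in this post (load once, add a warmup pass, reuse the resident model across context sizes instead of spawning a subprocess per run) follows a common benchmarking pattern. A generic sketch of that control flow; `load_model` and `run_once` are illustrative stand-ins for the repo's actual mlx_lm calls:

```python
import time

def benchmark_contexts(load_model, run_once, context_sizes):
    """Benchmark several context sizes against one resident model.

    Loads the model a single time and does one warmup pass, instead of
    spawning a fresh subprocess (re-loading weights) for every context size.
    """
    model = load_model()               # weight-loading cost paid once
    run_once(model, context_sizes[0])  # warmup: lets compile/caches settle
    results = {}
    for n in context_sizes:
        start = time.perf_counter()
        run_once(model, n)
        results[n] = time.perf_counter() - start
    return results
```

The design choice matters for large models: reloading hundreds of GB of weights per data point can dominate the wall time of the benchmark itself.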


"Testing JoyAI-LLM-Flash with this version right now TTFT added too πŸ”₯ Thanks to Claude Code + Opus [---] here. Let's try them all"  
[X Link](https://x.com/ivanfioravanti/status/2023085160602739115)  2026-02-15T17:21Z 20.4K followers, [---] engagements


"MLX preview of new context benchmark format on JoyAI-Flash-4bit πŸ”₯ Time To First Token and Perplexity added LFG πŸš€"  
[X Link](https://x.com/ivanfioravanti/status/2023099033791643979)  2026-02-15T18:16Z 20.4K followers, [----] engagements


"@exRhenum Italy is diifferent πŸ€·πŸ»β™‚"  
[X Link](https://x.com/ivanfioravanti/status/2023105939348353164)  2026-02-15T18:43Z 20.4K followers, [---] engagements


"MLX DWQ quantization works Here Perplexity for JoyAI-LLM_Flash Uploaded on mlx-community by @kernelpool"  
[X Link](https://x.com/ivanfioravanti/status/2023114430838677559)  2026-02-15T19:17Z 20.4K followers, [----] engagements


"@cacus @kernelpool Distilled Weight Quantization (DWQ) Here more details. https://github.com/ml-explore/mlx-lm/blob/main/mlx_lm/LEARNED_QUANTS.md https://github.com/ml-explore/mlx-lm/blob/main/mlx_lm/LEARNED_QUANTS.md"  
[X Link](https://x.com/ivanfioravanti/status/2023119290107355397)  2026-02-15T19:36Z 20.4K followers, [---] engagements


"Imagine Apple Vision Pro [--] with M6 and a model like SeeDance [--] but immersive. Endgame"  
[X Link](https://x.com/ivanfioravanti/status/2023122641826066494)  2026-02-15T19:50Z 20.4K followers, [----] engagements


"@Prince_Canuma YES πŸ™ŒπŸ»"  
[X Link](https://x.com/ivanfioravanti/status/2023125226498781692)  2026-02-15T20:00Z 20.4K followers, [---] engagements


"@steipete @OpenAI @openclaw Top Congrats to both you and OpenAI"  
[X Link](https://x.com/ivanfioravanti/status/2023186485038714994)  2026-02-16T00:03Z 20.4K followers, [---] engagements


"If you encounter strange behavior on your Apple Silicon Mac during LLM experiments reboot. This resolves 80% of my issues"  
[X Link](https://x.com/ivanfioravanti/status/2023328400887177262)  2026-02-16T09:27Z 20.4K followers, [----] engagements


"Running all benchmarks required a lot of time even using [--] Mac Studios. I'll have to keep automating as much as possible here"  
[X Link](https://x.com/ivanfioravanti/status/2023341600987046107)  2026-02-16T10:20Z 20.4K followers, [---] engagements


"@Prince_Canuma BOOM"  
[X Link](https://x.com/ivanfioravanti/status/2023385928610558088)  2026-02-16T13:16Z 20.4K followers, [--] engagements


"Apple 4th March event"  
[X Link](https://x.com/ivanfioravanti/status/2023410866314760281)  2026-02-16T14:55Z 20.4K followers, [---] engagements


Top accounts mentioned or mentioned by @princecanuma @kernelpool @llmjunky @clementpillette @limzba @krakowiakk @angeloskath @thezachmueller @jakitreehorne @prince_canuma @awnihannun @minimaxai @grok @rickrosstn @sudoingx @digitalix @filipstrand @thedarthsider @andrejusb @andreihasna

Top assets mentioned Alphabet Inc Class A (GOOGL) Microsoft Corp. (MSFT) Tesla, Inc. (TSLA)

Top Social Posts

Top posts by engagements in the last [--] hours

"πŸ”₯ Apple M3 Ultra 512GB vs NVIDIA RTX [----] LLM Benchmark Results πŸ”₯Running Qwen3-30B-A3B (Q4_K_M) on llamacpp and 4bit on MLX pp512: πŸ₯‡ M3 w/ MLX: [----] t/s πŸ₯ˆ 3090: [----] t/s πŸ₯‰ M3 w/ Metal: [----] t/s tg128: πŸ₯‡ 3090: [---] t/s πŸ₯ˆ M3 w/ MLX: [--] t/s πŸ₯‰ M3 w/ Metal: [--] t/s"
X Link 2025-08-22T22:21Z 19.9K followers, 27.3K engagements
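The pp512/tg128 labels in the post above are the standard llama-bench metrics: throughput while processing a 512-token prompt and while generating 128 tokens. As an illustration of how to read such numbers, here is a small Python helper (the function name and the sample throughputs are hypothetical, not from the post) that turns the two rates into an end-to-end latency estimate for one request:

```python
def estimate_latency(pp_tps: float, tg_tps: float,
                     prompt_tokens: int = 512, gen_tokens: int = 128) -> float:
    """Estimate end-to-end seconds for one request from benchmark throughputs.

    pp_tps: prompt-processing throughput (tokens/s), e.g. a pp512 result.
    tg_tps: token-generation throughput (tokens/s), e.g. a tg128 result.
    """
    if pp_tps <= 0 or tg_tps <= 0:
        raise ValueError("throughputs must be positive")
    # Prefill and decode phases run sequentially, so their times add up.
    return prompt_tokens / pp_tps + gen_tokens / tg_tps

# Hypothetical rates: 2000 t/s prefill, 100 t/s generation.
print(estimate_latency(2000, 100))  # 512/2000 + 128/100 = 1.536 seconds
```

With rates in this ballpark, generation speed dominates the wall-clock time for short prompts, which is why the tg128 column usually matters more for interactive use than pp512.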

"Qwen3-30B-A3B-2507 Q5_K_M + M3 Ultra 512GB + llamacpp with [--] parallel requests + agno = πŸ”₯ Next step is trying with MLX Batch Inference"
X Link 2025-10-04T15:18Z 20.4K followers, 14.2K engagements

"Using the new MLX server_benchmark for continuous batching to push MiniMax M2.1 locally on M3 Ultra. 4bit: [--] request: [--] t/s [--] requests: [---] t/s πŸ”₯ 8bit: [--] request: [--] t/s [--] requests: 150t/s πŸ”₯"
X Link 2026-01-13T16:38Z 19.9K followers, [----] engagements

"GLM-4.7-Flash-8bit-gs32 is perfect for local coding and tool calling 4bit is too compressed. Here detailed benchmark with contexts from 0.5K to 128K tokens on Apple M3 Ultra [---] and a comparison with 4bit at the end. Chart 1/3"
X Link 2026-01-23T12:27Z 20.4K followers, 14K engagements

"@lucatac0 Let me update my code that uses llamacpp and give it a try"
X Link 2026-01-24T06:10Z 20.4K followers, [---] engagements

"Preparing some llamacpp benchmarks on Apple Silicon. Stay tuned"
X Link 2026-01-24T08:14Z 19.8K followers, [----] engagements

"GLM-4.7-Flash 8bit Context Royal Rumble πŸ”₯ - M3 Ultra [---] - llamacpp [----] with vs mlx 0.30.5 (from main) - UnslothAI Q8_0 vs mlx 8bit (gs64) both [---] bpw πŸ₯‡ MLX πŸ₯ˆ llamacpp Big jump in performance in both with latest version Details in 🧡 and OpenCode tests coming soon"
X Link 2026-01-24T12:52Z 20.4K followers, [----] engagements

"Oura ring membership deleted no more need to track data with it πŸ€·πŸ»β™‚ @ouraring deleting subscription is not so easy I wonder why 🀨"
X Link 2026-01-25T12:04Z 20.2K followers, [----] engagements

"Jan-v2-4B models in 4bit and 8bit are now on mlx-community I use them through LM Studio. Slightly faster than llamacpp (q4_0 and q8_0 used as GGUF to make a better comparison): πŸ₯‡ MLX vs πŸ₯ˆ llama.cpp 4bit: [---] tps vs [---] tps 8bit: 92 tps vs [--] tps Great model @jandotai πŸš€"
X Link 2026-01-27T08:08Z 20.4K followers, [----] engagements

"Kimi K2.5 - Same Prompt - OpenCode vs KIMI CLI vs Claude Code any differences πŸ€” Create a single-page website for "PHANTOM PROTOCOL" a fictional tactical shooter video game. Design capabilities of this model are out of scale πŸ₯‡ Final results at the end of the video. 🧡"
X Link 2026-01-30T15:46Z 19.8K followers, 44.8K engagements

"Ollama ❀ MLX πŸ™ The CUDA backend of MLX now builds on Windows with tests passing special thanks to @ollama for a lot of help making this happen. It still needs some efforts to provide Windows binaries for MLX but I think ollama will ship the code to users much sooner. https://t.co/kKuJpd9Wdr"
X Link 2026-01-31T05:17Z 19.8K followers, [----] engagements

"K2.5 - GLM-4.7 and MiniMax M2.1 solving a Rubiks Cube in 3D. πŸ‘€ Kimi CLI and Claude Code used here. πŸ₯‡ GLM-4.7 [--] secs and perfect πŸ₯ˆ K2.5 - Missing colors in the cube πŸ₯‰ MiniMax [---] - Missing colors and failed to autosolve"
X Link 2026-01-31T13:48Z 19.8K followers, 29.1K engagements

"MLX Phantom Protocol prompt developed locally with GLM-4.7-Flash using the new tensor parallel support in mlx-lm.server and OpenCode to drive development mactop on the right measuring the two M3 Ultra [---] used as compute power [--] minutes and great result"
X Link 2026-02-01T10:07Z 19.8K followers, 17.2K engagements

"In the opencode.json I added this provider: "provider" : "mlx" : "models" : "mlx-community/GLM-4.7-Flash-8bit" : "npm" : "@ai-sdk/openai-compatible" "options" : "baseURL" : "http://localhost:8080/v1" https://twitter.com/i/web/status/2017905709342495204"
X Link 2026-02-01T10:19Z 19.8K followers, [---] engagements
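Flattened into one line by the scrape, the snippet above reads as an opencode.json provider entry. Re-nested, it would look roughly like this (the exact nesting is inferred from OpenCode's provider configuration convention, so treat it as a sketch):

```json
{
  "provider": {
    "mlx": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://localhost:8080/v1"
      },
      "models": {
        "mlx-community/GLM-4.7-Flash-8bit": {}
      }
    }
  }
}
```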

"@angeloskath @ai I was facing some error in tools calls so I had to use this PR locally: https://github.com/ml-explore/mlx-lm/pull/792"
X Link 2026-02-01T10:20Z 19.8K followers, [----] engagements

"vibe coding paralysis is real https://t.co/RbzyeSjatz"
X Link 2026-02-01T10:37Z 19.8K followers, [----] engagements

"Direct MLX support coming soon in @jandotai πŸ”₯ MLX support coming soon @jandotai https://t.co/vjetN3OYKM"
X Link 2026-02-01T15:34Z 19.8K followers, [----] engagements

"Ready for some fun with MLX and Step-3.5-Flash"
X Link 2026-02-02T08:02Z 19.8K followers, [----] engagements

"Adding support for model type step3p5 to MLX using Codex MLX Skill and @RepoPrompt let's try πŸš€"
X Link 2026-02-02T08:04Z 19.8K followers, [----] engagements

"glm-5 in February Incredible acceleration πŸš€ @TeksEdge glm-5"
X Link 2026-02-02T14:26Z 19.9K followers, [----] engagements

"MLX: I have a first version of Step-3.5-Flash running locally on my M3 Ultra πŸ”₯πŸ”₯πŸ”₯ But I bet @kernelpool will do much better"
X Link 2026-02-02T14:46Z 19.9K followers, 21.3K engagements

"MLX Step-3.5-Flash I've reached [--] toks/s Thanks to Fast-MLX skill by @awnihannun used within Codex with GPT [---] High From [--] toks/s v0 to [--] toks/s v2 πŸš€ but again I bet @kernelpool will do even better πŸ™ŒπŸ» MLX: I have a first version of Step-3.5-Flash running locally on my M3 Ultra πŸ”₯πŸ”₯πŸ”₯ But I bet @kernelpool will do much better https://t.co/pY9QAXaElH"
X Link 2026-02-02T15:44Z 19.8K followers, 13.9K engagements

"MLX Context Benchmark for Step-3.5-Flash-4bit using the PR of @kernelpool πŸ”₯ Here on Apple M3 Ultra 512GB but it can run on 128GB"
X Link 2026-02-02T17:25Z 19.9K followers, [----] engagements

"PR is here: https://github.com/ml-explore/mlx-lm/pull/836"
X Link 2026-02-02T17:25Z 19.8K followers, [----] engagements

"If you play with Apple MLX remember to install skills from @awnihannun uvx --from mlx-skills --codex https://github.com/awni/mlx-skills.git"
X Link 2026-02-02T18:45Z 19.8K followers, [----] engagements

"Step-3.5-Flash in action on MLX with OpenCode on a single (distributed testing in progress) M3 Ultra to create a snake game πŸ”₯ 6bit quantization. Perfect tool calling. Fast & powerful coding model Recommended Inference Settings: Temperature: [---] Top-p: [----] Top-k: [--] 🧡"
X Link 2026-02-03T10:19Z 19.9K followers, 17.1K engagements

"Server started with this: mlx_lm.server --model mlx-community/Step-3.5-Flash-4bit --temp [--] --top-p [----] --top-k [--] --trust-remote-code 🧡"
X Link 2026-02-03T10:19Z 19.8K followers, [----] engagements
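Since mlx_lm.server exposes an OpenAI-compatible HTTP API (the posts above point OpenCode at http://localhost:8080/v1), any plain HTTP client can drive it. A minimal stdlib-only sketch, assuming a server like the one in the post is running locally; `build_request` is a hypothetical helper name:

```python
import json
import urllib.request

def build_request(prompt: str) -> urllib.request.Request:
    """Build a chat-completions request for a local mlx_lm.server instance."""
    payload = {
        # Model id mirrors the post; adjust to whatever the server loaded.
        "model": "mlx-community/Step-3.5-Flash-4bit",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# With the server running:
#   resp = urllib.request.urlopen(build_request("Hello"))
#   print(json.load(resp)["choices"][0]["message"]["content"])
```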

"@RickRossTN No I invoked 6bit where is the 4bit part OpenCode is using 6bit and mlx_lm.server started with 6bit"
X Link 2026-02-03T10:27Z 19.8K followers, [---] engagements

"@RickRossTN Here the correct one. Thanks πŸ™πŸ» mlx_lm.server --model mlx-community/Step-3.5-Flash-6bit --temp [--] --top-p [----] --top-k [--] --trust-remote-code"
X Link 2026-02-03T10:52Z 19.8K followers, [---] engagements

"Pushing Mac Studio M3 Ultra fan to max speed during inference with TG Pro πŸ”₯"
X Link 2026-02-03T10:55Z 19.8K followers, [----] engagements

"MLX Distributed inference testing with Step-3.5-Flash-6bit in progress on [--] x M3 Ultra 512GB. Space Invaders coded locally coming soon πŸ‘Ύ Vite + JavaScript + Phaser [--] as engine"
X Link 2026-02-03T11:10Z 19.9K followers, [----] engagements

"@SIGKITTEN πŸ˜‚ Imagine when some Microsoft teams will start shipping for MacOS first too"
X Link 2026-02-03T16:25Z 19.9K followers, [---] engagements

"Amazing job Kimi K2.5 full VLM distributed πŸ”₯ This is Kimi K2.5 (1T params) the full VLM mind you not just the language model describing an image. It runs in distributed mode across two Big Macs using MLX πŸ₯³ πŸ₯‡ This is a first for mlx_vlm We wrote an experimental script with @Prince_Canuma read on to learn how it works. https://t.co/H8GTMWvRFz"
X Link 2026-02-03T20:50Z 20.3K followers, [----] engagements

"Feel the power of MLX πŸš€ Latest mlx-lm is out: - New models: Kimi K2.5 Step3.5 flash LongCat Flash lite thanks to @kernelpool - Support for distributed inference with mlx_lm.server thanks to @angeloskath - Much faster and more memory efficient DeepSeek v3 (and other MLA-based models) https://t.co/lEL2KnNz6Y"
X Link 2026-02-05T15:55Z 19.8K followers, [----] engagements

"Incredible speed up More than welcome while coding with MLX as backend πŸ™ The speed up for DeepSeek v3 is especially nice for long context (more than 2.5x). Some pre / post numbers here: https://t.co/iffB4lKBE7"
X Link 2026-02-05T15:56Z 19.8K followers, [----] engagements

"And the winner is: GPT-5.3 Codex And in real life is even better than benchmarks"
X Link 2026-02-05T22:25Z 20.1K followers, [----] engagements

"What is a good notebook for Linux Tired of waiting for M5 Max πŸ€·πŸ»β™‚"
X Link 2026-02-05T22:31Z 20.2K followers, [----] engagements

"For anyone complaining that GPT-5.3 Codex was not on the official Terminal Bench leaderboard. Here it is: 75.1% πŸ”₯πŸ”₯πŸ”₯"
X Link 2026-02-06T09:06Z 20.4K followers, 30K engagements

"mlx-lm-lora wins πŸ”₯πŸ”₯πŸ”₯ Some looooong awaited new features and efficiency gains are coming to mlx-lm-lora here are some of them πŸ™‚πŸ˜ŽπŸ‘ @lmstudio plus some cool new notebooks https://t.co/Ksek9Gwe8o"
X Link 2026-02-06T15:37Z 19.8K followers, [----] engagements

"MLX context benchmark for Qwen3-Coder-Next in bf16 [--] [--] [--] and 8bit quantizations tested on M3 Ultra with latest mlx-lm 0.30.6 πŸ”₯ Its a great model fast in all tested configs. Personally I suggest 6bit+ especially with larger context. Choose based on memory availability"
X Link 2026-02-06T20:39Z 20.4K followers, [----] engagements
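Context benchmarks like the one above boil down to timing generation at increasing prompt lengths. A generic harness sketch, kept independent of mlx-lm so any backend can be plugged in (`sweep_context` is a hypothetical helper; wrap mlx_lm.generate or a llama.cpp call in the `generate` argument):

```python
import time

def sweep_context(generate, context_sizes, gen_tokens=128):
    """Time a generate(prompt_tokens, gen_tokens) callable across context sizes.

    `generate` is any function that produces `gen_tokens` tokens given a
    prompt of `prompt_tokens` tokens (e.g. a wrapper around mlx_lm.generate).
    Returns a list of (context_size, tokens_per_second) pairs.
    """
    results = []
    for n in context_sizes:
        start = time.perf_counter()
        generate(n, gen_tokens)
        elapsed = time.perf_counter() - start
        results.append((n, gen_tokens / elapsed))
    return results
```

A real run would sweep sizes such as 512 up to 128K, as the benchmarks in these posts do, and plot tokens/s per quantization.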

"@rod_coutinho @KrakowiakK I will create it next week Its time to do it"
X Link 2026-02-07T09:36Z 19.8K followers, [--] engagements

"I was testing Qwen3-Next-Coder on llamacpp to compare with MLX but results are too different I need to double check benchmark before posting anything πŸ‘€"
X Link 2026-02-07T17:59Z 20.4K followers, [----] engagements

"@mudler_it @LocalAI_API Top"
X Link 2026-02-07T21:38Z 19.9K followers, [--] engagements

"Today I'll be back to Milan and my Mac Studios Using them through iPad Pro in ssh is not my favorite experience πŸ€·πŸ»β™‚"
X Link 2026-02-08T06:11Z 19.8K followers, [----] engagements

"@andreabalducci @antirez I was planning to do the same test this week but with Open Models Lets see what happens 🀞"
X Link 2026-02-09T07:21Z 19.8K followers, [---] engagements

"No you can't do this cheaply with NVIDIA GPUs. Still so many misconceptions about running frontier AI locally. Yes it's possible. Yes Apple Silicon is the cheapest way to do it ($20k)."
X Link 2026-02-09T07:22Z 20.4K followers, [----] engagements

"@sudoingX @AlexFinn What are you talking about It's 600GB"
X Link 2026-02-09T15:07Z 20.3K followers, [---] engagements

"MLX OpenCode + Qwen3-Coder-Next-8bit 165K context πŸ‘€ Pushing this model to the max locally. Honestly I need to steer it to the right solution too often compared to larger models from MiniMax Zai StepFun"
X Link 2026-02-09T17:18Z 20.4K followers, [----] engagements

"@francip I need a CUDA machine. I can try in the cloud"
X Link 2026-02-09T17:20Z 19.9K followers, [---] engagements

"@Jasonio DGX Try it there and keep us posted"
X Link 2026-02-09T21:01Z 20.1K followers, [---] engagements

"@digitalix M5 Ultra will enable even faster image generation to be used with local coding agents πŸš€"
X Link 2026-02-10T06:06Z 19.9K followers, [--] engagements

"@jemp_error Good suggestion I've not done a context benchmark on it I will asap"
X Link 2026-02-10T06:08Z 19.9K followers, [---] engagements

"@jemp_error This was a run I did in the past I will try with latest mlx-lm to see if things improved πŸ’ͺ🏻 https://x.com/ivanfioravanti/status/2018375149251158040s=20 MLX Context Benchmark for Step-3.5-Flash-4bit using the PR of @kernelpool πŸ”₯ Here on Apple M3 Ultra 512GB but it can run on 128GB https://t.co/tUZQuhp53c"
X Link 2026-02-10T08:26Z 19.9K followers, [--] engagements

"On-device real time transcription MLX-Audio-Swift is the way to go Thanks @Prince_Canuma for this gem πŸ™πŸ» On-device realtime transcription on iPhone [--] Pro max πŸš€ Using MLX-Audio-Swift + Qwen3-ASR-0.6B by @Alibaba_Qwen Its much faster and more consistent with the latest adjustments. Almost ready to push to GH. https://t.co/xFDjoxiJfg"
X Link 2026-02-10T08:52Z 19.9K followers, [----] engagements

"@Prince_Canuma @Alibaba_Qwen AMAZING"
X Link 2026-02-10T08:53Z 19.9K followers, [---] engagements

"MLX Context Benchmark for Step-3.5-Flash-4bit with mlx-lm 0.30.7 big performance boost after just [--] week Look by yourself πŸš€ I've been able to test up to 128K context using M3 Ultra 512GB Here the chart: MLX Context Benchmark for Step-3.5-Flash-4bit using the PR of @kernelpool πŸ”₯ Here on Apple M3 Ultra 512GB but it can run on 128GB https://t.co/tUZQuhp53c"
X Link 2026-02-10T10:10Z 19.9K followers, [----] engagements

"@thsottiaux 100$ tier"
X Link 2026-02-10T13:33Z 19.9K followers, [---] engagements

"@ronaldmannak @LiMzba 🀣"
X Link 2026-02-10T16:19Z 19.9K followers, [--] engagements

"Quality is Incredible 🀩 πŸš€ Introducing Qwen-Image-2.0 our next-gen image generation model 🎨 Your imagination unleashed. ✨ Type a paragraph get a pro slides ✨ Describe a scene get photoreal 2K magic ✨ Add text it just works (no more glitchy letters) ✨ Key upgrades: βœ… Professional https://t.co/rigOUYy81k"
X Link 2026-02-10T18:38Z 20K followers, [----] engagements

"Codex is better πŸ€·πŸ»β™‚ nearly all of the best engineers i know are switching from claude to codex"
X Link 2026-02-10T21:53Z 19.9K followers, [----] engagements

"@109mae Totally agree codex is more than just a coder"
X Link 2026-02-11T06:06Z 19.9K followers, [--] engagements

"People keep moving to Codex 😎 Cancelled my Max subscription to Claude. Had it for [--] months. Kept a Pro subscription for now but Codex with the [---] model provides comparable coding skill and a better UX."
X Link 2026-02-11T06:09Z 19.9K followers, [----] engagements

"He's still using Opus [---]. He'll try Codex soon. I have never experienced a more dumb Claude Code than today I have to start coding myself again cause it makes so many Low IQ mistakes they must be nerfing it now that Opus [---] is out or something is up"
X Link 2026-02-11T06:10Z 19.9K followers, [----] engagements

"mflux vs flux2.c quick performance test on M3 Ultra [---]. Both projects are pushing Apple Silicon hardware to the max mflux vs flux2.c in seconds 512x512: [----] vs [----] 1024x1024: [-----] vs [-----] 1792x1792: [-----] vs [----] Amazing jobs @filipstrand and @antirez πŸš€"
X Link 2026-02-11T07:01Z 20.4K followers, [----] engagements

"Prompt used: A surreal cinematic cityscape at dusk where modern skyscrapers bend and fold onto themselves streets curling upward into the sky. Gravity feels unstable with buildings mirrored and layered in impossible geometries. In the foreground stands Adrian Hale a lone contemplative man wearing a dark tailored coat seen from behind small against the vast distorted city. He appears thoughtful and calm as if questioning reality itself. Moody realistic lighting with deep shadows and cool blue-gray tones subtle warm highlights glowing from windows. Ultra-realistic high contrast dramatic"
X Link 2026-02-11T07:01Z 19.9K followers, [---] engagements

"@Viswana34226652 Yep merge with Space X has been tough I think"
X Link 2026-02-11T07:04Z 19.9K followers, [---] engagements

"@Prince_Canuma I go directly with CC slower but steady and precise. So far so good"
X Link 2026-02-11T07:05Z 19.9K followers, [---] engagements

"@CrazyAITech Yes Codex with GPT-5.3-Codex is faster than ever"
X Link 2026-02-11T07:07Z 19.9K followers, [--] engagements

"@paulmarin90 @Prince_Canuma yes it seems I caught it hallucinate multiple times while writing docs on a codebase. Very well written docs but with fake code examples"
X Link 2026-02-11T07:25Z 19.9K followers, [--] engagements

"@thedarthsider Problem is that this is 83B params vs 7B"
X Link 2026-02-11T10:42Z 20K followers, [--] engagements

"My iPhone [--] Pro fell down and is now full of signs everywhere I went to Apple Store in Milan asking to use Apple Care+ to change it and they told me not feasible only if stolen or completely broken πŸ‘€ @Apple what should I do Use an hammer πŸ€”"
X Link 2026-02-11T12:09Z 20.4K followers, [----] engagements

"Fasten your seatbelts Mega AI Release week"
X Link 2026-02-11T12:15Z 19.9K followers, [----] engagements

"@1littlecoder @Apple I was more than angry when I left the store. I've spent so much money in Apple devices that they should put picture of myself as best customer ever for that Milan's store 🀣"
X Link 2026-02-11T12:20Z 20.4K followers, [---] engagements

""I'm not writing this to make you feel helpless. I'm writing this because I think the single biggest advantage you can have right now is simply being early. Early to understand it. Early to use it. Early to adapt." πŸ’― https://t.co/ivXRKXJvQg"
X Link 2026-02-11T12:47Z 19.9K followers, [----] engagements

"GLM-5 Boom A new model is now available on https://t.co/gocggrfb3U. https://t.co/KZGoAsN5Z0"
X Link 2026-02-11T12:50Z 20.2K followers, [----] engagements

"M2.5 is not out yet. Why people are saying it is πŸ€”"
X Link 2026-02-11T14:16Z 20K followers, [----] engagements

"Here the complete prompt that is leveraging a flux2.skill created on the fly using codex. I will now try something similar but with mflux https://gist.github.com/ivanfioravanti/7297e8f19c3d760fd80fe692cf43b176"
X Link 2026-02-11T14:58Z 20.1K followers, [---] engagements

"With the release of M5 Max and Ultra MFLUX project will become a superstar Neural Accelerators will help a lot https://github.com/filipstrand/mflux"
X Link 2026-02-11T15:20Z 20.3K followers, [----] engagements

"@filipstrand @angeloskath On flux2 great jump Can't wait to test mflux on M5 Max or Ultra"
X Link 2026-02-11T16:52Z 20.2K followers, [---] engagements

"Tools calling failing on MLX asking help to Codex πŸš€"
X Link 2026-02-11T18:07Z 20.2K followers, [----] engagements

"Next week I'll try Tesla FSD in Italy with @CPunella So exciting πŸš€"
X Link 2026-02-11T18:30Z 20.3K followers, [----] engagements

"After an incredible [--] weeks using xAI Ive decided to leave and start using other models. It pains me to go but I was tired of waiting for Grok 4.20"
X Link 2026-02-11T18:35Z 20.4K followers, [----] engagements

"@rod_coutinho What have you used to test tool calling I'm hacking mlx-lm to get it work properly"
X Link 2026-02-11T18:50Z 20.1K followers, [----] engagements

"@LLMJunky @badlogicgames I have 20$ Claude Code too πŸ˜‰ But I use Codex more than anything else at the moment"
X Link 2026-02-11T20:02Z 19.9K followers, [--] engagements

"@LLMJunky @brooks_eth WOW πŸ‘€"
X Link 2026-02-11T20:03Z 19.9K followers, [--] engagements

"@mharrison Qwen Coder Next better on coding. At least from my initial tests"
X Link 2026-02-11T20:41Z 20.1K followers, [----] engagements

"@spok_vulkan @AI_Homelab 100%"
X Link 2026-02-11T20:52Z 20.4K followers, [---] engagements

"Amazon Kindle Scribe Colorsoft not available in Amazon Italy but. Coming Soon on Amazon UK πŸ”₯"
X Link 2026-02-12T05:40Z 20.4K followers, [----] engagements

"@ClementPillette @Zai_org Wait M5 Ultra. I will try a distributed inference test on [--] x M3 Ultra today. Keep you posted"
X Link 2026-02-12T06:15Z 20.2K followers, [--] engagements

"@alexcovo_eth @filipstrand @angeloskath @openclaw Try a skill for mflux too release 0.16.0 is super fast https://x.com/ivanfioravanti/status/2021626616808644940s=20 mflux 0.16.0 released and it's truly faster for flux2 Here an updated comparison with flux2.c using the 4B distilled version of the model. Creating a skill right now to generate images locally for my AI tests πŸ”₯ Great job @filipstrand and @angeloskath https://t.co/eLDabQqEbB"
X Link 2026-02-12T06:16Z 20.2K followers, [--] engagements

"Important This new Nanbeige model is really strong Since CORE is such an out of distribution eval this is further evidence that Nanbeige really is an extremely well-trained generalist model and isn't just 'benchmaxxed'."
X Link 2026-02-12T09:24Z 20.2K followers, [----] engagements

"I'll try Lambda to test inference for sure Well done Zach Over the last month I've been digging into model inference; what's the best out-of-the-box tokens/s on our hardware and how do you benchmark it Our model-inference revamp is now live with model cards built to answer exactly this (in a community-focused way): https://t.co/ZwGf5wgcQS"
X Link 2026-02-12T12:03Z 20.4K followers, [----] engagements

"@TheZachMueller @Prince_Canuma Downloading it now"
X Link 2026-02-12T13:24Z 19.9K followers, [--] engagements

"I was lucky to be able to test early preview of M2.5 and it's fast and furious Chinese Labs are cooking like crazy Well done πŸš€ Introducing M2.5 an open-source frontier model designed for real-world productivity. - SOTA performance at coding (SWE-Bench Verified 80.2%) search (BrowseComp 76.3%) agentic tool-calling (BFCL 76.8%) & office work. - Optimized for efficient execution 37% faster at complex https://t.co/UwiKzzQNG8"
X Link 2026-02-12T19:01Z 20.4K followers, [----] engagements

"Antirez docet The 20$ codex plan is worth more than the $200 Claude Code plan."
X Link 2026-02-12T19:16Z 20.4K followers, [----] engagements

"@ollama @MiniMax_AI TOP"
X Link 2026-02-12T19:31Z 20.4K followers, [---] engagements

"@bernaferrari @AntLingAGI INCREDIBLE 😱"
X Link 2026-02-12T19:49Z 20.1K followers, [--] engagements

"Don't underestimate Trinity-Large-Preview by @arcee_ai Test in progress and it's fast in text generation phase Thanks @TheZachMueller for pushing me on testing it"
X Link 2026-02-12T20:36Z 20.4K followers, [----] engagements

"@LLMJunky @OpenAI Top OpenAI Top"
X Link 2026-02-12T20:54Z 20.3K followers, [---] engagements

"@andrejusb I got rid of Jetbrains one year ago after waiting and waiting for their AI to become good enough. 😒"
X Link 2026-02-13T06:09Z 20.4K followers, [--] engagements

"@storn_max Failed at first try 😒 ImportError: cannot import name 'ALLOWED_LAYER_TYPES' from 'transformers.configuration_utils' (/Users/ifioravanti/.venv-vllm-metal/lib/python3.12/site-packages/transformers/configuration_utils.py). Did you mean: 'ALLOWED_MLP_LAYER_TYPES'"
X Link 2026-02-13T10:43Z 20.4K followers, [--] engagements

"@storn_max On it but with Codex. πŸ˜‰"
X Link 2026-02-13T12:21Z 20.4K followers, [--] engagements

"@nanbeige Top You created something magical here Its fast and good πŸ”₯"
X Link 2026-02-13T13:46Z 20.4K followers, [---] engagements

"@doublebirdcap Model architecture should be the same so it will be a matter of minutes after release πŸš€"
X Link 2026-02-13T14:00Z 20.4K followers, [---] engagements

"@awnihannun Looks a great one to test Thanks for sharing πŸ™πŸ»"
X Link 2026-02-13T15:43Z 20.4K followers, [---] engagements

"@KrakowiakK Same architecture so same speed maybe they optimize thinking process so we can get answers faster"
X Link 2026-02-13T16:05Z 20.4K followers, [----] engagements

"@thegeorge @TendiesOfWisdom Here it is https://x.com/ivanfioravanti/status/2022360835621032111s=20 https://t.co/Irt5BwmmUO"
X Link 2026-02-13T17:26Z 20.4K followers, [---] engagements

"@aayushkrm Why Have you got issue with it For me it's good enough"
X Link 2026-02-13T18:31Z 20.4K followers, [---] engagements

"@JakiTreehorne @Prince_Canuma Wait for M5 πŸ˜‰"
X Link 2026-02-13T20:18Z 20.4K followers, [--] engagements

"@Nikonenes @Ed_Randgad @andreihasna It codes clearly not GPT-5.3-Codex level or Opus [---] but it 's good enough. Surely better than Closed Models of previous versions. Open Weights are catching up"
X Link 2026-02-14T14:41Z 20.4K followers, [--] engagements

"Pushing [--] Mac Studio M3 Ultra [---] to the max One is running gpqa_diamond on Nanbeige4.1-3B and the other is running context benchmark on Trinity-Large-Preview πŸš€πŸš€πŸš€"
X Link 2026-02-12T19:39Z 20.4K followers, [----] engagements

"MiniMax M2.5 weights are online https://huggingface.co/MiniMaxAI/MiniMax-M2.5"
X Link 2026-02-13T14:18Z 20.4K followers, [----] engagements

"@JakiTreehorne @Prince_Canuma Nope 😒 https://x.com/ivanfioravanti/status/2022360835621032111 https://t.co/Irt5BwmmUO"
X Link 2026-02-13T20:05Z 20.4K followers, [---] engagements

"@test_tm7873 US only 😒"
X Link 2026-02-14T18:53Z 20.4K followers, [---] engagements

"nanotexture display on iPad Pro and MacBook Pro is game changer especially when used in the open I'll never go back Yes you lose a bit of contrast but readability is another league"
X Link 2026-02-06T08:17Z 20.4K followers, [----] engagements

"Don't underestimate Transformer Lab πŸ”₯ [----] commits and we're just getting started https://t.co/HvqIZ7LMTp"
X Link 2026-02-06T21:11Z 20.4K followers, [----] engagements

"Why everyone left xAI all together πŸ‘€"
X Link 2026-02-11T06:36Z 20.4K followers, [----] engagements

"@sudo_goreng Its not slow Ultra Mega slow"
X Link 2026-02-11T16:03Z 20.4K followers, [----] engagements

"Llamacpp has a PR with a strong optimization for Qwen3-Coder-Next I will retest MLX vs Llamacpp on this model as soon as it is merged. https://github.com/ggml-org/llama.cpp/pull/19375"
X Link 2026-02-13T05:54Z 20.4K followers, [----] engagements

"MiniMax M2.5 weights going live in few hours πŸ”₯"
X Link 2026-02-13T10:35Z 20.4K followers, [----] engagements

"MiniMax-M2.5 is a joy to use FAST and POWERFUL"
X Link 2026-02-13T12:55Z 20.4K followers, [----] engagements

"BOOM Download started MiniMax-M2.5 is now open source. Trained with reinforcement learning across hundreds of thousands of complex real-world environments it delivers SOTA performance in coding agentic tool use search and office workflows. Hugging Face: https://t.co/Wxksq9BB7t GitHub:"
X Link 2026-02-13T14:15Z 20.4K followers, [----] engagements

"@LiMzba @AI_Homelab @UnslothAI Should we try a dwq πŸ‘€"
X Link 2026-02-13T15:57Z 20.4K followers, [---] engagements

"@ai_christianson This was basic generation test so context minimal [---] tokens. I'm gonna start some context tests soon"
X Link 2026-02-13T15:59Z 20.4K followers, [----] engagements

"Both M3 Ultra [---] busy doing context benchmarks on MiniMax [---] 4bit on the left 6bit on the right πŸš€"
X Link 2026-02-13T16:15Z 20.4K followers, [----] engagements

"Forge: Scalable Agent RL Framework and Algorithm. The secret to reach Opus [---] level @MiniMax_AI is cooking https://www.minimax.io/news/forge-scalable-agent-rl-framework-and-algorithm"
X Link 2026-02-13T16:55Z 20.4K followers, [----] engagements

"@sleep_deprivado Neural Accelerators (matmul in hardware) is the cherry on the cake. M5 πŸš€"
X Link 2026-02-13T17:54Z 20.4K followers, [----] engagements

"GPT [---] Codex Spark needs an urgent review: "1000 tokens per second means nothing if the model can't follow a basic prompt." GPT [---] Codex Spark is fast but not smart. I gave [--] models the same prompt: Create a hot air balloon ride in HTML. Claude Opus 4.6: Beautiful night scene with colorful balloon. Nailed it. GLM 5: Vibrant sunset with detailed balloon and basket. Great. MiniMax M2.5: Dreamy https://t.co/5O9HE6tO2K"
X Link 2026-02-13T18:48Z 20.4K followers, [----] engagements

"@CalimanuLoredan Currently it's experimental and there is no way to know the limit as far as I can see, but when it works it's a great model"
X Link 2026-02-14T18:48Z 20.4K followers, [---] engagements

"@mweinbach It's much bigger 744B params with 40B active. DeepSeek Sparse Attention helps but overall it's too much for a single M3 Ultra"
X Link 2026-02-14T17:26Z 20.4K followers, [---] engagements

"@itscarlospaiva @MiniMax_AI Maybe smaller ones yes but MiniMax is a public company listed on Hong Kong stock exchange not a small startup"
X Link 2026-02-15T17:41Z 20.4K followers, [--] engagements

"Be ready for some amazing new open model releases in upcoming weeks πŸ€πŸš€"
X Link 2026-02-09T07:20Z 20.4K followers, [----] engagements

"What is this Nanbeige4.1-3B model running at - [--] toks/s in bf16 (in video) - [---] toks/s in 8bit on M3 Ultra with MLX with these benchmark scores πŸ”₯"
X Link 2026-02-11T18:00Z 20.4K followers, 85.3K engagements

"@otarkhan94 True Next architecture is top"
X Link 2026-02-15T18:31Z 20.4K followers, [--] engagements

"mflux 0.16.0 released and it's truly faster for flux2. Here's an updated comparison with flux2.c using the 4B distilled version of the model. Creating a skill right now to generate images locally for my AI tests πŸ”₯ Great job @filipstrand and @angeloskath"
X Link 2026-02-11T16:45Z 20.4K followers, [----] engagements

"MLX - Many quantizations of JoyAI-LLM-Flash are now available on mlx-community on huggingface. It seems a strong model. Context benchmark results on M3 Ultra coming soon and testing it now with OpenCode and mlx_lm.server https://huggingface.co/mlx-community/models?search=joyai"
X Link 2026-02-16T09:01Z 20.4K followers, [----] engagements

"@Alibaba_Qwen Congrats for the release Deep diving on it with MLX right now"
X Link 2026-02-16T10:22Z 20.4K followers, [----] engagements

"LTX-2 is preparing for the battle πŸš€ Faster than you think"
X Link 2026-02-16T12:22Z 20.4K followers, [----] engagements

"How to use it with Claude Code My updated GIST here. https://gist.github.com/ivanfioravanti/03d7b4d6cd856e6a541edf373d9974d8"
X Link 2026-02-13T12:55Z 20.4K followers, [---] engagements

"@swyx @deepseek_ai It did not work with Linux πŸ’ͺ"
X Link 2026-02-14T10:31Z 20.4K followers, [----] engagements

"Left Opus Center Gemini [--] Right GPT 5.3"
X Link 2026-02-14T20:41Z 20.4K followers, [----] engagements

"I had the brilliant idea of trying context benchmark test of MiniMax M2.5 with bf16 up to 128K context. This was the result. πŸ˜– https://t.co/Irt5BwmmUO"
X Link 2026-02-13T17:27Z 20.4K followers, 26.3K engagements

"I'm finally entering the Google Gemini world too I subscribed to Ultra so I could test Deep Think But I hit a wall immediately 😒"
X Link 2026-02-14T17:21Z 20.4K followers, 27.7K engagements

"@pashmerepat M5 Ultra is coming"
X Link 2026-02-15T12:17Z 20.4K followers, [----] engagements

"Adding Perplexity computation to results Using mlx_lm.perplexity by @N8Programs πŸš€"
X Link 2026-02-15T17:55Z 20.4K followers, [---] engagements

"Monitoring tool is mactop"
X Link 2026-02-15T20:23Z 20.4K followers, [---] engagements

"Another "small" LLM has been released: "JoyAI-LLM-Flash" by JD Open Source Chinese lab. Base and Instruct models have been released on HuggingFace: https://huggingface.co/jdopensource/JoyAI-LLM-Flash 48B total params with only http://x.com/i/article/2023333007361241088"
X Link 2026-02-16T10:19Z 20.4K followers, [----] engagements

"Ok it's Qwen [---] time now πŸ”₯ πŸš€ Qwen3.5-397B-A17B is here: The first open-weight model in the Qwen3.5 series. πŸ–Ό Native multimodal. Trained for real-world agents. ✨ Powered by hybrid linear attention + sparse MoE and large-scale RL environment scaling. ⚑ 8.6x-19.0x decoding throughput vs Qwen3-Max 🌍 201 https://t.co/Pq0qIk54MB"
X Link 2026-02-16T10:21Z 20.4K followers, [----] engagements

"I bet @Prince_Canuma is on it already I'm still downloading this beast"
X Link 2026-02-16T12:25Z 20.4K followers, [---] engagements

"RT @Prince_Canuma: Already on MLX-VLM πŸš€ Pull from the main branch we just pushed a fix for long context"
X Link 2026-02-16T14:49Z 20.4K followers, [--] engagements

"Kimi K2.5 (Kimi CLI) vs MiniMax [---] (CC) vs GLM [---] (CC). πŸ”₯ Same prompt to create a single-page website for "PHANTOM PROTOCOL" a fictional tactical shooter video game 0-shot. Spoiler IMO: πŸ₯‡ Kimi K2.5 is another league πŸ₯ˆ MiniMax [---] πŸ₯‰ GLM 4.7"
X Link 2026-01-31T06:32Z 20.4K followers, 70K engagements

"People keep asking me how to use Claude Code with different model providers. Here's a gist with Kimi MiniMax zai and kooka/mlx server. https://gist.github.com/ivanfioravanti/03d7b4d6cd856e6a541edf373d9974d8"
X Link 2026-01-31T13:05Z 20.4K followers, 22.2K engagements
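For context, pointing Claude Code at an Anthropic-compatible endpoint generally comes down to a couple of environment variables. A minimal sketch, where the base URL and key are placeholders for whatever your provider documents (the gist above covers provider-specific values):

```shell
# Minimal sketch: route Claude Code to an Anthropic-compatible endpoint.
# The URL and token below are placeholders -- substitute your provider's values.
export ANTHROPIC_BASE_URL="https://api.example-provider.com/anthropic"
export ANTHROPIC_AUTH_TOKEN="your-api-key"
claude
```

Unsetting both variables returns Claude Code to the default Anthropic endpoint.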

"Buying a tinybox from Europe now is 10% cheaper than [--] year ago. Evaluating πŸ‘€"
X Link 2026-02-02T15:37Z 20.4K followers, 93.4K engagements

"Qwen3-Coder-Next MLX vs llama.cpp on M3 Ultra πŸ”₯ Incredible results I know I tested multiple times but keep seeing MLX winning by a large margin. πŸ€·πŸ»β™‚ BTW This is a 80B MoE (3B active) with 256K ctx http://x.com/i/article/2020767149053108224 http://x.com/i/article/2020767149053108224"
X Link 2026-02-09T15:06Z 20.4K followers, 29.6K engagements

"In the past days I've got early access to MiniMax M2.5 and I've been able to play with it quite a lot. M2.1 was already a great model [---] is incremental on top of it and combined with Claude Code delivers amazing results All images have been generated locally using flux2.c"
X Link 2026-02-11T14:58Z 20.4K followers, 20.6K engagements

"How can a 3B parameters model reach this quality πŸ‘€ What is this Nanbeige4.1-3B model running at - [--] toks/s in bf16 (in video) - [---] toks/s in 8bit on M3 Ultra with MLX with these benchmark scores πŸ”₯ https://t.co/8RO5QiyVmq"
X Link 2026-02-11T18:12Z 20.4K followers, 54.1K engagements

"MLX: quick preview of @arcee_ai Trinity-Large-Preview context benchmark on M3 Ultra 512GB: it's fast πŸ”₯ More details and tests tomorrow"
X Link 2026-02-12T20:53Z 20.4K followers, [----] engagements

"Trinity Large Preview is really fast for its size. It's a 398B params sparse (MoE) with 13B active parameters per token. At 4bit is usable up to 64K context on M3 Ultra. Can't wait to test an M5 Ultra 🀩"
X Link 2026-02-13T14:42Z 20.4K followers, [----] engagements

"MLX MiniMax [---] running LOCALLY on a single M3 Ultra 512GB Writing a poem on LLMs at 6bit quantization πŸ”₯ Let's start some coding context and distributed tests Generation: [----] tokens-per-sec Peak memory: [---] GB"
X Link 2026-02-13T15:55Z 20.4K followers, 223.3K engagements

"MiniMax M2.5 is here Weights released open on Hugging Face. Let's make a quick context benchmark test using MLX with: single request (no batching VLLM style here) no caching of previous request Mac http://x.com/i/article/2021144156400209921"
X Link 2026-02-13T17:22Z 20.4K followers, 55.1K engagements

"@andreihasna Yes Pretty well My next test is OpenCode with M2.5"
X Link 2026-02-13T17:24Z 20.4K followers, [----] engagements

"Hey Apple my wife wants an M5 Max or M5 Ultra for Valentine's Day can you help"
X Link 2026-02-13T20:23Z 20.4K followers, [----] engagements

"I think it's time to buy Apple stocks local AI is gonna push Macs sale to the next level πŸš€"
X Link 2026-02-14T05:25Z 20.4K followers, [----] engagements

"@swyx @deepseek_ai Open source will probably never beat the best closed model but it's reaching the good enough state faster than ever. K2.5 is better than GPT [---] no k3 will be better than [---] and so on. πŸ€·πŸ»β™‚"
X Link 2026-02-14T06:32Z 20.4K followers, 17.8K engagements

"@Jezmond81 More power efficient combined with a LOT of unified memory but not as fast as Nvidia"
X Link 2026-02-14T07:50Z 20.4K followers, [--] engagements

"GLM-5 can't be run locally on Apple Silicon. Even at 4bit quantization it's too slow. We need more GPU power and memory bandwidth for models of this size"
X Link 2026-02-14T14:43Z 20.4K followers, 13.3K engagements

"This is what I mean. Benchmarking 64k context on M3 Ultra: Prompt: [-----] tokens [----] tokens-per-sec Generation: [---] tokens [----] tokens-per-sec Peak memory: [------] GB Total wall time: 1492s πŸ‘€"
X Link 2026-02-14T15:06Z 20.4K followers, [----] engagements

"MLX Royal Rumble of models that can run in 4bit quantization up to 128K context on a single M3 Ultra 512GB πŸ”₯ Who's fastest πŸ€·πŸ»β™‚ Judge by yourself"
X Link 2026-02-14T15:11Z 20.4K followers, [----] engagements

""MiniMax M2.5 achieved SOTA on SWE largely because we conducted extensive training across 10+ programming languages. In particular iOS and Android received significant focus which leads to substantial improvements in client-side and mobile app development." πŸš€"
X Link 2026-02-14T15:34Z 20.4K followers, [----] engagements

"Another small issue while using Gemini CLI an infinite loop"
X Link 2026-02-14T17:40Z 20.4K followers, [----] engagements

"Eulerian Fluid simulation test Zero-shot Opus [---] vs GPT-5.3 vs Gemini [--] Deep Think My personal preference: πŸ₯‡ Gemini [--] Deep Think (really strong) πŸ₯ˆ Opus [---] πŸ₯‰ GPT [---] High"
X Link 2026-02-14T18:47Z 20.4K followers, 37.8K engagements

"Seed [--] Pro is a monster 😱 Seed [---] is finally out πŸ”₯ https://t.co/XXPqBSaE0E"
X Link 2026-02-14T22:00Z 20.4K followers, [----] engagements

"@QuixiAI Step-3.5-Flash has been a positive surprise Fast and powerful"
X Link 2026-02-14T22:45Z 20.4K followers, [---] engagements

"Gemini [--] preview models are all strong but when will they become officially released There are small issues (looping) and this overall sense of unfinished around models and tools. Google is improving but last mile is still missing"
X Link 2026-02-15T09:20Z 20.4K followers, [----] engagements

"@KrakowiakK As you said: starting point πŸš€ It can only get better day after day"
X Link 2026-02-15T09:45Z 20.4K followers, [---] engagements

"Everything is accelerating at an insane pace.πŸš€"
X Link 2026-02-15T11:40Z 20.4K followers, [----] engagements

"I really think Elon Musk should create two separate accounts one as entrepreneur and visionary and one for politics"
X Link 2026-02-15T12:40Z 20.4K followers, [----] engagements

"@DIY_Tardis @swyx @deepseek_ai Open Source runs the world"
X Link 2026-02-15T13:15Z 20.4K followers, [---] engagements

"As soon as the M5 Max and Ultra are released I'll buy [--] + [--]. So please @Apple release them before end of March to boost your Q1 earnings 😎"
X Link 2026-02-15T15:50Z 20.4K followers, [----] engagements

"Another Monster LLM has been released The white dog is coming Happy to share that we have released JoyAI-LLM Flash via JD OpenSource, a state-of-the-art instruction model based on the Mixture-of-Experts (MoE) architecture. Model weights are now available on @huggingface πŸ€—Huggingface (instruct model): https://t.co/WF8MCBnxDu"
X Link 2026-02-15T15:59Z 20.4K followers, [----] engagements

"@MiniMax_AI Upgraded I'm Max High-Speed now"
X Link 2026-02-15T16:25Z 20.4K followers, [----] engagements

"Upgraded my yearly @MiniMax_AI plan to Max High-Speed I'm ready for M3 M3.1 M3.5 and M4 πŸš€ Video 20x but just to show model MiniMax-M2.5-highspeed in action in Claude Code"
X Link 2026-02-15T16:28Z 20.4K followers, 18.5K engagements

"MLX context benchmark repo optimized Model now loads once and stays in memory across all context sizes, with a warmup pass added too. Previously it spawned a new mlx_lm subprocess for every context size, reloading the model each time. Now it uses the mlx_lm Python API. πŸš€ https://github.com/ivanfioravanti/llm_context_benchmarks"
X Link 2026-02-15T17:09Z 20.4K followers, [----] engagements

"Testing JoyAI-LLM-Flash with this version right now TTFT added too πŸ”₯ Thanks to Claude Code + Opus [---] here. Let's try them all"
X Link 2026-02-15T17:21Z 20.4K followers, [---] engagements

"MLX preview of new context benchmark format on JoyAI-Flash-4bit πŸ”₯ Time To First Token and Perplexity added LFG πŸš€"
X Link 2026-02-15T18:16Z 20.4K followers, [----] engagements

"@exRhenum Italy is different πŸ€·πŸ»β™‚"
X Link 2026-02-15T18:43Z 20.4K followers, [---] engagements

"MLX DWQ quantization works Here Perplexity for JoyAI-LLM_Flash Uploaded on mlx-community by @kernelpool"
X Link 2026-02-15T19:17Z 20.4K followers, [----] engagements
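For readers unfamiliar with the metric in these benchmark posts: perplexity is just the exponentiated negative mean log-probability the model assigns to each token, so lower is better. A minimal, illustrative sketch (not the mlx_lm implementation, which scores a whole corpus):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean per-token log-probability."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# If a model assigns probability 0.25 to every token (a uniform guess over
# 4 choices), perplexity is 4: the model is as confused as a 4-way coin flip.
print(round(perplexity([math.log(0.25)] * 8), 6))  # -> 4.0
```

Quantization quality checks like the DWQ comparison above boil down to how little this number rises versus the full-precision model.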

"@cacus @kernelpool Distilled Weight Quantization (DWQ) Here more details. https://github.com/ml-explore/mlx-lm/blob/main/mlx_lm/LEARNED_QUANTS.md"
X Link 2026-02-15T19:36Z 20.4K followers, [---] engagements

"Imagine Apple Vision Pro [--] with M6 and a model like SeeDance [--] but immersive. Endgame"
X Link 2026-02-15T19:50Z 20.4K followers, [----] engagements

"@Prince_Canuma YES πŸ™ŒπŸ»"
X Link 2026-02-15T20:00Z 20.4K followers, [---] engagements

"@steipete @OpenAI @openclaw Top Congrats to both you and OpenAI"
X Link 2026-02-16T00:03Z 20.4K followers, [---] engagements

"If you encounter strange behavior on your Apple Silicon Mac during LLM experiments reboot. This resolves 80% of my issues"
X Link 2026-02-16T09:27Z 20.4K followers, [----] engagements

"Running all benchmarks required a lot of time even using [--] Mac Studios. I'll have to keep automating as much as possible here"
X Link 2026-02-16T10:20Z 20.4K followers, [---] engagements

"@Prince_Canuma BOOM"
X Link 2026-02-16T13:16Z 20.4K followers, [--] engagements

"Apple 4th March event"
X Link 2026-02-16T14:55Z 20.4K followers, [---] engagements
