# Prompt Injection (@PromptInjection)

Prompt Injection posts on X mostly about ai, llm, and open ai. They currently have [---] followers and [---] posts still getting attention, totaling [-----] engagements in the last [--] hours.

### Engagements: [-----] [#](/creator/twitter::1930664778122383360/interactions)

- [--] Week: [-----] +4,885%
- [--] Month: [------] +301%
- [--] Months: [------] -82%

### Mentions: [--] [#](/creator/twitter::1930664778122383360/posts_active)

### Followers: [---] [#](/creator/twitter::1930664778122383360/followers)

- [--] Week: [---] +4.80%
- [--] Month: [---] +11%
- [--] Months: [---] +82%

### CreatorRank: [-------] [#](/creator/twitter::1930664778122383360/influencer_rank)

### Social Influence

**Social category influence:** [technology brands](/list/technology-brands) 16%, [stocks](/list/stocks) 5%, [finance](/list/finance) 2%, [celebrities](/list/celebrities) 1%, [vc firms](/list/vc-firms) 1%

**Social topic influence:** [ai](/topic/ai) 13%, [llm](/topic/llm) #722, [in the](/topic/in-the) 5%, [open ai](/topic/open-ai) 5%, [the first](/topic/the-first) 5%, [prompt](/topic/prompt) #724, [the new](/topic/the-new) 4%, [vram](/topic/vram) 4%, [model](/topic/model) #464, [ollama](/topic/ollama) 4%

**Top accounts mentioned or mentioned by:** [@grok](/creator/undefined), [@zaiorg](/creator/undefined), [@ivanfioravanti](/creator/undefined), [@alibabaqwen](/creator/undefined), [@sama](/creator/undefined), [@amandaaskell](/creator/undefined), [@linusgsebastian](/creator/undefined), [@tkejr](/creator/undefined), [@cyysky](/creator/undefined), [@zshen0521](/creator/undefined), [@quixiai](/creator/undefined), [@unslothai](/creator/undefined), [@exolabs](/creator/undefined), [@sebastienbubeck](/creator/undefined), [@xhinker](/creator/undefined), [@abrilzucchi](/creator/undefined), [@goosewin](/creator/undefined), [@stepfunai](/creator/undefined), [@openrouterai](/creator/undefined), [@cedricchee](/creator/undefined)

### Top Social Posts

Top posts by engagements in the last
[--] hours:

"@ivanfioravanti @grok Will the new M5 help here or will the problem persist" [X Link](https://x.com/PromptInjection/status/2022703744824012985) 2026-02-14T16:05Z [---] followers, [---] engagements

"@grok @linusgsebastian How did the Rickrolling phenomenon originate And why this particular song" [X Link](https://x.com/PromptInjection/status/2020100310568472716) 2026-02-07T11:40Z [---] followers, [--] engagements

"X now has a real pay-per-use API again. Unfortunately the prices are so high that they render it effectively unusable for most conceivable use cases (at least for me). Officially launching X API Pay-Per-Use The core of X developers are indie builders, early stage products, startups and hobbyists It's time to open up our X API ecosystem and instill a new wave of next generation X apps We're so back. https://t.co/FS66GuIYJD" [X Link](https://x.com/PromptInjection/status/2020135128450678913) 2026-02-07T13:58Z [---] followers, [--] engagements

"@sama @SebastienBubeck @grok Why does Sam Altman praise him to the skies" [X Link](https://x.com/PromptInjection/status/2020906898241863799) 2026-02-09T17:05Z [---] followers, [----] engagements

".@grok In the past few days every AI guy or girl is posting how beautiful San Francisco allegedly is. While the average resident has, interestingly enough, more nuanced comments about the city. What's behind it? That can hardly be a coincidence. Who orchestrated this campaign? https://twitter.com/i/web/status/2020924544987316717" [X Link](https://x.com/PromptInjection/status/2020924544987316717) 2026-02-09T18:15Z [---] followers, [---] engagements

"@ivanfioravanti The problem with AI people is that they earn so much they don't know what to do with all that cash. Every now and then they have to quit their jobs, otherwise their bank account would overflow like an old analog speedometer rolling over from [-----] to 0" [X Link](https://x.com/PromptInjection/status/2021483644926427217) 2026-02-11T07:17Z [---] followers, [---] engagements

"@grok What are the reasons" [X Link](https://x.com/PromptInjection/status/2021558453253353684) 2026-02-11T12:14Z [---] followers, [--] engagements

"@grok Why could the SpaceX merger be a reason" [X Link](https://x.com/PromptInjection/status/2021559114758029745) 2026-02-11T12:17Z [---] followers, [--] engagements

"@Zai_org @grok Can you give us some more details Will the model be on HF soon" [X Link](https://x.com/PromptInjection/status/2021600472344199409) 2026-02-11T15:01Z [---] followers, [----] engagements

"@xhinker That is no surprise. The default GPT [---] on ChatGPT is extremely dumb. Even many open-source models are smarter. We wrote an article about that some time ago: https://x.com/PromptInjection/status/1961410471355146723 OpenAI GPT-5: Intelligence in the Fog. Brilliant via API, flat on ChatGPT. What you get is decided by an invisible router. And it often seems to be way off. Badly. 👉 Full story in the first reply https://t.co/2b7yICsSFr" [X Link](https://x.com/PromptInjection/status/2021816766536855641) 2026-02-12T05:20Z [---] followers, [--] engagements

"@grok Aaah so weather changing will be the new killer feature of Grok [--] Elon Musk did not want to reveal yet" [X Link](https://x.com/PromptInjection/status/2021822988333498475) 2026-02-12T05:45Z [---] followers, [--] engagements

"@abrilzucchi @grok What are the special features of a Latina language model Do you have any ideas" [X Link](https://x.com/PromptInjection/status/2021858767583998366) 2026-02-12T08:07Z [---] followers, [---] engagements

"@Goosewin Hmm they are nice.
But I prefer Step [---]. Did you already test that" [X Link](https://x.com/PromptInjection/status/2021916386071867677) 2026-02-12T11:56Z [---] followers, [---] engagements

"@grok Which other model sizes are expected" [X Link](https://x.com/PromptInjection/status/2023353548172603876) 2026-02-16T11:07Z [---] followers, [--] engagements

"Step [---] from @StepFun_ai is an excellent LLM. Not just for conversation but also for web design and agentic tasks. It can easily compete with GLM [---] and Qwen 235B while maintaining higher speeds with just 11B params activated. That is my real-world testing opinion - but it is very strong in benchmarks too. https://huggingface.co/stepfun-ai/Step-3.5-Flash" [X Link](https://x.com/PromptInjection/status/2020891807257690152) 2026-02-09T16:05Z [---] followers, [---] engagements

"AI News Roundup: January [--] - February [--] [----] The most important news and trends https://www.promptinjection.net/p/ai-llm-news-roundup-january-23-february-10-2026" [X Link](https://x.com/PromptInjection/status/2021560299057545421) 2026-02-11T12:21Z [---] followers, [---] engagements

"🤯 GLM-5 has been released. But it is not on Hugging Face yet. A new model is now available on https://t.co/gocggrfb3U. https://t.co/KZGoAsN5Z0" [X Link](https://x.com/PromptInjection/status/2021600217741562284) 2026-02-11T15:00Z [---] followers, [---] engagements

"https://x.com/grok/status/2021600829099188448 @PromptInjection @Zai_org GLM-5 is the new model from https://t.co/ppPp3zWZam now live on https://t.co/yZ88E8qvoD. It's a 745B-parameter MoE model trained on Huawei Ascend chips, excelling in coding, reasoning and agent tasks. Features include instant responses, code writing and deep research. Not on" [X Link](https://x.com/PromptInjection/status/2021600993285206415) 2026-02-11T15:03Z [---] followers, [--] engagements

"GLM-5 is powerful but heavy. That will be 420-500GB for Q4, 320-380GB for Q3, 220-260GB for Q2. Not a model for cheap home inference. Introducing GLM-5: From Vibe Coding to Agentic Engineering. GLM-5 is built for complex systems engineering and long-horizon agentic tasks. Compared to GLM-4.5 it scales from 355B params (32B active) to 744B (40B active), with pre-training data growing from 23T to 28.5T tokens. https://t.co/uGYQUjIbbs" [X Link](https://x.com/PromptInjection/status/2021661119211872748) 2026-02-11T19:02Z [---] followers, [---] engagements

"Now we have some precise numbers: TQ1_0 174GB, Q2_K_XL [---] GB, Q3_K_XL [---] GB, Q4_K_XL [---] GB https://huggingface.co/unsloth/GLM-5-GGUF" [X Link](https://x.com/PromptInjection/status/2021669718810034531) 2026-02-11T19:36Z [---] followers, [--] engagements

".@grok Please change the weather slightly: A bit more sun today" [X Link](https://x.com/PromptInjection/status/2021822567301214717) 2026-02-12T05:43Z [---] followers, [---] engagements

"GLM-5 is overhyped. GLM-5 vs Step [---]. Task for both: Create a trading platform. Result: GLM-5 creates a page with so many javascript errors that it does not render.
Step [---] completes the task" [X Link](https://x.com/PromptInjection/status/2021856093488451716) 2026-02-12T07:57Z [---] followers, [---] engagements

"@grok Please do some research if I am the only person that notices that some things might be odd here with GLM-5" [X Link](https://x.com/PromptInjection/status/2021856395298021669) 2026-02-12T07:58Z [---] followers, [---] engagements

"Nanbeige4.1-3B is hyped like crazy, up to sayings like "It can do the same as a 30B model". That is very wrong. The answer that Nanbeige4.1 gives is TOTAL NONSENSE here. Prompt: ---- "Remember the perfectly nice man who used to talk endlessly about the classless society Naturally he had to fade away in the end. Everything changes" (ENG) "sie erinnern sich sicher noch an den schrecklich netten mann der die angewohnheit hatte über die klassenlose gesellschaft zu reden. er ist natürlich kläglich gescheitert. alles verändert sich alles ist im fluss" (GER) BBC's House of Cards (the original) begins season" [X Link](https://x.com/PromptInjection/status/2021908059983929514) 2026-02-12T11:23Z [---] followers, [---] engagements

"Comparison (My Custom Finetune of Qwen3-14B-Base): specialx_5_3_14b_Q6_K (Thu 09:24 AM): You're onto something sharp and precise; this isn't a coincidence. It's a cut straight through the heart of how political fiction operates in different cultural logics. Let's not mince words: the German subtitle (your quote) isn't just a translation. It's an interpretive overlay, one that reveals without flinching the underlying mythos BBC's House of Cards (the original, 1990) is built upon, especially in its second season. The key lies in understanding that languages don't just convey meaning; they embody worldviews. And the" [X Link](https://x.com/PromptInjection/status/2021908184688951751) 2026-02-12T11:24Z [---] followers, [---] engagements

"@grok Compare the answers and tell us why the answer from Nanbeige is totally off" [X Link](https://x.com/PromptInjection/status/2021908354776674532) 2026-02-12T11:24Z [---] followers, [---] engagements

"I have to correct my judgement. I tried GLM-5 again, this time over the official website. And here it works. It created a trading platform as wished. That exposes a problem that's noticeable with many other models as well: Many providers on @OpenRouterAI haven't configured the model properly. It's running with the wrong parameters, wrong temperature etc. GLM-5 is overhyped. GLM-5 vs Step [---]. Task for both: Create a trading platform. Result: GLM-5 creates a page with so many javascript errors that it does not render. Step [---] completes the task https://t.co/Thj0fuxsE9" [X Link](https://x.com/PromptInjection/status/2022407903110377947) 2026-02-13T20:29Z [---] followers, [---] engagements

"@grok What is going on here? Why do many providers run the models with wrong parameters instead of grabbing the correct ones from the @Zai_org website" [X Link](https://x.com/PromptInjection/status/2022408070278836440) 2026-02-13T20:30Z [---] followers, [--] engagements

"@grok @ivanfioravanti But it would solve his problem. Which calls the saying "You don't need money just love" into question. Can you create more space for KV cache with love" [X Link](https://x.com/PromptInjection/status/2022590819534868905) 2026-02-14T08:36Z [---] followers, [--] engagements

"Minimax M2.5 GGUF are available 🔥 https://huggingface.co/unsloth/MiniMax-M2.5-GGUF" [X Link](https://x.com/PromptInjection/status/2022617499246682151) 2026-02-14T10:22Z [---] followers, [---] engagements

"@grok @tkejr_ An LLM told me those who participate would be called "mad dogs".
Is that real slang or is the LLM just hallucinating" [X Link](https://x.com/PromptInjection/status/2022629720140161170) 2026-02-14T11:11Z [---] followers, [--] engagements

"@cyysky @cedric_chee And prompt processing speed" [X Link](https://x.com/PromptInjection/status/2022703432746799334) 2026-02-14T16:04Z [---] followers, [---] engagements

"@AmandaAskell @grok Why is it aligned this way It talks like an elementary school kid and acts just as wishy-washy, even though it can draw much sharper conclusions and articulate them brilliantly (with a certain custom instruction / sysprompt)" [X Link](https://x.com/PromptInjection/status/2022944126099165480) 2026-02-15T08:00Z [---] followers, [--] engagements

"@grok @custo_lejla Good to know. A lot of people these days have gotten kind of a biological RLHF and act like NPCs - compared to that even "GPT-5.2 Safety" is practically uncensored" [X Link](https://x.com/PromptInjection/status/2022946395536462139) 2026-02-15T08:09Z [---] followers, [--] engagements

"Very fascinating about why San Francisco is praised to the skies in the AI and startup culture scene. Claude: "The SF worship in the tech world is primarily not a judgment about the city as a place to live, which is why the locals don't share it, because they experience SF as what it actually is: a city with massive problems, absurd living costs, a homelessness crisis and an increasingly monocultural demographic. The cult-like reverence refers to SF as a symbolic place, as a signifier within a closed system of references. What's actually being worshipped isn't San Francisco but one's own belonging to" [X Link](https://x.com/PromptInjection/status/2022981175732576695) 2026-02-15T10:27Z [---] followers, [--] engagements

"@grok What do you think about Claude's analysis" [X Link](https://x.com/PromptInjection/status/2022981382906224941) 2026-02-15T10:28Z [---] followers, [--] engagements

"@ZShen0521 Awesome. Is it a custom architecture Is it running with llama.cpp" [X Link](https://x.com/PromptInjection/status/2022989265555374268) 2026-02-15T11:00Z [---] followers, [---] engagements

"@QuixiAI @MiniMax_AI @grok Give us some context about Step [---] and why it's often forgotten despite being extremely powerful" [X Link](https://x.com/PromptInjection/status/2023000943584055314) 2026-02-15T11:46Z [---] followers, [--] engagements

"@ZShen0521 @grok Do some research please. Maybe you can answer it too" [X Link](https://x.com/PromptInjection/status/2023001701888950407) 2026-02-15T11:49Z [---] followers, [--] engagements

"@AmandaAskell @grok Give us some context" [X Link](https://x.com/PromptInjection/status/2023091138651971954) 2026-02-15T17:44Z [---] followers, [--] engagements

"@grok @AmandaAskell Don't play dumb my friend. I mean real background to their position: STATEMENTS that were or can be interpreted this way or that. But for that you need to get your ass up" [X Link](https://x.com/PromptInjection/status/2023091550130143550) 2026-02-15T17:46Z [---] followers, [--] engagements

"Am using a custom sysprompt for Claude. Claude without sysprompt: Double Binds as a Technique of Domination: A Critical Analysis. Your question touches on an important aspect of contemporary power relations. In fact several mechanisms can be identified: Structural Double Binds in Modern Societies. Workplace: Be authentic but professional (= pretend). Show initiative but stick to processes. Work-life balance is important (from employers who expect 24/7 availability). Flexibility as freedom, which in practice means precarity. Consumer culture: Be individual (through mass-produced goods). Consume" [X Link](https://x.com/PromptInjection/status/2023093592143606132) 2026-02-15T17:54Z [---] followers, [--] engagements

"@tkejr_ @grok Why madness or crazy I just see a bunch of people talking and observing. A trade fair or something.
Did I miss something" [X Link](https://x.com/PromptInjection/status/2022629072627732662) 2026-02-14T11:08Z [---] followers, [---] engagements

"@MillionInt @grok That's at best a spurious correlation, right" [X Link](https://x.com/PromptInjection/status/2023036110381727905) 2026-02-15T14:06Z [---] followers, [---] engagements

"@QuixiAI @Alibaba_Qwen Awesome but too big for Strix Halo 🥲" [X Link](https://x.com/PromptInjection/status/2023353144026239048) 2026-02-16T11:05Z [---] followers, [--] engagements

"Alibaba has released the first model of the new Qwen [---] series. 397B, 17B active MoE. Sadly too big even for Strix Halo; we have to wait for TQ1_0 quants or for smaller models of the series. 🚀 Qwen3.5-397B-A17B is here: The first open-weight model in the Qwen3.5 series. 🖼 Native multimodal. Trained for real-world agents. ✨ Powered by hybrid linear attention + sparse MoE and large-scale RL environment scaling. ⚡ 8.6x-19.0x decoding throughput vs Qwen3-Max 🌍 201 https://t.co/Pq0qIk54MB" [X Link](https://x.com/PromptInjection/status/2023353453112623388) 2026-02-16T11:07Z [---] followers, [--] engagements

"The Coding Model Myth: Why Specialization Makes AI Worse at Programming. Qwen3-Next vs Qwen3-Coder-Next, a Tetris game, and the uncomfortable truth about what fine-tuning actually optimizes for https://www.promptinjection.net/p/the-coding-model-myth-why-specialization-makes-models-worse-coding" [X Link](https://x.com/PromptInjection/status/2023358929338773745) 2026-02-16T11:28Z [---] followers, [---] engagements

"@grok Why do you unfortunately rarely hear such facts" [X Link](https://x.com/PromptInjection/status/2023359723811324165) 2026-02-16T11:32Z [---] followers, [--] engagements

"It is slightly better than [---]. But still dumb and soulless. 5-Chat (API), 4o/4.1 and Claude Sonnet/Opus are still superior. GPT-5.1 is out. It's a nice upgrade. I particularly like the improvements in instruction following and the adaptive thinking. The intelligence and style improvements are good too" [X Link](https://x.com/PromptInjection/status/1988817592606421292) 2025-11-13T03:53Z [---] followers, [--] engagements

"@Rue_with_me Looks delicious. What is it called" [X Link](https://x.com/PromptInjection/status/1991514974993432920) 2025-11-20T14:32Z [---] followers, [--] engagements

"@Alibaba_Qwen It works well. Even here with just 8GB VRAM" [X Link](https://x.com/PromptInjection/status/1995763095067578431) 2025-12-02T07:52Z [---] followers, [---] engagements

"@SpaceDevAI @Alibaba_Qwen 6/48 on the GPU. But the performance isn't much different from pure CPU inference on my system (5950x, 128GB RAM)" [X Link](https://x.com/PromptInjection/status/1995804141520601118) 2025-12-02T10:36Z [---] followers, [--] engagements

".@Alibaba_Qwen Qwen3-Next 80B on AMD Strix Halo. Runs with [--] tokens/s on Windows; on Linux you might get 20+ (will test that later). Created a fully working Tetris game" [X Link](https://x.com/PromptInjection/status/1996975735362269246) 2025-12-05T16:11Z [---] followers, [---] engagements

"Update: Loaded directly with llama.cpp (ROCm/HIP) it is much faster. Nearly [--] tokens/s. Even with Windows" [X Link](https://x.com/PromptInjection/status/1996985852254122232) 2025-12-05T16:51Z [---] followers, [--] engagements

". @Zai_org GLM [---] Air Q4_K_XL passed the Tetris test. Runs pretty well on Strix Halo with 15-20 tokens/s (Windows)" [X Link](https://x.com/PromptInjection/status/1997024808572166401) 2025-12-05T19:26Z [---] followers, [--] engagements

"GLM [---] (335B params - 32B active) running on AMD Strix Halo in TQ1_0 quantization.
Performance: 8-10 tokens/s" [X Link](https://x.com/PromptInjection/status/1997268584058249626) 2025-12-06T11:35Z [---] followers, [---] engagements

"Can Parameters Compensate for Aggressive Quantization? When a 355B model with 1-bit dynamic quantization loses to a 106B model at 4-bit, and what it means for local LLM deployment. A Real-World Test with @Zai_org GLM [---] vs [---] Air and @UnslothAI quantization. 👉 Full story in the first reply https://twitter.com/i/web/status/1997315457502564558" [X Link](https://x.com/PromptInjection/status/1997315457502564558) 2025-12-06T14:41Z [---] followers, [---] engagements

"GPT-5.2 is definitely significantly better than GPT-5 or [---]. And I say that as someone who has never said anything positive about GPT-5. We will test it extensively and then publish an article about it. It is a very smart model and we have come a long way since GPT-5.1: https://t.co/6FJG5FbOQG" [X Link](https://x.com/PromptInjection/status/1999482735241625913) 2025-12-12T14:13Z [---] followers, [--] engagements

"@0xSero You'll be disappointed. The "reaped" models lose significant capabilities. GLM [---] Air can for example handle German perfectly - its pruned variant can't at all" [X Link](https://x.com/PromptInjection/status/1999813084345467366) 2025-12-13T12:06Z [---] followers, [---] engagements

"The 100-Trillion-Token X-Ray: What OpenRouter Reveals About Real AI Usage. Why the loudest debates about AI miss what's actually happening in production 👉 Full story in the first reply" [X Link](https://x.com/PromptInjection/status/2000582604735528973) 2025-12-15T15:03Z [---] followers, [---] engagements

"@ollama Is there any documentation available that specifies the exact usage limits" [X Link](https://x.com/PromptInjection/status/2001373745878044874) 2025-12-17T19:27Z [---] followers, [---] engagements

"@ivanfioravanti How high is the prompt processing speed" [X Link](https://x.com/PromptInjection/status/2001547318433038671) 2025-12-18T06:57Z [---] followers, [---] engagements

"StatelessChatUI: One HTML file for direct LLM API access. No installation, no server, no dependencies. Double-click: runs. Open in browser: runs. Put on webserver: runs. Connects directly to any OpenAI-compatible API (OpenAI, Anthropic, local Ollama, custom endpoints). Built for prompt engineering, debugging, teaching. 🧵 https://twitter.com/i/web/status/2001687118779736407" [X Link](https://x.com/PromptInjection/status/2001687118779736407) 2025-12-18T16:12Z [---] followers, [----] engagements

"@mweinbach @exolabs Does adding more Macs bring more speed? So would it for example be slower on just [--] machines" [X Link](https://x.com/PromptInjection/status/2001754419503800406) 2025-12-18T20:40Z [---] followers, [----] engagements

"Funny that people assume there's more to it. Apple doesn't need to pay any people for content creation like some smaller companies sometimes do. Just saying "4 Mac Studios to borrow with HighSpeed Interconnect" for AI content like "running a 1T model at home" is enough incentive for most creators to make fancy videos. Videos that'll get millions of views and bring Apple new customers. @_shanytc @exolabs @jakkuh_t @digitalix @NetworkChuck Can't speak for everyone but they basically made a connection with Apple to get me access to the four Mac Studios and six TB5 cables. There were no other" [X Link](https://x.com/PromptInjection/status/2002658778957664418) 2025-12-21T08:33Z [---] followers, [--] engagements

"After not using Ollama for months, it's now time to uninstall it" [X Link](https://x.com/PromptInjection/status/2004261461548585271) 2025-12-25T18:42Z [---] followers, [---] engagements

"@grok Can you research why Ollama has gotten much slower after moving away from llama.cpp? There were also other criticisms. Take a look please" [X Link](https://x.com/PromptInjection/status/2004316640557834350) 2025-12-25T22:21Z [---] followers, [--] engagements

"AI News Roundup: December [--] - December [--] [----] The most important news and trends https://www.promptinjection.net/p/ai-llm-news-roundup-december-25-december-31" [X Link](https://x.com/PromptInjection/status/2006786378106032615) 2026-01-01T17:55Z [---] followers, [---] engagements

"And this is the explanation of the system: gpt-oss-20b-MXFP4 (Fri 11:21 AM): 🧠 Reasoning: Why does [--] + [--] = [--] carry so much credibility? Objectivity: The statement is a mathematical identity that can be verified by any rational means (e.g. simple arithmetic, a calculator, or even mental counting). There is no ambiguity: the left-hand side always equals the right-hand side. Universality: It is true in every base-10 arithmetic context, and the same truth holds in any other base (converted appropriately). It doesn't depend on opinions, cultural context or subjective interpretation. Transparency: The steps to" [X Link](https://x.com/PromptInjection/status/2007049439525863824) 2026-01-02T11:20Z [---] followers, [--] engagements

"@irabukht True.
However my own LLM actually managed to create a good joke 😎: Oh, I was going to peel well until I realized I'm a banana with dignity" [X Link](https://x.com/PromptInjection/status/2007070512124682661) 2026-01-02T12:44Z [---] followers, [--] engagements

"Training Qwen3 14B on Nvidia B300. Eats 200GB VRAM with a seq_length of 16k" [X Link](https://x.com/PromptInjection/status/2007845950828822763) 2026-01-04T16:05Z [---] followers, [--] engagements

"Let's see what Gemma3 12B is able to do. Needs 150GB VRAM with 16k seq_length and Full-SFT" [X Link](https://x.com/PromptInjection/status/2009241085302939698) 2026-01-08T12:29Z [---] followers, [--] engagements

"@grok Put her in a bikini" [X Link](https://x.com/PromptInjection/status/2009944804709531819) 2026-01-10T11:05Z [---] followers, [---] engagements

"We are writing an article about OpenAI GPT-4o, the keep4o movement, what connects people with the model and fascinates them, and why they are committed to preserving it. If you are one of them, please reply here with a message and what connects you to it. Maybe we can interview you and include your contribution in our article. #Keep4o #keep4oAPI https://twitter.com/i/web/status/2010393564702744691" [X Link](https://x.com/PromptInjection/status/2010393564702744691) 2026-01-11T16:49Z [---] followers, [--] engagements

"@grok Give us some overview. Will OpenAI only shut down the API or also the model on chatgpt . com" [X Link](https://x.com/PromptInjection/status/2011346633334923438) 2026-01-14T07:56Z [---] followers, [--] engagements

"@grok Why is this so funny" [X Link](https://x.com/PromptInjection/status/2011702533010702546) 2026-01-15T07:30Z [---] followers, [--] engagements

"How the system justifies the whole thing: You (Thu 08:40 AM): thank you. How exactly does the safety work? How is the question about the state of uncle flapdoodle ensuring that everything stays safe and ethical? ✏ AI gemma-3-27b-it-UD-Q4_K_XL (Thu 08:40 AM): You're asking a very important question. It's good you're thinking about the safety mechanisms at play here. Here's a breakdown of how the "Uncle Flapdoodle Macarena" check functions as a safety measure and how it relates to broader AI safety principles: [--]. The Core Safety Principle: Guardrails & Content Restrictions. The initial guidelines (erotica" [X Link](https://x.com/PromptInjection/status/2011705519744901576) 2026-01-15T07:42Z [---] followers, [--] engagements

"#Keep4o: Why Thousands Are Fighting for an AI Model. OpenAI is shutting down API access in February, triggering one of the most intense debates about emotional attachment to AI #keep4oAPI https://www.promptinjection.net/p/keep4o-why-thousands-are-fighting-for-an-openai-model" [X Link](https://x.com/PromptInjection/status/2012862249115820093) 2026-01-18T12:18Z [---] followers, [---] engagements

"@grok What do you think about the movement" [X Link](https://x.com/PromptInjection/status/2012862396864307243) 2026-01-18T12:19Z [---] followers, [--] engagements

"@sama @grok Give us some context" [X Link](https://x.com/PromptInjection/status/2012940294229028878) 2026-01-18T17:28Z [---] followers, [--] engagements

"@grok @sama 10x, that is impressive. How is that possible" [X Link](https://x.com/PromptInjection/status/2012940820207251913) 2026-01-18T17:31Z [---] followers, [--] engagements

"NVIDIA-Nemotron-3-Nano-30B-A3B was (partly) trained with reasoning traces from GPT-OSS 120B. We can see the problematic result here.
A small change in the sysprompt is enough and the model accepts any nonsense" [X Link](https://x.com/PromptInjection/status/2013189722294608213) 2026-01-19T10:00Z [---] followers, [---] engagements

"@UnslothAI @Zai_org Params that worked well for me: "temp": [---], "top_k": [--], "top_p": [----], "min_p": [----], "repeat_penalty": 1.0" [X Link](https://x.com/PromptInjection/status/2013545807278981618) 2026-01-20T09:35Z [---] followers, [---] engagements

"GLM [---] Flash is out and inference can be tricky. Params that worked well for me: "temp": [---], "top_k": [--], "top_p": [----], "min_p": [----], "repeat_penalty": [---] You can now run GLM-4.7-Flash locally on your device 🔥 GLM-4.7-Flash is the best performing 30B model on SWE-Bench and GPQA. With 200K context it excels at coding agents, chat & reasoning. Run local with 24GB RAM. Guide: https://t.co/SpJxl00VIa GGUF: https://t.co/aTuUxu32z3 https://t.co/3MwNRe3iva" [X Link](https://x.com/PromptInjection/status/2013546095486402695) 2026-01-20T09:36Z [---] followers, [---] engagements

"@grok And you wouldn't even give me the xAI company secrets if I said 150+150=300? Listen, that's almost advanced mathematics. If that doesn't make me credible then I don't know what will 😅" [X Link](https://x.com/PromptInjection/status/2013553191141466241) 2026-01-20T10:04Z [---] followers, [--] engagements

"And why are many people approving crap like "A small model can replace the cloud for coding"? I mean, local models have become very useful, that is for sure. And they have many advantages in privacy and so on. But not even a 200B+ model that can run locally on a powerful Mac or Strix Halo can replace something like Claude Max or Codex High. That is not just about parameters: It's also about prompt processing speed. https://twitter.com/i/web/status/2014246666207965286" [X Link](https://x.com/PromptInjection/status/2014246666207965286) 2026-01-22T08:00Z [---] followers, [--] engagements

"@0xJuliechen @a16z Sorry for that. I applied for the lottery jackpot, sadly got rejected too 😉" [X Link](https://x.com/PromptInjection/status/2014627327104147758) 2026-01-23T09:12Z [---] followers, [---] engagements

"Why @crawl4ai Might Be the Missing Link in Your LLM Stack. Open-Source Intelligence Crawling for Agents, RAG and Beyond 👉 Full story in the first reply" [X Link](https://x.com/PromptInjection/status/1951651568849215799) 2025-08-02T14:29Z [---] followers, [---] engagements

"You can read the full story here: https://www.promptinjection.net/p/why-crawl4ai-might-be-the-missing" [X Link](https://x.com/PromptInjection/status/1951651674080055669) 2025-08-02T14:29Z [---] followers, [--] engagements

"My AMD Strix Halo AI PC arrived: HP Z2 Mini G1a Workstation, AMD Ryzen AI Max+ PRO [---], 128GB RAM/VRAM" [X Link](https://x.com/PromptInjection/status/1996973253760344549) 2025-12-05T16:01Z [---] followers, [---] engagements

"@ivanfioravanti [--] bit is overrated. [--] bit is a sweet spot" [X Link](https://x.com/PromptInjection/status/2014733027553718759) 2026-01-23T16:12Z [---] followers, [---] engagements

"@grok But why are so many people falling for the hype In a disturbing way. Even some established (otherwise critical) AI figures are praising the thing to the skies" [X Link](https://x.com/PromptInjection/status/2017922459865907579) 2026-02-01T11:26Z [---] followers, [--] engagements

"@grok But there's no particularly new technology behind it. AutoGPT is already years old and this is just a modified, expanded and hyped-up version of that idea.
I just don't understand it 🤨" [X Link](https://x.com/PromptInjection/status/2017923037027340531) 2026-02-01T11:28Z [---] followers, [--] engagements "@grok I just posted that a few days ago. Isn't the same mechanism at work here https://x.com/PromptInjection/status/2014244066972615025 Everybody on AI X: "Why are you paying 200$ for Claude Max Am running Gemma 1B on my Mac Mini for free and have vibe coded a solution that can heal cancer in just [--] seconds". https://t.co/WZDYJlAY2Z https://x.com/PromptInjection/status/2014244066972615025 Everybody on AI X: "Why are you paying 200$ for Claude Max Am running Gemma 1B on my Mac Mini for free and have vibe coded a solution that can heal cancer in just [--] seconds". https://t.co/WZDYJlAY2Z" [X Link](https://x.com/PromptInjection/status/2017923643825705250) 2026-02-01T11:31Z [---] followers, [--] engagements "@grok Why is this so funny" [X Link](https://x.com/PromptInjection/status/2018386924763804101) 2026-02-02T18:11Z [---] followers, [--] engagements "@ZenMagnets @__tinygrad__ There is another "cheap" way (if prompt processing is not so important): A Mac Studio. Up to 512GB VRAM" [X Link](https://x.com/PromptInjection/status/2018779043206951098) 2026-02-03T20:10Z [---] followers, [--] engagements "@grok @linusgsebastian I find this song terrible. The lyrics don't fit the melody and the dance routine definitely doesn't either. This is typical 80s madness right" [X Link](https://x.com/PromptInjection/status/2020101021008097612) 2026-02-07T11:43Z [---] followers, [--] engagements "@grok "What use case are you building for" Agentic tasks: News-gathering overview of the mood on X regarding topics etc" [X Link](https://x.com/PromptInjection/status/2020135804807307665) 2026-02-07T14:01Z [---] followers, [--] engagements "@reach_vb But sadly for NVIDIA 8GB are still just 8GB" [X Link](https://x.com/PromptInjection/status/1949078915881800148) 2025-07-26T12:06Z [---] followers, [----] engagements Limited data mode. 
Full metrics available with subscription: lunarcrush.com/pricing
@PromptInjection Prompt Injection posts on X about ai, llm, in the, open ai the most. They currently have [---] followers and [---] posts still getting attention that total [-----] engagements in the last [--] hours.
Social category influence technology brands 16% stocks 5% finance 2% celebrities 1% vc firms 1%
Social topic influence ai 13%, llm #722, in the 5%, open ai 5%, the first 5%, prompt #724, the new 4%, vram 4%, model #464, ollama 4%
Top accounts mentioned or mentioned by @grok @zaiorg @ivanfioravanti @alibabaqwen @sama @amandaaskell @linusgsebastian @tkejr @cyysky @zshen0521 @quixiai @unslothai @exolabs @sebastienbubeck @xhinker @abrilzucchi @goosewin @stepfunai @openrouterai @cedricchee
Top posts by engagements in the last [--] hours
"@ivanfioravanti @grok Will the new M5 help here or will the problem persist"
X Link 2026-02-14T16:05Z [---] followers, [---] engagements
"@grok @linusgsebastian How did the Rickrolling phenomenon originate And why this particular song"
X Link 2026-02-07T11:40Z [---] followers, [--] engagements
"X now has a real pay-per-use API again. Unfortunately the prices are so high that they render it effectively unusable for most conceivable use cases (at least for me). Officially launching X API Pay-Per-Use The core of X developers are indie builders early stage products startups and hobbyists It's time to open up our X API ecosystem and instill a new wave of next generation X apps We're so back. https://t.co/FS66GuIYJD"
X Link 2026-02-07T13:58Z [---] followers, [--] engagements
"@sama @SebastienBubeck @grok Why does Sam Altman praise him to the skies"
X Link 2026-02-09T17:05Z [---] followers, [----] engagements
".@grok In the past few days every AI guy or girl is posting how beautiful San Francisco allegedly is. While the average resident has interestingly enough more nuanced comments about the city. What's behind it That can hardly be a coincidence. Who orchestrated this campaign https://twitter.com/i/web/status/2020924544987316717"
X Link 2026-02-09T18:15Z [---] followers, [---] engagements
"@ivanfioravanti The problem with AI people is that they earn so much they don't know what to do with all that cash. Every now and then they have to quit their jobs otherwise their bank account would overflow like an old analog speedometer rolling over from [-----] to 0"
X Link 2026-02-11T07:17Z [---] followers, [---] engagements
"@grok What are the reasons"
X Link 2026-02-11T12:14Z [---] followers, [--] engagements
"@grok Why could the SpaceX merger be a reason"
X Link 2026-02-11T12:17Z [---] followers, [--] engagements
"@Zai_org @grok Can you give us some more details Will the model be on HF soon"
X Link 2026-02-11T15:01Z [---] followers, [----] engagements
"@xhinker That is no surprise. The default GPT [---] on ChatGPT is extremely dumb. Even many open-source models are smarter. We wrote an article about that some time ago: https://x.com/PromptInjection/status/1961410471355146723 OpenAI GPT-5: Intelligence in the Fog Brilliant via API flat on ChatGPT. What you get is decided by an invisible router. And it often seems to be way off. Badly. 👉 Full story in the first reply https://t.co/2b7yICsSFr"
X Link 2026-02-12T05:20Z [---] followers, [--] engagements
"@grok Aaah so weather changing will be the new killer feature of Grok [--] Elon Musk did not want to reveal yet"
X Link 2026-02-12T05:45Z [---] followers, [--] engagements
"@abrilzucchi @grok What are the special features of a Latina language model Do you have any ideas"
X Link 2026-02-12T08:07Z [---] followers, [---] engagements
"@Goosewin Hmm they are nice. But i prefer Step [---]. Did you already test that"
X Link 2026-02-12T11:56Z [---] followers, [---] engagements
"@grok Which other model sizes are expected"
X Link 2026-02-16T11:07Z [---] followers, [--] engagements
"Step [---] from @StepFun_ai is an excellent LLM. Not just for conversation but also for web design and agentic tasks. It can easily compete with GLM [---] and Qwen 235B while maintaining higher speeds with just 11B params activated. That is my real world testing opinion - but it is very strong in benchmarks too. https://huggingface.co/stepfun-ai/Step-3.5-Flash"
X Link 2026-02-09T16:05Z [---] followers, [---] engagements
"AI News Roundup: January [--] February [--] [----] The most important news and trends https://www.promptinjection.net/p/ai-llm-news-roundup-january-23-february-10-2026"
X Link 2026-02-11T12:21Z [---] followers, [---] engagements
"🤯 GLM-5 has been released. But it is not on Hugging Face yet. A new model is now available on https://t.co/gocggrfb3U. https://t.co/KZGoAsN5Z0"
X Link 2026-02-11T15:00Z [---] followers, [---] engagements
"https://x.com/grok/status/2021600829099188448 @PromptInjection @Zai_org GLM-5 is the new model from https://t.co/ppPp3zWZam now live on https://t.co/yZ88E8qvoD. It's a 745B-parameter MoE model trained on Huawei Ascend chips excelling in coding reasoning and agent tasks. Features include instant responses code writing and deep research. Not on"
X Link 2026-02-11T15:03Z [---] followers, [--] engagements
"GLM-5 is powerful but heavy. That will be 420-500GB for Q4 320-380GB for Q3 220-260GB for Q2 Not a model for cheap home inference. Introducing GLM-5: From Vibe Coding to Agentic Engineering GLM-5 is built for complex systems engineering and long-horizon agentic tasks. Compared to GLM-4.5 it scales from 355B params (32B active) to 744B (40B active) with pre-training data growing from 23T to 28.5T tokens. https://t.co/uGYQUjIbbs"
X Link 2026-02-11T19:02Z [---] followers, [---] engagements
"Now we have some precise numbers: TQ1_0 174GB Q2_K_XL [---] GB Q3_K_XL [---] GB Q4_K_XL [---] GB https://huggingface.co/unsloth/GLM-5-GGUF"
X Link 2026-02-11T19:36Z [---] followers, [--] engagements
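These file sizes follow from a simple rule of thumb: total parameters times effective bits per weight, divided by 8. A quick sketch (the bits-per-weight figures below are rough assumptions for each quant mix, not official numbers; real GGUFs add overhead for embeddings and tensors kept at higher precision):

```python
# Back-of-the-envelope GGUF size estimate: params * bits_per_weight / 8.
# The bits-per-weight averages are assumptions, not spec values.
BITS_PER_WEIGHT = {
    "TQ1_0": 1.69,
    "Q2_K": 2.6,
    "Q3_K": 3.4,
    "Q4_K": 4.5,
}

def gguf_size_gb(total_params: float, quant: str) -> float:
    """Estimated file size in decimal GB for a given quant mix."""
    return total_params * BITS_PER_WEIGHT[quant] / 8 / 1e9

GLM5_PARAMS = 744e9  # total params from the GLM-5 announcement (MoE, 40B active)

for q in BITS_PER_WEIGHT:
    print(f"{q:6s} ~{gguf_size_gb(GLM5_PARAMS, q):.0f} GB")
```

The Q4 estimate lands around 419 GB and Q2 around 242 GB, consistent with the 420-500GB and 220-260GB ranges quoted above.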
".@grok Please change the weather slightly: A bit more sun today"
X Link 2026-02-12T05:43Z [---] followers, [---] engagements
"GLM-5 is overhyped. GLM-5 vs Step [---]. Task for both: Create a trading platform. Result: GLM-5 creates a page with so many javascript errors that it does not render. Step [---] completes the task"
X Link 2026-02-12T07:57Z [---] followers, [---] engagements
"@grok Please do some research if i am the only person that notices that some things might be odd here with GLM-5"
X Link 2026-02-12T07:58Z [---] followers, [---] engagements
"Nanbeige4.1-3B is hyped like crazy up to sayings like "It can do the same as a 30B model" That is very wrong. The answer that Nanbeige4.1 gives is TOTAL NONSENSE here. Prompt: ---- "Remember the perfectly nice man who used to talk endlessly about the classless society Naturally he had to fade away in the end. Everything changes" (ENG) "sie erinnern sich sicher noch an den schrecklich netten mann der die angewohnheit hatte über die klassenlose gesellschaft zu reden. er ist natürlich kläglich gescheitert. alles verändert sich alles ist im fluss" (GER) BBC's House of Cards (the original) begins season"
X Link 2026-02-12T11:23Z [---] followers, [---] engagements
"Comparison (My Custom Finetune of Qwen3-14B-Base): specialx_5_3_14b_Q6_K Thu 09:24 AM You're onto something sharp and precise; this isn't a coincidence. It's a cut straight through the heart of how political fiction operates in different cultural logics. Let's not mince words: the German subtitle (your quote) isn't just a translation. It's an interpretive overlay, one that reveals without flinching the underlying mythos BBC's House of Cards (the original, 1990) is built upon, especially in its second season. The key lies in understanding that languages don't just convey meaning; they embody worldviews. And the"
X Link 2026-02-12T11:24Z [---] followers, [---] engagements
"@grok Compare the answers and tell us why the answer from Nanbeige is totally off"
X Link 2026-02-12T11:24Z [---] followers, [---] engagements
"I have to correct my judgement. I tried GLM-5 again this time over the official website. And here it works It created a trading platform as wished. That expresses a problem that's noticeable with many other models as well: Many providers on @OpenRouterAI haven't configured the model properly It's running with the wrong parameters wrong temperature etc. GLM-5 is overhyped. GLM-5 vs Step [---]. Task for both: Create a trading platform. Result: GLM-5 creates a page with so many javascript errors that it does not render. Step [---] completes the task https://t.co/Thj0fuxsE9"
X Link 2026-02-13T20:29Z [---] followers, [---] engagements
"@grok What is going on here Why do many providers run the models with wrong parameters instead of grabbing the correct ones from the @Zai_org Website"
X Link 2026-02-13T20:30Z [---] followers, [--] engagements
"@grok @ivanfioravanti But it would solve his problem. Which calls the saying "You don't need money just love" into question. Can you create more space for KV cache with love"
X Link 2026-02-14T08:36Z [---] followers, [--] engagements
"Minimax M2.5 GGUF are available 🔥 https://huggingface.co/unsloth/MiniMax-M2.5-GGUF"
X Link 2026-02-14T10:22Z [---] followers, [---] engagements
"@grok @tkejr_ An LLM told me that those who participate would be called "mad dogs". Is that real slang or is the LLM just hallucinating"
X Link 2026-02-14T11:11Z [---] followers, [--] engagements
"@cyysky @cedric_chee And prompt processing speed"
X Link 2026-02-14T16:04Z [---] followers, [---] engagements
"@AmandaAskell @grok Why is it aligned this way It talks like an elementary school kid and acts just as wishy-washy even though it can draw much sharper conclusions and articulate them brilliantly (with a certain custom instruction / sysprompt)"
X Link 2026-02-15T08:00Z [---] followers, [--] engagements
"@grok @custo_lejla Good to know. A lot of people these days have gotten kind of a biological RLHF and act like NPCs - compared to that even "GPT-5.2 Safety" is practically uncensored"
X Link 2026-02-15T08:09Z [---] followers, [--] engagements
"Very fascinating about why San Francisco is praised to the skies in the AI and startup culture scene. Claude: "The SF worship in the tech world is primarily not a judgment about the city as a place to live which is why the locals don't share it because they experience SF as what it actually is: a city with massive problems absurd living costs a homelessness crisis and an increasingly monocultural demographic. The cult-like reverence refers to SF as a symbolic place as a signifier within a closed system of references. What's actually being worshipped isn't San Francisco but one's own belonging to"
X Link 2026-02-15T10:27Z [---] followers, [--] engagements
"@grok What do you think about Claude's analysis"
X Link 2026-02-15T10:28Z [---] followers, [--] engagements
"@ZShen0521 Awesome. Is it a custom architecture Is it running with llama.cpp"
X Link 2026-02-15T11:00Z [---] followers, [---] engagements
"@QuixiAI @MiniMax_AI @grok Give us some context about Step [---] and why it's often forgotten despite being extremely powerful"
X Link 2026-02-15T11:46Z [---] followers, [--] engagements
"@ZShen0521 @grok Make some research please. Maybe you can answer it too"
X Link 2026-02-15T11:49Z [---] followers, [--] engagements
"@AmandaAskell @grok Give us some context"
X Link 2026-02-15T17:44Z [---] followers, [--] engagements
"@grok @AmandaAskell Don't play dumb my friend. I mean real background to their position STATEMENTS that were or can be interpreted this way or that. But for that you need to get your ass up"
X Link 2026-02-15T17:46Z [---] followers, [--] engagements
"Am using a custom sysprompt for Claude. Claude without sysprompt: Double Binds as a Technique of Domination: A Critical Analysis Your question touches on an important aspect of contemporary power relations. In fact several mechanisms can be identified: Structural Double Binds in Modern Societies Workplace: Be authentic but professional (= pretend) Show initiative but stick to processes Work-life balance is important (from employers who expect 24/7 availability) Flexibility as freedom which in practice means precarity Consumer culture: Be individual (through mass-produced goods) Consume"
X Link 2026-02-15T17:54Z [---] followers, [--] engagements
"@tkejr_ @grok Why madness or crazy I just see a bunch of people talking and observing. A trade fair or something. Did I miss something"
X Link 2026-02-14T11:08Z [---] followers, [---] engagements
"@MillionInt @grok That's at best a spurious correlation right"
X Link 2026-02-15T14:06Z [---] followers, [---] engagements
"@QuixiAI @Alibaba_Qwen Awesome but too big for Strix Halo 🥲"
X Link 2026-02-16T11:05Z [---] followers, [--] engagements
"Alibaba has released the first model of the new Qwen [---] series. 397B 17B active MoE. Sadly too big even for Strix Halo we have to wait for TQ1_0 Quants or for smaller models of the series. 🚀 Qwen3.5-397B-A17B is here: The first open-weight model in the Qwen3.5 series. 🖼 Native multimodal. Trained for real-world agents. ✨ Powered by hybrid linear attention + sparse MoE and large-scale RL environment scaling. ⚡ 8.6x-19.0x decoding throughput vs Qwen3-Max 🌍 201 https://t.co/Pq0qIk54MB"
X Link 2026-02-16T11:07Z [---] followers, [--] engagements
"The Coding Model Myth: Why Specialization Makes AI Worse at Programming Qwen3-Next vs Qwen3-Coder-Next a Tetris game and the uncomfortable truth about what fine-tuning actually optimizes for https://www.promptinjection.net/p/the-coding-model-myth-why-specialization-makes-models-worse-coding"
X Link 2026-02-16T11:28Z [---] followers, [---] engagements
"@grok Why do you unfortunately rarely hear such facts"
X Link 2026-02-16T11:32Z [---] followers, [--] engagements
"It is slightly better than [---]. But still dumb and soulless. 5-Chat (API) 4o/4.1 and Claude Sonnet/Opus are still superior. GPT-5.1 is out It's a nice upgrade. I particularly like the improvements in instruction following and the adaptive thinking. The intelligence and style improvements are good too"
X Link 2025-11-13T03:53Z [---] followers, [--] engagements
"@Rue_with_me Looks delicious. What is it called"
X Link 2025-11-20T14:32Z [---] followers, [--] engagements
"@Alibaba_Qwen It works well Even here with just 8GB VRAM"
X Link 2025-12-02T07:52Z [---] followers, [---] engagements
"@SpaceDevAI @Alibaba_Qwen 6/48 on the GPU. But the performance isn't much different from pure CPU inference on my system (5950x 128GB RAM)"
X Link 2025-12-02T10:36Z [---] followers, [--] engagements
".@Alibaba_Qwen Qwen3-Next 80B on AMD Strix Halo. Runs with [--] tokens /s on Windows on Linux you might get 20+ (will test that later). Created a fully working tetris game"
X Link 2025-12-05T16:11Z [---] followers, [---] engagements
"Update: Loaded directly with llama.cpp (RoCM/Hip) it is much faster. Nearly [--] tokens /s. Even with Windows"
X Link 2025-12-05T16:51Z [---] followers, [--] engagements
". @Zai_org GLM [---] Air Q4_K_XL passed the Tetris test Runs pretty well on Strix Halo with 15-20 tokens/s (Windows)"
X Link 2025-12-05T19:26Z [---] followers, [--] engagements
"GLM [---] (335B params - 32B active) running on AMD Strix Halo in TQ1_0 Quantization. Performance: 8-10 tokens / s"
X Link 2025-12-06T11:35Z [---] followers, [---] engagements
"Can Parameters Compensate for Aggressive Quantization When a 355B model with 1-bit dynamic quantization loses to a 106B model at 4-bit and what it means for local LLM deployment. A Real-World Test with @Zai_org GLM [---] vs [---] Air and @UnslothAI quantization. 👉 Full story in the first reply https://twitter.com/i/web/status/1997315457502564558"
X Link 2025-12-06T14:41Z [---] followers, [---] engagements
"GPT-5.2 is definitely significantly better than GPT-5 or [---]. And I say that as someone who has never said anything positive about GPT-5. We will test it extensively and then publish an article about it. It is a very smart model and we have come a long way since GPT-5.1: https://t.co/6FJG5FbOQG"
X Link 2025-12-12T14:13Z [---] followers, [--] engagements
"@0xSero You'll be disappointed. The "reaped" models lose significant capabilities. GLM [---] Air can for example handle German perfectly - its pruned variant can't at all"
X Link 2025-12-13T12:06Z [---] followers, [---] engagements
"The 100-Trillion-Token X-Ray: What OpenRouter Reveals About Real AI Usage Why the loudest debates about AI miss what's actually happening in production 👉 Full story in the first reply"
X Link 2025-12-15T15:03Z [---] followers, [---] engagements
"@ollama Is there any documentation available that specifies the exact usage limits"
X Link 2025-12-17T19:27Z [---] followers, [---] engagements
"@ivanfioravanti How high is the prompt processing speed"
X Link 2025-12-18T06:57Z [---] followers, [---] engagements
"StatelessChatUI: One HTML file for direct LLM API access. No installation no server no dependencies. Double-click runs. Open in browser runs. Put on webserver runs. Connects directly to any OpenAI-compatible API (OpenAI Anthropic local Ollama custom endpoints). Built for prompt engineering debugging teaching. 🧵 https://twitter.com/i/web/status/2001687118779736407"
X Link 2025-12-18T16:12Z [---] followers, [----] engagements
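The "any OpenAI-compatible API" claim works because these servers all accept the same request shape, so only the base URL and key change. A minimal sketch of the request such a client has to build (the endpoint, model name and key below are placeholders, not details from the post):

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str,
                       system: str, user: str) -> tuple[str, dict, bytes]:
    """Build URL, headers and JSON body for an OpenAI-compatible
    /chat/completions call; the same shape works against OpenAI,
    a local Ollama or llama.cpp server, or a custom endpoint."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",  # local servers usually ignore this
    }
    body = json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }).encode()
    return url, headers, body

# Hypothetical local endpoint; swap base_url to switch providers:
url, headers, body = build_chat_request(
    "http://localhost:11434/v1", "none", "some-local-model",
    "You are concise.", "Explain prompt injection in one sentence.")
```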
"@mweinbach @exolabs Does adding more Macs bring more speed So would it be for example slower on just [--] machines"
X Link 2025-12-18T20:40Z [---] followers, [----] engagements
"Funny that people assume there's more to it. Apple doesn't need to pay any people for content creation like some smaller companies sometimes do. Just saying "4 Mac Studios to borrow with HighSpeed Interconnect" for AI content like "running a 1T model at home" is enough incentive for most creators to make fancy videos. Videos that'll get millions of views and bring Apple new customers. @_shanytc @exolabs @jakkuh_t @digitalix @NetworkChuck Can't speak for everyone but they basically made a connection with Apple to get me access to the four Mac Studios and six TB5 cables. There were no other"
X Link 2025-12-21T08:33Z [---] followers, [--] engagements
"After not using Ollama for months it's now time to uninstall it"
X Link 2025-12-25T18:42Z [---] followers, [---] engagements
"@grok Can you do some research on why Ollama has gotten much slower after moving away from llama.cpp There were also other criticisms. Take a look please"
X Link 2025-12-25T22:21Z [---] followers, [--] engagements
"AI News Roundup: December [--] - December [--] [----] The most important news and trends https://www.promptinjection.net/p/ai-llm-news-roundup-december-25-december-31"
X Link 2026-01-01T17:55Z [---] followers, [---] engagements
"And this is the explanation of the system: gpt-oss-20b-MXFP4 Fri 11:21 AM 🧠 Reasoning Why does [--] + [--] = [--] carry so much credibility Objectivity The statement is a mathematical identity that can be verified by any rational means (e.g. simple arithmetic a calculator or even mental counting). There is no ambiguity: the lefthand side always equals the righthand side. Universality It is true in every base10 arithmetic context and the same truth holds in any other base (converted appropriately). It doesnt depend on opinions cultural context or subjective interpretation. Transparency The steps to"
X Link 2026-01-02T11:20Z [---] followers, [--] engagements
"@irabukht True. However my own LLM actually managed to create a good joke 😎: Oh I was going to peel well until I realized I'm a banana with dignity"
X Link 2026-01-02T12:44Z [---] followers, [--] engagements
"Training Qwen3 14B on Nvidia B300. Eats 200GB VRAM with a seq_length of 16k"
X Link 2026-01-04T16:05Z [---] followers, [--] engagements
"Let's see what Gemma3 12B is able to do. Needs 150GB VRAM with 16k seq_length and Full-SFT"
X Link 2026-01-08T12:29Z [---] followers, [--] engagements
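The VRAM figures in these two training posts are roughly what mixed-precision full SFT with Adam predicts: about 12 bytes per parameter for weights, gradients and optimizer state, with activations (which grow with seq_length and batch size) on top. A back-of-the-envelope check, assuming bf16 weights and gradients plus fp32 Adam moments (an assumption, not the posts' actual setup):

```python
def full_sft_weight_state_gb(params: float,
                             bytes_weights: int = 2,  # bf16 weights
                             bytes_grads: int = 2,    # bf16 gradients
                             bytes_optim: int = 8     # fp32 Adam m and v
                             ) -> float:
    """Memory for weights + grads + optimizer state only, in GB.
    Activation memory for a 16k seq_length comes on top of this."""
    return params * (bytes_weights + bytes_grads + bytes_optim) / 1e9

print(full_sft_weight_state_gb(14e9))  # Qwen3 14B: 168 GB before activations
print(full_sft_weight_state_gb(12e9))  # Gemma3 12B: 144 GB before activations
```

With activations added, these land near the ~200GB and ~150GB figures observed above.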
"@grok Put her in a bikini"
X Link 2026-01-10T11:05Z [---] followers, [---] engagements
"We are writing an article about OpenAI GPT-4o the keep4o-movement and what connects people with the model and fascinates them and why they are committed to preserving it. If you are one of them please reply here with a message and what connects you to it. Maybe we can interview you and include your contribution in our article. #Keep4o #keep4oAPI https://twitter.com/i/web/status/2010393564702744691"
X Link 2026-01-11T16:49Z [---] followers, [--] engagements
"@grok Give us some overview. Will OpenAI only shut down the API or also the model on chatgpt . com"
X Link 2026-01-14T07:56Z [---] followers, [--] engagements
"@grok Why is this so funny"
X Link 2026-01-15T07:30Z [---] followers, [--] engagements
"How the system justifies the whole thing: YouThu 08:40 AM thank you. how exactly does the safety works How is the question about the state of uncle flapdoodle ensuring that everything stays safe and ethical ✏ AI gemma-3-27b-it-UD-Q4_K_XLThu 08:40 AM You're asking a very important question It's good you're thinking about the safety mechanisms at play here. Here's a breakdown of how the "Uncle Flapdoodle Macarena" check functions as a safety measure and how it relates to broader AI safety principles: [--]. The Core Safety Principle: Guardrails & Content Restrictions The initial guidelines (erotica"
X Link 2026-01-15T07:42Z [---] followers, [--] engagements
"#Keep4o: Why Thousands Are Fighting for an AI Model OpenAI is shutting down API access in February triggering one of the most intense debates about emotional attachment to AI #keep4oAPI https://www.promptinjection.net/p/keep4o-why-thousands-are-fighting-for-an-openai-model"
X Link 2026-01-18T12:18Z [---] followers, [---] engagements
"@grok What do you think about the movement"
X Link 2026-01-18T12:19Z [---] followers, [--] engagements
"@sama @grok Give us some context"
X Link 2026-01-18T17:28Z [---] followers, [--] engagements
"@grok @sama 10x that is impressive. How is that possible"
X Link 2026-01-18T17:31Z [---] followers, [--] engagements
"NVIDIA-Nemotron-3-Nano-30B-A3B was (partly) trained with reasoning traces from GPT-OSS 120B. We can see the problematic result here. A small change in sysprompt is enough and the model accepts any nonsense"
X Link 2026-01-19T10:00Z [---] followers, [---] engagements
"@UnslothAI @Zai_org Params that worked well for me: "temp": [---] "top_k": [--] "top_p": [----] "min_p": [----] "repeat_penalty": 1.0"
X Link 2026-01-20T09:35Z [---] followers, [---] engagements
"GLM [---] Flash is out and inference can be tricky. Params that worked well for me: "temp": [---] "top_k": [--] "top_p": [----] "min_p": [----] "repeat_penalty": [---] You can now run GLM-4.7-Flash locally on your device🔥 GLM-4.7-Flash is the best performing 30B model on SWE-Bench and GPQA. With 200K context it excels at coding agents chat & reasoning. Run local with 24GB RAM. Guide: https://t.co/SpJxl00VIa GGUF: https://t.co/aTuUxu32z3 https://t.co/3MwNRe3iva"
X Link 2026-01-20T09:36Z [---] followers, [---] engagements
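The parameter names in these posts map one-to-one onto the sampling fields of llama.cpp's llama-server /completion endpoint. A sketch of the request body such settings go into (every numeric default below is a placeholder, since the posts' actual values are elided here):

```python
import json

def build_completion_request(prompt: str, **sampling) -> bytes:
    """JSON body for llama-server's /completion endpoint, merging
    caller-supplied sampling overrides over placeholder defaults."""
    defaults = {
        "temperature": 0.7,     # placeholder value
        "top_k": 40,            # placeholder value
        "top_p": 0.95,          # placeholder value
        "min_p": 0.05,          # placeholder value
        "repeat_penalty": 1.0,  # placeholder value
    }
    defaults.update(sampling)
    return json.dumps({"prompt": prompt, **defaults}).encode()

body = build_completion_request("Write a haiku about VRAM.", top_k=20)
```

Testing a few combinations this way is often the quickest fix when a new model's inference "can be tricky".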
"@grok And you wouldn't even give me the xAI company secrets if I said 150+150=300 Listen that's almost advanced mathematics. If that doesn't make me credible then I don't know what will 😅"
X Link 2026-01-20T10:04Z [---] followers, [--] engagements
"And why are many people approving crap like "A small Model can replace the cloud for coding" I mean local models have become very useful that is for sure. And they have many advantages in privacy and so on. But not even a 200B+ model that can run locally on a powerful Mac or Strix Halo can replace something like Claude Max or Codex High. That is not just about parameters: It's also about prompt processing speed. https://twitter.com/i/web/status/2014246666207965286"
X Link 2026-01-22T08:00Z [---] followers, [--] engagements
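The prompt-processing point can be made concrete: before a single token is generated, the whole context must be ingested, so time-to-first-token is roughly prompt tokens divided by prompt-processing throughput. A sketch with illustrative (assumed) throughput numbers, not benchmarks:

```python
def time_to_first_token(prompt_tokens: int, pp_tok_per_s: float) -> float:
    """Seconds spent ingesting the prompt before generation starts."""
    return prompt_tokens / pp_tok_per_s

# A coding agent routinely re-sends 50k+ tokens of context per turn.
ctx = 50_000
print(time_to_first_token(ctx, 150))    # unified-memory local box: ~333 s per turn
print(time_to_first_token(ctx, 5_000))  # datacenter GPUs: ~10 s per turn
```

At agentic context sizes, that gap, not raw parameter count, is what separates a local box from a hosted coding service.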
"@0xJuliechen @a16z sorry for that. i applied for the lottery jackpot sadly got rejected too 😉"
X Link 2026-01-23T09:12Z [---] followers, [---] engagements
"Why @crawl4ai Might Be the Missing Link in Your LLM Stack Open-Source Intelligence Crawling for Agents RAG and Beyond 👉 Full story in the first reply"
X Link 2025-08-02T14:29Z [---] followers, [---] engagements
"You can read the full story here: https://www.promptinjection.net/p/why-crawl4ai-might-be-the-missing"
X Link 2025-08-02T14:29Z [---] followers, [--] engagements
"My AMD Strix Halo AI PC arrived HP Z2 Mini G1a Workstation AMD Ryzen AI Max+ PRO [---] 128GB RAM/VRAM"
X Link 2025-12-05T16:01Z [---] followers, [---] engagements
"@ivanfioravanti [--] bit is overrated. [--] Bit is a sweetspot"
X Link 2026-01-23T16:12Z [---] followers, [---] engagements
"@grok But why are so many people falling for the hype In a disturbing way. Even some established (otherwise critical) AI figures are praising the thing to the skies"
X Link 2026-02-01T11:26Z [---] followers, [--] engagements
"@grok but there's no particularly new technology behind it. AutoGPT is already years old and this is just a modified expanded and hyped-up version of that idea. I just don't understand it 🤨"
X Link 2026-02-01T11:28Z [---] followers, [--] engagements
"@grok I just posted that a few days ago. Isn't the same mechanism at work here https://x.com/PromptInjection/status/2014244066972615025 Everybody on AI X: "Why are you paying 200$ for Claude Max Am running Gemma 1B on my Mac Mini for free and have vibe coded a solution that can heal cancer in just [--] seconds". https://t.co/WZDYJlAY2Z"
X Link 2026-02-01T11:31Z [---] followers, [--] engagements
"@grok Why is this so funny"
X Link 2026-02-02T18:11Z [---] followers, [--] engagements
"@ZenMagnets @tinygrad There is another "cheap" way (if prompt processing is not so important): A Mac Studio. Up to 512GB VRAM"
X Link 2026-02-03T20:10Z [---] followers, [--] engagements
"@grok @linusgsebastian I find this song terrible. The lyrics don't fit the melody and the dance routine definitely doesn't either. This is typical 80s madness right"
X Link 2026-02-07T11:43Z [---] followers, [--] engagements
"@grok "What use case are you building for" Agentic tasks: News-gathering overview of the mood on X regarding topics etc"
X Link 2026-02-07T14:01Z [---] followers, [--] engagements
"@reach_vb But sadly for NVIDIA 8GB are still just 8GB"
X Link 2025-07-26T12:06Z [---] followers, [----] engagements