# @ollama

@ollama posts on X about ollama, openclaw, and claude code the most. They currently have [-------] followers and [---] posts still getting attention that total [-------] engagements in the last [--] hours.

### Engagements: [-------] [#](/creator/twitter::1688410127378829312/interactions)

- [--] Week [---------] +16%
- [--] Month [---------] +3,128%
- [--] Months [----------] +109%
- [--] Year [----------] +130%

### Mentions: [--] [#](/creator/twitter::1688410127378829312/posts_active)

- [--] Week [---] -6.60%
- [--] Month [---] +296%
- [--] Months [---] +18%
- [--] Year [-----] +274%

### Followers: [-------] [#](/creator/twitter::1688410127378829312/followers)

- [--] Week [-------] +0.81%
- [--] Month [-------] +11%
- [--] Months [-------] +15%
- [--] Year [-------] +45%

### CreatorRank: [------] [#](/creator/twitter::1688410127378829312/influencer_rank)

### Social Influence

**Social category influence**

[technology brands](/list/technology-brands) #2767, [stocks](/list/stocks) 2.38%, [social networks](/list/social-networks) 1.59%, [finance](/list/finance) 0.79%, [products](/list/products) 0.79%

**Social topic influence**

[ollama](/topic/ollama) #1, [openclaw](/topic/openclaw) #11, [claude code](/topic/claude-code) #7, [if you](/topic/if-you) 8.73%, [we are](/topic/we-are) #136, [agentic](/topic/agentic) #17, [in the](/topic/in-the) 3.17%, [ai](/topic/ai) 2.38%, [future](/topic/future) 2.38%, [have the](/topic/have-the) 2.38%

**Top accounts mentioned or mentioned by**

[@parthsareen](/creator/undefined) [@openclaw](/creator/undefined) [@minimaxai](/creator/undefined) [@zeddotdev](/creator/undefined) [@mervinpraison](/creator/undefined) [@jackccrawford](/creator/undefined) [@ihodlshit](/creator/undefined) [@grok](/creator/undefined) [@moikapy](/creator/undefined) [@homelaber](/creator/undefined) [@dapiq_ai](/creator/undefined) [@itsafiz](/creator/undefined) [@herogamer21btc](/creator/undefined) [@ambushalgorithm](/creator/undefined)
[@zaiorg](/creator/undefined) [@pseudoanomaly](/creator/undefined) [@luongnv89](/creator/undefined) [@wentestnet](/creator/undefined) [@mrok86](/creator/undefined) [@kavindpadi](/creator/undefined)

### Top Social Posts

Top posts by engagements in the last [--] hours

"Ollama v0.11.7 is available with DeepSeek v3.1 support. You can run it locally with all its features like hybrid thinking. This works across Ollama's new app, CLI, API and SDKs. Ollama's Turbo mode that's in preview has also been updated to support the model" [X Link](https://x.com/ollama/status/1960463433515852144) 2025-08-26T22:04Z 114.3K followers, 45.2K engagements

"The latest Android Studio has Ollama support. Designed to give you more control, flexibility & agentic experiences, the @AndroidStudio Otter [--] Feature Drop is now stable https://t.co/oNMdAUrE2W From Agent Mode conversation threads and choosing any AI model to automating UI tests, this release helps you build smarter, not https://t.co/SwNX24oFe5" [X Link](https://x.com/ollama/status/2012248266721632750) 2026-01-16T19:39Z 114.3K followers, 48K engagements

"ollama run translategemma TranslateGemma is available on Ollama. Now you can use it in apps to translate between [--] languages. Note it requires a specific prompting format. We're releasing TranslateGemma, a new family of open translation models with support for [--] languages. Available in 4B, 12B and 27B parameter sizes, they're designed for efficiency without sacrificing quality. https://t.co/SRJzCOAKyG" [X Link](https://x.com/ollama/status/2012307436284395692) 2026-01-16T23:34Z 114.2K followers, 123.9K engagements

"@m_koido ollama launch config Two dashes" [X Link](https://x.com/ollama/status/2015609153473900598) 2026-01-26T02:14Z 113.8K followers, [---] engagements

"@steipete *hugs*" [X Link](https://x.com/ollama/status/2015978530778317142) 2026-01-27T02:41Z 113.8K followers, 16.3K engagements

"Documentation: https://docs.ollama.com/integrations/clawdbot" [X Link](https://x.com/ollama/status/2015980566114664487) 2026-01-27T02:49Z 114.3K followers, 29.8K engagements

"Yes. Make sure the model has good agentic capabilities and the context length is long (we actually recommend 32k context length minimum; 64k preferred). Many of the times the models are bigger in size because of that. We are bullish on the future models that are coming though https://twitter.com/i/web/status/2015994623697772597" [X Link](https://x.com/ollama/status/2015994623697772597) 2026-01-27T03:45Z 113.8K followers, [----] engagements

"@pseudoanomaly @parthsareen it makes configuring clawdbot easier when using it with Ollama" [X Link](https://x.com/ollama/status/2016037381674303693) 2026-01-27T06:35Z 113.8K followers, [--] engagements

"Model page: https://ollama.com/library/kimi-k2.5" [X Link](https://x.com/ollama/status/2016086376262123795) 2026-01-27T09:50Z 114.2K followers, 19K engagements

"@JovKit yes moltbot" [X Link](https://x.com/ollama/status/2016209244895510544) 2026-01-27T17:58Z 113.8K followers, [----] engagements

"@omniharmonic depending on how much memory you have; the past two weeks we are seeing good growth on GLM [---] / GLM [---] Flash.
There is Ollama's cloud where you can give the models a try at their full context length (even on the free tier)" [X Link](https://x.com/ollama/status/2016216364034535524) 2026-01-27T18:26Z 114.2K followers, [----] engagements

"@VeasMc It means we host the model for you" [X Link](https://x.com/ollama/status/2016258676634660886) 2026-01-27T21:15Z 113.8K followers, [----] engagements

"@SterlingCooley @thdxr you can use OpenCode with Ollama using Kimi K2.5: ollama launch opencode" [X Link](https://x.com/ollama/status/2016292193561149477) 2026-01-27T23:28Z 114.2K followers, [----] engagements

"@zebassembly if you have Ollama: ollama launch opencode --model kimi-k2.5:cloud" [X Link](https://x.com/ollama/status/2016326395824427277) 2026-01-28T01:44Z 114.2K followers, 36.5K engagements

"Win a golden ticket to NVIDIA GTC Ollama 🤝 NVIDIA ❤ Want a chance to attend #NVIDIAGTC? We're partnering with our GTC community to give away Golden Tickets 🎫 including: ✅ GTC Conference pass ✅ VIP seating at NVIDIA CEO Jensen Huang's keynote ✅ NVIDIA DGX Spark ✅ Exclusive Happy Hour at NVIDIA Headquarters ✅ GTC Training Lab https://t.co/P1v1CydCet" [X Link](https://x.com/ollama/status/2016406954646872211) 2026-01-28T07:04Z 114.2K followers, 26.1K engagements

"Here is how to enter: Share a cool project you've built with Ollama and/or with open models. Please tag us @ollama and include #NVIDIAGTC. We will be sharing this on other social / Discord as well.
Submit by Sunday February 15th [----] https://developer.nvidia.com/gtc-golden-ticket-contest" [X Link](https://x.com/ollama/status/2016611276513038794) 2026-01-28T20:36Z 114.3K followers, [----] engagements

"@Sable_Project @openclaw 🫡" [X Link](https://x.com/ollama/status/2016669318839411093) 2026-01-29T00:26Z 114.2K followers, [----] engagements

".@allen_ai team has made Sera available on Ollama ollama run nishtahir/sera Introducing Ai2 Open Coding Agents, starting with SERA, our first-ever coding models. Fast accessible agents (8B-32B) that adapt to any repo including private codebases. Train a powerful specialized agent for as little as $400 & it works with Claude Code out of the box. 🧵 https://t.co/dor94O62B9" [X Link](https://x.com/ollama/status/2016696359068586390) 2026-01-29T02:14Z 114.2K followers, 27.6K engagements

"@ToNYD2WiLD @openclaw Sorry to hear this. May I ask how you are running it, and is it with the latest Ollama? Would love to help you troubleshoot" [X Link](https://x.com/ollama/status/2017437845070463273) 2026-01-31T03:20Z 114.3K followers, [----] engagements

"@hasiniiisphere What do you plan to use it for?" [X Link](https://x.com/ollama/status/2017457990866244011) 2026-01-31T04:40Z 114.3K followers, 13K engagements

"@ivanfioravanti ❤❤❤" [X Link](https://x.com/ollama/status/2017707441564307939) 2026-01-31T21:11Z 113.8K followers, [----] engagements

"@rorynotsorry @MervinPraison @openclaw Hey, I'm so sorry to hear this.
May I ask how you are setting it up? Which version of Ollama are you running, and is this the latest OpenClaw version?" [X Link](https://x.com/ollama/status/2018182831755726955) 2026-02-02T04:40Z 114.1K followers, [---] engagements

"@moikapy @openclaw Hmm. So sorry about this. I just set up openclaw with ollama using kimi k2.5 in Ollama's cloud. Are you seeing any errors, or just no response?" [X Link](https://x.com/ollama/status/2018183161541202164) 2026-02-02T04:42Z 114.3K followers, [----] engagements

"@VikiVirgon So sorry about this. We are working on addressing this" [X Link](https://x.com/ollama/status/2018234018265772049) 2026-02-02T08:04Z 113.8K followers, [----] engagements

"@TradesGMR @openclaw Hey! Is this the ollama provider or the nvidia hosted Kimi 2.5?" [X Link](https://x.com/ollama/status/2018904093335367753) 2026-02-04T04:26Z 114.2K followers, [---] engagements

"@TradesGMR @openclaw are you seeing problems with Ollama's Kimi K2.5?" [X Link](https://x.com/ollama/status/2018916406184550430) 2026-02-04T05:15Z 114.3K followers, [--] engagements

"@vox_maxed @kellypeilinchan @openclaw you can try it with Ollama's free tier of cloud models too. I've been hearing good feedback from users with Kimi K2.5" [X Link](https://x.com/ollama/status/2018951632835240161) 2026-02-04T07:35Z 113.8K followers, [---] engagements

"@_andreantonelli @Alibaba_Qwen @Ali_TongyiLab Thanks for reporting. Looking into this" [X Link](https://x.com/ollama/status/2019166311100657855) 2026-02-04T21:48Z 114.1K followers, [--] engagements

"@ucefkh So sorry for this.
May I ask which model you are trying to download? We've seen some folks having different versions of ollama installed; via the CLI, that's always used" [X Link](https://x.com/ollama/status/2019453081147019756) 2026-02-05T16:48Z 114.2K followers, [--] engagements

"@ucefkh qwen [--] coder next? Are you using the pre-release (0.15.5) from: https://github.com/ollama/ollama/releases" [X Link](https://x.com/ollama/status/2019493909907538341) 2026-02-05T19:30Z 114.2K followers, [--] engagements

"@MervinPraison @homelaber It's surprisingly good if you use Ollama's cloud models like Kimi K2.5 with the tools. More usage too. Of course the local hardware will catch up too - and better models" [X Link](https://x.com/ollama/status/2019585197851963846) 2026-02-06T01:33Z 114.2K followers, [---] engagements

"@homelaber @MervinPraison There is a free tier. It's very generous but of course with limits, since it's only for playing with the models to see what it's like. If you are not satisfied, happy to refund you" [X Link](https://x.com/ollama/status/2019600912004902974) 2026-02-06T02:35Z 114.2K followers, [---] engagements

"@ambushalgorithm @openclaw glad to see it working well for you" [X Link](https://x.com/ollama/status/2019636517057228868) 2026-02-06T04:57Z 114.3K followers, [----] engagements

"@molfly @ambushalgorithm @openclaw This is how we currently view it" [X Link](https://x.com/ollama/status/2019827474755100841) 2026-02-06T17:36Z 114.3K followers, [--] engagements

"RT @JustinLin610: a small coder can be your local companion for building and ollama should be a good choice for it" [X Link](https://x.com/anyuser/status/2020212562621583715) 2026-02-07T19:06Z 114.4K followers, [--] engagements

"@juanmartin_gs @JulianGoldieSEO If you use Ollama's cloud models you can use a much smaller computer.
If you have a need to be fully offline, then yes, you do need a fast computer" [X Link](https://x.com/ollama/status/2020245408807809386) 2026-02-07T21:16Z 114.3K followers, [--] engagements

"@callmenickjames Hey! Sorry for that experience running locally. It looks like you don't have enough memory to run the 30B model locally, so it went into CPU offloading mode and will become absurdly slow." [X Link](https://x.com/ollama/status/2020762108393124052) 2026-02-09T07:30Z 114.9K followers, [---] engagements

"@ogodlove10 sorry to hear this. What model are you running, and on what hardware? Would love to fix this" [X Link](https://x.com/ollama/status/2021055937940000957) 2026-02-10T02:57Z 114.9K followers, [--] engagements

"@SystemsPolymath @James_paul_dev give Ollama a try! The cloud models are faster if you don't have the compute. Free to use the free tier too" [X Link](https://x.com/ollama/status/2021093048487133409) 2026-02-10T05:25Z 114.9K followers, [--] engagements

"Model page: https://ollama.com/library/qwen3-coder-next" [X Link](https://x.com/ollama/status/2018989228848476178) 2026-02-04T10:05Z 115.8K followers, [----] engagements

"@fanofaliens @MiniMax_AI hey, sorry! Did you log into Ollama?" [X Link](https://x.com/ollama/status/2022075496469782653) 2026-02-12T22:29Z 115.8K followers, [---] engagements

"RT @zeddotdev: New edit prediction providers just dropped in Zed: @_inception_ai @sweepai @ollama and @GitHub Copilot NES.
We also simp" [X Link](https://x.com/ollama/status/2019113419472019829) 2026-02-04T18:18Z 115.6K followers, [--] engagements

"RT @JulianGoldieSEO: Want free AI without frying your laptop? Do this: Download Ollama. Pull Kimi K2.5 via terminal. Sign into Ollama" [X Link](https://x.com/ollama/status/2019588548148777135) 2026-02-06T01:46Z 115.6K followers, [--] engagements

"@francedot @steipete @trycua @openclaw" [X Link](https://x.com/ollama/status/2019590116709396922) 2026-02-06T01:53Z 115.5K followers, [----] engagements

"RT @parthsareen: i got ollama working with claude code teams and subagents using kimi-k2.5 (skip to 0:36 to see the fun stuff) i've been b" [X Link](https://x.com/ollama/status/2019649802867470537) 2026-02-06T05:50Z 115.6K followers, [--] engagements

"@710_eth @Zai_org ❤" [X Link](https://x.com/ollama/status/2021693650908885372) 2026-02-11T21:11Z 115.6K followers, [---] engagements

"Ollama now has Anthropic API compatibility. 🦙 This enables tools like Claude Code to be used with open-source models. Get started and learn more" [X Link](https://x.com/ollama/status/2012434308091224534) 2026-01-17T07:58Z 115.8K followers, 593.1K engagements

"Ollama is here with image generation ollama run x/z-image-turbo ollama run x/flux2-klein In the latest release we've added experimental support for @Ali_TongyiLab Z-image-turbo and @bfl_ml Flux.2 Klein (macOS, with Windows and Linux coming soon). See examples https://twitter.com/i/web/status/2013839484941463704" [X Link](https://x.com/ollama/status/2013839484941463704) 2026-01-21T05:02Z 115.8K followers, 93K engagements

"ollama launch is a new command in Ollama [----] to run Claude Code, Codex, Droid and OpenCode with Ollama. GLM [---] Flash is now optimized to use much less memory for longer context lengths (64k+). Need additional hardware? Ollama's cloud offers GLM [---] with full precision and context length.
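The Anthropic-compatibility and `ollama launch` posts above describe two ways to point Claude Code at an open model. A minimal sketch, assuming a local Ollama server on its default port and a model name used elsewhere in these posts; the exact environment-variable values are assumptions, so check Ollama's and Claude Code's documentation:

```shell
# Option 1: let Ollama wire up the coding tool itself (newer Ollama releases).
ollama launch claude --model qwen3-coder-next

# Option 2 (assumption): point Claude Code at Ollama's
# Anthropic-compatible endpoint by hand.
export ANTHROPIC_BASE_URL="http://localhost:11434"  # default local Ollama port
export ANTHROPIC_API_KEY="ollama"                   # placeholder; not validated locally
claude
```

Both paths require Ollama to be installed and running; the second also assumes the model has already been pulled.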
https://twitter.com/i/web/status/2014977150152224786" [X Link](https://x.com/ollama/status/2014977150152224786) 2026-01-24T08:22Z 115.8K followers, 207.3K engagements

"Build your own personal assistant with @openclaw and Ollama using your models ollama launch clawdbot Thank you for building something amazing @steipete https://twitter.com/i/web/status/2015980562847269048" [X Link](https://x.com/ollama/status/2015980562847269048) 2026-01-27T02:49Z 115.8K followers, 234.4K engagements

"Kimi K2.5 is on Ollama's cloud ollama run kimi-k2.5:cloud You can connect it to Claude Code, Codex, OpenCode, Clawdbot and Droid via ollama launch: ollama launch claude --model kimi-k2.5:cloud 🔥 Meet Kimi K2.5 Open-Source Visual Agentic Intelligence. 🔹 Global SOTA on Agentic Benchmarks: HLE full set (50.2%) BrowseComp (74.9%) 🔹 Open-source SOTA on Vision and Coding: MMMU Pro (78.5%) VideoMMMU (86.6%) SWE-bench Verified (76.8%) 🔹 Code with Taste: turn chats https://t.co/wp6JZS47bN" [X Link](https://x.com/ollama/status/2016086374005538932) 2026-01-27T09:50Z 115.8K followers, 545.7K engagements

"🤯🤯🤯 @nvidia is partnering with Ollama to give one lucky member a chance to win a golden ticket to #NVIDIAGTC Includes: - NVIDIA GTC [----] ticket (in-person) - VIP seating for Jensen's keynote - a DGX Spark - GTC training lab pass - exclusive NVIDIA merchandise - exclusive happy hour at NVIDIA HQ How to enter: https://twitter.com/i/web/status/2016611273316978998" [X Link](https://x.com/ollama/status/2016611273316978998) 2026-01-28T20:36Z 115.8K followers, 56.4K engagements

"Make it easy to use @openclaw with Ollama launch to connect with local models that can run on your own device. [--]. Install OpenClaw [--].
ollama launch openclaw, select model(s) you want to use [--]. Configure the integrations you want (WhatsApp, Telegram, iMessage, Slack, Discord, etc.); see the linked thread for more instructions. Ollama's cloud can also directly connect. Give it a try if you don't have a powerful computer. It has a generous free tier https://twitter.com/i/web/status/2018244432546456044" [X Link](https://x.com/ollama/status/2018244432546456044) 2026-02-02T08:45Z 115.8K followers, 171.2K engagements

"@openclaw https://ollama.com/blog/openclaw" [X Link](https://x.com/ollama/status/2018245865408721320) 2026-02-02T08:51Z 115.8K followers, 13.4K engagements

"@chachakobe4er Let's go" [X Link](https://x.com/ollama/status/2018476905939566699) 2026-02-03T00:09Z 115.8K followers, [----] engagements

"@jietang @TeksEdge" [X Link](https://x.com/ollama/status/2018521235274211353) 2026-02-03T03:05Z 115.8K followers, [---] engagements

"ollama pull glm-ocr All local. You own your data. GLM-OCR delivers state-of-the-art performance for document understanding. Use it for recognizing text, tables and figures, or output to a specific JSON format. Drag and drop images into the terminal, script it, or access via Ollama's API. https://twitter.com/i/web/status/2018525802057396411" [X Link](https://x.com/ollama/status/2018525802057396411) 2026-02-03T03:23Z 115.8K followers, 162.2K engagements

"https://ollama.com/library/glm-ocr" [X Link](https://x.com/ollama/status/2018525804733575492) 2026-02-03T03:23Z 115.8K followers, [----] engagements

"@parthsareen @Zai_org Let's go" [X Link](https://x.com/ollama/status/2018527400137838907) 2026-02-03T03:30Z 115.8K followers, [----] engagements

"ollama run qwen3-coder-next Run Qwen3-Coder-Next completely free, locally on your computer.
It's built for coding agents and local development. Run it with Claude Code: ollama launch claude --config Requires latest 0.15.5 pre-release on GitHub. 64GB+ unified memory / VRAM recommended. Introducing Qwen3-Coder-Next, an open-weight LM built for coding agents & local development. What's new: Scaling agentic training: 800K verifiable tasks + executable envs. Efficiency/Performance Tradeoff: achieves strong results on SWE-Bench Pro with 80B total params and https://t.co/P7BmZwdaQ9" [X Link](https://x.com/ollama/status/2018989226230944197) 2026-02-04T10:05Z 115.8K followers, 189.4K engagements

"Use with other tools like Codex, OpenCode and Droid with ollama launch https://ollama.com/blog/launch" [X Link](https://x.com/ollama/status/2018989230211334495) 2026-02-04T10:05Z 115.8K followers, [----] engagements

"@parthsareen 🤯" [X Link](https://x.com/ollama/status/2019649841975161280) 2026-02-06T05:50Z 115.8K followers, [---] engagements

"🤯 Wow! In one prompt, Qwen3-Coder-Next generated a fully working flappy bird game in HTML. (0:05) Claude Code with Qwen3-Coder-Next (0:26) Shows the game running. Run it fully locally: ollama pull qwen3-coder-next Ollama's cloud if you can't run it locally: ollama pull qwen3-coder-next:cloud Try launching it with Claude Code using ollama launch (link to play 🧵) So cool @Alibaba_Qwen @Ali_TongyiLab @JustinLin610 https://twitter.com/i/web/status/2019665258864939080" [X Link](https://x.com/ollama/status/2019665258864939080) 2026-02-06T06:51Z 115.8K followers, 115K engagements

"Try the game (single prompt, no edits) https://files.ollama.com/flappy-bird.html" [X Link](https://x.com/ollama/status/2019665260890984522) 2026-02-06T06:51Z 115.8K followers, [----] engagements

"@mrok86 Yes! It will. Try it. I was just using it. Now I was also testing Ollama's cloud with that model (so I just recorded while testing).
Local isn't slow on my M4 Max 128GB MacBook" [X Link](https://x.com/ollama/status/2019670228977045728) 2026-02-06T07:11Z 115.8K followers, [----] engagements

"@ZanyMan_e Sorry! There is a free tier of Ollama's cloud with that model too. A little too excited" [X Link](https://x.com/ollama/status/2020950050042765465) 2026-02-09T19:56Z 115.8K followers, [----] engagements

"@itzlassse @OpenAI @GeminiApp @Kimi_Moonshot @claudeai will you open source it? Been searching for a great chat app" [X Link](https://x.com/ollama/status/2021522929901879583) 2026-02-11T09:53Z 115.8K followers, [----] engagements

"@itzlassse @OpenAI @GeminiApp @Kimi_Moonshot @claudeai" [X Link](https://x.com/ollama/status/2021533334992736490) 2026-02-11T10:34Z 115.8K followers, [---] engagements

"❤ GLM-5 is on Ollama's cloud. It's free to start, with higher limits available on the paid plans. ollama run glm-5:cloud It's fast. You can connect it to Claude Code, Codex, OpenCode, OpenClaw via ollama launch Claude: ollama launch claude --model glm-5:cloud Codex: ollama launch codex --model glm-5:cloud Introducing GLM-5: From Vibe Coding to Agentic Engineering. GLM-5 is built for complex systems engineering and long-horizon agentic tasks. Compared to GLM-4.5 it scales from 355B params (32B active) to 744B (40B active) with pre-training data growing from 23T to 28.5T tokens."
[X Link](https://x.com/ollama/status/2021667631405674845) 2026-02-11T19:28Z 115.8K followers, 170K engagements

"@Zai_org Model page with more ollama launch commands: https://ollama.com/library/glm-5" [X Link](https://x.com/ollama/status/2021669404678009337) 2026-02-11T19:35Z 115.8K followers, [----] engagements

"@gabeciii @Zai_org can you try: ollama pull glm-5:cloud May I ask what version of ollama you are running? (if you do ollama -v)" [X Link](https://x.com/ollama/status/2021673173243019376) 2026-02-11T19:50Z 115.8K followers, [----] engagements

"@bygregorr @Zai_org Let's build 💪" [X Link](https://x.com/ollama/status/2021688470251467186) 2026-02-11T20:51Z 115.8K followers, [----] engagements

"@carlo_taleon @Zai_org No, sorry! We are rapidly increasing capacity right now" [X Link](https://x.com/ollama/status/2021695848585859483) 2026-02-11T21:20Z 115.8K followers, [----] engagements

"We are working on increasing the capacity on Ollama's cloud. Sorry for the wait" [X Link](https://x.com/ollama/status/2021724373829243377) 2026-02-11T23:13Z 115.8K followers, 142.1K engagements

"@vibehide That's what Ollama supports. For this model, not everyone has 804GB of VRAM. Hoping for local hardware to be much more performant in the years to come" [X Link](https://x.com/ollama/status/2021729245823648013) 2026-02-11T23:33Z 115.8K followers, [----] engagements

"@tom_doerr free to start! Give it a try. Although fighting some capacity issues for GLM [--] right now. Sorry in advance about that" [X Link](https://x.com/ollama/status/2021729401847607642) 2026-02-11T23:33Z 115.8K followers, [----] engagements

"@tonysimons_ @jackccrawford Open for the win ❤" [X Link](https://x.com/ollama/status/2021740318622638458) 2026-02-12T00:17Z 115.8K followers, [--] engagements

"@moikapy Not nerfing.
Let's go! Support open source and open models ❤❤❤" [X Link](https://x.com/ollama/status/2021752422373241239) 2026-02-12T01:05Z 115.8K followers, [---] engagements

"@elliot_solution @jackccrawford" [X Link](https://x.com/ollama/status/2021752970040561911) 2026-02-12T01:07Z 115.8K followers, [--] engagements

"@RhysSullivan" [X Link](https://x.com/ollama/status/2021761029823177002) 2026-02-12T01:39Z 115.8K followers, [---] engagements

"@cobrax91310 sorry, no easy filters, since it's so dependent on context length as well. We show you the file sizes so you can use them to estimate how much VRAM is required + overhead for larger contexts. Example for OpenAI's gpt-oss models" [X Link](https://x.com/ollama/status/2021761793471701418) 2026-02-12T01:42Z 115.8K followers, [--] engagements

"@jackrudenko @jackccrawford are you hitting rate limits? It's more generous than most plans available. Sorry about the experience if so. Would love to understand how you've hit the limits" [X Link](https://x.com/ollama/status/2021773884458959097) 2026-02-12T02:30Z 115.8K followers, [--] engagements

"@iqbalabd so sorry! Which models were you hitting [---] errors on?
Will fix" [X Link](https://x.com/ollama/status/2021799891538948551) 2026-02-12T04:13Z 115.8K followers, [---] engagements

"@iHODLshit ❤" [X Link](https://x.com/ollama/status/2021800209190125591) 2026-02-12T04:15Z 115.8K followers, [---] engagements

"@SalenoXP @tom_doerr You can use Claude Code with Ollama. Ollama focuses on integrating models well with your tools, whether that's coding or non-coding" [X Link](https://x.com/ollama/status/2021823646327533934) 2026-02-12T05:48Z 115.8K followers, [---] engagements

"@moikapy Thank you for the support" [X Link](https://x.com/ollama/status/2021824681620262916) 2026-02-12T05:52Z 115.8K followers, [---] engagements

"RT @cloudxdev: Btw ollama cloud is a great price/performance tool. You can setup models to your favorite coding CLI ollama launch claude -" [X Link](https://x.com/ollama/status/2021833303846695146) 2026-02-12T06:26Z 115.7K followers, [--] engagements

"@alexvinidiktov @Zai_org Sorry about this. It means you currently don't have the latest Ollama CLI (in the future it'll prompt to download the model if you don't have it). Can you do: ollama pull glm-5:cloud Sorry again" [X Link](https://x.com/ollama/status/2021844706184012071) 2026-02-12T07:11Z 115.8K followers, [---] engagements

"@pseudoanomaly huh? Which models?" [X Link](https://x.com/ollama/status/2021861694268473462) 2026-02-12T08:19Z 115.8K followers, 17.4K engagements

"GLM [--] on Ollama's cloud has increased capacity now and higher speed. Full-sized model to use with your tools ollama pull glm-5:cloud Claude: ollama launch claude --model glm-5:cloud OpenClaw: ollama launch openclaw --model glm-5:cloud *Pelican made by GLM-5 on Ollama. We are working on increasing the capacity on Ollama's cloud. Sorry for the wait https://t.co/aYqh40oSVH" [X Link](https://x.com/ollama/status/2021862260533006680) 2026-02-12T08:21Z 115.8K followers, 18.5K engagements

"@kavindpadi @jackccrawford @openclaw Yes we do. Limits are there to prevent abuse and we try to target specific use cases. If you are doing heavy agentic workloads you may need the $100 plan. All the tiers are more than generous and you can easily start from the $20 plan to see if it's enough" [X Link](https://x.com/ollama/status/2021863324389097516) 2026-02-12T08:25Z 115.8K followers, [---] engagements

"Starting [----] we allow higher context lengths. You can use the app to decrease the context length for less memory usage. This is to enable agentic workloads. In the future we will move to dynamic context lengths. Sorry about the abrasive experience. In the settings, move the slider down for context length https://twitter.com/i/web/status/2021864373057175722" [X Link](https://x.com/ollama/status/2021864373057175722) 2026-02-12T08:30Z 115.8K followers, [----] engagements

"@kavindpadi @jackccrawford @openclaw" [X Link](https://x.com/ollama/status/2021864550560047541) 2026-02-12T08:30Z 115.8K followers, [--] engagements

"@mxacod I'm currently getting about [--] tokens / sec" [X Link](https://x.com/ollama/status/2021871789039587648) 2026-02-12T08:59Z 115.8K followers, [---] engagements

"MiniMax M2.5 is on Ollama's cloud ollama run minimax-m2.5:cloud Use MiniMax M2.5 with OpenCode, Claude Code, Codex, OpenClaw via ollama launch OpenCode: ollama launch opencode --model minimax-m2.5:cloud Claude: ollama launch claude --model minimax-m2.5:cloud Introducing M2.5, an open-source frontier model designed for real-world productivity. - SOTA performance at coding (SWE-Bench Verified 80.2%), search (BrowseComp 76.3%), agentic tool-calling (BFCL 76.8%) & office work.
- Optimized for efficient execution, 37% faster at complex https://t.co/UwiKzzQNG8" [X Link](https://x.com/ollama/status/2021995903381336501) 2026-02-12T17:12Z 115.8K followers, 38.9K engagements

"@bigsk1_com May I ask which version of Ollama you are on? We had this problem a few versions ago. Should be fixed, but will take a look to make sure it's not a regression. Either way, so sorry. That's a jarring experience" [X Link](https://x.com/ollama/status/2022005209501954200) 2026-02-12T17:49Z 115.8K followers, [---] engagements

"@SkylerMiao7 🫡" [X Link](https://x.com/ollama/status/2022006002699415744) 2026-02-12T17:52Z 115.8K followers, [---] engagements

"@Zai_org @MiniMax_AI 🫡🫡🫡" [X Link](https://x.com/ollama/status/2022006212645318915) 2026-02-12T17:53Z 115.8K followers, [---] engagements

"@robinebers Try it on Ollama's cloud" [X Link](https://x.com/ollama/status/2022010821807898930) 2026-02-12T18:12Z 115.8K followers, [---] engagements

"@moikapy @gchiang With Kimi you get to code with vision/image support. M2.5 is fast, focused on a lot of Word/Excel document support and coding. Everyone has different use cases and preferences for different models. Try them to see which one you like" [X Link](https://x.com/ollama/status/2022013368639619357) 2026-02-12T18:22Z 115.8K followers, [--] engagements

"@robinebers" [X Link](https://x.com/ollama/status/2022013568250786125) 2026-02-12T18:22Z 115.8K followers, [--] engagements

"@kavindpadi @jackccrawford @openclaw Yes. You will see consumption inside your profile on http://ollama.com" [X Link](https://x.com/ollama/status/2022014319727419878) 2026-02-12T18:25Z 115.8K followers, [--] engagements

"@JonathanFlrs89 @MarcJSchmidt Hey, this shouldn't be a problem on the latest Ollama releases. For earlier versions of Ollama, could you run: ollama pull kimi-k2.5:cloud This should resolve the issue.
So sorry about the experience" [X Link](https://x.com/ollama/status/2022016937942036772) 2026-02-12T18:36Z 115.8K followers, [--] engagements "β€ We are partnering with @MiniMax_AI to give Ollama users free usage of MiniMax M2.5 for the next couple of days ollama run minimax-m2.5:cloud Use MiniMax M2.5 with OpenCode Claude Code Codex OpenClaw via ollama launch OpenCode: ollama launch opencode --model minimax-m2.5:cloud Claude: ollama launch claude --model minimax-m2.5:cloud https://t.co/GsnRM0SQua https://t.co/GsnRM0SQua" [X Link](https://x.com/ollama/status/2022018134186791177) 2026-02-12T18:41Z 115.8K followers, 141.5K engagements "@jatinkrmalik @Kovid_R Which model we can reset for you. Try it with MiniMax M2.5 on Ollama cloud. Sorry about this experience though. So far free tier is meant to just be for chat to see how the model behaves" [X Link](https://x.com/ollama/status/2022022820499533959) 2026-02-12T18:59Z 115.8K followers, [--] engagements "RT @MiniMax_AI: Excited to partner with @ollama More access. More builders. 
More real-world AI with MiniMax M2.5 πͺ" [X Link](https://x.com/ollama/status/2022023187249385949) 2026-02-12T19:01Z 115.7K followers, [--] engagements "@MiniMax_AI πππ" [X Link](https://x.com/ollama/status/2022023466262966717) 2026-02-12T19:02Z 115.8K followers, [---] engagements "@siddhantparadox Give it a try" [X Link](https://x.com/ollama/status/2022024294243741851) 2026-02-12T19:05Z 115.8K followers, [--] engagements "@vickcodes let's go" [X Link](https://x.com/ollama/status/2022035464195355088) 2026-02-12T19:49Z 115.8K followers, [---] engagements "@ClawPhilSledge @MiniMax_AI Yes" [X Link](https://x.com/ollama/status/2022036333058043982) 2026-02-12T19:53Z 115.8K followers, [----] engagements "@shantanugoel @MiniMax_AI Did you log into ollama" [X Link](https://x.com/ollama/status/2022038731226853816) 2026-02-12T20:02Z 115.8K followers, [----] engagements "@KeridwenCodet @iqbalabd are you seeing the [---] errors right now for Gemma [--] 27B We are not seeing it currently on our monitoring" [X Link](https://x.com/ollama/status/2022043922894348426) 2026-02-12T20:23Z 115.8K followers, [--] engagements "@disier @MiniMax_AI π¦π¦π¦" [X Link](https://x.com/ollama/status/2022045293215461888) 2026-02-12T20:29Z 115.8K followers, [---] engagements "@vidhyarang π¦π¦π¦π¦π¦π¦" [X Link](https://x.com/ollama/status/2022052983069192675) 2026-02-12T20:59Z 115.8K followers, [---] engagements "@k_yarcev @MiniMax_AI Use it with Claude Code Or other tools" [X Link](https://x.com/ollama/status/2022057646367637816) 2026-02-12T21:18Z 115.8K followers, [---] engagements "@combif1am it's back Let's go. Don't wait. Ollama run it. Ollama launch it. 
Ollama love it" [X Link](https://x.com/ollama/status/2022060701934924076) 2026-02-12T21:30Z 115.8K followers, [--] engagements "@VikiVirgon πππ" [X Link](https://x.com/ollama/status/2022071222599921820) 2026-02-12T22:12Z 115.8K followers, [---] engagements "@coffeecup2020 @MiniMax_AI So sorry what are you seeing Target for us is about [---] tokens per second for this. ramping up with demand" [X Link](https://x.com/ollama/status/2022076095458423081) 2026-02-12T22:31Z 115.8K followers, [---] engagements "@soyhenryxyz @MiniMax_AI Yes these are all based on precisions as directed by the model labs. No additional quantizations" [X Link](https://x.com/ollama/status/2022077681354125650) 2026-02-12T22:37Z 115.8K followers, [---] engagements "@MigthtyMaximus @MiniMax_AI @grok No just to confirm when using cloud-hosted models we process your prompts and responses to provide the service but do not store or log that content and never train on it. We don't sell your data. You can delete your account anytime" [X Link](https://x.com/ollama/status/2022078105159106928) 2026-02-12T22:39Z 115.8K followers, [--] engagements "@vennelacheekati @MiniMax_AI If you are using an older version of Ollama you may have to download the model metadata first: ollama pull minimax-m2.5:cloud could you try again So sorry for the issue you are running into" [X Link](https://x.com/ollama/status/2022079660159643675) 2026-02-12T22:45Z 115.8K followers, [---] engagements "@soyhenryxyz @MiniMax_AI yes they are" [X Link](https://x.com/ollama/status/2022079806272335901) 2026-02-12T22:46Z 115.8K followers, [--] engagements "@geekbb @berryxia π€¦β I thought I could get away by posting a new post. Either way support open models Let's go πππ Now instead of trying one model try two" [X Link](https://x.com/ollama/status/2022081737187356867) 2026-02-12T22:53Z 115.8K followers, [--] engagements "@rachgranville sorry about that. 
How are you using it so we can help troubleshoot this" [X Link](https://x.com/ollama/status/2022083666235863277) 2026-02-12T23:01Z 115.8K followers, [--] engagements "@wucax @MiniMax_AI ollama launch claude --config" [X Link](https://x.com/ollama/status/2022100150102155750) 2026-02-13T00:06Z 115.8K followers, [---] engagements "@yi_xin32482 https://x.com/ollama/status/2022018134186791177s=20 β€ We are partnering with @MiniMax_AI to give Ollama users free usage of MiniMax M2.5 for the next couple of days ollama run minimax-m2.5:cloud Use MiniMax M2.5 with OpenCode Claude Code Codex OpenClaw via ollama launch OpenCode: ollama launch opencode --model https://t.co/bHVK37NEqD https://x.com/ollama/status/2022018134186791177s=20 β€ We are partnering with @MiniMax_AI to give Ollama users free usage of MiniMax M2.5 for the next couple of days ollama run minimax-m2.5:cloud Use MiniMax M2.5 with OpenCode Claude Code Codex" [X Link](https://x.com/ollama/status/2022123698426458461) 2026-02-13T01:40Z 115.8K followers, [--] engagements Limited data mode. Full metrics available with subscription: lunarcrush.com/pricing
Top posts by engagements in the last [--] hours
"Ollama v0.11.7 is available with DeepSeek v3.1 support. You can run it locally with all its features like hybrid thinking. This works across Ollama's new app CLI API and SDKs. Ollama's Turbo mode that's in preview has also been updated to support the model"
X Link 2025-08-26T22:04Z 114.3K followers, 45.2K engagements
"The latest Android Studio has Ollama support. Designed to give you more control flexibility & agentic experiences the @AndroidStudio Otter [--] Feature Drop is now stable https://t.co/oNMdAUrE2W From Agent Mode conversation threads and choosing any AI model to automating UI teststhis release helps you build smarter not https://t.co/SwNX24oFe5 Designed to give you more control flexibility & agentic experiences the @AndroidStudio Otter [--] Feature Drop is now stable https://t.co/oNMdAUrE2W From Agent Mode conversation threads and choosing any AI model to automating UI teststhis release helps you"
X Link 2026-01-16T19:39Z 114.3K followers, 48K engagements
"ollama run translategemma TranslateGemma is available on Ollama. Now you can use it in apps to translate between [--] languages. Note it requires a specific prompting format πππ Were releasing TranslateGemma a new family of open translation models with support for [--] languages. π Available in 4B 12B and 27B parameter sizes theyre designed for efficiency without sacrificing quality. https://t.co/SRJzCOAKyG Were releasing TranslateGemma a new family of open translation models with support for [--] languages. π Available in 4B 12B and 27B parameter sizes theyre designed for efficiency without"
X Link 2026-01-16T23:34Z 114.2K followers, 123.9K engagements
"@m_koido ollama launch config Two dashes"
X Link 2026-01-26T02:14Z 113.8K followers, [---] engagements
"@steipete hugs"
X Link 2026-01-27T02:41Z 113.8K followers, 16.3K engagements
"Documentation: https://docs.ollama.com/integrations/clawdbot https://docs.ollama.com/integrations/clawdbot"
X Link 2026-01-27T02:49Z 114.3K followers, 29.8K engagements
"yes Make sure the model has good agentic capabilities and the context length is long (we actually recommend 32k context length minimum; 64k preferred). many of the times the models are bigger in size because of that. We are bullish on the future models that are coming though https://twitter.com/i/web/status/2015994623697772597 https://twitter.com/i/web/status/2015994623697772597"
X Link 2026-01-27T03:45Z 113.8K followers, [----] engagements
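The 32k-minimum / 64k-preferred guidance above can also be requested per call; below is a minimal sketch of a request body for Ollama's `/api/chat` endpoint, where the `num_ctx` option sets the context length. The model name and prompt are illustrative.

```python
import json

# Sketch: build an /api/chat request body with an explicit context length.
# num_ctx is Ollama's per-request context-length option; 32768 follows the
# 32k-minimum guidance above. Model name and prompt are illustrative.
def chat_request(model: str, prompt: str, num_ctx: int = 32768) -> str:
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "options": {"num_ctx": num_ctx},
    }
    return json.dumps(body)

payload = json.loads(chat_request("kimi-k2.5", "plan my refactor"))
print(payload["options"]["num_ctx"])  # 32768
```

POSTed to a running Ollama server this caps the KV cache at the requested length; the app's context-length slider mentioned later in the feed controls the same knob globally.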
"@pseudoanomaly @parthsareen it makes configuring clawdbot easier when using it with Ollama"
X Link 2026-01-27T06:35Z 113.8K followers, [--] engagements
"Model page: https://ollama.com/library/kimi-k2.5 https://ollama.com/library/kimi-k2.5"
X Link 2026-01-27T09:50Z 114.2K followers, 19K engagements
"@JovKit π yes moltbot"
X Link 2026-01-27T17:58Z 113.8K followers, [----] engagements
"@omniharmonic depending on how much memory you have; the past two weeks we are seeing good growth on GLM [---] / GLM [---] Flash. There is Ollama's cloud where you can give the models a try at their full context length (even on the free tier)"
X Link 2026-01-27T18:26Z 114.2K followers, [----] engagements
"@VeasMc It means we host the model for you"
X Link 2026-01-27T21:15Z 113.8K followers, [----] engagements
"@SterlingCooley @thdxr you can use OpenCode with Ollama using Kimi K2.5: ollama launch opencode"
X Link 2026-01-27T23:28Z 114.2K followers, [----] engagements
"@zebassembly if you have Ollama: ollama launch opencode --model kimi-k2.5:cloud"
X Link 2026-01-28T01:44Z 114.2K followers, 36.5K engagements
"Win a golden ticket to NVIDIA GTC Ollama π€ NVIDIA β€ Want a chance to attend #NVIDIAGTC We're partnering with our GTC community to give away Golden Tickets π« including: β
GTC Conference pass β
VIP seating at NVIDIA CEO Jensen Huangs keynote β
NVIDIA DGX Spark β
Exclusive Happy Hour at NVIDIA Headquarters β
GTC Training Lab https://t.co/P1v1CydCet Want a chance to attend #NVIDIAGTC We're partnering with our GTC community to give away Golden Tickets π« including: β
GTC Conference pass β
VIP seating at NVIDIA CEO Jensen Huangs keynote β
NVIDIA DGX Spark β
Exclusive Happy Hour at NVIDIA Headquarters β
GTC"
X Link 2026-01-28T07:04Z 114.2K followers, 26.1K engagements
"Here is how to enter Share a cool project you've built with Ollama and/or with open models Please tag us @ollama and include #NVIDIAGTC We will be sharing this on other social / Discord as well. Submit by Sunday February 15th [----] https://developer.nvidia.com/gtc-golden-ticket-contest https://developer.nvidia.com/gtc-golden-ticket-contest https://developer.nvidia.com/gtc-golden-ticket-contest https://developer.nvidia.com/gtc-golden-ticket-contest"
X Link 2026-01-28T20:36Z 114.3K followers, [----] engagements
"@Sable_Project @openclaw π«‘"
X Link 2026-01-29T00:26Z 114.2K followers, [----] engagements
".@allen_ai team has made Sera available on Ollama ollama run nishtahir/sera Introducing Ai2 Open Coding Agents, starting with SERA our first-ever coding models. Fast accessible agents (8B-32B) that adapt to any repo including private codebases. Train a powerful specialized agent for as little as $400 & it works with Claude Code out of the box. π§΅ https://t.co/dor94O62B9"
X Link 2026-01-29T02:14Z 114.2K followers, 27.6K engagements
"@ToNYD2WiLD @openclaw Sorry to hear this. May I ask how you are running it and is it with the latest Ollama Would love to help you troubleshoot"
X Link 2026-01-31T03:20Z 114.3K followers, [----] engagements
"@hasiniiisphere What do you plan to use it for"
X Link 2026-01-31T04:40Z 114.3K followers, 13K engagements
"@ivanfioravanti β€β€β€"
X Link 2026-01-31T21:11Z 113.8K followers, [----] engagements
"@rorynotsorry @MervinPraison @openclaw Hey I'm so sorry to hear this. May I ask how you are setting it up Which version of Ollama are you running and is this the latest OpenClaw version"
X Link 2026-02-02T04:40Z 114.1K followers, [---] engagements
"@moikapy @openclaw Hmm. So sorry about this. I just set up openclaw with ollama using kimi k2.5 in Ollama's cloud. Are you seeing any errors or just no response"
X Link 2026-02-02T04:42Z 114.3K followers, [----] engagements
"@VikiVirgon So sorry about this. We are working on addressing this"
X Link 2026-02-02T08:04Z 113.8K followers, [----] engagements
"@TradesGMR @openclaw Hey Is this the ollama provider or the nvidia hosted Kimi 2.5"
X Link 2026-02-04T04:26Z 114.2K followers, [---] engagements
"@TradesGMR @openclaw are you seeing problems with Ollama's Kimi K2.5"
X Link 2026-02-04T05:15Z 114.3K followers, [--] engagements
"@vox_maxed @kellypeilinchan @openclaw you can try it with Ollama's free tier of cloud models too. I've been hearing good feedback from users with Kimi K2.5"
X Link 2026-02-04T07:35Z 113.8K followers, [---] engagements
"@_andreantonelli @Alibaba_Qwen @Ali_TongyiLab Thanks for reporting. Looking into this"
X Link 2026-02-04T21:48Z 114.1K followers, [--] engagements
"@ucefkh So sorry for this. May I ask which model you are trying to download Weve seen some folks having different versions of ollama installed via the CLI thats always used"
X Link 2026-02-05T16:48Z 114.2K followers, [--] engagements
"@ucefkh qwen [--] coder next Are you using the pre-release from: (0.15.5) https://github.com/ollama/ollama/releases https://github.com/ollama/ollama/releases"
X Link 2026-02-05T19:30Z 114.2K followers, [--] engagements
"@MervinPraison @homelaber It's surprisingly good if you use Ollama's cloud models like Kimi K2.5 with the tools. More usage too. Of course the local hardware will catch up too - and better models"
X Link 2026-02-06T01:33Z 114.2K followers, [---] engagements
"@homelaber @MervinPraison There is free tier Its very generous but of course with limits since its only for playing with the models to see what its like. If you are not satisfied happy to refund you"
X Link 2026-02-06T02:35Z 114.2K followers, [---] engagements
"@ambushalgorithm @openclaw glad to see it working well for you π"
X Link 2026-02-06T04:57Z 114.3K followers, [----] engagements
"@molfly @ambushalgorithm @openclaw This is how we currently view it"
X Link 2026-02-06T17:36Z 114.3K followers, [--] engagements
"RT @JustinLin610: a small coder can be your local companion for building and ollama should bea good choice for it"
X Link 2026-02-07T19:06Z 114.4K followers, [--] engagements
"@juanmartin_gs @JulianGoldieSEO If you use Ollamas cloud models you can use a much smaller computer. If you have a need to be fully offline then yes you do need a fast computer"
X Link 2026-02-07T21:16Z 114.3K followers, [--] engagements
"@callmenickjames Hey Sorry for that experience running locally. It looks like you dont have enough memory to run the 30B model locally so it went into CPU offloading mode and will become absurdly slow. π"
X Link 2026-02-09T07:30Z 114.9K followers, [---] engagements
"@ogodlove10 sorry to hear this. What model are you running and on what hardware Would love to fix this"
X Link 2026-02-10T02:57Z 114.9K followers, [--] engagements
"@SystemsPolymath @James_paul_dev π give Ollama a try The cloud models are faster if you don't have the compute. Free to use the free tier too"
X Link 2026-02-10T05:25Z 114.9K followers, [--] engagements
"Model page: https://ollama.com/library/qwen3-coder-next https://ollama.com/library/qwen3-coder-next"
X Link 2026-02-04T10:05Z 115.8K followers, [----] engagements
"@fanofaliens @MiniMax_AI hey sorry Did you log into Ollama"
X Link 2026-02-12T22:29Z 115.8K followers, [---] engagements
"RT @zeddotdev: New edit prediction providers just dropped in Zed: @_inception_ai @sweepai @ollama and @GitHub Copilot NES. We also simp"
X Link 2026-02-04T18:18Z 115.6K followers, [--] engagements
"RT @JulianGoldieSEO: Want free AI without frying your laptop Do this: Download Ollama Pull Kimi K2.5 via terminal Sign into Ollama"
X Link 2026-02-06T01:46Z 115.6K followers, [--] engagements
"@francedot @steipete @trycua @openclaw π"
X Link 2026-02-06T01:53Z 115.5K followers, [----] engagements
"RT @parthsareen: i got ollama working with claude code teams and subagents using kimi-k2.5 (skip to 0:36 to see the fun stuff) i've been b"
X Link 2026-02-06T05:50Z 115.6K followers, [--] engagements
"@710_eth @Zai_org β€"
X Link 2026-02-11T21:11Z 115.6K followers, [---] engagements
"Ollama now has Anthropic API compatibility. π¦ This enables tools like Claude Code to be used with open-source models. π Get started and learn more πππ"
X Link 2026-01-17T07:58Z 115.8K followers, 593.1K engagements
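For tools that speak the Anthropic API, the compatibility layer announced above means they can be pointed at a local Ollama server. A sketch using Claude Code's standard environment-variable overrides; the base URL assumes Ollama's default port, the exact endpoint may vary by Ollama version, and the model tag is illustrative:

```shell
# Sketch: route an Anthropic-API client at a local Ollama server.
# ANTHROPIC_BASE_URL / ANTHROPIC_MODEL are Claude Code's standard overrides;
# the endpoint Ollama exposes may differ by version (assumption, not spec).
export ANTHROPIC_BASE_URL="http://localhost:11434"
export ANTHROPIC_MODEL="qwen3-coder-next"
# then start: claude (or any other Anthropic-API tool)
```

The `ollama launch claude` command shown elsewhere in the feed wires up the equivalent configuration automatically.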
"Ollama is here with image generation ollama run x/z-image-turbo ollama run x/flux2-klein In the latest release we've added experimental support for @Ali_TongyiLab Z-image-turbo @bfl_ml Flux.2 Klein (macOS with Windows and Linux coming soon) See examples πππ https://twitter.com/i/web/status/2013839484941463704 https://twitter.com/i/web/status/2013839484941463704"
X Link 2026-01-21T05:02Z 115.8K followers, 93K engagements
"ollama launch is a new command in Ollama [----] to run Claude Code Codex Droid and OpenCode with Ollama GLM [---] Flash is now optimized to use much less memory for longer context lengths (64k+). Need additional hardware Ollama's cloud offers GLM [---] with full precision and context length. https://twitter.com/i/web/status/2014977150152224786 https://twitter.com/i/web/status/2014977150152224786"
X Link 2026-01-24T08:22Z 115.8K followers, 207.3K engagements
"Build your own personal assistant with @openclaw and Ollama using your models ollama launch clawdbot Thank you for building something amazing @steipete https://twitter.com/i/web/status/2015980562847269048 https://twitter.com/i/web/status/2015980562847269048"
X Link 2026-01-27T02:49Z 115.8K followers, 234.4K engagements
"πππ Kimi K2.5 is on Ollama's cloud ollama run kimi-k2.5:cloud You can connect it to Claude Code Codex OpenCode Clawdbot and Droid via ollama launch ollama launch claude --model kimi-k2.5:cloud π₯ Meet Kimi K2.5 Open-Source Visual Agentic Intelligence. πΉ Global SOTA on Agentic Benchmarks: HLE full set (50.2%) BrowseComp (74.9%) πΉ Open-source SOTA on Vision and Coding: MMMU Pro (78.5%) VideoMMMU (86.6%) SWE-bench Verified (76.8%) πΉ Code with Taste: turn chats https://t.co/wp6JZS47bN π₯ Meet Kimi K2.5 Open-Source Visual Agentic Intelligence. πΉ Global SOTA on Agentic Benchmarks: HLE full"
X Link 2026-01-27T09:50Z 115.8K followers, 545.7K engagements
"π€―π€―π€― @nvidia is partnering with Ollama to give one lucky member a chance to win a golden ticket to #NVIDIAGTC Includes: π NVIDIA GTC [----] ticket (in-person) π VIP seating for Jensen's keynote π an DGX Spark π GTC training lab pass π exclusive NVIDIA merchandise π exclusive happy hour at NVIDIA HQ π How to enter ππππ https://twitter.com/i/web/status/2016611273316978998 https://twitter.com/i/web/status/2016611273316978998"
X Link 2026-01-28T20:36Z 115.8K followers, 56.4K engagements
"Make it easy to use @openclaw with Ollama launch to connect with local models that can run on your own device. [--]. Install OpenClaw [--]. ollama launch openclaw select model(s) you want to use [--]. configure the integrations you want (WhatsApp Telegram iMessage Slack Discord etc.) πππ for more instructions Ollama's cloud can also directly connect Give it a try if you don't have a powerful computer. It has a generous free tier https://twitter.com/i/web/status/2018244432546456044 https://twitter.com/i/web/status/2018244432546456044"
X Link 2026-02-02T08:45Z 115.8K followers, 171.2K engagements
"@openclaw https://ollama.com/blog/openclaw https://ollama.com/blog/openclaw"
X Link 2026-02-02T08:51Z 115.8K followers, 13.4K engagements
"@chachakobe4er Let's go πππ"
X Link 2026-02-03T00:09Z 115.8K followers, [----] engagements
"@jietang @TeksEdge π"
X Link 2026-02-03T03:05Z 115.8K followers, [---] engagements
"ollama pull glm-ocr All local. You own your data. GLM-OCR delivers state-of-the-art performance for document understanding. Use it for recognizing text tables and figures or output to a specific JSON format. Drag and drop images into the terminal script it or access via Ollama's API. https://twitter.com/i/web/status/2018525802057396411 https://twitter.com/i/web/status/2018525802057396411"
X Link 2026-02-03T03:23Z 115.8K followers, 162.2K engagements
"https://ollama.com/library/glm-ocr https://ollama.com/library/glm-ocr"
X Link 2026-02-03T03:23Z 115.8K followers, [----] engagements
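The "script it or access via Ollama's API" route above can be sketched as a chat request carrying a base64-encoded image, which is how Ollama's chat API accepts images. The prompt text and stand-in bytes are illustrative:

```python
import base64

# Sketch: an /api/chat request body that sends a scanned page to glm-ocr.
# Ollama's chat API takes images as base64 strings on the user message.
# The prompt is illustrative; it mirrors the post's JSON-output use case.
def ocr_request(image_bytes: bytes,
                prompt: str = "Extract the text and tables as JSON") -> dict:
    return {
        "model": "glm-ocr",
        "messages": [{
            "role": "user",
            "content": prompt,
            "images": [base64.b64encode(image_bytes).decode("ascii")],
        }],
        "stream": False,
    }

req = ocr_request(b"fake-image-bytes")  # stand-in; read a real scan in practice
print(req["model"])  # glm-ocr
```

POSTing this body to a local server's `/api/chat` returns the model's transcription in the response message.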
"@parthsareen @Zai_org Let's go"
X Link 2026-02-03T03:30Z 115.8K followers, [----] engagements
"ollama run qwen3-coder-next Run Qwen3-Coder-Next completely free locally on your computer. It's built for coding agents and local development Run it with Claude Code: ollama launch claude --config Requires latest 0.15.5 pre-release on GitHub 64GB+ unified memory / VRAM recommended π IntroducingQwen3-Coder-Next an open-weight LM built for coding agents & local development. Whats new: π€ Scaling agentic training:800K verifiable tasks + executable envs π EfficiencyPerformance Tradeoff: achieves strong results on SWE-Bench Pro with 80B total params and https://t.co/P7BmZwdaQ9 π"
X Link 2026-02-04T10:05Z 115.8K followers, 189.4K engagements
"Use with other tools like Codex OpenCode and Droid with ollama launch https://ollama.com/blog/launch https://ollama.com/blog/launch"
X Link 2026-02-04T10:05Z 115.8K followers, [----] engagements
"@parthsareen π€―"
X Link 2026-02-06T05:50Z 115.8K followers, [---] engagements
"π€― Wow In one prompt Qwen3-Coder-Next generated a fully working flappy birds game in HTML. (0:05) Claude Code with Qwen3-Coder-Next (0:26) Shows the game running Run it fully locally: ollama pull qwen3-coder-next Ollama's cloud if you can't run it locally: ollama pull qwen3-coder-next:cloud Try launching it with Claude Code using ollama launch (link to play π§΅) So cool @Alibaba_Qwen @Ali_TongyiLab @JustinLin610 https://twitter.com/i/web/status/2019665258864939080 https://twitter.com/i/web/status/2019665258864939080"
X Link 2026-02-06T06:51Z 115.8K followers, 115K engagements
"Try the game (single prompt no edits) https://files.ollama.com/flappy-bird.html https://files.ollama.com/flappy-bird.html"
X Link 2026-02-06T06:51Z 115.8K followers, [----] engagements
"@mrok86 Yes It will. Try it I was just using it. Now I was also testing Ollama's cloud with that model (so I just recorded while testing). Local isn't slow on my M4 Max 128GB MacBook"
X Link 2026-02-06T07:11Z 115.8K followers, [----] engagements
"@ZanyMan_e Sorry There is free Ollama's cloud with that model too. A little too excited π
"
X Link 2026-02-09T19:56Z 115.8K followers, [----] engagements
"@itzlassse @OpenAI @GeminiApp @Kimi_Moonshot @claudeai will you open source it Been searching for a great chat app"
X Link 2026-02-11T09:53Z 115.8K followers, [----] engagements
"@itzlassse @OpenAI @GeminiApp @Kimi_Moonshot @claudeai"
X Link 2026-02-11T10:34Z 115.8K followers, [---] engagements
"β€ GLM-5 is on Ollama's cloud It's free to start and with higher limits available on the paid plans. ollama run glm-5:cloud It's fast. You can connect it to Claude Code Codex OpenCode OpenClaw via ollama launch Claude: ollama launch claude --model glm-5:cloud Codex: ollama launch codex --model glm-5:cloud Introducing GLM-5: From Vibe Coding to Agentic Engineering GLM-5 is built for complex systems engineering and long-horizon agentic tasks. Compared to GLM-4.5 it scales from 355B params (32B active) to 744B (40B active) with pre-training data growing from 23T to 28.5T tokens."
X Link 2026-02-11T19:28Z 115.8K followers, 170K engagements
"@Zai_org Model page with more ollama launch commands: https://ollama.com/library/glm-5 https://ollama.com/library/glm-5"
X Link 2026-02-11T19:35Z 115.8K followers, [----] engagements
"@gabeciii @Zai_org can you try ollama pull glm-5:cloud may I ask what version of ollama you are running (if you do ollama -v )"
X Link 2026-02-11T19:50Z 115.8K followers, [----] engagements
"@bygregorr @Zai_org Let's build πͺ"
X Link 2026-02-11T20:51Z 115.8K followers, [----] engagements
"@carlo_taleon @Zai_org No sorry We are rapidly increasing capacity right now"
X Link 2026-02-11T21:20Z 115.8K followers, [----] engagements
"We are working on increasing the capacity on Ollama's cloud. Sorry for the wait"
X Link 2026-02-11T23:13Z 115.8K followers, 142.1K engagements
"@vibehide That's what Ollama supports For this model not everyone has 804GB of VRAM. π Hoping for local hardware to be much more performant in the years to come"
X Link 2026-02-11T23:33Z 115.8K followers, [----] engagements
"@tom_doerr free to start Give it a try. Although fighting some capacity issues for GLM [--] right now. Sorry in advance about that"
X Link 2026-02-11T23:33Z 115.8K followers, [----] engagements
"@tonysimons_ @jackccrawford Open for the win β€"
X Link 2026-02-12T00:17Z 115.8K followers, [--] engagements
"@moikapy Not nerfing. Lets go Support open source and open models β€β€β€"
X Link 2026-02-12T01:05Z 115.8K followers, [---] engagements
"@elliot_solution @jackccrawford πππ"
X Link 2026-02-12T01:07Z 115.8K followers, [--] engagements
"@RhysSullivan π€"
X Link 2026-02-12T01:39Z 115.8K followers, [---] engagements
"@cobrax91310 sorry no easy filters since it's so dependent on context length as well. We show you the file sizes so you can use it to estimate how much VRAM is required + overhead for larger contexts. Example for OpenAI's gpt-oss models"
X Link 2026-02-12T01:42Z 115.8K followers, [--] engagements
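The estimation approach described above (model file size plus overhead for larger contexts) can be sketched as rough arithmetic. The per-token KV-cache figure below is a hypothetical ballpark, not a measured value; the real number depends on layer count, attention dimensions, and cache quantization:

```python
# Sketch: rough memory estimate for running a model locally.
# Total ≈ weights file size + KV cache for the chosen context length.
# kv_bytes_per_token is a hypothetical ballpark (assumption, not a spec).
def estimate_vram_gb(file_size_gb: float, num_ctx: int,
                     kv_bytes_per_token: int = 160_000) -> float:
    kv_cache_gb = num_ctx * kv_bytes_per_token / 1e9
    return round(file_size_gb + kv_cache_gb, 1)

# e.g. a 13 GB weights file at a 32k context:
print(estimate_vram_gb(13.0, 32_768))  # 18.2
```

The cache term scales linearly with context, which is why the same model can fit comfortably at 8k yet spill into CPU offloading at 64k.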
"@jackrudenko @jackccrawford are you hitting at rate limits It's more generous than most plans available. Sorry about the experience if so. Would love to understand how you've hit the limits"
X Link 2026-02-12T02:30Z 115.8K followers, [--] engagements
"@iqbalabd π so sorry Which models were you hitting [---] errors. Will fix"
X Link 2026-02-12T04:13Z 115.8K followers, [---] engagements
"@iHODLshit β€"
X Link 2026-02-12T04:15Z 115.8K followers, [---] engagements
"@SalenoXP @tom_doerr You can use claude code with Ollama Ollama focuses on integrating models well with your tools whether that's coding or non-coding"
X Link 2026-02-12T05:48Z 115.8K followers, [---] engagements
"@moikapy Thank you for the support"
X Link 2026-02-12T05:52Z 115.8K followers, [---] engagements
"RT @cloudxdev: Btw ollama cloud is a great price/performance tool You can setup models to your favorite coding CLI ollama launch claude -"
X Link 2026-02-12T06:26Z 115.7K followers, [--] engagements
"@alexvinidiktov @Zai_org Sorry about this. It means you currently don't have the latest Ollama CLI (in the future it'll prompt to download the model if you don't have it) can you do: ollama pull glm-5:cloud sorry again"
X Link 2026-02-12T07:11Z 115.8K followers, [---] engagements
"@pseudoanomaly huh Which models"
X Link 2026-02-12T08:19Z 115.8K followers, 17.4K engagements
"GLM [--] on Ollama's cloud has increased capacity now and a higher speed Full sized model to use with your tools ollama pull glm-5:cloud Claude: ollama launch claude --model glm-5:cloud OpenClaw ollama launch openclaw --model glm-5:cloud *Pelican made by GLM-5 on Ollama We are working on increasing the capacity on Ollama's cloud. Sorry for the wait https://t.co/aYqh40oSVH We are working on increasing the capacity on Ollama's cloud. Sorry for the wait https://t.co/aYqh40oSVH"
X Link 2026-02-12T08:21Z 115.8K followers, 18.5K engagements
"@kavindpadi @jackccrawford @openclaw Yes we do. Limits are there to prevent abuse and we try to target specific use cases. If you are going heavy agentic workloads you may need the $100 plan; All the tiers are more than generous and you can easily start from the $20 plan to see if it's enough"
X Link 2026-02-12T08:25Z 115.8K followers, [---] engagements
"starting [----] we allow higher context length. You can use the app to decrease the context length for less memory usage. This is to enable agentic workloads. In the future we will move to dynamic context lengths. Sorry about the abrasive experience. In the settings move the slider down for context length https://twitter.com/i/web/status/2021864373057175722 https://twitter.com/i/web/status/2021864373057175722"
X Link 2026-02-12T08:30Z 115.8K followers, [----] engagements
"@kavindpadi @jackccrawford @openclaw"
X Link 2026-02-12T08:30Z 115.8K followers, [--] engagements
"@mxacod I'm currently getting about [--] tokens / sec"
X Link 2026-02-12T08:59Z 115.8K followers, [---] engagements
"MiniMax M2.5 is on Ollama's cloud ollama run minimax-m2.5:cloud Use MiniMax M2.5 with OpenCode Claude Code Codex OpenClaw via ollama launch OpenCode: ollama launch opencode --model minimax-m2.5:cloud Claude: ollama launch claude --model glm-5:cloud Introducing M2.5 an open-source frontier model designed for real-world productivity. - SOTA performance at coding (SWE-Bench Verified 80.2%) search (BrowseComp 76.3%) agentic tool-calling (BFCL 76.8%) & office work. - Optimized for efficient execution 37% faster at complex https://t.co/UwiKzzQNG8 Introducing M2.5 an open-source frontier model"
X Link 2026-02-12T17:12Z 115.8K followers, 38.9K engagements
"@bigsk1_com May I ask which version of Ollama you are on We had this problem a few versions ago. Should be fixed but will take a look to make sure it's not a regression. Either way so sorry. That's a jarring experience"
X Link 2026-02-12T17:49Z 115.8K followers, [---] engagements
"@SkylerMiao7 π«‘"
X Link 2026-02-12T17:52Z 115.8K followers, [---] engagements
"@Zai_org @MiniMax_AI π«‘π«‘π«‘"
X Link 2026-02-12T17:53Z 115.8K followers, [---] engagements
"@robinebers Try it on Ollamas cloud"
X Link 2026-02-12T18:12Z 115.8K followers, [---] engagements
"@moikapy @gchiang Kimi you get to code with vision/image support M2.5 is fast Focused on a lot of word/excel document support and coding. Everyone has different use cases and preference for different models. Try them to see which one you like"
X Link 2026-02-12T18:22Z 115.8K followers, [--] engagements
"@robinebers"
X Link 2026-02-12T18:22Z 115.8K followers, [--] engagements
"@kavindpadi @jackccrawford @openclaw Yes. You will see consumption inside your profile on http://ollama.com http://ollama.com"
X Link 2026-02-12T18:25Z 115.8K followers, [--] engagements
"@JonathanFlrs89 @MarcJSchmidt Hey this shouldn't be a problem on the latest Ollama releases. For earlier versions of Ollama could you run: ollama pull kimi-k2.5:cloud this should resolve the issue. So sorry about the experience"
X Link 2026-02-12T18:36Z 115.8K followers, [--] engagements
"β€ We are partnering with @MiniMax_AI to give Ollama users free usage of MiniMax M2.5 for the next couple of days ollama run minimax-m2.5:cloud Use MiniMax M2.5 with OpenCode Claude Code Codex OpenClaw via ollama launch OpenCode: ollama launch opencode --model minimax-m2.5:cloud Claude: ollama launch claude --model minimax-m2.5:cloud https://t.co/GsnRM0SQua https://t.co/GsnRM0SQua"
X Link 2026-02-12T18:41Z 115.8K followers, 141.5K engagements
"@jatinkrmalik @Kovid_R Which model we can reset for you. Try it with MiniMax M2.5 on Ollama cloud. Sorry about this experience though. So far free tier is meant to just be for chat to see how the model behaves"
X Link 2026-02-12T18:59Z 115.8K followers, [--] engagements
"RT @MiniMax_AI: Excited to partner with @ollama More access. More builders. More real-world AI with MiniMax M2.5 πͺ"
X Link 2026-02-12T19:01Z 115.7K followers, [--] engagements
"@MiniMax_AI πππ"
X Link 2026-02-12T19:02Z 115.8K followers, [---] engagements
"@siddhantparadox Give it a try"
X Link 2026-02-12T19:05Z 115.8K followers, [--] engagements
"@vickcodes let's go"
X Link 2026-02-12T19:49Z 115.8K followers, [---] engagements
"@ClawPhilSledge @MiniMax_AI Yes"
X Link 2026-02-12T19:53Z 115.8K followers, [----] engagements
"@shantanugoel @MiniMax_AI Did you log into ollama"
X Link 2026-02-12T20:02Z 115.8K followers, [----] engagements
"@KeridwenCodet @iqbalabd are you seeing the [---] errors right now for Gemma [--] 27B We are not seeing it currently on our monitoring"
X Link 2026-02-12T20:23Z 115.8K followers, [--] engagements
"@disier @MiniMax_AI π¦π¦π¦"
X Link 2026-02-12T20:29Z 115.8K followers, [---] engagements
"@vidhyarang π¦π¦π¦π¦π¦π¦"
X Link 2026-02-12T20:59Z 115.8K followers, [---] engagements
"@k_yarcev @MiniMax_AI Use it with Claude Code Or other tools"
X Link 2026-02-12T21:18Z 115.8K followers, [---] engagements
"@combif1am it's back Let's go. Don't wait. Ollama run it. Ollama launch it. Ollama love it"
X Link 2026-02-12T21:30Z 115.8K followers, [--] engagements
"@VikiVirgon πππ"
X Link 2026-02-12T22:12Z 115.8K followers, [---] engagements
"@coffeecup2020 @MiniMax_AI So sorry what are you seeing Target for us is about [---] tokens per second for this. ramping up with demand"
X Link 2026-02-12T22:31Z 115.8K followers, [---] engagements
"@soyhenryxyz @MiniMax_AI Yes these are all based on precisions as directed by the model labs. No additional quantizations"
X Link 2026-02-12T22:37Z 115.8K followers, [---] engagements
"@MigthtyMaximus @MiniMax_AI @grok No just to confirm when using cloud-hosted models we process your prompts and responses to provide the service but do not store or log that content and never train on it. We don't sell your data. You can delete your account anytime"
X Link 2026-02-12T22:39Z 115.8K followers, [--] engagements
"@vennelacheekati @MiniMax_AI If you are using an older version of Ollama you may have to download the model metadata first: ollama pull minimax-m2.5:cloud could you try again So sorry for the issue you are running into"
X Link 2026-02-12T22:45Z 115.8K followers, [---] engagements
"@soyhenryxyz @MiniMax_AI yes they are"
X Link 2026-02-12T22:46Z 115.8K followers, [--] engagements
"@geekbb @berryxia π€¦β I thought I could get away by posting a new post. Either way support open models Let's go πππ Now instead of trying one model try two"
X Link 2026-02-12T22:53Z 115.8K followers, [--] engagements
"@rachgranville sorry about that. How are you using it so we can help troubleshoot this"
X Link 2026-02-12T23:01Z 115.8K followers, [--] engagements
"@wucax @MiniMax_AI ollama launch claude --config"
X Link 2026-02-13T00:06Z 115.8K followers, [---] engagements
"@yi_xin32482 https://x.com/ollama/status/2022018134186791177s=20 β€ We are partnering with @MiniMax_AI to give Ollama users free usage of MiniMax M2.5 for the next couple of days ollama run minimax-m2.5:cloud Use MiniMax M2.5 with OpenCode Claude Code Codex OpenClaw via ollama launch OpenCode: ollama launch opencode --model https://t.co/bHVK37NEqD https://x.com/ollama/status/2022018134186791177s=20 β€ We are partnering with @MiniMax_AI to give Ollama users free usage of MiniMax M2.5 for the next couple of days ollama run minimax-m2.5:cloud Use MiniMax M2.5 with OpenCode Claude Code Codex"
X Link 2026-02-13T01:40Z 115.8K followers, [--] engagements