@ollama "NVIDIA DGX Spark is here! It's so exciting to make Ollama run on @nvidia DGX Spark. Super amazing to see 128GB of unified memory and the Grace Blackwell architecture. 👇👇👇"
X Link @ollama 2025-10-14T01:26Z 101.4K followers, 68.8K engagements

"Ollama v0.11.7 is available with DeepSeek v3.1 support. You can run it locally with all its features, like hybrid thinking. This works across Ollama's new app, CLI, API, and SDKs. Ollama's Turbo mode, which is in preview, has also been updated to support the model"
X Link @ollama 2025-08-26T22:04Z 101.4K followers, 45.1K engagements

"Ollama now has a web search API and MCP server ⚡ Augment local and cloud models with the latest content to improve accuracy 🔧 Build your own search agent 🔌 Directly plugs into existing MCP clients like @OpenAI Codex, @cline, Goose (@jack), and more. Let's go 🧵👇"
X Link @ollama 2025-09-25T05:32Z 101.4K followers, 134.2K engagements
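The web search call described above can be sketched as a plain HTTP request. Note: the endpoint URL and the `query` field name below are assumptions inferred from the announcement, not confirmed API documentation; the request is built but not sent, since sending requires a real API key.

```python
import json
import urllib.request

# Assumed endpoint for Ollama's hosted web search API (not verified docs).
SEARCH_URL = "https://ollama.com/api/web_search"

def build_search_request(query: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a web search request with a bearer token."""
    payload = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        SEARCH_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_search_request("latest Ollama release notes", "YOUR_API_KEY")
print(req.full_url)      # https://ollama.com/api/web_search
print(req.get_method())  # POST
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would then return search results that can be fed to a local or cloud model as context.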

"IBM Granite 4 has improved instruction-following and tool-calling capabilities.
micro (3B): ollama run granite4:micro
micro-h (3B): ollama run granite4:micro-h
tiny-h (7B): ollama run granite4:tiny-h
small-h (32B): ollama run granite4:small-h"
X Link @ollama 2025-10-02T16:08Z 101.4K followers, 51K engagements

"@tjalve So sorry. Are you having trouble with the CLI? What's the error you are getting? The app is in addition to using Ollama via the CLI, API, and SDKs. No login is required either; only if you want to use cloud services (and even that comes with a very generous free tier)"
X Link @ollama 2025-10-10T00:45Z 101.4K followers, XX engagements

"@NVIDIAGeForce GeForce Day 😁"
X Link @ollama 2025-10-10T22:30Z 101.4K followers, 4277 engagements

".@lmsysorg's review of the NVIDIA DGX Spark is live"
X Link @ollama 2025-10-14T00:19Z 101.4K followers, 35K engagements

"Ollama's blog post: .@lmsysorg's NVIDIA DGX Spark in-depth review using SGLang and Ollama"
X Link @ollama 2025-10-14T01:26Z 101.4K followers, 6718 engagements

"@bflgomes @lmsysorg Hmm, most people won't have the compute to run it locally, but maybe let us add it to Ollama's cloud for you to try first for free"
X Link @ollama 2025-10-14T06:48Z 101.4K followers, XX engagements

"@d0rc @nvidia @NVIDIAAI @NVIDIA_AI_PC Bigger models, longer context lengths, more real-world tasks"
X Link @ollama 2025-10-14T07:43Z 101.4K followers, 1165 engagements

"@testboibestbot @nvidia @NVIDIAAI @NVIDIA_AI_PC working on adding it right now"
X Link @ollama 2025-10-14T17:04Z 101.4K followers, 1127 engagements

"Qwen3-VL 235B is available on Ollama's cloud, and it's free to try: ollama run qwen3-vl:235b-cloud. The smaller models and the ability to run fully on-device are coming very soon. See examples and how to use the model on Ollama 👇👇👇"
X Link @ollama 2025-10-14T22:23Z 101.4K followers, 66.1K engagements
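Beyond the CLI one-liner above, a vision model like Qwen3-VL is typically driven through Ollama's chat API, where images ride along with the user message as base64 strings. A minimal payload-construction sketch, assuming the standard `/api/chat` message shape (the image bytes here are a placeholder, not a real file):

```python
import base64
import json

def build_vision_payload(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build a chat payload where the image is base64-encoded in the
    message's 'images' list, as Ollama's chat API expects."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": prompt,
                "images": [base64.b64encode(image_bytes).decode("ascii")],
            }
        ],
        "stream": False,
    }

payload = build_vision_payload(
    "qwen3-vl:235b-cloud", "Describe this image.", b"<raw image bytes>"
)
print(json.dumps(payload)[:60])
```

POSTing this dict as JSON to a running Ollama server's `/api/chat` endpoint would return the model's description of the image.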

"blog post on Qwen3-VL and everything you need to get started"
X Link @ollama 2025-10-14T22:23Z 101.4K followers, 5268 engagements

"@testboibestbot @nvidia @NVIDIAAI @NVIDIA_AI_PC"
X Link @ollama 2025-10-14T23:02Z 101.4K followers, 1265 engagements

"🚀🚀🚀 Let's go: ollama run qwen3-vl:235b-cloud"
X Link @ollama 2025-10-15T04:56Z 101.4K followers, 13.8K engagements

"@EveryoneIsGross @Alibaba_Qwen 🙀 working hard on it 💪❤"
X Link @ollama 2025-10-15T04:58Z 101.4K followers, XXX engagements

"@wassollichhier Working on it; cloud was just finished first, and there was no reason to hold the release up. Local will be shipping very soon. Sorry for the wait"
X Link @ollama 2025-10-15T07:27Z 101.4K followers, XXX engagements

"@CoherentThouts Ollama can run models locally and in the cloud (optional). All about serving users"
X Link @ollama 2025-10-15T16:09Z 101.4K followers, XXX engagements

"@JiNgErZz @MemMachine_ai ❤❤❤ thank you for the kindness Let's go"
X Link @ollama 2025-10-15T18:04Z 101.4K followers, 1080 engagements

"We are noticing that some package managers don't have the latest Ollama versions; this is especially problematic and contributes to low performance. Please use the latest version of Ollama:"
X Link @ollama 2025-10-15T20:02Z 101.4K followers, 5816 engagements
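One way to act on this advice is to compare the version string reported by `ollama --version` against a known-good minimum. A minimal sketch, assuming the usual `ollama version is X.Y.Z` output format (the version numbers below are illustrative, not real release numbers):

```python
def parse_version(v: str) -> tuple:
    """Turn 'ollama version is 0.11.7' (or a bare '0.11.7') into a
    comparable tuple of ints, e.g. (0, 11, 7)."""
    digits = v.strip().split()[-1]
    return tuple(int(p) for p in digits.split("."))

def is_outdated(installed: str, minimum: str) -> bool:
    """True when the installed version sorts below the required minimum."""
    return parse_version(installed) < parse_version(minimum)

print(is_outdated("ollama version is 0.11.7", "0.12.0"))  # True
print(is_outdated("0.12.1", "0.12.0"))                    # False
```

A wrapper script could run `ollama --version`, feed the output to `is_outdated`, and prompt the user to reinstall from ollama.com rather than a stale package-manager build.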

".@ollama happily running with @jetbrains ❤❤❤ For anyone picking up the latest NVIDIA DGX Spark today, please make sure to run the latest version of Ollama and have the latest @nvidia drivers installed"
X Link @ollama 2025-10-15T20:08Z 101.4K followers, 26.8K engagements

"@arungupta @jetbrains @nvidia @Merocle ready to work"
X Link @ollama 2025-10-15T20:43Z 101.4K followers, XXX engagements

"@trung_rta @jetbrains @nvidia It depends; we actually just learned through a HackerNews comment that the GGUFs distributed for gpt-oss labeled as MXFP4 have layers that are quantized to q8_0, whereas Ollama uses bf16 for those layers. This obviously creates performance differences on benchmarks"
X Link @ollama 2025-10-15T20:52Z 101.4K followers, XXX engagements
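The q8_0-versus-bf16 difference mentioned above is easy to quantify with back-of-envelope arithmetic: GGUF's q8_0 packs 32 weights per block as int8 values plus one fp16 scale (34 bytes per 32 weights, about 1.06 bytes/weight), while bf16 is a flat 2 bytes per weight. A sketch with an illustrative layer size (the matrix dimensions are assumptions, not taken from any specific model):

```python
def q8_0_bytes(n_weights: int) -> float:
    """GGUF q8_0: 32 int8 weights + one 2-byte fp16 scale per block."""
    blocks = n_weights / 32
    return blocks * (32 * 1 + 2)

def bf16_bytes(n_weights: int) -> float:
    """bf16: a flat 2 bytes per weight."""
    return n_weights * 2

n = 4096 * 4096  # one square projection matrix, illustrative size
print(f"q8_0: {q8_0_bytes(n) / 2**20:.1f} MiB")  # 17.0 MiB
print(f"bf16: {bf16_bytes(n) / 2**20:.1f} MiB")  # 32.0 MiB
```

So the bf16 copy of such a layer takes roughly 1.9x the memory of the q8_0 copy, which is why mixed labeling (MXFP4 with q8_0 layers vs. bf16 layers) shifts both footprint and benchmark numbers.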

"@JustinLin610"
X Link @ollama 2025-10-16T05:23Z 101.4K followers, 5836 engagements

"@Leoreedmax Thank you for actually using both. There is so much for each to improve upon, each serving its own purpose. It's never zero-sum. We all grow together; let's make using models amazing ❤❤❤"
X Link @ollama 2025-10-16T05:27Z 101.4K followers, XXX engagements

"@steipete @Cocoanetics Ollama supports both local and cloud models. We are hopeful that as hardware and models improve, more and more users will run their own models"
X Link @ollama 2025-10-16T17:44Z 101.4K followers, XXX engagements

"@ZMatalab @IBMResearch Would you like us to?"
X Link @ollama 2025-10-16T22:49Z 101.4K followers, XX engagements

"@dgrreen @yatsiv_yuriy @elevenlabsio @matistanis @Imogen64405693 @JenniferHli @james406 @NowadaysAI @noah_333 @Ninaliuser"
X Link @ollama 2025-10-17T00:18Z 101.4K followers, XX engagements

"@steel_ph0enix @ColtonIdle @FrameworkPuter @OpenAI @lmstudio Vulkan support is experimental in Ollama. Once we complete more testing, it'll get bundled into Ollama. It's in the GitHub repo on main"
X Link @ollama 2025-10-18T18:39Z 101.4K followers, XX engagements

"@XYang2023 @SquashBionic @intel We have been working directly with Intel on what to support. More on that in the future. For now, we have Vulkan merged on main on GitHub and going through testing. Once that is okay, we will include it in the Ollama binaries"
X Link @ollama 2025-10-18T18:49Z 101.4K followers, XXX engagements

"@izag82161 Yes, but please use a recommended model"
X Link @ollama 2025-10-18T18:53Z 101.4K followers, 2668 engagements

"@sqs glad I got my model shopping for the week done early"
X Link @ollama 2025-10-20T09:09Z 101.4K followers, 3705 engagements

"@i_naiveai @sqs July 18th is Ollama's birthday 🎂"
X Link @ollama 2025-10-20T10:44Z 101.4K followers, XX engagements