[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]
Ollama posts on X most often about ollama, "lets go", open ai, and accuracy. They currently have XXXXXXX followers and XXX posts still getting attention, totaling XXXXX engagements in the last XX hours.
Social category influence: technology brands XXXX%, stocks XXXX%
Social topic influence: ollama #1, lets go #1111, open ai #3015, accuracy 0.4%, ibm 0.4%, capabilities 0.4%, micro 0.4%, opensource 0.4%, blog 0.4%, up the XXX%
Top accounts mentioned or mentioned by: @googledeepmind @nvidiaai @mistralai @alibabaqwen @thanosthinking @amd @aiatmeta @qualcomm @thomaspaulmann @amdradeon @intel @lalopenguin @jackccrawford @fileverse @avanika15 @technovangelist @pdev110 @realshojaei @nvidia @eyaltoledano
Top assets mentioned: IBM (IBM)
Top posts by engagements in the last XX hours:
"Ollama now has a web search API and MCP server ⚡ Augment local and cloud models with the latest content to improve accuracy. Build your own search agent. Directly plugs into existing MCP clients like @OpenAI Codex, @cline, Goose (@jack), and more. Let's go 🧵"
X Link @ollama 2025-09-25T05:32Z 101.4K followers, 134.1K engagements
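The web search capability announced above is served over HTTP. A minimal sketch in Python of building such a request, assuming (not confirmed by the post) that the endpoint is `https://ollama.com/api/web_search`, takes a Bearer API key, and accepts a JSON body with a `query` field; check Ollama's own documentation for the authoritative shape:

```python
import json
import os
import urllib.request

# API key from the environment; Ollama's hosted services use Bearer auth.
OLLAMA_API_KEY = os.environ.get("OLLAMA_API_KEY", "")


def build_search_request(query: str) -> urllib.request.Request:
    """Build (but do not send) a POST request to the assumed search endpoint."""
    body = json.dumps({"query": query}).encode()
    return urllib.request.Request(
        "https://ollama.com/api/web_search",  # assumed endpoint per the announcement
        data=body,
        headers={
            "Authorization": f"Bearer {OLLAMA_API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_search_request("latest Ollama release notes")
print(req.get_method(), req.get_full_url())
```

With a valid key set, `urllib.request.urlopen(req)` would return the JSON search results that can then be fed into a local or cloud model as context.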
"IBM Granite X has improved instruction following and tool-calling capabilities. micro (3B): ollama run granite4:micro; micro-h (3B): ollama run granite4:micro-h; tiny-h (7B): ollama run granite4:tiny-h; small-h (32B): ollama run granite4:small-h"
X Link @ollama 2025-10-02T16:08Z 101.4K followers, 51K engagements
"Qwen3-VL 235B is available on Ollama's cloud. It's free to try: ollama run qwen3-vl:235b-cloud. The smaller models and the ability to run fully on-device will be coming very soon. See examples and how to use the model on Ollama."
X Link @ollama 2025-10-14T22:23Z 101.4K followers, 65.7K engagements
"blog post on Qwen3-VL and everything you need to get started"
X Link @ollama 2025-10-14T22:23Z 101.4K followers, 5268 engagements
".@lmsysorg's review of the NVIDIA DGX Spark is live"
X Link @ollama 2025-10-14T00:19Z 101.4K followers, 34.9K engagements
"NVIDIA DGX Spark is here. It's so exciting to make Ollama run on @nvidia DGX Spark. Super amazing to see 128GB of unified memory and the Grace Blackwell architecture."
X Link @ollama 2025-10-14T01:26Z 101.4K followers, 68.8K engagements
"Let's go: ollama run qwen3-vl:235b-cloud"
X Link @ollama 2025-10-15T04:56Z 101.4K followers, 13.8K engagements
"We are noticing some package managers don't have the latest Ollama versions - this is especially problematic and contributes to low performance. Please use the latest version of Ollama:"
X Link @ollama 2025-10-15T20:02Z 101.4K followers, 5772 engagements
".@ollama happily running with @jetbrains ❤❤❤ For anyone picking up the latest NVIDIA DGX Spark today, please make sure to run the latest version of Ollama and have the latest @nvidia drivers installed"
X Link @ollama 2025-10-15T20:08Z 101.4K followers, 26.7K engagements
"@JustinLin610"
X Link @ollama 2025-10-16T05:23Z 101.4K followers, 5814 engagements
"Ollama v0.8 is here. Now it can stream responses with tool calling. Example of Ollama doing web search:"
X Link @ollama 2025-05-28T21:14Z 101.4K followers, 151.2K engagements
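The streaming tool-calling from the v0.8 post maps onto Ollama's `/api/chat` endpoint, which accepts an OpenAI-style tool schema. A hedged sketch of such a request body; the model name and the `get_weather` tool are hypothetical illustrations, not taken from the post:

```python
import json

# Sketch of a /api/chat request body with streaming and a tool definition.
# Field names follow Ollama's OpenAI-compatible tool schema; the tool itself
# is made up for illustration.
request_body = {
    "model": "qwen3",   # any tool-capable model pulled locally
    "stream": True,      # stream tokens (and tool calls) as they are generated
    "messages": [
        {"role": "user", "content": "What is the weather in Toronto?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Get current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

print(json.dumps(request_body, indent=2))
```

POSTing this to a local Ollama server at `http://localhost:11434/api/chat` would stream back chunks, some of which carry `tool_calls` for the client to execute and feed back as `tool` messages.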
"just one more cup before ollama gets ready"
X Link @ollama 2025-08-05T16:02Z 101.4K followers, 78.9K engagements
"@ALTIC_DEV @lmstudio We have Ollama's cloud models"
X Link @ollama 2025-10-01T05:59Z 101.4K followers, 1396 engagements
"@Technovangelist a lot more coding model improvements to come to Ollama ❤"
X Link @ollama 2025-10-01T18:08Z 101.4K followers, 1021 engagements
"@DesignWithAllie @jmorgan @hoopcutter @hhua_ @ExaAILabs"
X Link @ollama 2025-10-03T16:57Z 101.4K followers, XXX engagements
"@matanSF @thanosthinking @FactoryAI Lets go"
X Link @ollama 2025-10-07T22:01Z 101.4K followers, XX engagements
"@tjalve So sorry. Are you having trouble with the CLI? What's the error you are getting? The app is in addition to using Ollama via the CLI, API, and the SDKs. No login required either. Only if you want to use cloud services (and even that comes with a very generous free tier)"
X Link @ollama 2025-10-10T00:45Z 101.4K followers, XX engagements
"@reflection_ai ❤ would love to connect to see if we can help"
X Link @ollama 2025-10-10T01:41Z 101.4K followers, 1449 engagements
"@NVIDIAGeForce GeForce Day"
X Link @ollama 2025-10-10T22:30Z 101.4K followers, 4239 engagements
"Ollama's blog post: .@lmsysorg's NVIDIA DGX Spark in-depth review using SGLang and Ollama"
X Link @ollama 2025-10-14T01:26Z 101.4K followers, 6638 engagements
"@bflgomes @lmsysorg hmm. most people won't have the compute to run it locally - but maybe let us add it to Ollama's Cloud for you to try first for free"
X Link @ollama 2025-10-14T06:48Z 101.4K followers, XX engagements
"@d0rc @nvidia @NVIDIAAI @NVIDIA_AI_PC Bigger models, longer context lengths, more real-world tasks"
X Link @ollama 2025-10-14T07:43Z 101.4K followers, 1153 engagements
"@testboibestbot @nvidia @NVIDIAAI @NVIDIA_AI_PC working on adding it right now"
X Link @ollama 2025-10-14T17:04Z 101.4K followers, 1116 engagements
"@testboibestbot @nvidia @NVIDIAAI @NVIDIA_AI_PC"
X Link @ollama 2025-10-14T23:02Z 101.4K followers, 1259 engagements
"@EveryoneIsGross @Alibaba_Qwen working hard on it ❤"
X Link @ollama 2025-10-15T04:58Z 101.4K followers, XXX engagements
"@wassollichhier Working on it; just cloud was finished first and there was no reason to hold the release up. Will be shipping local very soon. Sorry for the wait"
X Link @ollama 2025-10-15T07:27Z 101.4K followers, XXX engagements
"@NVIDIAAIDev ❤❤❤"
X Link @ollama 2025-10-15T16:06Z 101.4K followers, XXX engagements
"@selfhosted_ai Hoping very soon. Going through testing and iterations to make sure it's good"
X Link @ollama 2025-10-15T16:07Z 101.4K followers, XXX engagements
"@CoherentThouts Ollama can run models locally and in the cloud (optional). All about serving users"
X Link @ollama 2025-10-15T16:09Z 101.4K followers, XXX engagements
"@JiNgErZz @MemMachine_ai ❤❤❤ thank you for the kindness. Let's go"
X Link @ollama 2025-10-15T18:04Z 101.4K followers, 1078 engagements
"@arungupta @jetbrains @nvidia @Merocle ready to work"
X Link @ollama 2025-10-15T20:43Z 101.4K followers, XXX engagements
"@trung_rta @jetbrains @nvidia depends -- we actually just learned through a HackerNews comment that the GGUFs distributed for gpt-oss labeled as MXFP4 have layers that are quantized to q8_0. Ollama instead uses bf16. This obviously creates performance differences on benchmarks"
X Link @ollama 2025-10-15T20:52Z 101.4K followers, XXX engagements
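The q8_0-versus-bf16 difference in the post above comes down to bytes per weight: q8_0 stores one int8 per weight plus a shared fp16 scale per 32-weight block (34 bytes per 32 weights, about 1.06 bytes/param), while bf16 stores 2 bytes/param. A rough back-of-the-envelope sketch, using a hypothetical 20B-parameter set of layers purely for illustration:

```python
# Approximate weight-storage arithmetic for two formats.
# q8_0: 32 int8 weights + one fp16 scale per block -> 34 bytes / 32 weights.
# bf16: 2 bytes per weight.
GIB = 1024 ** 3


def weights_gib(n_params: float, bytes_per_param: float) -> float:
    """Total weight storage in GiB for n_params parameters."""
    return n_params * bytes_per_param / GIB


N = 20e9            # hypothetical 20B parameters, for illustration only
Q8_0 = 34 / 32      # ~1.06 bytes per parameter
BF16 = 2.0

print(f"q8_0: {weights_gib(N, Q8_0):.1f} GiB")
print(f"bf16: {weights_gib(N, BF16):.1f} GiB")
```

The near-2x size difference also changes memory-bandwidth pressure at inference time, which is one reason mixing q8_0 layers into a nominally MXFP4 model shows up on benchmarks.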
"@Leoreedmax Thank you for actually using both. There is so much for each to improve upon. Each serving its own purpose. It's never zero sum. We all grow together - let's make using models amazing ❤❤❤"
X Link @ollama 2025-10-16T05:27Z 101.4K followers, XXX engagements
"@ruslanjabari"
X Link @ollama 2025-10-16T08:51Z 101.4K followers, XXX engagements
"@paraga @sqs let's go. Good people ❤❤❤"
X Link @ollama 2025-10-16T08:52Z 101.4K followers, XXX engagements
"@steipete @Cocoanetics Ollama supports both local and cloud models. We are hopeful that as hardware and models improve more and more users will run their own models"
X Link @ollama 2025-10-16T17:44Z 101.4K followers, XXX engagements
"@dgrreen @yatsiv_yuriy @elevenlabsio @matistanis @Imogen64405693 @JenniferHli @james406 @NowadaysAI @noah_333 @Ninaliuser"
X Link @ollama 2025-10-17T00:18Z 101.4K followers, XX engagements
"@steel_ph0enix @ColtonIdle @FrameworkPuter @OpenAI @lmstudio Vulkan support is experimental in Ollama. Once we complete more testing, it'll get bundled in Ollama. It's in the GitHub repo on main"
X Link @ollama 2025-10-18T18:39Z 101.4K followers, XX engagements