[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# ![@ollama Avatar](https://lunarcrush.com/gi/w:26/cr:twitter::1688410127378829312.png) @ollama ollama

@ollama posts on X most often about blog, ollama, open ai, and hybrid. They currently have XXXXXXX followers and XX posts still receiving attention, totaling XXXXX engagements over the last XX hours.

### Engagements: XXXXX [#](/creator/twitter::1688410127378829312/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1688410127378829312/c:line/m:interactions.svg)

- X Week XXXXXXX +714%
- X Month XXXXXXX +36%
- X Months XXXXXXXXX -XX%
- X Year XXXXXXXXX +96%

### Mentions: XX [#](/creator/twitter::1688410127378829312/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1688410127378829312/c:line/m:posts_active.svg)

- X Week XX +47%
- X Month XXX +37%
- X Months XXX +365%
- X Year XXX +240%

### Followers: XXXXXXX [#](/creator/twitter::1688410127378829312/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1688410127378829312/c:line/m:followers.svg)

- X Week XXXXXXX +0.30%
- X Month XXXXXXX +0.87%
- X Months XXXXXXX +11%
- X Year XXXXXXX +101%

### CreatorRank: XXXXXXX [#](/creator/twitter::1688410127378829312/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1688410127378829312/c:line/m:influencer_rank.svg)

### Social Influence [#](/creator/twitter::1688410127378829312/influence)
---

**Social category influence**
[technology brands](/list/technology-brands)  XXXXX% [stocks](/list/stocks)  XXXX%

**Social topic influence**
[blog](/topic/blog) 2.02%, [ollama](/topic/ollama) #4, [open ai](/topic/open-ai) 2.02%, [hybrid](/topic/hybrid) 1.01%, [cli](/topic/cli) 1.01%, [ibm](/topic/ibm) 1.01%, [micro](/topic/micro) 1.01%, [a very](/topic/a-very) 1.01%, [lang](/topic/lang) 1.01%, [real world](/topic/real-world) XXXX%

**Top accounts mentioned or mentioned by**
[@nvidiaai](/creator/undefined) [@alibabaqwen](/creator/undefined) [@aiatmeta](/creator/undefined) [@mistralai](/creator/undefined) [@qualcomm](/creator/undefined) [@intel](/creator/undefined) [@googledeepmind](/creator/undefined) [@amd](/creator/undefined) [@amdradeon](/creator/undefined) [@thomaspaulmann](/creator/undefined) [@fileverse](/creator/undefined) [@jingerzz](/creator/undefined) [@realshojaei](/creator/undefined) [@nvidia](/creator/undefined) [@jackccrawford](/creator/undefined) [@eyaltoledano](/creator/undefined) [@technovangelist](/creator/undefined) [@leoreedmax](/creator/undefined) [@pdev110](/creator/undefined) [@ronxldwilson](/creator/undefined)

**Top assets mentioned**
[IBM (IBM)](/topic/ibm)

### Top Social Posts [#](/creator/twitter::1688410127378829312/posts)
---
Top posts by engagements in the last XX hours

"@sqs glad I got my model shopping for the week done early"  
[X Link](https://x.com/ollama/status/1980199605234217416) [@ollama](/creator/x/ollama) 2025-10-20T09:09Z 101.4K followers, 5407 engagements


"Ollama v0.11.7 is available with DeepSeek v3.1 support. You can run it locally with all its features like hybrid thinking. This works across Ollama's new app CLI API and SDKs. Ollama's Turbo mode that's in preview has also been updated to support the model"  
[X Link](https://x.com/ollama/status/1960463433515852144) [@ollama](/creator/x/ollama) 2025-08-26T22:04Z 101.4K followers, 45.1K engagements


"IBM Granite X has improved instruction following and tool-calling capabilities. micro (3B) ollama run granite4:micro micro-h (3B) ollama run granite4:micro-h tiny-h (7B) ollama run granite4:tiny-h small-h (32B): ollama run granite4:small-h"  
[X Link](https://x.com/ollama/status/1973782095811219574) [@ollama](/creator/x/ollama) 2025-10-02T16:08Z 101.4K followers, 51K engagements
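
The Granite post above lists `ollama run` invocations for each model variant. As a minimal sketch, the same models can also be called programmatically through Ollama's documented local REST API (`/api/generate` on the default port 11434); the helper below builds the JSON request body, and the hypothetical `generate` wrapper assumes a locally running Ollama server with the model already pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_request(model: str, prompt: str, stream: bool = False) -> bytes:
    """Build the JSON body that Ollama's /api/generate endpoint expects."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream}).encode()

def generate(model: str, prompt: str) -> str:
    """Send a one-shot generation request to a locally running Ollama server."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_generate_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False, Ollama returns a single JSON object
        # whose "response" field holds the full completion.
        return json.loads(resp.read())["response"]

# Example (requires `ollama pull granite4:micro` and a running server):
# print(generate("granite4:micro", "Summarize tool calling in one sentence."))
```

The same pattern applies to any of the variants in the post (`granite4:micro-h`, `granite4:tiny-h`, `granite4:small-h`) by swapping the model name.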


"@tjalve So sorry. Are you having trouble with the CLI Whats the error you are getting. The app is in addition to using Ollama via CLI API and the SDKs. No login required either. Only if you want to use cloud services (and even that comes with a very generous free tier)"  
[X Link](https://x.com/ollama/status/1976449087643017364) [@ollama](/creator/x/ollama) 2025-10-10T00:45Z 101.4K followers, XX engagements


"@NVIDIAGeForce GeForce Day ๐Ÿ˜"  
[X Link](https://x.com/ollama/status/1976777502766121076) [@ollama](/creator/x/ollama) 2025-10-10T22:30Z 101.4K followers, 4278 engagements


"Ollama's blog post: .@lmsysorg's NVIDIA DGX Spark in-depth review using SG Lang and Ollama"  
[X Link](https://x.com/ollama/status/1977908723239489727) [@ollama](/creator/x/ollama) 2025-10-14T01:26Z 101.4K followers, 6719 engagements


"@bflgomes @lmsysorg hmm. most people won't have the compute to run it locally - but maybe let us add it to Ollama's Cloud for you to try first for free"  
[X Link](https://x.com/ollama/status/1977989877997183416) [@ollama](/creator/x/ollama) 2025-10-14T06:48Z 101.4K followers, XX engagements


"@d0rc @nvidia @NVIDIAAI @NVIDIA_AI_PC Bigger models longer context lengths More real world tasks"  
[X Link](https://x.com/ollama/status/1978003654222495898) [@ollama](/creator/x/ollama) 2025-10-14T07:43Z 101.4K followers, 1165 engagements


"@testboibestbot @nvidia @NVIDIAAI @NVIDIA_AI_PC working on adding it right now"  
[X Link](https://x.com/ollama/status/1978144781105594723) [@ollama](/creator/x/ollama) 2025-10-14T17:04Z 101.4K followers, 1127 engagements


"blog post on Qwen3-VL and everything you need to get started"  
[X Link](https://x.com/ollama/status/1978225300656861245) [@ollama](/creator/x/ollama) 2025-10-14T22:23Z 101.4K followers, 5268 engagements


"@testboibestbot @nvidia @NVIDIAAI @NVIDIA_AI_PC"  
[X Link](https://x.com/ollama/status/1978234981148156256) [@ollama](/creator/x/ollama) 2025-10-14T23:02Z 101.4K followers, 1265 engagements


"@EveryoneIsGross @Alibaba_Qwen ๐Ÿ™€ working hard on it ๐Ÿ’ชโค"  
[X Link](https://x.com/ollama/status/1978324547720941904) [@ollama](/creator/x/ollama) 2025-10-15T04:58Z 101.4K followers, XXX engagements


"@wassollichhier Working on it just cloud was finished first and no reason to hold the release up. Will be shipping local very soon. Sorry for the wait"  
[X Link](https://x.com/ollama/status/1978361993682718743) [@ollama](/creator/x/ollama) 2025-10-15T07:27Z 101.4K followers, XXX engagements


"@CoherentThouts Ollama can run models locally and in the cloud (optional). All about serving users"  
[X Link](https://x.com/ollama/status/1978493427269398984) [@ollama](/creator/x/ollama) 2025-10-15T16:09Z 101.4K followers, XXX engagements


"@JiNgErZz @MemMachine_ai โคโคโค thank you for the kindness Let's go"  
[X Link](https://x.com/ollama/status/1978522261293273155) [@ollama](/creator/x/ollama) 2025-10-15T18:04Z 101.4K followers, 1080 engagements


"@arungupta @jetbrains @nvidia @Merocle ready to work"  
[X Link](https://x.com/ollama/status/1978562307308376294) [@ollama](/creator/x/ollama) 2025-10-15T20:43Z 101.4K followers, XXX engagements


"@trung_rta @jetbrains @nvidia depends -- we actually just learned through a HackerNews comment that the GGUFs distributed for gpt-oss labeled as MXFP4 have layers that are quantized to q8_0. Ollama instead uses bf16. This obviously creates performance differences on benchmarks"  
[X Link](https://x.com/ollama/status/1978564721008672794) [@ollama](/creator/x/ollama) 2025-10-15T20:52Z 101.4K followers, XXX engagements


"@Leoreedmax Thank you for actually using both There is so much for each to improve upon. Each serving its own purpose. It's never zero sum. We all grow together - let's make using models amazing โคโคโค"  
[X Link](https://x.com/ollama/status/1978694212276093200) [@ollama](/creator/x/ollama) 2025-10-16T05:27Z 101.4K followers, XXX engagements


"@dgrreen @yatsiv_yuriy @elevenlabsio @matistanis @Imogen64405693 @JenniferHli @james406 @NowadaysAI @noah_333 @Ninaliuser"  
[X Link](https://x.com/ollama/status/1978978986412605520) [@ollama](/creator/x/ollama) 2025-10-17T00:18Z 101.4K followers, XX engagements


"@steel_ph0enix @ColtonIdle @FrameworkPuter @OpenAI @lmstudio Vulkan support is experimental in Ollama. Once we complete more testing itll get bundled in Ollama. Its in the GitHub repo on main"  
[X Link](https://x.com/ollama/status/1979618346007146850) [@ollama](/creator/x/ollama) 2025-10-18T18:39Z 101.4K followers, XX engagements


"Ollama now has a web search API and MCP server โšก Augment local and cloud models with the latest content to improve accuracy ๐Ÿ”ง Build your own search agent ๐Ÿ” Directly plugs into existing MCP clients like @OpenAI Codex @cline Goose (@jack) and more Let's go ๐Ÿงต๐Ÿ‘‡"  
[X Link](https://x.com/ollama/status/1971085470785319349) [@ollama](/creator/x/ollama) 2025-09-25T05:32Z 101.4K followers, 134.2K engagements


".@lmsysorg's review of the NVIDIA DGX Spark is live"  
[X Link](https://x.com/ollama/status/1977892001220645086) [@ollama](/creator/x/ollama) 2025-10-14T00:19Z 101.4K followers, 35.1K engagements


"NVIDIA DGX Spark is here It's so exciting to make Ollama run on @nvidia DGX Spark. Super amazing to see 128GB of unified memory and the Grace Blackwell architecture. ๐Ÿ‘‡๐Ÿ‘‡๐Ÿ‘‡"  
[X Link](https://x.com/ollama/status/1977908720525783321) [@ollama](/creator/x/ollama) 2025-10-14T01:26Z 101.4K followers, 68.9K engagements


"Qwen3-VL 235B is available on Ollama's cloud It's free to try. ollama run qwen3-vl:235b-cloud The smaller models and the ability to run fully on-device will be coming very soon See examples and how to use the model on Ollama ๐Ÿ‘‡๐Ÿ‘‡๐Ÿ‘‡"  
[X Link](https://x.com/ollama/status/1978225292784062817) [@ollama](/creator/x/ollama) 2025-10-14T22:23Z 101.4K followers, 66.4K engagements


"๐Ÿš€๐Ÿš€๐Ÿš€ Let's go ollama run qwen3-vl:235b-cloud"  
[X Link](https://x.com/ollama/status/1978324133327908956) [@ollama](/creator/x/ollama) 2025-10-15T04:56Z 101.4K followers, 13.9K engagements


"We are noticing some package managers don't have the latest Ollama versions - especially problematic and contributing to low performance. Please use the latest version of Ollama:"  
[X Link](https://x.com/ollama/status/1978552093674758362) [@ollama](/creator/x/ollama) 2025-10-15T20:02Z 101.4K followers, 5847 engagements


".@ollama happily running with @jetbrains โคโคโค For anyone picking up the latest NVIDIA DGX Spark today please make sure to run the latest version of Ollama and have the latest @nvidia drivers installed"  
[X Link](https://x.com/ollama/status/1978553698063200297) [@ollama](/creator/x/ollama) 2025-10-15T20:08Z 101.4K followers, 26.9K engagements


"@JustinLin610"  
[X Link](https://x.com/ollama/status/1978693255958073680) [@ollama](/creator/x/ollama) 2025-10-16T05:23Z 101.4K followers, 5853 engagements


"@ZMatalab @IBMResearch Would you like us to"  
[X Link](https://x.com/ollama/status/1978956521166799351) [@ollama](/creator/x/ollama) 2025-10-16T22:49Z 101.4K followers, XX engagements


"@XYang2023 @SquashBionic @intel We have been working directly with Intel on what to support directly. More on that in the future. For now we have Vulkan merged in main on GitHub and going through testing. Once that is okay we will include it for the Ollama binaries"  
[X Link](https://x.com/ollama/status/1979620858537837018) [@ollama](/creator/x/ollama) 2025-10-18T18:49Z 101.4K followers, XXX engagements


"@izag82161 Yes but please use a recommended model"  
[X Link](https://x.com/ollama/status/1979621923677155484) [@ollama](/creator/x/ollama) 2025-10-18T18:53Z 101.4K followers, 2722 engagements


"@i_naiveai @sqs July 18th is Ollamas birthday ๐ŸŽ‚"  
[X Link](https://x.com/ollama/status/1980223643453116727) [@ollama](/creator/x/ollama) 2025-10-20T10:44Z 101.4K followers, XXX engagements
