[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# ![@UnslothAI Avatar](https://lunarcrush.com/gi/w:26/cr:twitter::1730159888402395136.png) @UnslothAI Unsloth AI

Unsloth AI posts on X most often about llm, k2, kimi, and llamacpp. They currently have XXXXXX followers, and X of their posts are still receiving attention, totaling XXXXX engagements in the last XX hours.

### Engagements: XXXXX [#](/creator/twitter::1730159888402395136/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1730159888402395136/c:line/m:interactions.svg)

- X Week XXXXXXX +180%
- X Month XXXXXXX -XXXX%
- X Months XXXXXXXXX +472%
- X Year XXXXXXXXX +5,113%

### Mentions: X [#](/creator/twitter::1730159888402395136/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1730159888402395136/c:line/m:posts_active.svg)

- X Months XX +31%

### Followers: XXXXXX [#](/creator/twitter::1730159888402395136/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1730159888402395136/c:line/m:followers.svg)

- X Week XXXXXX +2.20%
- X Month XXXXXX +5.10%
- X Months XXXXXX +196%
- X Year XXXXXX +794%

### CreatorRank: XXXXXXX [#](/creator/twitter::1730159888402395136/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1730159888402395136/c:line/m:influencer_rank.svg)

### Social Influence [#](/creator/twitter::1730159888402395136/influence)
---

**Social topic influence**
[llm](/topic/llm) 33.33%, [k2](/topic/k2) 22.22%, [kimi](/topic/kimi) 22.22%, [llamacpp](/topic/llamacpp) #1, [ollama](/topic/ollama) 11.11%, [inference](/topic/inference) 11.11%, [bug](/topic/bug) XXXXX%

**Top accounts mentioned or mentioned by**
@danielhanchen @kp_meister @corbtt @reach_vb @mintisan @ahmetkdev @googles @lmarenaai @huggingface @kaggle @googledeepmind @alby13 @the_real_paolo @aicube_ai @buidinhngoc @vectro @srigi @dittmannaxel @sanghvian @levelsio
### Top Social Posts [#](/creator/twitter::1730159888402395136/posts)
---
Top posts by engagements in the last XX hours

"You can now run Kimi K2 locally with our Dynamic 1.8-bit GGUFs We shrank the full 1.1TB model to just 245GB (-80% size reduction). The 2-bit XL GGUF performs exceptionally well on coding & passes all our code tests Guide: GGUFs:"  
![@UnslothAI Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1730159888402395136.png) [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1944780685409165589) 2025-07-14 15:27:02 UTC 26.7K followers, 124.8K engagements
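The size-reduction figure in this post can be sanity-checked with quick arithmetic; the 1.1 TB and 245 GB sizes are taken from the post itself:

```python
# Sanity-check the quoted size reduction for the Kimi K2 Dynamic 1.8-bit GGUF.
full_size_gb = 1100.0   # full model, ~1.1 TB (from the post)
quant_size_gb = 245.0   # Dynamic 1.8-bit GGUF (from the post)

reduction = 1 - quant_size_gb / full_size_gb
print(f"Size reduction: {reduction:.1%}")  # ~77.7%, which the post rounds to ~80%
```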


"You can now fine-tune TTS models with Unsloth Train run and save models like Sesame-CSM and OpenAI's Whisper locally with our free notebooks. Unsloth makes TTS training 1.5x faster with XX% less VRAM. GitHub: Docs & Notebooks:"  
![@UnslothAI Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1730159888402395136.png) [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1923055371008213086) 2025-05-15 16:38:23 UTC 26.7K followers, 125.4K engagements


"A Complete Guide to Fine-tuning LLMs in XX mins Learn to: Choose the correct model & training method (LoRA FFT GRPO) Build Datasets & Chat templates Train with Unsloth notebooks Run & deploy your LLM in llama.cpp Ollama & Open WebUI Docs:"  
![@UnslothAI Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1730159888402395136.png) [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1945481829206905055) 2025-07-16 13:53:08 UTC 26.7K followers, 26.9K engagements


"For fast inference of 5+ tokens/s try to have your RAM + VRAM combined = the size of quant (e.g. 256GB). If not the model will still run with llama.cpp offloading but be slower. Kimi K2 GGUF:"  
![@UnslothAI Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1730159888402395136.png) [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1944798892811485670) 2025-07-14 16:39:23 UTC 26.7K followers, 5057 engagements
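The rule of thumb in this post (combined RAM + VRAM should at least match the quant size for 5+ tokens/s) can be sketched as a small check. The function name and the example hardware figures below are illustrative, not from the post:

```python
def fits_for_fast_inference(quant_size_gb: float, ram_gb: float, vram_gb: float) -> bool:
    """Heuristic from the post: RAM + VRAM combined should be >= the quant size
    for ~5+ tokens/s. Otherwise llama.cpp can still run the model by offloading,
    just more slowly."""
    return ram_gb + vram_gb >= quant_size_gb

# Hypothetical hardware configurations against the 245 GB Kimi K2 quant:
print(fits_for_fast_inference(245, ram_gb=192, vram_gb=96))  # True: 288 GB combined
print(fits_for_fast_inference(245, ram_gb=64, vram_gb=24))   # False: falls back to slower offloading
```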


"Mistral releases Devstral 2507 the best open-source model for coding agents 🔥 The 24B model is now the #1 open LLM on SWE-Bench Verified scoring XXXX% Run Devstral-Small-2507 locally on 32GB RAM with our Dynamic quants & fine-tune with Unsloth GGUFs:"  
![@UnslothAI Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1730159888402395136.png) [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1943317113655189557) 2025-07-10 14:31:19 UTC 26.7K followers, 24.7K engagements


"Weve teamed up with @GoogleDeepMind for a challenge with a $10000 Unsloth prize 🦥 Show off your best fine-tuned Gemma 3n model using Unsloth optimized for an impactful task. The entire hackathon has $150000 prizes to be won Kaggle notebook:"  
![@UnslothAI Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1730159888402395136.png) [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1940414492791468240) 2025-07-02 14:17:20 UTC 26.7K followers, 46K engagements


"We made step-by-step guides to Fine-tune & Run every single LLM 🦥 What you'll learn: Technical analysis + Bug fixes explained for each model Best practices & optimal settings How to fine-tune with our notebooks Directory of model variants"  
![@UnslothAI Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1730159888402395136.png) [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1942581217838211543) 2025-07-08 13:47:08 UTC 26.7K followers, 49.4K engagements
