[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

[@UnslothAI](/creator/twitter/UnslothAI)
"You can now run Kimi K2 locally with our Dynamic 1.8-bit GGUFs We shrank the full 1.1TB model to just 245GB (-80% size reduction). The 2-bit XL GGUF performs exceptionally well on coding & passes all our code tests Guide: GGUFs:"  
![@UnslothAI Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1730159888402395136.png) [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1944780685409165589) 2025-07-14 15:27:02 UTC 26.8K followers, 126.1K engagements
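The size reduction quoted above can be sanity-checked with a quick calculation (a sketch; the 1.1 TB and 245 GB figures come from the post, and decimal units are assumed):

```python
full_size_gb = 1100   # full Kimi K2 model, 1.1 TB in decimal GB (assumption)
quant_size_gb = 245   # Dynamic 1.8-bit GGUF size from the post

# Fractional reduction relative to the full model
reduction = 1 - quant_size_gb / full_size_gb
print(f"size reduction: {reduction:.0%}")  # prints "size reduction: 78%"
```

About 78%, consistent with the post's roughly "-80%" figure.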


"You can now fine-tune TTS models with Unsloth Train run and save models like Sesame-CSM and OpenAI's Whisper locally with our free notebooks. Unsloth makes TTS training 1.5x faster with XX% less VRAM. GitHub: Docs & Notebooks:"  
![@UnslothAI Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1730159888402395136.png) [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1923055371008213086) 2025-05-15 16:38:23 UTC 26.8K followers, 125.5K engagements


"A Complete Guide to Fine-tuning LLMs in XX mins Learn to: Choose the correct model & training method (LoRA FFT GRPO) Build Datasets & Chat templates Train with Unsloth notebooks Run & deploy your LLM in llama.cpp Ollama & Open WebUI Docs:"  
![@UnslothAI Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1730159888402395136.png) [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1945481829206905055) 2025-07-16 13:53:08 UTC 26.8K followers, 27.6K engagements


"For fast inference of 5+ tokens/s try to have your RAM + VRAM combined = the size of quant (e.g. 256GB). If not the model will still run with llama.cpp offloading but be slower. Kimi K2 GGUF:"  
![@UnslothAI Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1730159888402395136.png) [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1944798892811485670) 2025-07-14 16:39:23 UTC 26.8K followers, 5217 engagements
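The rule of thumb above is simple arithmetic, so it can be sketched as a quick check (a minimal illustration; the function name and example hardware figures are hypothetical, the 245 GB quant size comes from the Kimi K2 post above):

```python
def can_run_fast(ram_gb: float, vram_gb: float, quant_size_gb: float) -> bool:
    """Rule of thumb from the post: for ~5+ tokens/s, combined RAM + VRAM
    should cover the quant size. Below that, llama.cpp can still run the
    model by offloading, just more slowly."""
    return ram_gb + vram_gb >= quant_size_gb

# Example: 192 GB RAM + 48 GB VRAM vs. the 245 GB Kimi K2 1.8-bit quant
print(can_run_fast(192, 48, 245))  # 240 GB total -> False, expect slower offloading
print(can_run_fast(256, 0, 245))   # 256 GB RAM alone -> True
```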


"Mistral releases Devstral 2507 the best open-source model for coding agents 🔥 The 24B model is now the #1 open LLM on SWE-Bench Verified scoring XXXX% Run Devstral-Small-2507 locally on 32GB RAM with our Dynamic quants & fine-tune with Unsloth GGUFs:"  
![@UnslothAI Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1730159888402395136.png) [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1943317113655189557) 2025-07-10 14:31:19 UTC 26.7K followers, 24.8K engagements


"Weve teamed up with @GoogleDeepMind for a challenge with a $10000 Unsloth prize 🦥 Show off your best fine-tuned Gemma 3n model using Unsloth optimized for an impactful task. The entire hackathon has $150000 prizes to be won Kaggle notebook:"  
![@UnslothAI Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1730159888402395136.png) [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1940414492791468240) 2025-07-02 14:17:20 UTC 26.8K followers, 46.2K engagements


"We made step-by-step guides to Fine-tune & Run every single LLM 🦥 What you'll learn: Technical analysis + Bug fixes explained for each model Best practices & optimal settings How to fine-tune with our notebooks Directory of model variants"  
![@UnslothAI Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1730159888402395136.png) [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1942581217838211543) 2025-07-08 13:47:08 UTC 26.8K followers, 49.7K engagements