
@UnslothAI "You can now run Qwen3-235B-A22B-2507 with our Dynamic 2-bit GGUFs. The full 250GB model is reduced to just 88GB (a 65% size reduction). Achieve X tokens/s on 89GB of unified memory, or 80GB RAM + 8GB VRAM. GGUFs:"
@UnslothAI on X 2025-07-22 12:31:13 UTC · 27.1K followers, 35.6K engagements
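The size reduction quoted above (250GB down to 88GB) is easy to sanity-check; the figures come from the tweet itself, and the helper name is only illustrative.

```python
def size_reduction_pct(original_gb: float, quantized_gb: float) -> float:
    """Percent size reduction of a quantized model file vs. the original."""
    return (1 - quantized_gb / original_gb) * 100

# Qwen3-235B-A22B-2507: full model 250GB, Dynamic 2-bit GGUF 88GB
reduction = size_reduction_pct(250, 88)
print(f"{reduction:.0f}%")  # ~65%, matching the quoted figure
```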

"Run Qwen3-Coder with our Dynamic 2-bit GGUFs. We shrank the 480B-parameter model to just 182GB (down from 512GB), and it also runs with 1M context length. Achieve X tokens/s on 182GB of unified memory, or 158GB RAM + 24GB VRAM. Qwen3-Coder-480B-A35B GGUFs:"
@UnslothAI on X 2025-07-23 01:26:26 UTC · 27.1K followers, 23.1K engagements
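Although labelled "2-bit", Unsloth's Dynamic quants keep selected layers at higher precision, so the file-wide average lands above 2 bits per weight. A back-of-envelope check from the tweet's own numbers (182GB for 480B parameters), ignoring GGUF metadata overhead:

```python
def avg_bits_per_weight(file_size_gb: float, n_params_billion: float) -> float:
    """Approximate average bits per weight implied by a quantized file size."""
    total_bits = file_size_gb * 1e9 * 8  # bytes -> bits
    return total_bits / (n_params_billion * 1e9)

# Qwen3-Coder-480B-A35B: 182GB Dynamic 2-bit GGUF
print(round(avg_bits_per_weight(182, 480), 2))  # ~3.03 bits on average
```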

"@Alibaba_Qwen Congrats guys on the release ✨ We're working on Dynamic quants & GGUFs so the community can run it locally 🤗"
@UnslothAI on X 2025-07-21 17:58:34 UTC · 27.1K followers, 12.1K engagements

"@Alibaba_Qwen Congrats guys on another epic release. We're uploading Dynamic GGUFs, and one with 1M context length, so you guys can run it locally 🦥"
@UnslothAI on X 2025-07-22 21:20:45 UTC · 27.1K followers, 16K engagements

"You can now run Kimi K2 locally with our Dynamic 1.8-bit GGUFs. We shrank the full 1.1TB model to just 245GB (an 80% size reduction). The 2-bit XL GGUF performs exceptionally well on coding and passes all our code tests. Guide: GGUFs:"
@UnslothAI on X 2025-07-14 15:27:02 UTC · 27.1K followers, 127.4K engagements
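The same arithmetic applies here. Assuming Kimi K2 has roughly 1T total parameters (an assumption, not stated in the tweet), the 245GB file implies just under 2 bits per weight, consistent with a "1.8-bit" base tier plus some higher-precision layers, and a reduction close to the quoted -80%:

```python
full_tb, quant_gb, params_b = 1.1, 245, 1000  # params_b (~1T) is an assumption

reduction = (1 - quant_gb / (full_tb * 1000)) * 100  # ~77.7%, quoted as -80%
avg_bits = quant_gb * 1e9 * 8 / (params_b * 1e9)     # ~1.96 bits per weight
print(round(reduction, 1), round(avg_bits, 2))
```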

"You can now fine-tune TTS models with Unsloth. Train, run, and save models like Sesame-CSM and OpenAI's Whisper locally with our free notebooks. Unsloth makes TTS training 1.5x faster with XX% less VRAM. GitHub: Docs & Notebooks:"
@UnslothAI on X 2025-05-15 16:38:23 UTC · 27.1K followers, 125.6K engagements

"A Complete Guide to Fine-tuning LLMs in XX mins. Learn to: choose the correct model & training method (LoRA, FFT, GRPO); build datasets & chat templates; train with Unsloth notebooks; run & deploy your LLM in llama.cpp, Ollama & Open WebUI. Docs:"
@UnslothAI on X 2025-07-16 13:53:08 UTC · 27.1K followers, 28.6K engagements
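One step the guide lists is building datasets and chat templates. As a minimal sketch of what "applying a chat template" means, here is a generic ChatML-style renderer (not Unsloth's own tokenizer utilities; the markup tokens are the common `<|im_start|>`/`<|im_end|>` convention):

```python
def apply_chatml_template(messages: list[dict]) -> str:
    """Render a list of {role, content} dicts as a ChatML-style prompt."""
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Leave the assistant turn open so the model completes it
    return "\n".join(out) + "\n<|im_start|>assistant\n"

prompt = apply_chatml_template([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain LoRA in one sentence."},
])
print(prompt)
```

In practice you would use the tokenizer's built-in template (e.g. `apply_chat_template` in Hugging Face tokenizers) rather than hand-rolling the format.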

"For fast inference of 5+ tokens/s, try to have your combined RAM + VRAM equal the size of the quant (e.g. 256GB). If not, the model will still run via llama.cpp offloading, just more slowly. Kimi K2 GGUF:"
@UnslothAI on X 2025-07-14 16:39:23 UTC · 27.1K followers, 5,485 engagements
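The rule of thumb in this tweet (combined RAM + VRAM at or above the quant size for 5+ tokens/s, otherwise slower offloaded inference) can be sketched as a quick pre-flight check; the function name and return strings are illustrative:

```python
def inference_mode(quant_gb: float, ram_gb: float, vram_gb: float) -> str:
    """Rule of thumb: RAM + VRAM >= quant size => fast in-memory inference."""
    if ram_gb + vram_gb >= quant_gb:
        return "fast (fits in RAM + VRAM)"
    return "slow (llama.cpp offloading)"

# Kimi K2 Dynamic 1.8-bit GGUF is 245GB
print(inference_mode(245, ram_gb=256, vram_gb=24))  # fast
print(inference_mode(245, ram_gb=64, vram_gb=24))   # slow
```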

"We've teamed up with @GoogleDeepMind for a challenge with a $10,000 Unsloth prize 🦥 Show off your best fine-tuned Gemma 3n model using Unsloth, optimized for an impactful task. The entire hackathon has $150,000 in prizes to be won. Kaggle notebook:"
@UnslothAI on X 2025-07-02 14:17:20 UTC · 27.1K followers, 46.6K engagements

"We made step-by-step guides to fine-tune & run every single LLM 🦥 What you'll learn: technical analysis + bug fixes explained for each model; best practices & optimal settings; how to fine-tune with our notebooks; a directory of model variants."
@UnslothAI on X 2025-07-08 13:47:08 UTC · 27.1K followers, 50.1K engagements