
@UnslothAI Unsloth AI

Unsloth AI posts on X most often about 1m, k2, llamacpp, and llm. They currently have XXXXXX followers and XX posts still getting attention, totaling XXXXX engagements in the last XX hours.

Engagements: XXXXX
Mentions: XX
Followers: XXXXXX
CreatorRank: XXXXXXX

Social Influence

Social topic influence: 1m #890, k2 16.67%, llamacpp #1, llm 16.67%, agentic 8.33%, faster 8.33%, ollama 8.33%, inference XXXX%

Top accounts mentioned by or mentioning @UnslothAI: @corbtt @danielhanchen @alibabaqwen @foley2k2 @googledeepmind @reach_vb @lightspeedindia @christiankasiml @pehdrew_ @levelsio @1littlecoder @secretailabs @far__away__ @vectro @evandro_zanatta @fr3akpl @rapp_dore_ @data_guy_vikas @maximiliendech1 @erla_ndpg

Top Social Posts


Top posts by engagements in the last XX hours

"You can now run Qwen3-235B-A22B-2507 with our Dynamic 2-bit GGUFs The full 250GB model gets reduced to just 88GB (-65% size). Achieve X tokens/s on 89GB unified memory or 80GB RAM + 8GB VRAM. GGUFs:"
@UnslothAI Avatar @UnslothAI on X 2025-07-22 12:31:13 UTC 27.1K followers, 35.9K engagements
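
As a hedged illustration of how a Dynamic 2-bit GGUF like this might be fetched before running it locally: the Python sketch below uses huggingface_hub, and the repo id and quant filename pattern are assumptions rather than values taken from the post, so check the linked GGUF page for the exact names.

```python
# Minimal sketch: download only the Dynamic 2-bit shards of the quant.
# Repo id and filename pattern are assumptions -- verify them on Hugging Face.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="unsloth/Qwen3-235B-A22B-Instruct-2507-GGUF",  # assumed repo name
    allow_patterns=["*UD-Q2_K_XL*"],  # assumed pattern for the ~88GB dynamic 2-bit files
    local_dir="Qwen3-235B-A22B-Instruct-2507-GGUF",
)
```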

"Run Qwen3-Coder with our Dynamic 2-bit GGUFs We shrank the 480B parameter model to just 182GB (down from 512GB). Also run with 1M context length. Achieve X tokens/s on 182GB unified memory or 158GB RAM + 24GB VRAM. Qwen3-Coder-480B-A35B GGUFs:"
@UnslothAI Avatar @UnslothAI on X 2025-07-23 01:26:26 UTC 27.1K followers, 23.6K engagements
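
A hedged sketch of how such a sharded GGUF could then be loaded with the llama-cpp-python bindings, splitting the model between VRAM and system RAM; the shard filename, layer count, and context size below are assumptions to tune for your hardware.

```python
# Minimal sketch: load the first GGUF shard and offload a few layers to GPU;
# the remaining layers stay in system RAM via llama.cpp's memory mapping.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-Coder-480B-A35B-Instruct-UD-Q2_K_XL-00001-of-00004.gguf",  # assumed shard name
    n_ctx=32768,      # practical context window; the advertised 1M context needs far more memory
    n_gpu_layers=20,  # offload what fits in ~24GB VRAM
)
out = llm("Write a binary search function in Python.", max_tokens=128)
print(out["choices"][0]["text"])
```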

"@Alibaba_Qwen Congrats guys on the release ✨ We're working on Dynamic quants & GGUFs so the community can run it locally 🤗"
@UnslothAI on X 2025-07-21 17:58:34 UTC 27.1K followers, 12.2K engagements

"@Alibaba_Qwen Congrats guys on another epic release We're uploading Dynamic GGUFs and one with 1M context length so you guys can run it locally 🦥"
@UnslothAI Avatar @UnslothAI on X 2025-07-22 21:20:45 UTC 27.1K followers, 16.3K engagements

"You can now run Kimi K2 locally with our Dynamic 1.8-bit GGUFs We shrank the full 1.1TB model to just 245GB (-80% size reduction). The 2-bit XL GGUF performs exceptionally well on coding & passes all our code tests Guide: GGUFs:"
@UnslothAI Avatar @UnslothAI on X 2025-07-14 15:27:02 UTC 27.1K followers, 127.6K engagements
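
The size-reduction claim in that post checks out roughly as follows (a toy calculation using only the sizes quoted above):

```python
# Back-of-the-envelope check of the "-80% size reduction" claim.
full_size_gb = 1100.0   # ~1.1TB original Kimi K2 weights
quant_size_gb = 245.0   # Dynamic 1.8-bit GGUF
print(f"Reduction: {1 - quant_size_gb / full_size_gb:.0%}")  # ~78%, i.e. roughly -80%
```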

"You can now run Qwen3-235B-A22B-Thinking-2507 with our Dynamic 2-bit GGUFs The full 250GB model gets reduced to just 87GB (-65% size). Achieve X tokens/s on 88GB unified memory or 80GB RAM + 8GB VRAM. GGUFs:"
@UnslothAI Avatar @UnslothAI on X 2025-07-25 10:34:58 UTC 27.1K followers, 6542 engagements

"You can now fine-tune TTS models with Unsloth Train run and save models like Sesame-CSM and OpenAI's Whisper locally with our free notebooks. Unsloth makes TTS training 1.5x faster with XX% less VRAM. GitHub: Docs & Notebooks:"
@UnslothAI Avatar @UnslothAI on X 2025-05-15 16:38:23 UTC 27.1K followers, 125.6K engagements
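
A hedged sketch of what that TTS fine-tuning flow might look like in recent Unsloth versions; the class and checkpoint names are assumptions, and the linked notebooks are the authoritative reference.

```python
# Minimal sketch, assuming Unsloth exposes FastModel and hosts a Sesame-CSM
# checkpoint under "unsloth/csm-1b" -- both names are assumptions to verify.
from unsloth import FastModel

model, processor = FastModel.from_pretrained(
    model_name="unsloth/csm-1b",  # assumed checkpoint id
    max_seq_length=2048,
)
model = FastModel.get_peft_model(model, r=16, lora_alpha=16)  # attach LoRA adapters
# ...build an audio+text dataset and train with a TRL trainer, as in the notebooks...
model.save_pretrained("csm-lora")  # save the LoRA adapters locally
```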

"A Complete Guide to Fine-tuning LLMs in XX mins Learn to: Choose the correct model & training method (LoRA FFT GRPO) Build Datasets & Chat templates Train with Unsloth notebooks Run & deploy your LLM in llama.cpp Ollama & Open WebUI Docs:"
@UnslothAI Avatar @UnslothAI on X 2025-07-16 13:53:08 UTC 27.1K followers, 28.8K engagements
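
Those steps roughly correspond to the standard Unsloth notebook pattern. Below is a hedged LoRA sketch: the model name, toy dataset, and hyperparameters are placeholders, and trl argument names vary between versions.

```python
# Minimal LoRA fine-tuning sketch in the style of the Unsloth notebooks.
from unsloth import FastLanguageModel
from datasets import Dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",  # placeholder model
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(model, r=16, lora_alpha=16)  # LoRA, not FFT

# Tiny in-memory dataset with a simple chat-style template (placeholder data).
dataset = Dataset.from_dict({"text": [
    "### Instruction:\nName one llama fact.\n### Response:\nLlamas hum to communicate.",
    "### Instruction:\nWhat is a GGUF?\n### Response:\nA quantized model file format used by llama.cpp.",
]})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    args=TrainingArguments(per_device_train_batch_size=2, max_steps=10,
                           learning_rate=2e-4, output_dir="outputs"),
)
trainer.train()

# Export to GGUF so the result can be served with llama.cpp / Ollama.
model.save_pretrained_gguf("model_gguf", tokenizer, quantization_method="q4_k_m")
```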

"For fast inference of 5+ tokens/s try to have your RAM + VRAM combined = the size of quant (e.g. 256GB). If not the model will still run with llama.cpp offloading but be slower. Kimi K2 GGUF:"
@UnslothAI Avatar @UnslothAI on X 2025-07-14 16:39:23 UTC 27.1K followers, 5511 engagements
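
The sizing rule in that post amounts to a simple check; the numbers below are example values, not measurements.

```python
# If RAM + VRAM covers the quant size, expect ~5+ tokens/s; otherwise llama.cpp
# still runs via offloading / memory mapping, just more slowly.
quant_size_gb = 245          # e.g. the Kimi K2 Dynamic 1.8-bit GGUF above
ram_gb, vram_gb = 256, 24    # example machine
if ram_gb + vram_gb >= quant_size_gb:
    print("Quant fits in RAM + VRAM: roughly 5+ tokens/s is realistic")
else:
    print("Quant exceeds RAM + VRAM: it will still run, but slower")
```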

"Weve teamed up with @GoogleDeepMind for a challenge with a $10000 Unsloth prize 🦥 Show off your best fine-tuned Gemma 3n model using Unsloth optimized for an impactful task. The entire hackathon has $150000 prizes to be won Kaggle notebook:"
@UnslothAI Avatar @UnslothAI on X 2025-07-02 14:17:20 UTC 27.1K followers, 46.7K engagements

"We made step-by-step guides to Fine-tune & Run every single LLM 🦥 What you'll learn: Technical analysis + Bug fixes explained for each model Best practices & optimal settings How to fine-tune with our notebooks Directory of model variants"
@UnslothAI Avatar @UnslothAI on X 2025-07-08 13:47:08 UTC 27.1K followers, 50.1K engagements