
@UnslothAI (Unsloth AI)

Unsloth AI posts on X most often about 1m, llamacpp, llm, and agentic. They currently have XXXXXX followers and XX posts still receiving attention, totaling XXXXXX engagements in the last XX hours.

Engagements: XXXXXX

Mentions: XX

Followers: XXXXXX

CreatorRank: XXXXXXX

Social Influence


Social topic influence: 1m #359, llamacpp #1, llm 18.18%, agentic 9.09%, faster 9.09%, ollama 9.09%, k2 9.09%, inference XXXX%

Top accounts mentioned by or mentioning @UnslothAI: @corbtt @danielhanchen @kp_meister @alibabaqwen @foley2k2 @reach_vb @mintisan @ahmetkdev @googledeepmind @secretailabs @data_guy_vikas @christiankasiml @rapp_dore_ @fr3akpl @erla_ndpg @maximiliendech1 @vectro @levelsio @far__away__ @evandro_zanatta

Top Social Posts


Top posts by engagements in the last XX hours

"You can now run Qwen3-235B-A22B-2507 with our Dynamic 2-bit GGUFs. The full 250GB model gets reduced to just 88GB (-65% size). Achieve X tokens/s on 89GB unified memory or 80GB RAM + 8GB VRAM. GGUFs:"
@UnslothAI on X · 2025-07-22 12:31:13 UTC · 27K followers, 34.4K engagements

"Run Qwen3-Coder with our Dynamic 2-bit GGUFs. We shrank the 480B parameter model to just 182GB (down from 512GB). Also run with 1M context length. Achieve X tokens/s on 182GB unified memory or 158GB RAM + 24GB VRAM. Qwen3-Coder-480B-A35B GGUFs:"
@UnslothAI on X · 2025-07-23 01:26:26 UTC · 27K followers, 18.5K engagements
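As a sanity check on the size reductions quoted in the two posts above, the arithmetic works out with the stated full and quantized sizes (250GB to 88GB, and 512GB to 182GB). This is a minimal sketch; the function name is illustrative, not part of any Unsloth API:

```python
# Rough arithmetic check of the size reductions quoted in the posts above.
# Sizes in GB are taken from the posts; the helper name is illustrative.

def size_reduction_pct(full_gb: float, quant_gb: float) -> int:
    """Percent reduction going from the full model to the quantized GGUF."""
    return round((1 - quant_gb / full_gb) * 100)

# Qwen3-235B-A22B-2507: 250 GB -> 88 GB Dynamic 2-bit GGUF
print(size_reduction_pct(250, 88))   # -> 65, matching the "-65% size" claim

# Qwen3-Coder-480B-A35B: 512 GB -> 182 GB Dynamic 2-bit GGUF
print(size_reduction_pct(512, 182))  # -> 64
```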

"@Alibaba_Qwen Congrats guys on the release ✨ We're working on Dynamic quants & GGUFs so the community can run it locally 🤗"
@UnslothAI on X · 2025-07-21 17:58:34 UTC · 27K followers, 12K engagements

"@Alibaba_Qwen Congrats guys on another epic release. We're uploading Dynamic GGUFs and one with 1M context length so you guys can run it locally 🦥"
@UnslothAI on X · 2025-07-22 21:20:45 UTC · 27K followers, 14.4K engagements

"You can now run Kimi K2 locally with our Dynamic 1.8-bit GGUFs. We shrank the full 1.1TB model to just 245GB (-80% size reduction). The 2-bit XL GGUF performs exceptionally well on coding & passes all our code tests. Guide: GGUFs:"
@UnslothAI on X · 2025-07-14 15:27:02 UTC · 27K followers, 127.1K engagements

"You can now fine-tune TTS models with Unsloth. Train, run, and save models like Sesame-CSM and OpenAI's Whisper locally with our free notebooks. Unsloth makes TTS training 1.5x faster with XX% less VRAM. GitHub: Docs & Notebooks:"
@UnslothAI on X · 2025-05-15 16:38:23 UTC · 27K followers, 125.6K engagements

"A Complete Guide to Fine-tuning LLMs in XX mins. Learn to: choose the correct model & training method (LoRA, FFT, GRPO); build datasets & chat templates; train with Unsloth notebooks; run & deploy your LLM in llama.cpp, Ollama & Open WebUI. Docs:"
@UnslothAI on X · 2025-07-16 13:53:08 UTC · 27K followers, 28.4K engagements

"For fast inference of 5+ tokens/s, try to have your combined RAM + VRAM equal the size of the quant (e.g. 256GB). If not, the model will still run with llama.cpp offloading, but slower. Kimi K2 GGUF:"
@UnslothAI on X · 2025-07-14 16:39:23 UTC · 27K followers, 5440 engagements
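The rule of thumb in the post above (combined RAM + VRAM should roughly match the quant file size for 5+ tokens/s) can be sketched as a quick check. The function name and the example memory figures are illustrative; only the 245GB Kimi K2 quant size comes from the earlier post:

```python
# Quick check of the rule of thumb above: for fast inference (5+ tokens/s),
# total RAM + VRAM should roughly cover the quant file size; otherwise
# llama.cpp can still run the model via offloading, just slower.

def can_run_fast(ram_gb: float, vram_gb: float, quant_gb: float) -> bool:
    """True if combined memory covers the quantized model size."""
    return ram_gb + vram_gb >= quant_gb

# Kimi K2 Dynamic 1.8-bit GGUF from the earlier post: 245 GB.
# The hardware configurations below are made-up examples.
print(can_run_fast(ram_gb=256, vram_gb=24, quant_gb=245))  # -> True
print(can_run_fast(ram_gb=64, vram_gb=24, quant_gb=245))   # -> False (offloads, slower)
```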

"We've teamed up with @GoogleDeepMind for a challenge with a $10,000 Unsloth prize 🦥 Show off your best fine-tuned Gemma 3n model using Unsloth, optimized for an impactful task. The entire hackathon has $150,000 in prizes to be won. Kaggle notebook:"
@UnslothAI on X · 2025-07-02 14:17:20 UTC · 27K followers, 46.5K engagements

"We made step-by-step guides to Fine-tune & Run every single LLM 🦥 What you'll learn: Technical analysis + Bug fixes explained for each model Best practices & optimal settings How to fine-tune with our notebooks Directory of model variants"
@UnslothAI on X · 2025-07-08 13:47:08 UTC · 27K followers, 50K engagements