[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

@UnslothAI Unsloth AI

Unsloth AI posts on X most frequently about k2, llamacpp, llm, and faster. They currently have XXXXXX followers and XX recent posts still drawing attention, totaling XXXXXX engagements in the last XX hours.

Engagements: XXXXXX

Mentions: X

Followers: XXXXXX

CreatorRank: XXXXXXX

Social Influence

Social topic influence: k2 #84, llamacpp #1, llm #162, faster 10%, ollama 10%, inference 10%, bug XX%

Top accounts mentioned by or mentioning @UnslothAI: @corbtt @danielhanchen @kp_meister @alibabaqwen @foley2k2 @reach_vb @mintisan @ahmetkdev @googledeepmind @erla_ndpg @far__away__ @fr3akpl @maximiliendech1 @rapp_dore_ @data_guy_vikas @levelsio @1littlecoder @dittmannaxel @alby13 @the_real_paolo

Top Social Posts


Top posts by engagements in the last XX hours

"You can now run Qwen3-235B-A22B-2507 with our Dynamic 2-bit GGUFs The full 250GB model gets reduced to just 88GB (-65% size). Achieve X tokens/s on 89GB unified memory or 80GB RAM + 8GB VRAM. GGUFs:"
@UnslothAI on X 2025-07-22 12:31:13 UTC 26.9K followers, 22.1K engagements
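As a sanity check on the size figures in the post above, the claimed reduction follows directly from the full and quantized sizes. The helper below is purely illustrative arithmetic, not part of any Unsloth or llama.cpp API:

```python
def reduction_pct(full_gb: float, quant_gb: float) -> int:
    """Percent size reduction of a quantized model vs. the full model."""
    return round((full_gb - quant_gb) / full_gb * 100)

# Qwen3-235B-A22B-2507: 250 GB full model -> 88 GB Dynamic 2-bit GGUF
print(reduction_pct(250, 88))  # -> 65, matching the ~65% reduction claimed
```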

"@Alibaba_Qwen Congrats guys on the release ✨ We're working on Dynamic quants & GGUFs so the community can run it locally 🤗"
@UnslothAI on X 2025-07-21 17:58:34 UTC 26.9K followers, 10.8K engagements

"@Alibaba_Qwen Congrats guys on another epic release We're uploading Dynamic GGUFs and one with 1M context length so you guys can run it locally 🦥"
@UnslothAI on X 2025-07-22 21:20:45 UTC 26.9K followers, 2491 engagements

"You can now run Kimi K2 locally with our Dynamic 1.8-bit GGUFs We shrank the full 1.1TB model to just 245GB (-80% size reduction). The 2-bit XL GGUF performs exceptionally well on coding & passes all our code tests Guide: GGUFs:"
@UnslothAI on X 2025-07-14 15:27:02 UTC 26.9K followers, 126.8K engagements

"You can now fine-tune TTS models with Unsloth Train run and save models like Sesame-CSM and OpenAI's Whisper locally with our free notebooks. Unsloth makes TTS training 1.5x faster with XX% less VRAM. GitHub: Docs & Notebooks:"
@UnslothAI on X 2025-05-15 16:38:23 UTC 26.9K followers, 125.5K engagements

"A Complete Guide to Fine-tuning LLMs in XX mins Learn to: Choose the correct model & training method (LoRA FFT GRPO) Build Datasets & Chat templates Train with Unsloth notebooks Run & deploy your LLM in llama.cpp Ollama & Open WebUI Docs:"
@UnslothAI on X 2025-07-16 13:53:08 UTC 26.9K followers, 28.1K engagements

"For fast inference of 5+ tokens/s try to have your RAM + VRAM combined = the size of quant (e.g. 256GB). If not the model will still run with llama.cpp offloading but be slower. Kimi K2 GGUF:"
@UnslothAI on X 2025-07-14 16:39:23 UTC 26.9K followers, 5373 engagements
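The rule of thumb in the post above (RAM + VRAM combined should cover the quant's size for fast inference, otherwise llama.cpp offloading still works but slower) can be sketched as a quick check. The function name and structure are illustrative assumptions, not any real llama.cpp or Unsloth API:

```python
def can_run_fast(ram_gb: float, vram_gb: float, quant_size_gb: float) -> bool:
    """Heuristic from the post: if RAM + VRAM covers the quant size,
    expect ~5+ tokens/s; otherwise the model still runs via llama.cpp
    offloading, just more slowly."""
    return ram_gb + vram_gb >= quant_size_gb

# 245 GB Kimi K2 1.8-bit quant on a 256 GB RAM machine with no GPU
print(can_run_fast(256, 0, 245))   # -> True: fits, fast inference expected

# 88 GB Qwen3 2-bit quant on 80 GB RAM + 8 GB VRAM (setup from the Qwen3 post)
print(can_run_fast(80, 8, 88))     # -> True: exactly meets the threshold
```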

"Weve teamed up with @GoogleDeepMind for a challenge with a $10000 Unsloth prize 🦥 Show off your best fine-tuned Gemma 3n model using Unsloth optimized for an impactful task. The entire hackathon has $150000 prizes to be won Kaggle notebook:"
@UnslothAI on X 2025-07-02 14:17:20 UTC 26.9K followers, 46.4K engagements

"We made step-by-step guides to Fine-tune & Run every single LLM 🦥 What you'll learn: Technical analysis + Bug fixes explained for each model Best practices & optimal settings How to fine-tune with our notebooks Directory of model variants"
@UnslothAI on X 2025-07-08 13:47:08 UTC 26.9K followers, 49.9K engagements