# @UnslothAI Unsloth AI

Unsloth AI posts on X about llm, k2, kimi, and llamacpp the most. They currently have XXXXXX followers and XX posts still getting attention, totaling XXXXX engagements in the last XX hours.

### Engagements: XXXXX [#](/creator/twitter::1730159888402395136/interactions)

- X Week: XXXXXXX (+180%)
- X Month: XXXXXXX (-XXXX%)
- X Months: XXXXXXXXX (+472%)
- X Year: XXXXXXXXX (+5,113%)

### Mentions: X [#](/creator/twitter::1730159888402395136/posts_active)

- X Months: XX (+31%)

### Followers: XXXXXX [#](/creator/twitter::1730159888402395136/followers)

- X Week: XXXXXX (+2.20%)
- X Month: XXXXXX (+5.10%)
- X Months: XXXXXX (+196%)
- X Year: XXXXXX (+794%)

### CreatorRank: XXXXXXX [#](/creator/twitter::1730159888402395136/influencer_rank)

### Social Influence [#](/creator/twitter::1730159888402395136/influence)

---

**Social topic influence:** [llm](/topic/llm), [k2](/topic/k2), [kimi](/topic/kimi), [llamacpp](/topic/llamacpp) #1, [ollama](/topic/ollama), [inference](/topic/inference), [bug](/topic/bug)

### Top Social Posts [#](/creator/twitter::1730159888402395136/posts)

---

Top posts by engagements in the last XX hours:

"You can now run Kimi K2 locally with our Dynamic 1.8-bit GGUFs. We shrank the full 1.1TB model to just 245GB (an 80% size reduction). The 2-bit XL GGUF performs exceptionally well on coding and passes all our code tests. Guide: GGUFs:" [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1944780685409165589), 2025-07-14 15:27:02 UTC, 26.7K followers, 124.8K engagements

"You can now fine-tune TTS models with Unsloth. Train, run and save models like Sesame-CSM and OpenAI's Whisper locally with our free notebooks. Unsloth makes TTS training 1.5x faster with XX% less VRAM. GitHub: Docs & Notebooks:" [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1923055371008213086), 2025-05-15 16:38:23 UTC, 26.7K followers, 125.4K engagements

"A Complete Guide to Fine-tuning LLMs in XX mins. Learn to: choose the correct model & training method (LoRA, FFT, GRPO), build datasets & chat templates, train with Unsloth notebooks, and run & deploy your LLM in llama.cpp, Ollama & Open WebUI. Docs:" [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1945481829206905055), 2025-07-16 13:53:08 UTC, 26.7K followers, 26.9K engagements

"For fast inference of 5+ tokens/s, try to have your RAM + VRAM combined = the size of the quant (e.g. 256GB). If not, the model will still run with llama.cpp offloading, but slower. Kimi K2 GGUF:" [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1944798892811485670), 2025-07-14 16:39:23 UTC, 26.7K followers, 5057 engagements

"Mistral releases Devstral 2507, the best open-source model for coding agents 🔥 The 24B model is now the #1 open LLM on SWE-Bench Verified, scoring XXXX%. Run Devstral-Small-2507 locally on 32GB RAM with our Dynamic quants & fine-tune with Unsloth. GGUFs:" [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1943317113655189557), 2025-07-10 14:31:19 UTC, 26.7K followers, 24.7K engagements

"We've teamed up with @GoogleDeepMind for a challenge with a $10,000 Unsloth prize 🦥 Show off your best fine-tuned Gemma 3n model using Unsloth, optimized for an impactful task. The entire hackathon has $150,000 in prizes to be won. Kaggle notebook:" [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1940414492791468240), 2025-07-02 14:17:20 UTC, 26.7K followers, 46K engagements

"We made step-by-step guides to fine-tune & run every single LLM 🦥 What you'll learn: technical analysis + bug fixes explained for each model, best practices & optimal settings, how to fine-tune with our notebooks, and a directory of model variants." [@UnslothAI](/creator/x/UnslothAI) on [X](/post/tweet/1942581217838211543), 2025-07-08 13:47:08 UTC, 26.7K followers, 49.4K engagements
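The Kimi K2 and Devstral posts above point to Unsloth's Dynamic GGUF quants hosted on Hugging Face. As a rough sketch of how a single quant is typically fetched (the repo id and file pattern below are assumptions for illustration, not taken from the posts), `huggingface_hub.snapshot_download` can download only the matching shards:

```python
# Illustrative only: repo_id and the quant name pattern are assumed, not
# confirmed by the posts above. allow_patterns limits the download to the
# shards of one quant instead of the whole repository.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="unsloth/Kimi-K2-Instruct-GGUF",  # assumed repository name
    local_dir="Kimi-K2-Instruct-GGUF",
    allow_patterns=["*UD-TQ1_0*"],            # assumed pattern for the ~1.8-bit dynamic quant
)
```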
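The RAM + VRAM rule quoted above amounts to fitting the chosen quant across GPU and system memory, with llama.cpp offloading whatever does not fit on the GPU. Below is a minimal sketch of that setup through the `llama-cpp-python` bindings; the GGUF path and layer split are placeholders, not values from the posts.

```python
# Sketch of partial GPU offload with llama-cpp-python (a wrapper around
# llama.cpp). The GGUF filename and n_gpu_layers value are placeholders:
# raise n_gpu_layers until VRAM is full; the remaining layers stay in RAM.
# Per the post, keep RAM + VRAM >= quant size (e.g. ~256GB for the 245GB
# Kimi K2 quant) to reach 5+ tokens/s; it still runs below that, just slower.
from llama_cpp import Llama

llm = Llama(
    model_path="./Kimi-K2-Instruct-UD-TQ1_0.gguf",  # placeholder path
    n_gpu_layers=20,   # layers offloaded to the GPU; 0 = CPU only, -1 = all
    n_ctx=8192,        # context window; larger contexts need more memory
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF quant is."}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```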