[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

@UnslothAI (Unsloth AI)

Unsloth AI posts on X most often about AI, VRAM, inference, and GPUs. They currently have XXXXXX followers and XXX posts still receiving attention, totaling XXXXXXX engagements in the last XX hours.

Engagements: XXXXXXX

Mentions: X

Followers: XXXXXX

CreatorRank: XXXXXXX

(Line charts for each metric omitted.)

Social Influence

Social category influence: automotive brands

Social topic influence: ai #2033, vram #1, inference, gpus, solve, environment, outperform, rl, context window, smart

Top Social Posts

Top posts by engagements in the last XX hours

"You can now run FP8 reinforcement learning on consumer GPUs. Try DeepSeek-R1's FP8 GRPO at home using only a 5GB GPU. Qwen3-1.7B fits in 5GB VRAM. We collabed with PyTorch to make FP8 RL inference XXX faster. Unsloth: XX% less VRAM, XX longer context"
X Link 2025-11-25T16:37Z 36.6K followers, 142.1K engagements
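The GRPO mentioned in this post scores a group of sampled completions and normalizes each reward against its own group, so no learned value function is needed. A minimal sketch of that normalization step (illustrative only; not Unsloth's or PyTorch's API):

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: center and scale each completion's
    reward against the group it was sampled with."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four sampled completions for one prompt, scored by a reward function.
advs = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Completions scoring above the group mean get positive advantages and are reinforced; those below get negative ones.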

"You can now train Mistral Ministral X with reinforcement learning in our free notebook. You'll GRPO the model to solve sudoku autonomously. Learn about our new reward functions, RL environment & reward hacking. Blog: Notebook:"
X Link 2025-12-04T15:01Z 36.6K followers, 40.2K engagements
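The post's reward functions aren't shown here, but a reward for sudoku RL can be as simple as scoring how many Sudoku constraints a proposed grid satisfies. A hypothetical shaped reward along those lines (names and shaping are assumptions, not Unsloth's notebook code):

```python
def sudoku_reward(grid):
    """Hypothetical shaped reward: fraction of the 27 Sudoku units
    (9 rows, 9 columns, 9 boxes) containing no repeated nonzero
    digit. 0 marks an empty cell; a fully valid grid scores 1.0."""
    def consistent(cells):
        filled = [c for c in cells if c != 0]
        return len(filled) == len(set(filled))

    units = [grid[r] for r in range(9)]                          # rows
    units += [[grid[r][c] for r in range(9)] for c in range(9)]  # columns
    units += [[grid[br + i][bc + j] for i in range(3) for j in range(3)]
              for br in range(0, 9, 3) for bc in range(0, 9, 3)]  # 3x3 boxes
    return sum(consistent(u) for u in units) / len(units)

reward = sudoku_reward([[0] * 9 for _ in range(9)])  # empty grid: vacuously 1.0
```

A fractional score like this gives the policy a gradient toward valid grids, but it is also exactly the kind of reward the post's "reward hacking" warning applies to (an empty grid already scores 1.0).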

"Can a 1-bit or 3-bit quantized model outperform GPT-4.1 or Claude-Opus-4? Yes. Today we're excited to show how LLMs like DeepSeek-V3.1 can be quantized to just 1-bit or 3-bit and still beat SOTA models like Claude-Opus-4 (thinking) on Aider Polyglot. Details and blog below"
X Link 2025-09-10T15:21Z 36.6K followers, 160.7K engagements
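To make "3-bit" concrete: each weight is mapped to one of 2^3 = 8 integer levels and rescaled. This toy round-trip shows only that basic level-mapping; Unsloth's dynamic quants are far more sophisticated (importance-aware, mixed per-layer bit widths):

```python
def quantize_3bit(weights):
    """Toy symmetric 3-bit quantization: snap each weight to one of
    the 8 integer levels in [-4, 3], then dequantize with the scale."""
    scale = max(abs(w) for w in weights) / 4 or 1.0  # avoid 0 for all-zero input
    q = [max(-4, min(3, round(w / scale))) for w in weights]
    return [v * scale for v in q], scale

deq, scale = quantize_3bit([0.5, -1.0, 0.25, 1.0])
```

Note how the largest positive weight gets clipped to level 3 (1.0 comes back as 0.75): choosing which weights can afford that kind of error is where real quantization schemes earn their accuracy.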

"You can now run Qwen3-VL locally 💜 Run the 235B variant for SOTA vision/OCR on 128GB unified memory (dynamic 4-bit). Includes our chat template fixes. Qwen3-VL-2B runs at XX t/s on 4GB RAM. Fine-tune & RL via Unsloth free notebooks & export to GGUF"
X Link 2025-10-31T13:31Z 36.6K followers, 91.9K engagements

"You can now do 500K context length fine-tuning with Unsloth. Train gpt-oss-20b to extend its context window to 530K on 80GB VRAM & 750K+ on 192GB, with no accuracy loss. Unsloth's new algorithms + Tiled MLP = XX% less VRAM & 6x more context. Blog + Notebook:"
X Link 2025-12-01T14:45Z 36.6K followers, 40.5K engagements
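The post doesn't define "Tiled MLP", but the underlying idea is that an MLP acts on each token independently, so the sequence can be processed in tiles and only a tile's worth of hidden activations needs to exist at once. A toy sketch under that assumption (scalar tokens, illustrative names):

```python
def mlp(x):
    """Toy per-token MLP: up-project to hidden size 2, ReLU, down-project."""
    h = [max(0.0, 2.0 * x + 1.0), max(0.0, -x)]
    return h[0] - h[1]

def tiled_mlp(tokens, tile=4):
    """Apply the MLP tile by tile: hidden activations are allocated
    per tile rather than for the whole sequence, bounding peak memory."""
    out = []
    for i in range(0, len(tokens), tile):
        out.extend(mlp(t) for t in tokens[i:i + tile])
    return out
```

The output is identical to running the MLP over the full sequence; only the peak activation footprint changes, which is what lets the context window grow.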

"Mistral releases Ministral X, their new reasoning and instruct models 🔥 Ministral X comes in 3B, 8B, and 14B with vision support and best-in-class performance. Run the 14B models locally with 24GB RAM. Guide + Notebook: GGUFs:"
X Link 2025-12-02T15:17Z 36.6K followers, 79.8K engagements

"You can now train LLMs X faster with no accuracy loss via our new RoPE and MLP kernels. Our Triton kernels plus smart auto packing deliver X faster training & XX% less VRAM vs optimized FA3 setups. Train Qwen3-4B 3x faster on just 3.9GB VRAM. Blog:"
X Link 2025-12-10T14:41Z 36.6K followers, 561.2K engagements
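The kernels themselves aren't shown in the post, but the math a fused RoPE kernel implements is just a position-dependent rotation of each (even, odd) pair of a query/key vector. A plain-Python reference of that rotation (real kernels fuse it in Triton; this only shows the formula):

```python
import math

def rope(vec, pos, theta=10000.0):
    """Rotary position embedding: rotate each (even, odd) pair of the
    vector by an angle that depends on the token position `pos`."""
    d = len(vec)
    out = []
    for i in range(0, d, 2):
        angle = pos * theta ** (-i / d)
        c, s = math.cos(angle), math.sin(angle)
        x, y = vec[i], vec[i + 1]
        out += [x * c - y * s, x * s + y * c]
    return out
```

Because each pair is only rotated, the vector's norm is preserved; the speedup in a fused kernel comes from avoiding the extra memory traffic of materializing the sin/cos tables separately.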