Unsloth AI (@UnslothAI)

Unsloth AI posts on X most often about OpenAI, blogs, accuracy, and IBM. They currently have XXXXXX followers, and their XX posts still getting attention total XXXXXX engagements in the last XX hours.

Engagements: XXXXXX

Mentions: XX

Followers: XXXXXX

CreatorRank: XXXXXXX

Social Influence


Social category influence: technology brands XXXXX%, stocks #5298

Social topic influence: open ai #305, blog 3.85%, accuracy 3.85%, ibm #112, collab #175, github 1.92%, micro 1.92%, applications 1.92%, agentic 1.92%, $googl XXXX%

Top accounts mentioned or mentioned by: @danielhanchen @alibabaqwen @reach_vb @josephsarnecki @corbtt @glenncameronjr @mrgshum @huggingface @maziyarpanahi @cognitivecompai @prince_canuma @_feynon @kp_meister @openai @amd @dkundel @hesamation @roninroamer @kuittinenpetri @promptinjection

Top assets mentioned: IBM (IBM), Alphabet Inc Class A (GOOGL)

Top Social Posts


Top posts by engagements in the last XX hours

"Can a 1-bit or 3-bit quantized model outperform GPT-4.1 or Claude-Opus-4 Yes Today we're excited to show how LLMs like DeepSeek-V3.1 can be quantized to just 1-bit or 3-bit and still beat SOTA models like Claude-Opus-4 (thinking) on Aider Polyglot. Details and blog below"
X Link @UnslothAI 2025-09-10T15:21Z 32.7K followers, 141.9K engagements
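
For readers who want to try one of these quantized checkpoints, a minimal sketch of fetching a dynamic GGUF from the Hugging Face Hub follows; the repo and file names are illustrative assumptions, not confirmed Unsloth uploads.

```python
# Download one file of a dynamic low-bit GGUF from the Hugging Face Hub.
# Repo and file names below are assumptions; check Unsloth's HF page for
# the actual DeepSeek-V3.1 uploads.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="unsloth/DeepSeek-V3.1-GGUF",    # assumed repo name
    filename="DeepSeek-V3.1-UD-IQ1_S.gguf",  # assumed 1-bit dynamic quant
)
print("GGUF saved to:", path)
```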

"You can now fine-tune TTS models with Unsloth Train run and save models like Sesame-CSM and OpenAI's Whisper locally with our free notebooks. Unsloth makes TTS training 1.5x faster with XX% less VRAM. GitHub: Docs & Notebooks:"
X Link @UnslothAI 2025-05-15T16:38Z 32.7K followers, 127K engagements
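
A rough sketch of what that fine-tuning flow might look like, assuming the TTS checkpoints use the same FastModel/LoRA API as Unsloth's language models; the model name and arguments below are assumptions, and the official notebooks are the reference.

```python
# Sketch of an Unsloth LoRA fine-tuning setup applied to a TTS checkpoint.
# Names and arguments are assumed to mirror Unsloth's language-model API.
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/csm-1b",  # assumed Sesame-CSM checkpoint id
    load_in_4bit=False,           # small model; full precision is feasible
)
model = FastModel.get_peft_model(
    model,
    r=16,                         # LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# From here, train on paired (text, audio) examples with a standard
# transformers/TRL trainer, as the free notebooks do.
```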

"You can now train Vision LLMs with Reinforcement Learning in our free notebook Unsloth VLM RL via GRPO: XXX faster XX% less VRAM XX longer context & no accuracy loss. Guide: GitHub: Qwen2.5-VL Colab:"
X Link @UnslothAI 2025-09-16T16:24Z 32.7K followers, 142.1K engagements
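
GRPO here refers to the Group Relative Policy Optimization trainer in TRL; a minimal text-only sketch of that loop follows. The reward function is a toy placeholder, and the vision wiring plus Unsloth's speed patches from the notebook are omitted.

```python
# Minimal GRPO training loop with TRL. The reward is a toy placeholder
# that prefers short completions; the real notebook scores VLM outputs.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: shorter answers score higher.
    return [-float(len(c)) for c in completions]

dataset = Dataset.from_dict({"prompt": ["Describe the image briefly."] * 32})

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # small stand-in, not the VLM
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-demo", num_generations=4),
    train_dataset=dataset,
)
trainer.train()
```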

"IBM releases Granite-4.0 their new series of open models Run the 'Micro' 3B model on 4GB RAM or 'Small' 32B on 40GB RAM. Granite-4.0 excels at agentic tasks doc analysis RAG edge AI applications & more Dynamic GGUFs: Guide:"
X Link @UnslothAI 2025-10-02T14:14Z 32.7K followers, 42.4K engagements
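
One way to run such a GGUF locally is llama-cpp-python; a sketch follows, with assumed repo and file names (check the actual Unsloth uploads).

```python
# Download and run a Granite-4.0 GGUF with llama-cpp-python.
# The repo id and filename pattern are assumptions.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/granite-4.0-micro-GGUF",  # assumed repo name
    filename="*Q4_K_M.gguf",                   # glob for a 4-bit quant
    n_ctx=8192,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize this document: ..."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```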

"We made a free notebook that fine-tunes IBM Granite XXX into a powerful support agent This agent will enable real-time analysis & solving of customer interactions. You'll also learn how to train models using data from Google Sheets. Colab Notebook:"
X Link @UnslothAI 2025-10-02T15:37Z 32.7K followers, 49.8K engagements
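
The Google Sheets step presumably relies on pulling the sheet as tabular data; one plausible route is the CSV export URL of a shared sheet. The sheet id and column names below are made up for illustration.

```python
# Load a publicly shared Google Sheet as fine-tuning data via CSV export.
# SHEET_ID and the column names are illustrative placeholders.
import pandas as pd
from datasets import Dataset

SHEET_ID = "your-sheet-id"
url = f"https://docs.google.com/spreadsheets/d/{SHEET_ID}/export?format=csv"

df = pd.read_csv(url)
# Map assumed columns onto a prompt/completion schema for training.
dataset = Dataset.from_pandas(
    df.rename(columns={"customer_message": "prompt", "agent_reply": "completion"})
)
print(dataset[0])
```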

"Unsloth now has a Docker image 🐳 Train LLMs locally with no setup: just run the image and go. Includes every pre-made Unsloth notebook. Solves dependency or environment issues. Guide:"
X Link @UnslothAI 2025-10-01T13:42Z 32.7K followers, 94.6K engagements
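
Launching such an image from Python can be sketched with the Docker SDK; the image name, port, and mounts below are assumptions rather than the documented invocation.

```python
# Start the (assumed) Unsloth Docker image with GPU access via the docker SDK.
import docker

client = docker.from_env()
container = client.containers.run(
    "unsloth/unsloth",  # assumed image name; see the official guide
    detach=True,
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    ports={"8888/tcp": 8888},  # Jupyter, if the image exposes it
    volumes={"/data": {"bind": "/workspace", "mode": "rw"}},
)
print(container.logs().decode()[:500])  # peek at startup output
```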

"OpenAI shows how gpt-oss can autonomously beat 2048 using reinforcement learning (RL). Training was done locally with Unsloth on NVIDIA DGX Spark. You can also do it free on Colab. 🦥 OpenAI DevDay notebook:"
X Link @UnslothAI 2025-10-09T13:50Z 32.7K followers, 93.4K engagements

"You can now run gpt-oss-120b & 20b locally with our GGUFs 🦥 Run OpenAI's 120b model on 66GB RAM & 20b model on 14GB RAM. Both in original precision. Uploads includes our chat template fixes. Guide: GGUF:"
X Link @UnslothAI 2025-08-05T20:10Z 32.7K followers, 95.2K engagements
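
Running these locally again comes down to a GGUF runtime; a sketch with llama-cpp-python and partial GPU offload follows (the file name and layer count are illustrative).

```python
# Load a local gpt-oss GGUF with partial GPU offload via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="gpt-oss-20b.gguf",  # assumed local file name
    n_ctx=4096,
    n_gpu_layers=20,  # offload some layers to GPU, keep the rest in RAM
)
print(llm("The capital of France is", max_tokens=8)["choices"][0]["text"])
```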

"LoRA in reinforcement learning (RL) can match full-finetuning performance when done right 💡 A new @thinkymachines post shows how using 10x larger learning rates applying LoRA on all layers & more LoRA at rank=1 even works. We're excited to have collaborated on this blog"
X Link @UnslothAI 2025-09-29T20:59Z 32.7K followers, 61.7K engagements
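
The settings named in the post map onto a PEFT configuration roughly like this; a sketch of the described hyperparameters, not the blog's exact recipe.

```python
# LoRA settings as described: every linear layer, tiny rank, and a learning
# rate ~10x larger than typical full-finetuning RL values.
from peft import LoraConfig

lora_config = LoraConfig(
    r=1,                          # rank=1 "even works" per the post
    lora_alpha=32,
    target_modules="all-linear",  # apply LoRA to all linear layers
    task_type="CAUSAL_LM",
)
learning_rate = 1e-5 * 10  # illustrative 10x bump over a common RL value
```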

"You can now train models up to 200B parameters locally on NVIDIA DGX Spark with Unsloth 🦥 Fine-tune RL & deploy OpenAI gpt-oss-120b via our free notebook in 68GB unified memory: Read our step-by-step guide in collab with NVIDIA"
X Link @UnslothAI 2025-10-15T13:43Z 32.7K followers, 33.9K engagements

"Thank you @dkundel from OpenAI and Barath from NVIDIA for the collab. 🥰 Watch Dominik's full gpt-oss presentation:"
X Link @UnslothAI 2025-10-09T14:23Z 32.7K followers, 6242 engagements

"You can now train OpenAI gpt-oss with Reinforcement Learning in our free notebook This notebook automatically creates faster kernels via RL. Unsloth RL achieves the fastest inference & lowest VRAM vs. any setup - X accuracy loss gpt-oss-20b GRPO Colab:"
X Link @UnslothAI 2025-09-26T15:45Z 32.7K followers, 123.4K engagements
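
"Creates faster kernels via RL" implies a reward that benchmarks generated code; a toy version of such a reward function is sketched below. The notebook's real harness would sandbox execution and verify correctness first.

```python
# Toy GRPO-style reward: execute a generated kernel candidate and reward speed.
# For illustration only; real setups sandbox exec() and check outputs first.
import time

def speed_reward(completions, **kwargs):
    rewards = []
    for code in completions:
        try:
            scope = {}
            exec(code, scope)  # candidate is assumed to define kernel(xs)
            start = time.perf_counter()
            scope["kernel"](list(range(1000)))
            rewards.append(1.0 / (time.perf_counter() - start + 1e-9))
        except Exception:
            rewards.append(-1.0)  # penalize code that fails to run
    return rewards
```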