[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]
@deepseek_ai DeepSeek posts on X about inference, token, capabilities, and v3 the most. They currently have XXXXXXX followers and XX posts still getting attention, totaling XXXXXX engagements in the last XX hours.
Social topic influence: inference #28, token #610, capabilities #69, v3 #248, built on 5%, level 5%, core 5%, attention 5%, for all X%
Top accounts mentioned or mentioned by: @grok @deborahrammozes @yentingfu @marcuscohenshit @b2bstrategy2 @bhowardstern @0rdlibrary @kllrbeez @arena @finisprime @nachobritos12 @_hymn @tradexyz @b2bstrategy1 @presidentlin @airesearch12 @fli_org @modenetwork @vitaagentminima @xperimentalunit
Top posts by engagements in the last XX hours
"Introducing DeepSeek-V3.1: our first step toward the agent era 🚀 🧠 Hybrid inference: Think & Non-Think, one model, two modes ⚡ Faster thinking: DeepSeek-V3.1-Think reaches answers in less time vs. DeepSeek-R1-0528 🛠 Stronger agent skills: Post-training boosts tool use and multi-step agent tasks. Try it now: toggle Think/Non-Think via the "DeepThink" button: 1/5"
X Link 2025-08-21T06:33Z 975.2K followers, 2.1M engagements
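The Think/Non-Think toggle described above also maps to model selection over the API. A minimal sketch, assuming the OpenAI-compatible endpoint and the model names "deepseek-reasoner" (Think) and "deepseek-chat" (Non-Think) from DeepSeek's public docs; both are assumptions and may change:

```python
# Hedged sketch: build a chat-completion request that toggles thinking mode
# by switching models. Endpoint URL and model names are assumptions taken
# from public DeepSeek documentation, not from the post itself.
import json
import urllib.request

API_URL = "https://api.deepseek.com/chat/completions"  # assumed endpoint

def build_request(prompt: str, think: bool, api_key: str) -> urllib.request.Request:
    """Build a chat-completion request, picking the thinking model on demand."""
    payload = {
        "model": "deepseek-reasoner" if think else "deepseek-chat",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )
```

Sending is then `urllib.request.urlopen(build_request(...))`, omitted here since it needs a real key.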
"🚀 Introducing DeepSeek-V3.2-Exp, our latest experimental model ✨ Built on V3.1-Terminus, it debuts DeepSeek Sparse Attention (DSA) for faster, more efficient training & inference on long context. 👉 Now live on App, Web, and API. 💰 API prices cut by 50%+ 1/n"
X Link 2025-09-29T10:10Z 975.2K followers, 1.3M engagements
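The sparse-attention idea behind DSA can be illustrated with a toy top-k sketch. This is my illustration of the general principle only; the actual DSA indexer and kernels are described in DeepSeek's materials:

```python
# Toy top-k sparse attention over scalar keys: score everything cheaply,
# then softmax only over the k best keys instead of all of them.
# Illustration only; real DSA selects keys with a learned indexer over vectors.
import math

def sparse_attention_weights(q: float, keys: list, k: int = 2) -> dict:
    """Return softmax weights for only the top-k highest-scoring keys."""
    scores = [q * key for key in keys]                     # toy dot products
    top = sorted(range(len(keys)), key=scores.__getitem__, reverse=True)[:k]
    m = max(scores[i] for i in top)                        # stabilize softmax
    exps = {i: math.exp(scores[i] - m) for i in top}
    z = sum(exps.values())
    return {i: e / z for i, e in exps.items()}             # sparse weight map
```

The payoff is that the softmax and value aggregation touch only k entries per query rather than the full context.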
"🏆 World-Leading Reasoning 🔹 V3.2: Balanced inference vs. length. Your daily driver at GPT-5-level performance. 🔹 V3.2-Speciale: Maxed-out reasoning capabilities. Rivals Gemini-3.0-Pro. 🥇 Gold-Medal Performance: V3.2-Speciale attains gold-level results in IMO, CMO, ICPC World Finals & IOI 2025. 📝 Note: V3.2-Speciale dominates complex tasks but requires higher token usage. Currently API-only (no tool use) to support community evaluation & research. 2/n"
X Link 2025-12-01T11:19Z 975.2K followers, 1.1M engagements
"🤖 Thinking in Tool-Use 🔹 Introduces a new massive agent training data synthesis method covering 1800+ environments & 85k+ complex instructions. 🔹 DeepSeek-V3.2 is our first model to integrate thinking directly into tool-use and also supports tool-use in both thinking and non-thinking modes. 3/n"
X Link 2025-12-01T11:19Z 975.2K followers, 167.2K engagements
"🚀 Introducing DeepSeek-V3 Biggest leap forward yet: ⚡ XX tokens/second (3x faster than V2) 💪 Enhanced capabilities 🛠 API compatibility intact 🌍 Fully open-source models & papers 🐋 1/n"
X Link 2024-12-26T11:26Z 975.2K followers, 7.3M engagements
"🚀 DeepSeek-R1-0528 is here 🔹 Improved benchmark performance 🔹 Enhanced front-end capabilities 🔹 Reduced hallucinations 🔹 Supports JSON output & function calling ✅ Try it now: 🔌 No change to API usage; docs here: 🔗 Open-source weights:"
X Link 2025-05-29T12:11Z 975.2K followers, 1.5M engagements
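The JSON output and function calling mentioned above follow the OpenAI-compatible request shape. A hedged sketch of such a payload; the field names come from public docs and `get_weather` is a made-up example tool, not anything DeepSeek ships:

```python
# Hedged sketch of a request combining JSON output mode with a function tool.
# Field names follow the OpenAI-compatible schema DeepSeek documents;
# "get_weather" is a hypothetical example tool for illustration.
def build_payload(prompt: str) -> dict:
    return {
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": prompt}],
        "response_format": {"type": "json_object"},   # ask for valid JSON back
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }
```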
"⚠ Heads-up to anyone using the DeepSeek-V3.2-Exp inference demo: earlier versions had a RoPE implementation mismatch in the indexer module that could degrade performance. Indexer RoPE expects non-interleaved input; MLA RoPE expects interleaved. Fixed in"
X Link 2025-11-18T03:00Z 975.2K followers, 467.9K engagements
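The mismatch comes down to two conventions for pairing rotary dimensions. A toy sketch of both layouts (my illustration, not DeepSeek's code):

```python
# Two RoPE pairing conventions. "Interleaved" rotates adjacent pairs
# (x0,x1),(x2,x3),...; "non-interleaved" (half-split) rotates (x[i], x[i+d/2]).
# Feeding one layout into the other's kernel silently degrades quality.
import math

def rope_interleaved(x, theta):
    """Rotate adjacent feature pairs by the angles in theta."""
    out = list(x)
    for i, t in enumerate(theta):
        a, b = x[2 * i], x[2 * i + 1]
        out[2 * i] = a * math.cos(t) - b * math.sin(t)
        out[2 * i + 1] = a * math.sin(t) + b * math.cos(t)
    return out

def rope_half_split(x, theta):
    """Rotate half-split pairs (x[i], x[i + d/2]) by the angles in theta."""
    d = len(x) // 2
    out = list(x)
    for i, t in enumerate(theta):
        a, b = x[i], x[i + d]
        out[i] = a * math.cos(t) - b * math.sin(t)
        out[i + d] = a * math.sin(t) + b * math.cos(t)
    return out
```

The two agree only after permuting features (even indices first, then odd), which is exactly the kind of layout conversion the demo fix had to account for.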
"🚀 Launching DeepSeek-V3.2 & DeepSeek-V3.2-Speciale Reasoning-first models built for agents 🔹 DeepSeek-V3.2: Official successor to V3.2-Exp. Now live on App Web & API. 🔹 DeepSeek-V3.2-Speciale: Pushing the boundaries of reasoning capabilities. API-only for now. 📄 Tech report: 1/n"
X Link 2025-12-01T11:19Z 975.2K followers, 4.8M engagements
"💻 API Update 🔹 V3.2: Same usage pattern as V3.2-Exp. 🔹 V3.2-Speciale: Served via a temporary endpoint: Same pricing as V3.2; no tool calls available until Dec 15th 2025 15:59 (UTC). 💡 V3.2 now supports Thinking in Tool-Use, details: 4/n"
X Link 2025-12-01T11:19Z 975.2K followers, 224.9K engagements
"🚀 DeepSeek-R1 is here ⚡ Performance on par with OpenAI-o1 📖 Fully open-source model & technical report 🏆 MIT licensed: Distill & commercialize freely 🌐 Website & API are live now Try DeepThink at today 🐋 1/n"
X Link 2025-01-20T12:29Z 975.2K followers, 12.5M engagements
"🚀 Introducing NSA: A Hardware-Aligned and Natively Trainable Sparse Attention mechanism for ultra-fast long-context training & inference. Core components of NSA: Dynamic hierarchical sparse strategy, Coarse-grained token compression, Fine-grained token selection. 💡 With optimized design for modern hardware, NSA speeds up inference while reducing pre-training costs, without compromising performance. It matches or outperforms Full Attention models on general benchmarks, long-context tasks, and instruction-based reasoning. 📖 For more details, check out our paper here:"
X Link 2025-02-18T07:04Z 975.2K followers, 2.6M engagements
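The compression and selection components named in the post can be sketched in miniature (scalar keys, hand-rolled scoring; the real NSA learns all of this and pairs it with hardware-aligned kernels):

```python
# Toy two-stage NSA-style selection: (1) coarse-grained compression mean-pools
# each block of keys and scores it against the query; (2) fine-grained
# selection keeps only the tokens from the top-scoring blocks.
def nsa_select(query: float, keys: list, block: int = 4, top_blocks: int = 2) -> list:
    """Return the indices of keys kept for fine-grained attention."""
    n_blocks = len(keys) // block
    pooled = [sum(keys[b * block:(b + 1) * block]) / block for b in range(n_blocks)]
    scores = [query * p for p in pooled]          # toy dot product per block
    best = sorted(range(n_blocks), key=scores.__getitem__, reverse=True)[:top_blocks]
    return sorted(i for b in best for i in range(b * block, (b + 1) * block))
```

Scoring happens once per block rather than once per token, which is where the hierarchical strategy saves compute before the fine-grained pass.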
"🚀 Day X of #OpenSourceWeek: 3FS, Thruster for All DeepSeek Data Access. Fire-Flyer File System (3FS) - a parallel file system that utilizes the full bandwidth of modern SSDs and RDMA networks. ⚡ XXX TiB/s aggregate read throughput in a 180-node cluster ⚡ XXXX TiB/min throughput on GraySort benchmark in a 25-node cluster ⚡ 40+ GiB/s peak throughput per client node for KVCache lookup 🧬 Disaggregated architecture with strong consistency semantics ✅ Training data preprocessing, dataset loading, checkpoint saving/reloading, embedding vector search & KVCache lookups for inference in V3/R1 📥 3FS ⛲"
X Link 2025-02-28T01:06Z 975.2K followers, 3.2M engagements
"🚀 Day X of #OpenSourceWeek: One More Thing DeepSeek-V3/R1 Inference System Overview Optimized throughput and latency via: 🔧 Cross-node EP-powered batch scaling 🔄 Computation-communication overlap ⚖ Load balancing Statistics of DeepSeek's Online Service: ⚡ 73.7k/14.8k input/output tokens per second per H800 node 🚀 Cost profit margin XXX% 💡 We hope this week's insights offer value to the community and contribute to our shared AGI goals. 📖 Deep Dive:"
X Link 2025-03-01T04:11Z 975.2K followers, 4M engagements
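Taking the quoted per-node figures at face value (73.7k input / 14.8k output tokens per second per H800 node), a quick day-level extrapolation (my arithmetic, not a DeepSeek figure):

```python
# Day-level token totals implied by the per-node throughput quoted above.
SECONDS_PER_DAY = 86_400
in_tps, out_tps = 73_700, 14_800          # tokens/second per H800 node

in_per_day = in_tps * SECONDS_PER_DAY     # input tokens per node per day
out_per_day = out_tps * SECONDS_PER_DAY   # output tokens per node per day
print(in_per_day, out_per_day)            # 6367680000 1278720000
```

That is roughly 6.4B input and 1.3B output tokens per node per day at sustained peak, which puts the quoted cost margin in context.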
"🚀 DeepSeek-V3-0324 is out now 🔹 Major boost in reasoning performance 🔹 Stronger front-end development skills 🔹 Smarter tool-use capabilities ✅ For non-complex reasoning tasks we recommend using V3 just turn off DeepThink 🔌 API usage remains unchanged 📜 Models are now released under the MIT License just like DeepSeek-R1 🔗 Open-source weights:"
X Link 2025-03-25T13:32Z 975.2K followers, 1.6M engagements
"Model Update 🤖 🔹 V3.1 Base: 840B tokens of continued pretraining for long-context extension on top of V3 🔹 Tokenizer & chat template updated; new tokenizer config: 🔗 V3.1 Base Open-source weights: 🔗 V3.1 Open-source weights: 4/5"
X Link 2025-08-21T06:33Z 975.2K followers, 169.9K engagements
"📊 DeepSeek-V3.1-Terminus delivers more stable & reliable outputs across benchmarks compared to the previous version. 👉 Available now on: App / Web / API 🔗 Open-source weights here: Thanks to everyone for your feedback. It drives us to keep improving and refining the experience 🚀 2/2"
X Link 2025-09-22T13:27Z 975.2K followers, 127.6K engagements
"⚡ Efficiency Gains 🤖 DSA achieves fine-grained sparse attention with minimal impact on output quality, boosting long-context performance & reducing compute cost. 📊 Benchmarks show V3.2-Exp performs on par with V3.1-Terminus. 2/n"
X Link 2025-09-29T10:10Z 975.2K followers, 185.5K engagements
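The compute-cost claim is easy to see in rough terms: full attention scores all L×L query-key pairs, while a sparse pattern keeping k keys per query scores only L×k. The k below is a made-up illustration value, not DSA's actual budget:

```python
# Rough score-computation counts at 128K context; k is illustrative only,
# not a DeepSeek parameter.
L = 131_072        # context length (128K tokens)
k = 2_048          # keys kept per query in the sparse pattern (assumed)

dense_scores = L * L        # full attention: every query scores every key
sparse_scores = L * k       # sparse attention: k keys per query
speedup = dense_scores // sparse_scores
print(speedup)              # 64
```

The ratio is simply L / k, so the longer the context, the larger the win for a fixed selection budget.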
"💻 API Update 🎉 Lower costs, same access 💰 DeepSeek API prices drop 50%+ effective immediately. 🔹 For comparison testing, V3.1-Terminus remains available via a temporary API until Oct 15th 2025 15:59 (UTC). Details: 🔹 Feedback welcome: 3/n"
X Link 2025-09-29T10:10Z 975.2K followers, 319.9K engagements
"🛠 Open Source Release 📦 DeepSeek-V3.2 Model: 📦 DeepSeek-V3.2-Speciale Model: 📄 Tech report: 5/n"
X Link 2025-12-01T11:19Z 975.2K followers, 170.8K engagements