[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# @deepseek_ai DeepSeek

DeepSeek posts on X most often about deepseekr1, inference, capabilities, and mit. They currently have XXXXXXX followers and XX posts still receiving attention, totaling XXXXX engagements in the last XX hours.

### Engagements: XXXXX [#](/creator/twitter::1714580962569588736/interactions)

- X Week: XXXXXX (+23%)
- X Month: XXXXXXX (-XX%)
- X Months: XXXXXXXXXX (+949%)
- X Year: XXXXXXXXXX (+5,194%)

### Mentions: XX [#](/creator/twitter::1714580962569588736/posts_active)

- X Months: XX (+96%)
- X Year: XX (+168%)

### Followers: XXXXXXX [#](/creator/twitter::1714580962569588736/followers)

- X Week: XXXXXXX (-XXXX%)
- X Month: XXXXXXX (-XXXX%)
- X Months: XXXXXXX (+1,230%)
- X Year: XXXXXXX (+10,098%)

### CreatorRank: XXXXXXX [#](/creator/twitter::1714580962569588736/influencer_rank)

### Social Influence [#](/creator/twitter::1714580962569588736/influence)

**Social category influence:** [social networks](/list/social-networks), [technology brands](/list/technology-brands)

**Social topic influence:** [deepseekr1](/topic/deepseekr1) #1, [inference](/topic/inference) #41, [capabilities](/topic/capabilities) #102, [mit](/topic/mit) #328, [scams](/topic/scams), [twitter](/topic/twitter), [cryptocurrency](/topic/cryptocurrency), [agi](/topic/agi), [hardware](/topic/hardware), [token](/topic/token)

### Top Social Posts [#](/creator/twitter::1714580962569588736/posts)

Top posts by engagements in the last XX hours:
"DeepSeek has not issued any cryptocurrency. Currently there is only one official account on the Twitter platform. We will not contact anyone through other accounts.Please stay vigilant and guard against potential scams" @deepseek_ai on X 2025-01-10 10:27:46 UTC 971.5K followers, 2.2M engagements
"๐จ Off-Peak Discounts Alert Starting today enjoy off-peak discounts on the DeepSeek API Platform from 16:3000:30 UTC daily: ๐น DeepSeek-V3 at XX% off ๐น DeepSeek-R1 at a massive XX% off Maximize your resources smarter save more during these high-value hours" @deepseek_ai on X 2025-02-26 11:25:47 UTC 971.5K followers, 872.9K engagements
"๐ DeepSeek-VL2 is here Our next-gen vision-language model enters the MoE era. ๐ค DeepSeek-MoE arch + dynamic image tilling โก 3B/16B/27B sizes for flexible use ๐ Outstanding performance across all benchmarks ๐งต 1/n" @deepseek_ai on X 2024-12-13 12:22:11 UTC 971.5K followers, 469.7K engagements
"๐ Day 0: Warming up for #OpenSourceWeek We're a tiny team @deepseek_ai exploring AGI. Starting next week we'll be open-sourcing X repos sharing our small but sincere progress with full transparency. These humble building blocks in our online service have been documented deployed and battle-tested in production. As part of the open-source community we believe that every line shared becomes collective momentum that accelerates the journey. Daily unlocks are coming soon. No ivory towers - just pure garage-energy and community-driven innovation" @deepseek_ai on X 2025-02-21 04:00:55 UTC 971.5K followers, 2.5M engagements
"๐ Day X of #OpenSourceWeek: Optimized Parallelism Strategies โ
DualPipe - a bidirectional pipeline parallelism algorithm for computation-communication overlap in V3/R1 training. ๐ โ
EPLB - an expert-parallel load balancer for V3/R1. ๐ ๐ Analyze computation-communication overlap in V3/R1. ๐" @deepseek_ai on X 2025-02-27 02:05:53 UTC 971.5K followers, 2.5M engagements
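DeepSeek open-sourced EPLB itself; the sketch below is not that code, just a minimal illustration of the expert-parallel load-balancing problem it targets: spreading experts with uneven token loads across GPUs so no rank becomes the straggler. The greedy heuristic and the example loads are illustrative assumptions.

```python
import heapq

def greedy_expert_placement(expert_loads: list[float], num_gpus: int) -> list[list[int]]:
    """Assign each expert to the currently least-loaded GPU (classic greedy
    makespan heuristic). Returns, per GPU, the list of expert indices."""
    # Min-heap of (accumulated_load, gpu_id); place the heaviest experts first.
    heap = [(0.0, g) for g in range(num_gpus)]
    placement = [[] for _ in range(num_gpus)]
    for expert in sorted(range(len(expert_loads)), key=lambda e: -expert_loads[e]):
        load, gpu = heapq.heappop(heap)
        placement[gpu].append(expert)
        heapq.heappush(heap, (load + expert_loads[expert], gpu))
    return placement

# Hypothetical per-expert token counts for one batch:
print(greedy_expert_placement([90, 10, 40, 40, 5, 70, 30, 15], num_gpus=4))
```
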
"๐ Introducing NSA: A Hardware-Aligned and Natively Trainable Sparse Attention mechanism for ultra-fast long-context training & inference Core components of NSA: Dynamic hierarchical sparse strategy Coarse-grained token compression Fine-grained token selection ๐ก With optimized design for modern hardware NSA speeds up inference while reducing pre-training costswithout compromising performance. It matches or outperforms Full Attention models on general benchmarks long-context tasks and instruction-based reasoning. ๐ For more details check out our paper here:" @deepseek_ai on X 2025-02-18 07:04:05 UTC 971.6K followers, 2.6M engagements
"๐ Day X of #OpenSourceWeek: DeepEP Excited to introduce DeepEP - the first open-source EP communication library for MoE model training and inference. โ
Efficient and optimized all-to-all communication โ
Both intranode and internode support with NVLink and RDMA โ
High-throughput kernels for training and inference prefilling โ
Low-latency kernels for inference decoding โ
Native FP8 dispatch support โ
Flexible GPU resource control for computation-communication overlapping ๐ GitHub:" @deepseek_ai on X 2025-02-25 02:24:10 UTC 971.5K followers, 1.4M engagements
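The all-to-all communication DeepEP optimizes can be understood as a routing step: each rank buckets its tokens by the rank hosting the target expert, then the buckets are exchanged. Below is a single-process simulation of that dispatch logic with top-1 routing, assuming experts are sharded contiguously across ranks; DeepEP's actual NVLink/RDMA kernels are far more involved.

```python
import numpy as np

def simulate_dispatch(tokens_per_rank, expert_ids_per_rank, experts_per_rank):
    """Simulate MoE all-to-all dispatch: route every token to the rank that
    owns its target expert (top-1 routing). Returns each rank's inbox."""
    world = len(tokens_per_rank)
    inbox = [[] for _ in range(world)]
    for src in range(world):
        for tok, eid in zip(tokens_per_rank[src], expert_ids_per_rank[src]):
            dst = eid // experts_per_rank  # rank hosting this expert
            inbox[dst].append((src, eid, tok))
    return inbox

# Hypothetical setup: 2 ranks, 4 experts (2 per rank), 3 tokens per rank.
toks = [np.arange(3) + 10 * r for r in range(2)]
eids = [[0, 3, 1], [2, 2, 0]]
for rank, received in enumerate(simulate_dispatch(toks, eids, experts_per_rank=2)):
    print(f"rank {rank} receives: {received}")
```
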
"๐ Day X of #OpenSourceWeek: FlashMLA Honored to share FlashMLA - our efficient MLA decoding kernel for Hopper GPUs optimized for variable-length sequences and now in production. โ
BF16 support โ
Paged KV cache (block size 64) โก 3000 GB/s memory-bound & XXX TFLOPS compute-bound on H800 ๐ Explore on GitHub:" @deepseek_ai on X 2025-02-24 01:34:20 UTC 971.5K followers, 1.7M engagements
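"Paged KV cache (block size 64)" means each sequence's keys and values live in fixed-size physical blocks reached through a per-sequence block table, so variable-length sequences need no contiguous allocation. A minimal sketch of that address translation; the block size comes from the post, everything else is an illustrative assumption.

```python
BLOCK_SIZE = 64  # from the post: paged KV cache with block size 64

class PagedKVCache:
    """Map a (sequence, token_position) pair to a slot in a block pool."""
    def __init__(self, num_blocks: int):
        self.free = list(range(num_blocks))   # pool of physical block ids
        self.block_tables = {}                # seq_id -> list of block ids

    def slot(self, seq_id: int, pos: int) -> tuple[int, int]:
        """Return (physical_block, offset), allocating blocks on demand."""
        table = self.block_tables.setdefault(seq_id, [])
        while len(table) <= pos // BLOCK_SIZE:
            table.append(self.free.pop())     # grab a free physical block
        return table[pos // BLOCK_SIZE], pos % BLOCK_SIZE

cache = PagedKVCache(num_blocks=8)
print(cache.slot(seq_id=0, pos=0))    # first logical block of sequence 0
print(cache.slot(seq_id=0, pos=70))   # spills into sequence 0's second block
print(cache.slot(seq_id=1, pos=5))    # sequence 1 gets its own block
```
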
"๐ Day X of #OpenSourceWeek: DeepGEMM Introducing DeepGEMM - an FP8 GEMM library that supports both dense and MoE GEMMs powering V3/R1 training and inference. โก Up to 1350+ FP8 TFLOPS on Hopper GPUs โ
No heavy dependency as clean as a tutorial โ
Fully Just-In-Time compiled โ
Core logic at XXX lines - yet outperforms expert-tuned kernels across most matrix sizes โ
Supports dense layout and two MoE layouts ๐ GitHub:" @deepseek_ai on X 2025-02-26 01:00:48 UTC 971.5K followers, 950.7K engagements
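An FP8 GEMM in the spirit of DeepGEMM keeps accuracy by pairing low-precision values with higher-precision scale factors and rescaling after the multiply. Below is a NumPy simulation of that scheme using per-row/per-column scales; DeepGEMM's real per-block scaling and JIT-compiled CUDA kernels are not shown.

```python
import numpy as np

def fp8_like_gemm(A, B, amax=127.0):
    """Simulate a scaled low-precision GEMM: quantize A per-row and B
    per-column to a signed 8-bit-like range, multiply, then rescale."""
    sa = np.abs(A).max(axis=1, keepdims=True) / amax + 1e-12  # per-row scales
    sb = np.abs(B).max(axis=0, keepdims=True) / amax + 1e-12  # per-col scales
    Aq = np.round(A / sa)   # values now fit in [-127, 127]
    Bq = np.round(B / sb)
    # Low-precision multiply, then dequantize with the outer product of scales.
    return (Aq @ Bq) * (sa * sb)

rng = np.random.default_rng(0)
A, B = rng.normal(size=(8, 16)), rng.normal(size=(16, 4))
err = np.abs(fp8_like_gemm(A, B) - A @ B).max()
print(f"max abs error vs full-precision GEMM: {err:.4f}")
```
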
"๐ DeepSeek-R1 is here โก Performance on par with OpenAI-o1 ๐ Fully open-source model & technical report ๐ MIT licensed: Distill & commercialize freely ๐ Website & API are live now Try DeepThink at today ๐ 1/n" @deepseek_ai on X 2025-01-20 12:29:30 UTC 971.5K followers, 12.4M engagements
"๐ Introducing DeepSeek-V3 Biggest leap forward yet: โก XX tokens/second (3x faster than V2) ๐ช Enhanced capabilities ๐ API compatibility intact ๐ Fully open-source models & papers ๐ 1/n" @deepseek_ai on X 2024-12-26 11:26:48 UTC 971.5K followers, 7.3M engagements
"๐ Day X of #OpenSourceWeek: One More Thing DeepSeek-V3/R1 Inference System Overview Optimized throughput and latency via: ๐ง Cross-node EP-powered batch scaling ๐ Computation-communication overlap โ Load balancing Statistics of DeepSeek's Online Service: โก 73.7k/14.8k input/output tokens per second per H800 node ๐ Cost profit margin XXX% ๐ก We hope this week's insights offer value to the community and contribute to our shared AGI goals. ๐ Deep Dive:" @deepseek_ai on X 2025-03-01 04:11:25 UTC 971.5K followers, 3.9M engagements
"๐ Excited to see everyones enthusiasm for deploying DeepSeek-R1 Here are our recommended settings for the best experience: No system prompt Temperature: XXX Official prompts for search & file upload: Guidelines to mitigate model bypass thinking: The official DeepSeek deployment runs the same model as the open-source versionenjoy the full DeepSeek-R1 experience ๐" @deepseek_ai on X 2025-02-14 08:56:47 UTC 971.5K followers, 1.8M engagements
"To prevent any potential harm we reiterate that @deepseek_ai is our sole official account on Twitter/X. Any accounts: - representing us - using identical avatars - using similar names are impersonations. Please stay vigilant to avoid being misled" @deepseek_ai on X 2025-01-28 04:57:04 UTC 971.5K followers, 8.1M engagements
"๐ DeepSeek-R1-0528 is here ๐น Improved benchmark performance ๐น Enhanced front-end capabilities ๐น Reduced hallucinations ๐น Supports JSON output & function calling โ
Try it now: ๐ No change to API usage docs here: ๐ Open-source weights:" @deepseek_ai on X 2025-05-29 12:11:19 UTC 971.5K followers, 1.4M engagements
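The JSON-output support mentioned here follows the OpenAI-compatible convention DeepSeek's API docs describe. A hedged sketch of requesting strict JSON; the `response_format` parameter and model name are taken from that convention, not from this post.

```python
import json
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

resp = client.chat.completions.create(
    model="deepseek-chat",                     # assumed model id from DeepSeek's docs
    response_format={"type": "json_object"},   # request strict JSON output
    messages=[
        # The prompt should mention JSON when strict JSON output is requested.
        {"role": "user",
         "content": "Return a JSON object with keys 'model' and 'license' "
                    "describing DeepSeek-R1-0528."},
    ],
)
print(json.loads(resp.choices[0].message.content))
```
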
"๐ DeepSeek-V3-0324 is out now ๐น Major boost in reasoning performance ๐น Stronger front-end development skills ๐น Smarter tool-use capabilities โ
For non-complex reasoning tasks we recommend using V3 just turn off DeepThink ๐ API usage remains unchanged ๐ Models are now released under the MIT License just like DeepSeek-R1 ๐ Open-source weights:" @deepseek_ai on X 2025-03-25 13:32:43 UTC 971.5K followers, 1.5M engagements