lmarena.ai (@lmarena_ai) on X, 84.3K followers
Created: 2025-07-18 14:12:38 UTC
DeepSeek's top open model, DeepSeek R1-0528, ranks #2
R1-0528 is a refined, instruction-tuned version of R1 and the community-ranked #2 open chat model. It is strong in multi-turn dialogue and reasoning tasks.
R1 (baseline) is the original release: still solid, but now slightly behind the newer tuned variants.
V3-0324 is a mixture-of-experts (MoE) model with 671B total parameters, but it activates only a small subset of experts (roughly 37B parameters) per token. This makes it both powerful and efficient. It performs well across instruction, reasoning, and multilingual tasks, though prompt format matters more here than with R1-0528.
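The "many total parameters, few activated" idea behind MoE can be sketched as a toy top-k routing layer. This is an illustrative assumption-laden sketch (the function name, shapes, and k=2 routing are hypothetical, and real DeepSeek routing differs in detail); it only shows why compute scales with the number of *selected* experts rather than the total expert count.

```python
import numpy as np

def topk_moe_layer(x, experts_w, gate_w, k=2):
    """Toy MoE layer: route each token to its top-k experts.

    Only k of the num_experts weight matrices are applied per token,
    so per-token compute scales with k, not with the total expert
    count. Illustrative only, not DeepSeek's actual routing.
    """
    logits = x @ gate_w                          # (tokens, num_experts)
    topk = np.argsort(logits, axis=-1)[:, -k:]   # indices of top-k experts
    # Softmax over only the selected experts' gate logits.
    sel = np.take_along_axis(logits, topk, axis=-1)
    w = np.exp(sel - sel.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    out = np.zeros_like(x)
    for t in range(x.shape[0]):                  # combine the k expert outputs
        for j in range(k):
            e = topk[t, j]
            out[t] += w[t, j] * (x[t] @ experts_w[e])
    return out

rng = np.random.default_rng(0)
d, num_experts = 8, 16
x = rng.standard_normal((4, d))
experts_w = rng.standard_normal((num_experts, d, d))
gate_w = rng.standard_normal((d, num_experts))
y = topk_moe_layer(x, experts_w, gate_w, k=2)
print(y.shape)  # (4, 8): output shape matches input; only 2 of 16 experts ran per token
```

With 16 experts and k=2, each token touches 2/16 of the expert weights, which is the same efficiency argument made for V3-0324 at much larger scale.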
Post Link: https://x.com/lmarena_ai/status/1946211513322344512