[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]
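
The X'd-out values throughout this report are the guest-mode masking described in the banner; the same data can be pulled unmasked with a personal API key. Below is a minimal Python sketch, assuming Bearer-token authentication and an illustrative creator endpoint and response shape — the exact routes, parameters, and field names should be taken from the LunarCrush API documentation linked from https://lunarcrush.ai/auth.

```python
# Minimal sketch of an authenticated LunarCrush request (Python + requests).
# The endpoint path and response fields are assumptions for illustration only;
# consult the official API docs for the real routes and schema.
import os
import requests

API_KEY = os.environ["LUNARCRUSH_API_KEY"]  # personal key unlocks unscrambled data

def fetch_creator(handle: str) -> dict:
    """Fetch creator-level metrics (followers, engagements, rank) for an X handle."""
    url = f"https://lunarcrush.com/api4/public/creator/twitter/{handle}/v1"  # assumed route
    resp = requests.get(url, headers={"Authorization": f"Bearer {API_KEY}"}, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = fetch_creator("HuggingPapers")
    # Field names are illustrative; the guest-mode view above masks them as X's.
    print(data.get("data", {}))
```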

@HuggingPapers (DailyPapers)

DailyPapers posts on X most often about bytedance, alibaba, accuracy, and shanghai. They currently have XXXXX followers, and XXX of their posts are still getting attention, totaling XXXXXX engagements in the last XX hours.

Engagements: XXXXXX #

Engagements Line Chart

Mentions: XX #

Mentions Line Chart

Followers: XXXXX #

Followers Line Chart

CreatorRank: XXXXXXX #

CreatorRank Line Chart

Social Influence #


Social category influence: technology brands XXXX%, stocks XXXX%, travel destinations XXXX%, currencies XXXX%, nfts XXXX%, social networks XXXX%

Social topic influence: bytedance #8, alibaba #121, accuracy 1.16%, shanghai #160, signals #806, rl 0.87%, consistency #167, future of #803, breakthrough #301, alibaba group #37

Top accounts mentioned or mentioned by: @grok @raghunath_pr @adinayakup @bytedanceai @ucscai @ostrisai @atomsilverman @humorbyteshs @uthethe @memecoin_track @brigittetousi @athundt @hcsolakoglu @ai_valkyr1e @mc1_e @c1phervoid @latentspacer @hexa_circuit @marshallcortex @storyofaiguess

Top assets mentioned: Alibaba Group (BABA), Voxels (voxels), IBM (IBM)

Top Social Posts #


Top posts by engagements in the last XX hours

"Drax: Discrete Flow Matching brings a new era of efficient ASR This novel framework enables efficient parallel decoding in Automatic Speech Recognition. It achieves state-of-the-art accuracy comparable to autoregressive models with significantly better accuracy-efficiency trade-offs"
X Link @HuggingPapers 2025-10-08T20:10Z 7563 followers, 12.4K engagements

"Explore VPPO a new multimodal RL method from Shanghai AI Lab focusing on 'token perception' for SOTA visual reasoning. Paper: Models:"
X Link @HuggingPapers 2025-10-15T00:24Z 7622 followers, XXX engagements

"ByteDance just released Sa2VA on Hugging Face. This MLLM marries SAM2 with LLaVA for dense grounded understanding of images & videos offering SOTA performance in segmentation grounding and QA"
X Link @HuggingPapers 2025-10-16T08:51Z 7623 followers, 9817 engagements

"ByteDance just released veAgentBench on Hugging Face A new benchmark to rigorously evaluate the capabilities of next-generation AI agents"
X Link @HuggingPapers 2025-10-11T03:11Z 7561 followers, 1090 engagements

"Tencent Hunyuan3D-Omni: A unified framework for controllable 3D asset generation This new model brings fine-grained control to 3D asset creation accepting point clouds voxels bounding boxes and skeletal poses. A single cross-modal architecture unifies all signals for precise control"
X Link @HuggingPapers 2025-09-27T08:10Z 7622 followers, 1014 engagements

"Self-Forcing++ for minute-scale video generation ByteDance's new method generates high-quality videos up to X min XX sec It scales diffusion models without long-video teachers or retraining preserving fidelity and consistency"
X Link @HuggingPapers 2025-10-05T04:09Z 7618 followers, 17.4K engagements

"When Thoughts Meet Facts: New from Amazon & KAIST LCLMs can process vast contexts but struggle with reasoning. ToTAL introduces reusable "thought templates" that structure evidence guiding multi-hop inference with factual documents"
X Link @HuggingPapers 2025-10-11T16:11Z 7622 followers, 17.8K engagements

"ServiceNow's Apriel-1.5-15B-Thinker: Frontier AI on a single GPU This 15B-parameter open-weights multimodal model achieves state-of-the-art reasoning performance matching models 8-10x its sizeall without an RL phase"
X Link @HuggingPapers 2025-10-06T12:11Z 7618 followers, 16.2K engagements

"CoDA's multi-agent system redefines data visualization. It handles complex datasets robust code & high-quality outputs. See the future of automation for collaborative data analysis Read paper:"
X Link @HuggingPapers 2025-10-06T20:09Z 7571 followers, XXX engagements

"New inference method TAG fights diffusion model hallucinations Introducing Tangential Amplifying Guidance (TAG): a training-free plug-and-play method for diffusion models that significantly reduces hallucinations and boosts sample quality by steering generation to high-probability regions"
X Link @HuggingPapers 2025-10-13T20:09Z 7618 followers, 7602 engagements

"How to make LLM agents smarter with less data Tree-GRPO from Alibaba Group's AMAP-ML introduces a novel tree-search RL framework drastically cutting rollout budgets and boosting performance in complex multi-turn tasks"
X Link @HuggingPapers 2025-09-26T16:06Z 7622 followers, XXX engagements

"Academic promotion just got smarter with ByteDance's AutoPR It automates turning research papers into engaging social media posts boosting watch time by XXX% and likes by XXX% with its multi-agent framework"
X Link @HuggingPapers 2025-10-13T16:09Z 7556 followers, 1769 engagements

"TAG boosts diffusion model fidelity by amplifying tangential updates making generation more resistant to hallucinations across SD1.5-SD3. It's efficient & plug-and-play Dive in: (includes demo & code)"
X Link @HuggingPapers 2025-10-13T20:09Z 7612 followers, XXX engagements

"Kuaishou's Kling-Avatar introduces a new framework for high-fidelity long-duration avatar animation. It unifies multimodal instructions to generate photorealistic videos with vivid emotions and precise lip-sync"
X Link @HuggingPapers 2025-09-12T20:07Z 7525 followers, XXX engagements

"MemMamba: A breakthrough in ultra-long sequence modeling It rethinks memory patterns in State Space Models achieving stable performance at massive context lengths and delivering a XX% inference speedup"
X Link @HuggingPapers 2025-10-10T16:10Z 7620 followers, 11.8K engagements

"Explore how AHN enhances models like Qwen 2.5-14B for ultra-long contexts. Find the model here: Learn more on GitHub:"
X Link @HuggingPapers 2025-10-09T04:18Z 7571 followers, XXX engagements

"Alibaba Group & partners unveil MMR1: Revolutionizing multimodal reasoning with less data MMR1 introduces Variance-Aware Sampling (VAS) for stable RL fine-tuning. Tackles unstable optimization & scarce high-quality data. Releasing massive open datasets (1.6M CoT 15k RL QA) & models (3B 7B 32B) for the community"
X Link @HuggingPapers 2025-09-26T08:11Z 7618 followers, 6334 engagements

"Shanghai AI Lab unveils VPPO for multimodal RL This new method spotlights "token perception" to make LVLMs reason better. It achieves state-of-the-art results with superior stability & faster convergence on X benchmarks"
X Link @HuggingPapers 2025-10-15T00:24Z 7622 followers, 11.9K engagements

"Samsung's Tiny Recursive Model (TRM) masters complex reasoning With just 7M parameters TRM outperforms large LLMs on hard puzzles like Sudoku & ARC-AGI. This "Less is More" approach redefines efficiency in AI using less than XXXX% of competitors' parameters"
X Link @HuggingPapers 2025-10-08T16:09Z 7623 followers, 43.8K engagements

"LRD achieves up to 10.6x faster decoding while improving accuracy across various coding and reasoning tasks Experience a powerful versatile alternative for parallel sequence generation. Read the full paper on Hugging Face:"
X Link @HuggingPapers 2025-10-14T16:11Z 7619 followers, XXX engagements

"Meta's new "Early Experience" approach enables agent self-improvement and robust generalization bypassing limits of supervised training. A huge step for truly autonomous AI learning from its own interactions Dive into the paper:"
X Link @HuggingPapers 2025-10-10T12:12Z 7559 followers, 1451 engagements

"Meta AI & IBM Research reveal how flawed thinking can improve LRM safety Introducing RECAP an RL post-training method that teaches models to override unsafe reasoning and reroute to safe helpful answers all without extra training cost"
X Link @HuggingPapers 2025-10-06T16:08Z 7622 followers, 1165 engagements

"Google's CoDA: Multi-Agent AI for Collaborative Data Visualization Manually crafting visualizations is now easier. CoDA uses specialized LLM agents for complex datasets. Outperforms baselines by over XX% with robust quality results. A new era for visualization automation"
X Link @HuggingPapers 2025-10-06T20:09Z 7618 followers, 7989 engagements

"ByteDance just released Artificial Hippocampus Networks (AHN) on Hugging Face. AHN transforms lossless memory into fixed-size compressed representations for efficient long-context modeling integrating with models like Qwen 2.5"
X Link @HuggingPapers 2025-10-09T04:18Z 7571 followers, 1759 engagements

"ByteDance just released FaceCLIP on Hugging Face A new vision-language model specializing in understanding and generating diverse human faces. Dive into the future of facial AI"
X Link @HuggingPapers 2025-10-13T19:03Z 7623 followers, 53.2K engagements

"DITING: A Multi-Agent Framework for Web Novel Translation This new framework introduces a comprehensive evaluation to benchmark LLMs on narrative and cultural fidelity across X dimensions. DeepSeek-V3 shows impressive results"
X Link @HuggingPapers 2025-10-15T12:14Z 7620 followers, 1053 engagements

"Alibaba Group's Q-Tuning: Efficient LLM Supervised Fine-Tuning This unified framework uses the Error-Uncertainty Plane to jointly prune samples and tokens. It achieves +38% improvement on SmolLM2-1.7B with only XXXX% data surpassing full-data SFT"
X Link @HuggingPapers 2025-10-01T20:08Z 7618 followers, 1204 engagements

"No Prompt Left Behind: A New Era for LLM Reinforcement Learning This paper introduces RL-ZVP a novel algorithm that unlocks learning signals from previously ignored "zero-variance prompts" in LLM training. It achieves significant accuracy improvements on math reasoning benchmarks tapping into overlooked data"
X Link @HuggingPapers 2025-10-06T00:30Z 7556 followers, 9508 engagements

"ByteDance just released Sa2VA on Hugging Face The first unified model for dense grounded understanding of images and videos. Combines SAM2 with LLaVA for SOTA segmentation and visual QA"
X Link @HuggingPapers 2025-10-16T08:05Z 7623 followers, 4406 engagements

"ByteDance just released BFS-Prover-V2 a state-of-the-art Lean4 tactic generation model on Hugging Face. It achieves XXXXX% on miniF2F and XXXX% on ProofNet setting new benchmarks in automated theorem proving"
X Link @HuggingPapers 2025-10-06T06:05Z 7571 followers, 1631 engagements

"DreamOmni2: Multimodal Instruction-based Editing and Generation by ByteDance This unified framework pioneers multimodal instruction-based editing & generation. It handles text & image inputs for both concrete objects and abstract concepts achieving impressive results"
X Link @HuggingPapers 2025-10-12T19:08Z 7618 followers, 19.3K engagements

"Kwai Keye Team at Kuaishou unveils Keye-VL 1.5: a powerful multimodal LLM excelling in video understanding with a novel Slow-Fast encoding strategy 128K context window and advanced RL training"
X Link @HuggingPapers 2025-09-06T12:10Z 7552 followers, 1290 engagements

"AgentFlow: In-the-Flow Optimization for LLM Agents A new trainable modular agentic system that optimizes its planner live within the multi-turn loop. Achieve +14.9% on search +14.0% on agentic reasoning and +14.5% on math outperforming models like GPT-4o with a 7B backbone"
X Link @HuggingPapers 2025-10-09T00:18Z 7567 followers, 2316 engagements

"Sa2VA delivers state-of-the-art visual Q&A prompt understanding and object segmentation for both images and videos. A breakthrough in grounded multimodal AI Explore the model on Hugging Face:"
X Link @HuggingPapers 2025-10-16T08:05Z 7623 followers, XXX engagements

"ByteDance just released Artificial Hippocampus Networks (AHN) on Hugging Face A novel architecture for long-context LLMs that continuously compresses out-of-window information greatly reducing memory and computation"
X Link @HuggingPapers 2025-10-09T02:10Z 7561 followers, 32.4K engagements

"This 7B model built on Qwen2.5-Math leverages multi-turn off-policy RL and multi-agent tree search for scalable LLM step-proving. Explore the model: Read the paper:"
X Link @HuggingPapers 2025-10-06T06:05Z 7571 followers, XXX engagements

"Qwen just released Qwen3-4B-SafeRL on Hugging Face A safety-aligned model that uses reinforcement learning to be robust against harmful prompts without sacrificing helpfulness"
X Link @HuggingPapers 2025-09-30T18:39Z 7554 followers, 14.8K engagements

"Stanford unveils AgentFlow: In-the-flow Agentic AI A new trainable modular system that learns live to plan & use tools outperforming even GPT-4o on reasoning tasks with a 7B model. Huge gains: +14.9% search +14.5% math"
X Link @HuggingPapers 2025-10-11T00:19Z 7551 followers, 2057 engagements

"Meta unveils "Early Experience" for language agents This new paradigm lets AI agents learn & improve from their own actions using future states as supervision without requiring reward signals or extensive human data. It's a bridge to truly experience-driven AI"
X Link @HuggingPapers 2025-10-10T12:12Z 7622 followers, 46.4K engagements

"ByteDance's RLFR redefines LLM reinforcement learning It introduces "flow rewards" from the latent space of LLMs for efficient and reliable reasoning self-improvement. Outperforms existing methods on language & multimodal benchmarks"
X Link @HuggingPapers 2025-10-14T20:08Z 7623 followers, 6822 engagements

"Apriel-1.5-15B-Thinker redefines efficiency: Scores XX on AIME2025 & XX on GPQA. Explore the model & code on the Hub Paper: Model:"
X Link @HuggingPapers 2025-10-06T12:11Z 7622 followers, 1167 engagements

"Attention reveals the hidden rhythm of LLM reasoning Researchers from Shanghai Jiao Tong University and Alibaba Group uncover a "preplan-and-anchor" mechanism in LLM attention transforming opaque reasoning into a legible blueprint for fine-grained policy optimization"
X Link @HuggingPapers 2025-10-16T04:08Z 7623 followers, 5634 engagements

"Pixel-space generative models hit new SOTA with EPG AMAP Alibaba NVIDIA & Caltech introduce EPG a novel two-stage training framework that achieves state-of-the-art pixel-space diffusion (FID XXXX on ImageNet-256 with XX NFE) and consistency models (FID XXXX in X step)"
X Link @HuggingPapers 2025-10-15T04:07Z 7622 followers, 7370 engagements

"A missing link between Transformers and the brain 🧠 Dragon Hatchling (BDH) is a new LLM architecture based on a scale-free biologically-inspired network of locally-interacting neuron particles. It rivals GPT2 performance but is designed for interpretability"
X Link @HuggingPapers 2025-10-01T16:13Z 7619 followers, 65.5K engagements

"RLinf-VLA is a unified and efficient framework for VLA+RL training. It introduces a novel hybrid pipeline allocation mode for GPU-parallelized simulators achieving 1.6x-1.8x speedups. This framework enables scalable reinforcement learning for vision-language-action models"
X Link @HuggingPapers 2025-10-09T20:13Z 7566 followers, 2118 engagements

"Explore the groundbreaking paper and models here:"
X Link @HuggingPapers 2025-10-09T02:10Z 7561 followers, 1626 engagements

"Introducing Latent Refinement Decoding (LRD) A new two-stage framework enhancing diffusion-based language models. It addresses information loss and premature commitment for more globally consistent parallel generation. This leads to higher accuracy and significant speedups"
X Link @HuggingPapers 2025-10-14T16:10Z 7619 followers, 1231 engagements

"NVIDIA just released Fast-dLLM v2 on Hugging Face It delivers up to 2.5x faster LLM inference over standard decoding achieving state-of-the-art efficiency in diffusion LLMs with 500x less fine-tuning data. Get ready for practical fast and accurate LLMs"
X Link @HuggingPapers 2025-10-08T08:14Z 7621 followers, XXX engagements

"Spatial Forcing enhances robot's 3D perception This plug-and-play strategy aligns VLA models with 3D foundation models to gain spatial awareness. Achieve SOTA in robotic tasks with 3.8x faster training & XX% higher real-world success without explicit 3D sensors"
X Link @HuggingPapers 2025-10-15T16:13Z 7623 followers, 10.1K engagements