[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# ![@Macaron0fficial Avatar](https://lunarcrush.com/gi/w:26/cr:twitter::1939649881288962048.png) @Macaron0fficial Macaron Official

Macaron Official posts on X most often about core, bytedance, philosophy, and age of. They currently have XXX followers and XX posts still getting attention, totaling XXXXXXX engagements in the last XX hours.

### Engagements: XXXXXXX [#](/creator/twitter::1939649881288962048/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1939649881288962048/c:line/m:interactions.svg)

- X Week XXXXXXXXX +3,314,425%
- X Month XXXXXXXXX +4,652,005%

### Mentions: X [#](/creator/twitter::1939649881288962048/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1939649881288962048/c:line/m:posts_active.svg)


### Followers: XXX [#](/creator/twitter::1939649881288962048/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1939649881288962048/c:line/m:followers.svg)

- X Week XXX +6.40%
- X Month XXX +10%

### CreatorRank: XXXXXX [#](/creator/twitter::1939649881288962048/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1939649881288962048/c:line/m:influencer_rank.svg)

### Social Influence

**Social category influence**
[stocks](/list/stocks)  #148 [technology brands](/list/technology-brands)  #886

**Social topic influence**
[core](/topic/core) 10%, [bytedance](/topic/bytedance) 10%, [philosophy](/topic/philosophy) 10%, [age of](/topic/age-of) 10%, [karpathy](/topic/karpathy) 10%, [mind](/topic/mind) 10%, [human](/topic/human) 10%, [for all](/topic/for-all) 10%, [moe](/topic/moe) 10%, [lever](/topic/lever) XX%

**Top accounts mentioned or mentioned by**
[@ilyasut](/creator/undefined) [@nvidia](/creator/undefined) [@karpathy](/creator/undefined)

### Top Social Posts
Top posts by engagements in the last XX hours

"Introducing Mind Lab, Macaron AI's frontier research lab. We just ran trillion-parameter reinforcement learning at XX% of the usual GPU cost, open-sourced the core method, and landed integrations into NVIDIA Megatron and ByteDance VERL. Our philosophy is simple: real intelligence comes from real experience, not just bigger pre-training. Keynote below"  
[X Link](https://x.com/Macaron0fficial/status/1997990209095602544)  2025-12-08T11:22Z XXX followers, 3M engagements


"And because we want this capability to scale beyond any one lab, we're contributing the system back to the ecosystem through major open-source collaborations with @nvidia Megatron-Bridge and Volcengine's verl. Why RL on trillion-parameter models? Our experiments show a consistent pattern: RL is prior-limited. Under matched RL FLOPs, a large prior + small LoRA outperforms full-parameter RL on small models (1.5B) on AIME and GPQA. A strong prior generates higher-quality trajectories; RL amplifies signal, not noise. This is why trillion-scale LoRA RL is not indulgence; it is efficiency"  
[X Link](https://x.com/Macaron0fficial/status/1998681249247092977)  2025-12-10T09:08Z XXX followers, XX engagements
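The "large prior + small LoRA" claim in the post above can be illustrated with a toy low-rank adapter: the pretrained weight stays frozen and training touches only a rank-r factorization, which is where the parameter (and GPU) savings come from. All dimensions, names, and initializations below are illustrative assumptions, not Macaron's actual implementation.

```python
import numpy as np

# Hypothetical sketch of the low-rank-adapter (LoRA) idea: freeze a large
# weight matrix W and train only the small update B @ A, so RL updates
# rank * (d_in + d_out) parameters instead of the full d_out * d_in matrix.
rng = np.random.default_rng(0)
d_in, d_out, rank = 512, 512, 8

W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)  # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01            # trainable down-projection
B = np.zeros((d_out, rank))                             # trainable up-projection, zero init

def forward(x):
    # Base path (frozen) plus low-rank adapter path (the part RL would train).
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapter contributes nothing yet:
# the adapted model starts out exactly equal to the frozen prior.
assert np.allclose(forward(x), W @ x)

full = d_out * d_in
lora = rank * (d_in + d_out)
print(f"trainable params: {lora} vs full {full} ({lora / full:.1%})")
```

At this toy scale the adapter holds about 3% of the full matrix's parameters; the post's argument is that at trillion scale this gap is what makes RL affordable, provided the frozen prior is strong enough to generate useful trajectories.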


"The age of blind scaling is ending. Today's frontier models are massive but static. They don't grow from experience. As @ilyasut recently said: real intelligence is continual learning. Mind Lab is built entirely around that principle"  
[X Link](https://x.com/Macaron0fficial/status/1997990212811817060)  2025-12-08T11:22Z XXX followers, 9150 engagements


"Memory Diffusion in action. Mind Lab replaces 'store everything' with intelligent forgetting: Mask → Allocate → Refill. This shifts the ontology of memory itself. As @ilyasut notes, generalization is the essence of intelligence; rigid memory is its failure mode. And as @karpathy put it, human thought feels autoregressive but likely has diffusion-like components in latent space. Memory Diffusion operationalizes this: not retrieval, not summarization, but a generative denoising reconstruction of experience. The agent continuously rewrites its own past into a compact, value-dense state. Recall stays O(1)"  
[X Link](https://x.com/Macaron0fficial/status/1998678741669519402)  2025-12-10T08:58Z XXX followers, XX engagements
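One hedged reading of the Mask → Allocate → Refill loop above is a fixed-capacity memory that forgets low-value entries and compresses survivors, so recall cost never grows with history length. The scoring rule, merge step, and all names below are invented for illustration; the post does not specify Memory Diffusion's actual mechanics.

```python
# Toy fixed-capacity memory: the number of slots is bounded, so lookups
# stay O(1) in the number of past events, as the post claims.
CAPACITY = 4

def mask(memory, keep):
    # Mask: drop the lowest-value entries instead of storing everything.
    return sorted(memory, key=lambda m: m["value"], reverse=True)[:keep]

def allocate(memory, event):
    # Allocate: a freed slot receives the new experience.
    return memory + [event]

def refill(memory):
    # Refill: rewrite the surviving past into a compact, value-dense state
    # (here crudely modeled by merging the two weakest entries into one slot).
    if len(memory) <= CAPACITY:
        return memory
    memory = sorted(memory, key=lambda m: m["value"])
    a, b = memory[0], memory[1]
    merged = {"text": a["text"] + "+" + b["text"], "value": a["value"] + b["value"]}
    return [merged] + memory[2:]

memory = []
for i, value in enumerate([0.9, 0.1, 0.5, 0.8, 0.3, 0.7]):
    memory = mask(memory, keep=CAPACITY)                      # forget first
    memory = allocate(memory, {"text": f"e{i}", "value": value})
    memory = refill(memory)                                   # then compress

# The memory never exceeds CAPACITY slots regardless of how many events arrived.
assert len(memory) <= CAPACITY
print([m["text"] for m in memory])
```

The point of the sketch is only the invariant: history keeps arriving, but the state stays bounded because old experience is rewritten rather than accumulated.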


"Frontier reasoning shouldn't be reserved for labs with thousand-GPU fleets. We ran end-to-end RL on a trillion-parameter MoE reasoning model using roughly XX% of the GPUs a full-parameter run would need. The goal is simple: make reasoning-level intelligence at trillion scale economically accessible for all. As Archimedes famously said: give me a lever and a fulcrum, and I can move the world"  
[X Link](https://x.com/Macaron0fficial/status/1998681238803198375)  2025-12-10T09:08Z XXX followers, XXX engagements


"What's next: adaptive hybrid schedulers that reconfigure parallelism in real time; reasoning distillation from 1T teachers into lightweight students; a unified efficiency benchmark for RL on large reasoning models. We want trillion-parameter RL to be not only frontier but standard practice. Full technical write-up with our NVIDIA and open-source contributions"  
[X Link](https://x.com/Macaron0fficial/status/1998681253235847221)  2025-12-10T09:08Z XXX followers, XXX engagements
