[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# ![@hillbig Avatar](https://lunarcrush.com/gi/w:26/cr:twitter::12483052.png) @hillbig Daisuke Okanohara / 岡野原 大輔

Daisuke Okanohara / 岡野原 大輔 posts on X about karpathy, agi, gpu, and scaling the most. They currently have XXXXXX followers and XX posts still receiving attention, totaling XXXXX engagements in the last XX hours.

### Engagements: XXXXX [#](/creator/twitter::12483052/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::12483052/c:line/m:interactions.svg)

- X Week XXXXXX +26%
- X Month XXXXXX +146%
- X Months XXXXXXX +140%
- X Year XXXXXXX +83%

### Mentions: X [#](/creator/twitter::12483052/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::12483052/c:line/m:posts_active.svg)


### Followers: XXXXXX [#](/creator/twitter::12483052/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::12483052/c:line/m:followers.svg)

- X Week XXXXXX +0.81%
- X Month XXXXXX +2.30%
- X Months XXXXXX +7.50%
- X Year XXXXXX +14%

### CreatorRank: XXXXXXX [#](/creator/twitter::12483052/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::12483052/c:line/m:influencer_rank.svg)

### Social Influence [#](/creator/twitter::12483052/influence)
---

**Social topic influence**
[karpathy](/topic/karpathy) #50, [agi](/topic/agi), [gpu](/topic/gpu), [scaling](/topic/scaling), [employees](/topic/employees), [future of](/topic/future-of)
### Top Social Posts [#](/creator/twitter::12483052/posts)
---
Top posts by engagements in the last XX hours

"Second-order optimization methodssuch as SOAP and Muonhave recently regained attention for training large language models (LLMs). While these approaches are inherently approximate due to computational constraints they conducted experiments using a GaussNewton (GN) preconditioned second-order optimizer on a 150M-parameter model trained on up to 3B tokens to examine their theoretical underpinnings and practical limitations. Traditionally second-order optimization methods have been criticized for their susceptibility to local minima and divergent behavior. However decomposing the Hessian matrix"  
[X Link](https://x.com/hillbig/status/1978246902622965780) [@hillbig](/creator/x/hillbig) 2025-10-14T23:49Z 36.1K followers, 3829 engagements
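
The post above refers to a Gauss-Newton (GN) preconditioned optimizer. As rough intuition for the update rule, here is a minimal NumPy sketch of a damped Gauss-Newton step on a toy least-squares problem; all names and values are illustrative assumptions, not the paper's actual 150M-parameter LLM setup.

```python
# Minimal sketch: one damped Gauss-Newton step for L(theta) = 0.5 * ||r(theta)||^2.
import numpy as np

def gauss_newton_step(theta, residual_fn, jacobian_fn, damping=1e-3):
    r = residual_fn(theta)   # residual vector, shape (m,)
    J = jacobian_fn(theta)   # Jacobian dr/dtheta, shape (m, n)
    g = J.T @ r              # gradient of the loss
    H_gn = J.T @ J           # Gauss-Newton approximation to the Hessian
    # Damping (Levenberg-Marquardt style) keeps the linear solve well-conditioned.
    step = np.linalg.solve(H_gn + damping * np.eye(len(theta)), g)
    return theta - step

# Toy usage: fit y = a * exp(b * x) to noisy data (true a=2.0, b=1.5).
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(1.5 * x) + 0.01 * rng.standard_normal(50)

residual = lambda th: th[0] * np.exp(th[1] * x) - y
jacobian = lambda th: np.stack([np.exp(th[1] * x),
                                th[0] * x * np.exp(th[1] * x)], axis=1)

theta = np.array([1.0, 1.0])
for _ in range(20):
    theta = gauss_newton_step(theta, residual, jacobian)
print(theta)  # should approach [2.0, 1.5]
```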


"The Potential of Second-Order Optimization for LLMs: A Study with Full Gauss-Newton:"  
[X Link](https://x.com/hillbig/status/1978243812926382475) [@hillbig](/creator/x/hillbig) 2025-10-14T23:37Z 36.2K followers, 3953 engagements


"Predictable scaling laws have not yet described the effectiveness of RL in LLM. In this study using over 400000 GPU hours of experiments the authors succeeded in modeling both the computational requirements and performance of RL with a sigmoidal scaling function: R_C = R_0 + (A - R_0) / (1 + (C_m / C)B) Here R_C represents model performance (reward) at a given computational cost C R_0 is the initial performance before RL begins A is the maximum achievable performance B represents learning efficiency and C_m is the midpoint cost where performance reaches half of its maximum. This framework"  
[X Link](https://x.com/hillbig/status/1978958908921368806) [@hillbig](/creator/x/hillbig) 2025-10-16T22:59Z 36.2K followers, 4422 engagements
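
The sigmoidal scaling function quoted above is easy to evaluate directly. Below is a minimal Python sketch of its functional form; only the formula comes from the post, and the parameter values are made up for illustration.

```python
# Minimal sketch of the sigmoidal compute-performance scaling law.
import numpy as np

def rl_scaling(C, R0, A, B, Cm):
    """Expected reward R_C after spending compute C on RL.

    R0: performance before RL; A: asymptotic maximum performance;
    B: learning-efficiency exponent; Cm: compute at which half of the
    achievable gain (A - R0) is realized.
    """
    return R0 + (A - R0) / (1.0 + (Cm / C) ** B)

# Hypothetical values: reward rises from 0.2 toward 0.8,
# with the midpoint at 1e3 GPU-hours of RL compute.
C = np.logspace(1, 5, 5)  # compute budgets in GPU-hours
print(rl_scaling(C, R0=0.2, A=0.8, B=1.2, Cm=1e3))
```

Note that at C = C_m the term (C_m / C)^B equals 1, so R_C = R_0 + (A - R_0) / 2, i.e. exactly half of the achievable gain, which is what makes C_m the midpoint cost.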


"In his 2.5-hour interview Andrej Karpathy offers numerous thought-provoking insights into the future of AI its current challenges and potential paths forward. While many summaries have already circulated Ive compiled and commented on some particularly noteworthy points. Karpathy argues that AI agents wont reach the competency level of human interns or full-time employees within a one-year horizon. Instead such progress will likely take at least a decade. Current models still lack many of the essential capabilities required for practical deployment. X. The lack of Continuous Learning Modern"  
[X Link](https://x.com/hillbig/status/1980058216466940272) [@hillbig](/creator/x/hillbig) 2025-10-19T23:47Z 36.2K followers, 4277 engagements


"Andrej-Karpathy AGI is still a decade away:"  
[X Link](https://x.com/hillbig/status/1980054622376255512) [@hillbig](/creator/x/hillbig) 2025-10-19T23:33Z 36.2K followers, 5829 engagements


"Andrej Karpathy AGI is still a decade away:"  
[X Link](https://x.com/hillbig/status/1980058298218160356) [@hillbig](/creator/x/hillbig) 2025-10-19T23:47Z 36.2K followers, 2980 engagements
