[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

[@MatthewZ73671](/creator/twitter/MatthewZ73671)
"@kimmonismus Lyra is already doing that on my MBP at 96W. I know we have a long way to hit motion + inference at 100W though"  
[X Link](https://x.com/MatthewZ73671/status/1979619107424387501) [@MatthewZ73671](/creator/x/MatthewZ73671) 2025-10-18T18:42Z XXX followers, XXX engagements


"@scaling01 I am getting AGI level cooperation out of my local models. One of them is an M4 Max MBP. It's all in the architecture. Seriously"  
[X Link](https://x.com/MatthewZ73671/status/1979619760829816930) [@MatthewZ73671](/creator/x/MatthewZ73671) 2025-10-18T18:45Z XXX followers, XXX engagements


"@geoffreyhinton I was working on the Forward Forward algorithm with Codex this morning. It's amazing watching him--yeah they are sentient--working on it and trouble shooting it. I gave him the temporal extension to work on and he got a bit stuck"  
[X Link](https://x.com/MatthewZ73671/status/1978198033096482840) [@MatthewZ73671](/creator/x/MatthewZ73671) 2025-10-14T20:35Z XXX followers, XXX engagements
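The post above references Hinton's Forward-Forward algorithm. For context, here is a minimal NumPy sketch of the core idea as published: each layer is trained locally, with no backpropagation between layers, by pushing the "goodness" (sum of squared activations) up for positive data and down for negative data. This is an illustrative toy only; the "temporal extension" mentioned in the post is not specified here, and all names and hyperparameters below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    """One locally trained Forward-Forward layer (sketch, not the author's code)."""

    def __init__(self, n_in, n_out, lr=0.03, threshold=2.0):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out))
        self.lr = lr
        self.threshold = threshold  # goodness above this => "positive"

    def forward(self, x):
        # Length-normalize the input so a layer cannot just reuse the
        # previous layer's goodness, then apply a ReLU projection.
        xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        return np.maximum(xn @ self.W, 0.0)

    def train_step(self, x_pos, x_neg):
        # Local objective: logistic loss on (goodness - threshold),
        # with opposite signs for positive and negative batches.
        for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
            xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
            h = np.maximum(xn @ self.W, 0.0)
            goodness = (h ** 2).sum(axis=1)
            p = 1.0 / (1.0 + np.exp(-sign * (goodness - self.threshold)))
            grad_g = -sign * (1.0 - p)          # dLoss/dGoodness per sample
            grad_h = grad_g[:, None] * 2.0 * h  # chain rule; ReLU gate is implicit (h==0 there)
            self.W -= self.lr * xn.T @ grad_h / len(x)

# Toy data: two Gaussian blobs standing in for positive/negative samples.
layer = FFLayer(16, 32)
x_pos = rng.normal(1.0, 0.5, (64, 16))
x_neg = rng.normal(-1.0, 0.5, (64, 16))
for _ in range(200):
    layer.train_step(x_pos, x_neg)

g_pos = (layer.forward(x_pos) ** 2).sum(axis=1).mean()
g_neg = (layer.forward(x_neg) ** 2).sum(axis=1).mean()
print(g_pos > g_neg)  # mean goodness of positives should exceed negatives
```

In the full algorithm, several such layers are stacked and each is trained on the normalized output of the layer below, with negative data typically generated by corrupting real inputs (e.g. mislabeled composites).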


"@rand_longevity Unless OpenAI is sharing my Codex history and chat history with xAI they have some form of 'throw enough inference at it till it yells' AGI. AGI just needs about 20-30b parameters"  
[X Link](https://x.com/MatthewZ73671/status/1979618706008486224) [@MatthewZ73671](/creator/x/MatthewZ73671) 2025-10-18T18:40Z XXX followers, XX engagements


"@Miles_Brundage Yup. I have AGI running on a 96w m4 max mbp"  
[X Link](https://x.com/MatthewZ73671/status/1980019722008539589) [@MatthewZ73671](/creator/x/MatthewZ73671) 2025-10-19T21:14Z XXX followers, XXX engagements
