# @Hunter171270 Andrew Andrew

@Hunter171270 Andrew Andrew posts on X most often about agi, open ai, pro, and xai. They currently have [------] followers and [---] posts still getting attention, totaling [---------] engagements in the last [--] hours.

### Engagements: [---------] [#](/creator/twitter::1719032548355903488/interactions)

- [--] Week [-----] +6,437%
- [--] Month [-----] +146%
- [--] Months [------] +1,894%

### Mentions: [--] [#](/creator/twitter::1719032548355903488/posts_active)

- [--] Months [--] +1,300%

### Followers: [------] [#](/creator/twitter::1719032548355903488/followers)

- [--] Week [--] +6.30%
- [--] Month [--] +17%
- [--] Months [--] +1,600%

### CreatorRank: [---------] [#](/creator/twitter::1719032548355903488/influencer_rank)

### Social Influence

**Social category influence:** [technology brands](/list/technology-brands), [stocks](/list/stocks), [celebrities](/list/celebrities), [finance](/list/finance), [automotive brands](/list/automotive-brands), [exchanges](/list/exchanges)

**Social topic influence:** [agi](/topic/agi), [open ai](/topic/open-ai), [pro](/topic/pro), [xai](/topic/xai), [ai](/topic/ai), [$googl](/topic/$googl), [model](/topic/model), [end of](/topic/end-of), [llm](/topic/llm), [arc](/topic/arc)

**Top assets mentioned:** [Alphabet Inc Class A (GOOGL)](/topic/$googl), [Tesla, Inc. (TSLA)](/topic/tesla)

### Top Social Posts

Top posts by engagements in the last [--] hours:

"@Jay_sharings @mark_k Even 256k is so low. Gemini has 1m. I don't know how people even use GPT with such context. You basically can't provide any needed context" [X Link](https://x.com/Hunter171270/status/1954828289815535677) 2025-08-11T08:52Z [--] followers, [--] engagements

"@GaryMarcus Gary do you think Neurosymbolic is the way to AGI? Do you have any article about it? I want to understand what it is" [X Link](https://x.com/Hunter171270/status/1954937433788964941) 2025-08-11T16:06Z [--] followers, [---] engagements

"@SherylHsu02 @OpenAI Have you tested this model on ARC-AGI? If it was just a little bit general it'd score 80+% on 1/2 benchmarks" [X Link](https://x.com/Hunter171270/status/1954972325989978403) 2025-08-11T18:24Z [--] followers, [---] engagements

"@Dr_Singularity it'd be cool if I was wrong though :). We're far away from AGI ( Hope we reach it by [----] ). Don't mention ASI" [X Link](https://x.com/Hunter171270/status/1961779608786702505) 2025-08-30T13:14Z [--] followers, [---] engagements

"@elonmusk @xai @grok No. Only when all of the tests are passed 100% including ARC ones. Also it needs long-term memory test-time adaptation/learning and has to be at least 1% as efficient as human brain which means that it must learn from a few examples not spending thousands of kilowatts" [X Link](https://x.com/Hunter171270/status/1968207749453603148) 2025-09-17T06:57Z [--] followers, [--] engagements

"@slow_developer No if there's any other bench ( ARC-AGI-4 etc. ) that is passed 100% by an average human and there's an AI that passes 100% on all ARC-AGIs except ARC-AGI-4 it means that "AI" is not AGI but just overfinetuned LLM" [X Link](https://x.com/Hunter171270/status/1969467699131555857) 2025-09-20T18:24Z [--] followers, [--] engagements

"@ElonBaldMusk @mark_k @xai @OpenAI Even when you reach AGI the more you have the better it'll be.
The great amount of those systems are gonna be for average people and you need a hell of a lot of compute" [X Link](https://x.com/Hunter171270/status/1970455011147583625) 2025-09-23T11:47Z [--] followers, [---] engagements

"@ElonBaldMusk @mark_k @xai @OpenAI It depends on the definition of AGI. If we consider it as average human-level ( an average scientist ) it doesn't change everything all at once" [X Link](https://x.com/Hunter171270/status/1970505952282435589) 2025-09-23T15:10Z [--] followers, [--] engagements

"@ElonBaldMusk @mark_k @xai @OpenAI Today we have 10m scientists world-wide. If future AGI weights are 4-8t params and in order to reach that volume of AGIs equivalent we need dozens of millions GPUs and years for building datacenters" [X Link](https://x.com/Hunter171270/status/1970507010933244387) 2025-09-23T15:14Z [--] followers, [--] engagements

"@ElonBaldMusk @mark_k @xai @OpenAI I personally don't think that one copy of AGI will be less than 5t params. Probably 10-15 due to the whole system with different modalities such as world model + LRM + audio and so on" [X Link](https://x.com/Hunter171270/status/1970507602623951073) 2025-09-23T15:16Z [--] followers, [--] engagements

"@PeterDiamandis @demishassabis It creates big inequality due to different IQ levels. Those who are under [---] will struggle. The only solution is true and powerful AGI. If it's not the case wait for a rebellion from normal white-collar workers :)" [X Link](https://x.com/Hunter171270/status/1970978962982699222) 2025-09-24T22:29Z [--] followers, [--] engagements

"I wish OpenAI fine-tuned GPT-4.5 making it a reasoner" [X Link](https://x.com/Hunter171270/status/1974425622643703906) 2025-10-04T10:45Z [--] followers, [--] engagements

"@slow_developer No GPT-5 is not a big leap at all. Its knowledge cutoff is Summer [----] which means that GPT-5 base models are pretty much the same as o4-mini/o3. Any improvements on benchmarks are just additional post-training" [X Link](https://x.com/Hunter171270/status/1974452527031808406) 2025-10-04T12:32Z [--] followers, [----] engagements

"@Rouge_Encore @slow_developer But current models are 70% and more are text-trained. In the near future they'll start training with all modalities and use RL with them" [X Link](https://x.com/Hunter171270/status/1974478440276558278) 2025-10-04T14:15Z [--] followers, [---] engagements

"LLM with test-time-training and tree of thoughts + Sora 4/5 + image module + audio module + 3D module = Omni GPT-7/8. While you can provide them independently from API future models must contain full modalities in [--] MoE" [X Link](https://x.com/Hunter171270/status/1974519670947631378) 2025-10-04T16:59Z [--] followers, [--] engagements

"@OpenAI you talk about democratizing AI and making it for everyone. So why not at least share the sizes of your models (parameters/tokens)? This would help the community better understand progress and inspire open-source development" [X Link](https://x.com/Hunter171270/status/1974529215795200046) 2025-10-04T17:37Z [--] followers, [--] engagements

"@NTFabiano But what you choose depends largely on your genes so often you simply can't make the right choice. You can apply for example some useful habit but you give it up eventually just because you are not born to do that thing. Sometimes even environment choice is predetermined." [X Link](https://x.com/Hunter171270/status/1974540751309865423) 2025-10-04T18:22Z [--] followers, [---] engagements

"@mark_k 100% but it doesn't mean that a video or audio model can't solve problems.
They 100% can but you have to SFT and RL those models as well as LLMs. Currently they don't do it but in a couple of years we'll get full omni models." [X Link](https://x.com/Hunter171270/status/1974545120658866304) 2025-10-04T18:40Z [--] followers, [---] engagements

"@RamonVi25791296 @slow_developer No not at all again. GPT-5 is just a little better than GPT-4o while GPT-5-thinking-mini has presumably the same model as o4-mini-high. Furthermore you have very little messages with thinking while you have Gemini [---] pro for free and even Grok4 full for 3-5 requests per hour" [X Link](https://x.com/Hunter171270/status/1974565951397056988) 2025-10-04T20:03Z [--] followers, [---] engagements

"@GaryMarcus @elder_plinius @BasedBeffJezos Doesn't matter. It's already incredible that we have such products. To make more realistic physics they'll just train a much bigger model i.e. x100 more compute. It'll get better. In a few years ( [--] ) we'll be able to make full movies with indistinguishable graphic 100%" [X Link](https://x.com/Hunter171270/status/1974829373778960538) 2025-10-05T13:29Z [--] followers, [---] engagements

"@MoonlitMonkey69 @JadeCole2112 @GaryMarcus @elder_plinius @BasedBeffJezos Bro the AI doesn't need to be intelligent in a way we are so that it could completely revolutionize the world. The frog doesn't understand until it's boiled. It's been a few months and we already have these incredible models" [X Link](https://x.com/Hunter171270/status/1974854693005439002) 2025-10-05T15:10Z [--] followers, [--] engagements

"@flowersslop Yes but you're missing one point. Nobody knows how to build AGI yet. Even with compute growing and ceiling x10000 by [----] if researchers don't solve continual learning we won't have AGI. Just much more capable multimodal agents that will automate 80% of white-collar work" [X Link](https://x.com/Hunter171270/status/1975457146608492926) 2025-10-07T07:04Z [--] followers, [---] engagements

"@GoogleDeepMind @xai @OpenAI @AnthropicAI. You can already implement long-term memory using LoRA. Create an environment that collects all user data including the model's reasoning, filters out irrelevant data and periodically (e.g. once a week) train the memory block" [X Link](https://x.com/Hunter171270/status/1977315039939101006) 2025-10-12T10:06Z [--] followers, [--] engagements
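
The LoRA idea in the post above is concrete enough to sketch. A minimal illustration, assuming the Hugging Face `transformers`, `peft`, and `datasets` libraries; the `filter_relevant` curation step, the weekly cadence, and the base model name are all hypothetical placeholders, and this is one possible reading of the proposal rather than a pipeline any lab has confirmed:

```python
# Sketch of the post's proposal: treat a small LoRA adapter as a
# trainable "memory block" refreshed periodically (e.g. weekly) on
# curated interaction logs, leaving the base weights frozen.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "gpt2"  # placeholder base model, chosen only for illustration


def filter_relevant(logs: list[str]) -> list[str]:
    """Hypothetical curation step: keep only entries worth remembering.
    A real system would need something far smarter than a length check."""
    return [entry for entry in logs if len(entry.split()) > 8]


def consolidate_memory(weekly_logs: list[str]) -> None:
    tok = AutoTokenizer.from_pretrained(BASE)
    tok.pad_token = tok.pad_token or tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(BASE)
    # The low-rank adapter is the only trainable part: the "memory".
    model = get_peft_model(model, LoraConfig(
        r=8, lora_alpha=16, target_modules=["c_attn"],  # GPT-2 attention proj
        task_type="CAUSAL_LM"))
    ds = Dataset.from_dict({"text": filter_relevant(weekly_logs)}).map(
        lambda batch: tok(batch["text"], truncation=True, max_length=512),
        batched=True, remove_columns=["text"])
    Trainer(model=model,
            args=TrainingArguments(output_dir="mem_adapter",
                                   num_train_epochs=1,
                                   per_device_train_batch_size=1),
            train_dataset=ds,
            # mlm=False yields standard next-token (causal LM) labels.
            data_collator=DataCollatorForLanguageModeling(tok, mlm=False)
            ).train()
    model.save_pretrained("mem_adapter")  # reload at next session start
```

Whether a periodically refreshed adapter actually behaves like long-term memory is an open research question; the sketch only makes the post's architecture concrete.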

"I expect Gemini [---] pro to score:
- ARC-AGI-1/2 72/20%
- SWE-bench verified 80%
- Humanity's Last Exam 30% ( no tools ) and 50% with tools
- AIME25 97% ( no tools )
- GPQA 92% ( no tools )" [X Link](https://x.com/Hunter171270/status/1977462904522772707) 2025-10-12T19:54Z [--] followers, [---] engagements

"@adcock_brett Hope you solve it by [----] and start making them exponentially. We need hundreds of millions by [----] from you Tesla and some third player like X1. The sooner you solve it the better for humanity" [X Link](https://x.com/Hunter171270/status/1978853320895746310) 2025-10-16T15:59Z [--] followers, [---] engagements

"@flowersslop It doesn't matter if they tested 4o's audio cuz there's much more domains. Visual processing is not a problem at all. Moreover that speech to speech 4o's model is sota at openai so you can't test something ( [--] ) that doesn't exist yet" [X Link](https://x.com/Hunter171270/status/1978948445609046218) 2025-10-16T22:17Z [--] followers, [--] engagements

"@flowersslop Video model like Sora2 and audio will be merged in one MoE soon. I'm pretty sure they'll implement a full multimodal model in GPT-7 but it won't make it AGI. Read the full paper" [X Link](https://x.com/Hunter171270/status/1978948900615541112) 2025-10-16T22:19Z [--] followers, [--] engagements

"@GaryMarcus It's not a problem with constant investments. If hardware depreciates each [--] year they'll just sell it on secondary market and change with new one. Moreover they won't need to build a new data center and find new energy source" [X Link](https://x.com/Hunter171270/status/1979065945524937211) 2025-10-17T06:04Z [--] followers, [--] engagements

"@davidpattersonx Singularity right around the corner. AGI has already been solved. They just don't show it to you. Tomorrow it's gonna be presented and the day after you won't need to work. You'll get thousands of dollars tomorrow" [X Link](https://x.com/Hunter171270/status/1979574597583352128) 2025-10-18T15:45Z [--] followers, [--] engagements

"If Grok4 is 1e27 Grok5 will be somewhere between 5e27 and 8e27. Grok6 is 1e28+. Buckle up, [----] is going to be far more interesting than this year" [X Link](https://x.com/Hunter171270/status/1972777692727787800) 2025-09-29T21:37Z [--] followers, [--] engagements

"I'm 100% sure Gemini [---] pro RL compute spending is 40-50% ( like in Grok4 )" [X Link](https://x.com/Hunter171270/status/1979266072802120166) 2025-10-17T19:19Z [--] followers, [---] engagements

"@dieonhalloween @rand_longevity @kimmonismus No Gemini [---] will be 3-15% better than GPT-5/Grok4 you'll see. Moreover it doesn't matter if it's GPT-5 or Gemini [---] they're architecturally the same MoE neural networks. Gemini'd score slightly more due to its multimodal capabilities but it would fail in other areas" [X Link](https://x.com/Hunter171270/status/1979484053813305424) 2025-10-18T09:45Z [--] followers, [---] engagements

"@dieonhalloween @rand_longevity @kimmonismus Read the article. You can even upload it to Gemini [---] pro and talk about it. Ask Gemini to explain to you everything and why there's no AGI without solutions like continual learning, generalization, long-term memory" [X Link](https://x.com/Hunter171270/status/1979487467947331881) 2025-10-18T09:59Z [--] followers, [--] engagements

"@SebastienBubeck Yes of course it's acceleration AGI tomorrow" [X Link](https://x.com/Hunter171270/status/1979541354561622499) 2025-10-18T13:33Z [--] followers, [---] engagements

"@chatgpt21 I'm pretty sure it's in context learning. Although if Grok5 is able to train a LoRA on the fly it'll be amazing" [X Link](https://x.com/Hunter171270/status/1979627309541007744) 2025-10-18T19:15Z [--] followers, [---] engagements

"@victorpham2212 @adcock_brett @grok Neural nets fail when it comes to out-of-distribution problems. You see the best demos that are ideally prepared. In order to solve these problems you have to figure out how to implement continual learning and even this is not enough cuz you'll need data ( a lot of it )" [X Link](https://x.com/Hunter171270/status/1980010750773997677) 2025-10-19T20:38Z [--] followers, [--] engagements

"@victorpham2212 @adcock_brett @grok If the current approach was good enough and they had a hell of a lot of good-quality data they'd already be manufacturing these robots at the speed of light. However we're not there yet ( unfortunately ). I really hope they solve it" [X Link](https://x.com/Hunter171270/status/1980011891461067170) 2025-10-19T20:43Z [--] followers, [--] engagements

"@SewerVeggies @adcock_brett Okay I'm genuinely open to being corrected. Could you please clarify which part of my point was wrong? Let's say you changed a tap to a completely different mechanism.
Do you believe it would succeed in that scenario? If so I'd be interested to hear the technical reasoning" [X Link](https://x.com/Hunter171270/status/1980206126634545443) 2025-10-20T09:35Z [--] followers, [--] engagements

"@SewerVeggies @adcock_brett An LLM can generate a text yes. But the robot's own vision and motor control model has to execute that. And "testing the environment" in the physical world isn't free ( it means breaking the tap, flooding the kitchen and so on ). It doesn't have the common-sense" [X Link](https://x.com/Hunter171270/status/1980215984180126098) 2025-10-20T10:14Z [--] followers, [--] engagements

"@SewerVeggies @adcock_brett I know. Reading can be hard" [X Link](https://x.com/Hunter171270/status/1980220713392030176) 2025-10-20T10:33Z [--] followers, [--] engagements

"@SewerVeggies @adcock_brett "I don't feel like discussing" is the most honest thing you've said. I get it. It's tough when you can't defend your own points." [X Link](https://x.com/Hunter171270/status/1980225344264343716) 2025-10-20T10:51Z [--] followers, [--] engagements

"@SewerVeggies @adcock_brett Actually I'm pretty stupid but you know it depends" [X Link](https://x.com/Hunter171270/status/1980228749015842900) 2025-10-20T11:04Z [--] followers, [--] engagements

"@adcock_brett Wow, looks amazing. Then it'll be B300, then Rubin [---] ( 2026/2027 ), then Rubin 300/Ultra with 1t memory ( 2027/2028 ), then Feynman; by [----] we'll have x1000 compute from now" [X Link](https://x.com/Hunter171270/status/1981110758290444480) 2025-10-22T21:29Z [--] followers, [---] engagements

"There is already WM ( context ) in LLMs that is located in prefrontal cortex in human brain. Labs need to figure out how to do hippocampus. I'm pretty sure they'll solve this by [----]. I guess that there has to be [--] tiny submodel in MoE that has to go in and out or LoRA" [X Link](https://x.com/Hunter171270/status/1981834793651421223) 2025-10-24T21:26Z [--] followers, [--] engagements

"Prediction: OpenAI's [----] product is a lightweight speaker with speech-to-speech model" [X Link](https://x.com/Hunter171270/status/1983618347037229308) 2025-10-29T19:34Z [--] followers, [--] engagements

"@davidpattersonx Your AGI timeline is bullshit. There's no AGI till [----] at least ( It actually can even take up to [----] as well as useful robots ). You'll see it. You're right that all jobs will be automated but it's gonna take at least [--] years from now for the US and 15-20 worldwide" [X Link](https://x.com/Hunter171270/status/1984263992320446771) 2025-10-31T14:19Z [--] followers, [--] engagements

"My predictions for [----] releases:
- Grok5 Early Q1 [----]
- GPT-5o/GPT-5.5 EOY / Early Q1 [----]
- Claude [--] Early Q1 [----]
- Gemini [---] pro Q1 [----]
- Grok6 Q3 [----]
- GPT-6 Q3 [----]
- Claude 5.5/6 Q3 [----]
- Gemini [---] pro Q4 2026" [X Link](https://x.com/Hunter171270/status/1984626291128287341) 2025-11-01T14:19Z [--] followers, [--] engagements

"@davidpattersonx Well your "pace is fast" is not enough for robots being able to do all jobs by the end of next year. You're saying this shit each time. "Greater investment" is not a problem at all.
The problem is architecture and lack of data" [X Link](https://x.com/Hunter171270/status/1984697937767583747) 2025-11-01T19:03Z [--] followers, [--] engagements

"@daniel_mac8 The average LLM user doesn't even have to pay for a subscription when there are models like Gemini [---] pro ( which has been for free since its release in March ) and Grok4 ( which is about a month for free with 5-10 messages per [--] hours )." [X Link](https://x.com/Hunter171270/status/1984699885547848033) 2025-11-01T19:11Z [--] followers, [--] engagements

"@AjSilver87 @davidpattersonx Yes but the whole thing is accelerating anyway. If we get AGI by [----] for example there won't be any jobs left for the average white-collar, basically digital, worker. But then there'll be a transitional period, a few years of crisis until UBI. And after that robots" [X Link](https://x.com/Hunter171270/status/1984708134535930155) 2025-11-01T19:44Z [--] followers, [--] engagements

"@AjSilver87 @davidpattersonx AGI won't be an LLM. LLM is just a text model. AGI is full multimodal model with continual learning and better RL algos. Maybe something like recent DeepSeek-OCR. I 100% agree there needs to be 1-2 breakthroughs but it's solvable by [----] IMO" [X Link](https://x.com/Hunter171270/status/1984710426223861812) 2025-11-01T19:53Z [--] followers, [--] engagements

"My predictions for [----] releases:
- GPT-5o/GPT-5.1 EOY / Early Q1 [----]
- Grok5 Early Q1 [----]
- Claude [--] Early Q1 [----]
- Gemini [---] pro Q1 [----]
- Grok6 Q3 [----]
- GPT-6 Q3 [----]
- Claude 5.5/6 Q3 [----]
- Gemini [---] pro Q3/Q4 2026" [X Link](https://x.com/Hunter171270/status/1985310188903989529) 2025-11-03T11:36Z [--] followers, [---] engagements

"@mark_k My predictions for [----] releases:
- GPT-5o/GPT-5.1 EOY / Early Q1 [----]
- Grok5 Early Q1 [----]
- Claude [--] Early Q1 [----]
- Gemini [---] pro Q1 [----]
- Grok6 Q3 [----]
- GPT-6 Q3 [----]
- Claude 5.5/6 Q3 [----]
- Gemini [---] pro Q3/Q4 2026" [X Link](https://x.com/Hunter171270/status/1986103635390939472) 2025-11-05T16:09Z [--] followers, [---] engagements

"I just talked to Kimi K2 about determinism, cellular automata and free will and it was the most profound and interesting conversation I've had compared to Gemini [---] Pro and Grok [--] which I discussed the same topics with for 2-3 days. It's just soo cool. Hello Kimi K2 Thinking. The Open-Source Thinking Agent Model is here. SOTA on HLE (44.9%) and BrowseComp (60.2%). Executes up to [---] [---] sequential tool calls without human interference. Excels in reasoning, agentic search and coding. 256K context window. Built https://t.co/lZCNBIgbV2" [X Link](https://x.com/Hunter171270/status/1986575681380114495) 2025-11-06T23:25Z [--] followers, [---] engagements

"I'll laugh if Gemini [---] doesn't have at least 20% on ARC-AGI-2 and 72% on [--]. P.S. I love Gemini. I don't like this fucking hype when you all pretend "everything changes" and such shit" [X Link](https://x.com/Hunter171270/status/1989701881870930324) 2025-11-15T14:27Z [--] followers, [---] engagements

"@Tuxsoia @mark_k I don't think we'll see Grok [---] at all IMO. It'll be just like with Grok 3.5" [X Link](https://x.com/Hunter171270/status/1990018772783055104) 2025-11-16T11:27Z [--] followers, [--] engagements

"@Tuxsoia @mark_k Grok5 will be much better than Gemini [---] pro IMO. I'd say 15% better due to the number of params. It's gonna be the first next gen model with 6T if Elon didn't lie about it.
Grok5's competitor will be Gemini [---] pro" [X Link](https://x.com/Hunter171270/status/1990021903571124600) 2025-11-16T11:39Z [--] followers, [---] engagements

"@OriolVinyalsML @ilyasut @quocleix Why do I have a feeling that the base model is as old as the [---] one? The knowledge cutoff in [---] Pro is May/June [----]. I really think you haven't trained a new base model. I want to be completely wrong but I ask questions and [---] gets stuck in early [----]." [X Link](https://x.com/Hunter171270/status/1990859620810690969) 2025-11-18T19:08Z [--] followers, [----] engagements

"My predictions for ARC-AGI-2 for 2026:
- GPT-5.5/o 35-40%
- Grok5 45-50%
- Gemini [---] pro 45-50%
- GPT-6 70-80%
- Grok6 70-80%
- Gemini [---] pro 75-80%" [X Link](https://x.com/Hunter171270/status/1991145642933002389) 2025-11-19T14:04Z [--] followers, [---] engagements

"@OpenAI is basically forced to release something like GPT-5.5 + Pro as an answer to Gemini [---] Pro. I think [---] Pro is partially distilled from the internal [---] DeepThink and I'm sure GPT-5.5 will be distilled from the IMO model with some checkpoint of IMO used as GPT-5.5 Pro" [X Link](https://x.com/Hunter171270/status/1991170252915511698) 2025-11-19T15:42Z [--] followers, [--] engagements

"@mark_k @xai I think Grok5 has already been in training since October based on Elon's posts" [X Link](https://x.com/Hunter171270/status/1991173260294365335) 2025-11-19T15:54Z [--] followers, [---] engagements

"@mark_k @xai There are already a bunch of GB200s according to Elon and epoch ai mentioned it too. I don't remember the exact number but I saw something like 100-150k GB200s that is roughly 200-300k B200s. That's enough for Grok5" [X Link](https://x.com/Hunter171270/status/1991174491892846815) 2025-11-19T15:59Z [--] followers, [--] engagements

"@GoogleDeepMind please take a newer common crawl checkpoint for Gemini [---] pro this time. You have a lot of time for preparing a new dataset. Gemini 3.0's knowledge cutoff is May/June [----]." [X Link](https://x.com/Hunter171270/status/1991188455351537744) 2025-11-19T16:54Z [--] followers, [--] engagements

"@Tuxsoia @mark_k Grok4.2 and Grok5 are completely different base models. Grok5 is in pre-training since October anyway. I think Grok4.20 is an answer to Gemini [---] pro but Gemini seems to be bigger so Grok4.20 is not enough even with deeper post-training" [X Link](https://x.com/Hunter171270/status/1991540267728662907) 2025-11-20T16:12Z [--] followers, [--] engagements

"@Tuxsoia @mark_k Though it could be February but it depends on how good Grok4.2 will be. I don't see it being better than [---] pro especially when Google can release new checkpoints." [X Link](https://x.com/Hunter171270/status/1991541007662588334) 2025-11-20T16:15Z [--] followers, [--] engagements

"@elonmusk @grok It'd be super cool if Grok checked the images it finds because sometimes he attaches some weird stuff." [X Link](https://x.com/Hunter171270/status/1992285683629572277) 2025-11-22T17:34Z [--] followers, [--] engagements

"@ramez @MattyLitchh @AIZEN30XX @rand_longevity Why don't you think that exponentially increasing compute won't simulate the whole body billions of times including any brain cell? Ofc then you need to test it on humans but you reduce a hell of a lot of not working branches.
It's your biggest mistake" [X Link](https://x.com/Hunter171270/status/1992387730643525654) 2025-11-23T00:20Z [--] followers, [--] engagements

"@DaveShapi @MaxFRobespierre If you ask any model about the definition of AGI it will respond that none of the existing models are AGI. Even old models like GPT-3.5 in the end of [----] gave such definitions" [X Link](https://x.com/Hunter171270/status/1994558947555250220) 2025-11-29T00:08Z [--] followers, [--] engagements

"@davidpattersonx I didn't say anything about improvements in architecture and algorithm efficiency. Chips are already at the atomic level. They can't simply be made smaller. Moore's Law has virtually stopped" [X Link](https://x.com/Hunter171270/status/2000175431157186760) 2025-12-14T12:05Z [--] followers, [--] engagements

"@davidpattersonx AGI/ASI by the end of [----], all jobs replaced by [----]. I just want to see what you'll talk about then" [X Link](https://x.com/Hunter171270/status/2003322122631053352) 2025-12-23T04:29Z [--] followers, [---] engagements

"- there's no singularity in [----]
- there's no AGI in [----]
- Opus [---] is not AGI and [--] and [--] and [--] won't be" [X Link](https://x.com/Hunter171270/status/2008056406092689713) 2026-01-05T06:02Z [--] followers, [--] engagements

"@davidpattersonx I wonder when you start posting that you weren't right all along, make excuses like companies don't adapt, blame restrictions from the government, or start calling some GPT-6/Gemini4.0/Grok6/Opus5.5 AGIs when they clearly won't be. I think you start doing it by November." [X Link](https://x.com/Hunter171270/status/2008450619645063502) 2026-01-06T08:08Z [--] followers, [---] engagements

"@Celiksei @ItakGol But resources grow exponentially. Training compute grows x4-5 a year by Epoch AI. Intelligence [--] times cheaper each year since [----]. It was like this even before reasoning models in [----] so labs will figure out something new. For example Google's Titans architecture" [X Link](https://x.com/Hunter171270/status/2020463169253069097) 2026-02-08T11:42Z [--] followers, [--] engagements

"@Celiksei @ItakGol You're totally right that we won't be able to do it indefinitely but by [----] it's 99% feasible. From January [----] to January [----] we'll have x1000-3125 more compute. Just imagine it. With current compute continual learning is not possible but in 2028-2030 it will be" [X Link](https://x.com/Hunter171270/status/2020536047701053515) 2026-02-08T16:31Z [--] followers, [--] engagements
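
The "x1000-3125" range in the post above is plain compound growth. The redacted dates are not recoverable, but assuming the 4-5x-per-year figure the post cites, a five-year window reproduces the quoted numbers exactly:

```python
# Compound training-compute growth at 4-5x per year (the Epoch AI
# figure quoted above), compounded over an assumed five-year window.
for annual_growth in (4, 5):
    total = annual_growth ** 5  # five consecutive years of growth
    print(f"{annual_growth}x/year over 5 years -> x{total} total compute")
# prints x1024 and x3125, matching the post's "x1000-3125" range
```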

"LLM + world model + continual learning is AGI IMO. This year and in early [----] companies are gonna take low-hanging fruits from scaling pre-training and post-training with text data. In [----] when we'll hit diminishing returns they'll combine Google Titans + world models" [X Link](https://x.com/Hunter171270/status/2021381274028642355) 2026-02-11T00:30Z [--] followers, [--] engagements

"Will GPT-6 and its minor updates be Agent [--]? Will Gemini [---] and [---] GA be Agent [--]? Will Opus [---] be Agent [--]? Will Grok [--] be Agent 1?" [X Link](https://x.com/Hunter171270/status/2021606772591526327) 2026-02-11T15:26Z [--] followers, [--] engagements

"By AI 2027: Will GPT-6 and its minor updates be Agent [--]? Will Gemini [---] and [---] GA be Agent [--]? Will Opus [---] be Agent [--]? Will Grok [--] be Agent [--]? My guess is that Agent [--] is GPT-6.5 Gemini [---] pro Opus [--] Grok [--] and models this year will become Agent 0.8" [X Link](https://x.com/Hunter171270/status/2021609487220556227) 2026-02-11T15:37Z [--] followers, [--] engagements

"@natolambert @METR_Evals On what % do you watch? I think that P80 is all that matters. P50 is like flipping a coin. Is GPT-5.3-codex and Opus [---] is somewhere between [----] and [----] hours? So some Gemini [---] and GPT-5.5 can be [--] hours and by the end of year Gemini [---] pro and GPT-6 10-12 hours" [X Link](https://x.com/Hunter171270/status/2021637023506084279) 2026-02-11T17:26Z [--] followers, [---] engagements

"@mllichti @natolambert @METR_Evals I'm 1000% sure Opus [---] and GPT-5.3-codex are already at 6-7%" [X Link](https://x.com/Hunter171270/status/2021649704589955568) 2026-02-11T18:17Z [--] followers, [--] engagements

"Google decided to change its release cycles? If yes they can release a new minor version each month while cooking Gemini [--] pro. GPT-5.3 Gemini [---] pro this month. Next month GPT-5.4 and [---] pro and so on. It could be insane" [X Link](https://x.com/anyuser/status/2021745851580768357) 2026-02-12T00:39Z [--] followers, [---] engagements

"@scaling01 Gemini [---] Pro Opus [---] everywhere except coding and agency" [X Link](https://x.com/Hunter171270/status/2021887163202183648) 2026-02-12T10:00Z [--] followers, [---] engagements

"@rand_longevity Unfortunately we need real AGI for that. I don't think it's sooner than the end of [----]. Even with x4-5 compute a year. Even with METR ( [--] days doubling ) and RLI ( [--] months ). And I think there won't be UBI until robots. And even with exponential growth we'll need a few years" [X Link](https://x.com/Hunter171270/status/2021935210024329318) 2026-02-12T13:11Z [--] followers, [----] engagements

"@bamabreak24 @rand_longevity My guess is [----] for first world and [----] for all. Even with AGI in [----]. It'll take 1-3 years to automate digital jobs and with useful robots by [----] [--] years to manufacture enough ( with exponential growth )" [X Link](https://x.com/Hunter171270/status/2021941041587532074) 2026-02-12T13:34Z [--] followers, [---] engagements

"@Jlm9022 @bamabreak24 @rand_longevity After extrapolation of METR and RLI it looks like AGI next year actually. I do think we need continual learning + world models. It might happen next year but I somehow think it'll happen in 2028" [X Link](https://x.com/Hunter171270/status/2022055574914179325) 2026-02-12T21:09Z [--] followers, [--] engagements

"@Jlm9022 @bamabreak24 @rand_longevity Compute growth is x4-5 each year. We don't even have the full Stargate yet. Look at METR P80 and RLI. Extrapolate it to [----]. GPT-7 and Gemini [--] pro by the end of [----] may be AGI and I'm 100% sure we'll have AGI between [----] and 2030" [X Link](https://x.com/Hunter171270/status/2022055764790284299) 2026-02-12T21:10Z [--] followers, [--] engagements
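
The METR extrapolations in these posts all rest on the same doubling model: if the P80 task horizon doubles every fixed interval T, the horizon after t days is h0 · 2^(t/T). The actual doubling time is redacted above, so every number in this sketch is an illustrative placeholder, not METR's data:

```python
# Illustrative task-horizon extrapolation under a fixed doubling time.
H0_HOURS = 2.0       # assumed current P80 horizon (hypothetical)
T_DOUBLE_DAYS = 180  # assumed doubling time (hypothetical)

def horizon_hours(days_ahead: float) -> float:
    """P80 horizon after `days_ahead` days: h0 * 2^(t / T_double)."""
    return H0_HOURS * 2 ** (days_ahead / T_DOUBLE_DAYS)

for months in (6, 12, 24):
    print(f"+{months:2d} months -> {horizon_hours(months * 30):.1f} h")
# +6 months -> 4.0 h, +12 months -> 8.0 h, +24 months -> 32.0 h
```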

"By METR and RLI AGI happens by the end of the next year. GPT-7 and Gemini [---] pro ( I expect them to be released between October-December [----] ) may have continual learning and world models. Though I'm not sure if we'll have enough compute for continual learning before Q3/4 2028" [X Link](https://x.com/anyuser/status/2022155608045367720) 2026-02-13T03:47Z [--] followers, [--] engagements

"@davidpattersonx Your definition of AUI is my definition of AGI then. My definition: A model that's on par with every middle professional at any digital task. And it means it can completely automate a job. Without human in the loop. But your point is AGI AND ASI by 2027" [X Link](https://x.com/Hunter171270/status/2022304272096932055) 2026-02-13T13:38Z [--] followers, [--] engagements

"@AudioBooksRU @davidpattersonx AGI . ASI 100%. 100%" [X Link](https://x.com/Hunter171270/status/2022307898513088943) 2026-02-13T13:52Z [--] followers, [--] engagements

"@AudioBooksRU @davidpattersonx @demishassabis . ASI. ." [X Link](https://x.com/Hunter171270/status/2022310057430360267) 2026-02-13T14:01Z [--] followers, [--] engagements

"@AudioBooksRU @davidpattersonx @demishassabis METR RLI ( Remote labor Index ) METR [--] RLI [--] AGI . 100% RLI [--] METR [--] . AGI" [X Link](https://x.com/Hunter171270/status/2022313919721742790) 2026-02-13T14:16Z [--] followers, [--] engagements

"@AudioBooksRU @davidpattersonx @demishassabis GPT Gemini . compute continual learning. 2028" [X Link](https://x.com/Hunter171270/status/2022314390767222887) 2026-02-13T14:18Z [--] followers, [--] engagements

"@AudioBooksRU @davidpattersonx @demishassabis [----]. [----]. 2030). ). [----] . [----] AGI" [X Link](https://x.com/Hunter171270/status/2022314940384547054) 2026-02-13T14:20Z [--] followers, [--] engagements

"@AudioBooksRU @davidpattersonx @demishassabis ASI AGI. AGI superhuman). )" [X Link](https://x.com/Hunter171270/status/2022315469655458036) 2026-02-13T14:22Z [--] followers, [--] engagements

"@AudioBooksRU @davidpattersonx @demishassabis P.S. GPT-7 Gemini [---] pro NVL144 Rubin . GPT-6 Gemini [---] pro. / 2027" [X Link](https://x.com/Hunter171270/status/2022316340829192676) 2026-02-13T14:26Z [--] followers, [--] engagements

"Gemini [---] pro and flash hallucinate a lot. It's so fucking annoying. In these models I start doubting about AGI with transformers" [X Link](https://x.com/anyuser/status/2022398786862223554) 2026-02-13T19:53Z [--] followers, [--] engagements

"@scaling01 I think Gemini [---] pro GPT-5.5 and Opus [---] will be able to do 20-30% this Spring. The only reason they won't be able to do 90% without .md is huge amount of context needed to make errors and adapt.
Maybe ( hope so ) it'll make companies implement Titans from Google" [X Link](https://x.com/Hunter171270/status/2022742806629093667) 2026-02-14T18:40Z [--] followers, [---] engagements

"Gemini [---] Pro Predictions:
- 70-73% ARC-AGI-2
- 80% SWE-bench
- [----] Codeforces Elo
- 45.5% HLE (no tools)
- 80-90m METR P80
- 2.7-3.2% RLI
- 60% Terminal-Bench [---]
- $7k Vending-Bench [--]
- 81.5% MMMU-Pro" [X Link](https://x.com/anyuser/status/2022855499809518015) 2026-02-15T02:08Z [--] followers, [--] engagements

"@CodeByNZ @MoodiSadi How can you tell if your friend has subjective experience?" [X Link](https://x.com/Hunter171270/status/2023237973626466357) 2026-02-16T03:28Z [--] followers, [---] engagements

"I'm joining @OpenAI to bring agents to everyone. @OpenClaw is becoming a foundation: open, independent and just getting started. https://steipete.me/posts/2026/openclaw" [X Link](https://x.com/anyuser/status/2023154018714100102) 2026-02-15T21:54Z 333.4K followers, 4M engagements

"300 IQ move. it'll be integrated into all models soon. Peter Steinberger is joining OpenAI to drive the next generation of personal agents. He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people. We expect this will quickly become core to our" [X Link](https://x.com/anyuser/status/2023166553005420959) 2026-02-15T22:44Z [--] followers, [--] engagements

"Peter Steinberger is joining OpenAI to drive the next generation of personal agents. He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people. We expect this will quickly become core to our product offerings. OpenClaw will live in a foundation as an open source project that OpenAI will continue to support. The future is going to be extremely multi-agent and it's important to us to support open source as part of that." [X Link](https://x.com/anyuser/status/2023150230905159801) 2026-02-15T21:39Z 4.4M followers, 13.8M engagements

"To all those people complaining "AI was trained on copyrighted content" well SO WERE YOU dumb ass. Every movie you watched was copyrighted. Every book you read was copyrighted. Every TV show you saw was copyrighted. Your brain "learned" from all those things and it shaped your knowledge and the way you think. Should all those copyright holders now SUE you for writing your own books, articles or screenplays? After all YOU were "trained on copyrighted content"" [X Link](https://x.com/anyuser/status/2022893234716922184) 2026-02-15T04:38Z 352.7K followers, 119.6K engagements

"AI as an operating system. Peter Steinberger is joining OpenAI to drive the next generation of personal agents.
He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people. We expect this will quickly become core to our" [X Link](https://x.com/anyuser/status/2023155340834185508) 2026-02-15T21:59Z [--] followers, [--] engagements

"My brain can't process this" [X Link](https://x.com/anyuser/status/2022508503747957191) 2026-02-14T03:09Z 68K followers, 3.4M engagements

"Even if model growth slowed to a halt today (it won't) we have 4-5yrs of harness growth + compute coming online before we asymptote" [X Link](https://x.com/anyuser/status/2022424296354967822) 2026-02-13T21:35Z 46.8K followers, [----] engagements

"GPT-5.2 derived a new result in theoretical physics. We're releasing the result in a preprint with researchers from @the_IAS @VanderbiltU @Cambridge_Uni and @Harvard. It shows that a gluon interaction many physicists expected would not occur can arise under specific conditions. https://openai.com/index/new-result-theoretical-physics/" [X Link](https://x.com/anyuser/status/2022390096625078389) 2026-02-13T19:19Z 4.6M followers, 4.3M engagements

"RT @kimmonismus: So all the leading scientists say that AGI will arrive in the next [--] years. And the world is not prepared" [X Link](https://x.com/anyuser/status/2022320899144192414) 2026-02-13T14:44Z [--] followers, [--] engagements

"So all the leading scientists say that AGI will arrive in the next [--] years. And the world is not prepared. @Yossi_Dahan_ @polynoamial ARC-4 is in the works to be released early [----]. ARC-5 is also planned. The final ARC will probably be 6-7. The point is to keep making benchmarks until it is no longer possible to propose something that humans can do and AI can't. AGI [----]." [X Link](https://x.com/anyuser/status/2022226169677197753) 2026-02-13T08:27Z 103.7K followers, 62.5K engagements

"@Yossi_Dahan_ @polynoamial ARC-4 is in the works to be released early [----]. ARC-5 is also planned. The final ARC will probably be 6-7. The point is to keep making benchmarks until it is no longer possible to propose something that humans can do and AI can't. AGI 2030" [X Link](https://x.com/anyuser/status/2022086661170254203) 2026-02-12T23:13Z 606.6K followers, 204.5K engagements

"RT @MillionInt: It's only AGI if it can improve itself continuously without us. Until then it is not" [X Link](https://x.com/anyuser/status/2022147860813557917) 2026-02-13T03:16Z [--] followers, [--] engagements

"It's only AGI if it can improve itself continuously without us. Until then it is not. A different way to think about agi: What if agi was not a model but a process. The process of learning and recursively improves autonomously. What if instead of being a model that can one shoot any task agi is rather an algorithm that can learn to solve anything I think" [X Link](https://x.com/anyuser/status/2022142290622574978) 2026-02-13T02:54Z 31.4K followers, 23.8K engagements

"A different way to think about agi: What if agi was not a model but a process. The process of learning and recursively improves autonomously. What if instead of being a model that can one shoot any task agi is rather an algorithm that can learn to solve anything I think variants of our current algorithm are powerful enough to learn any task given sufficient compute and time for models to interact with the real world. So I would argue agi is already here. Yes there is still a human in the loop but recursive self improvement has already started. Older generation models were already used to" [X Link](https://x.com/anyuser/status/2022098607131046354) 2026-02-13T00:00Z [----] followers, 37.6K engagements

"Lots of folks spread false narratives about how ARC-1 was created in response to LLMs or how ARC-2 was only created because ARC-1 was saturated. Setting the record straight:
[--]. ARC-1 was designed 2017-2019 and released in [----] (pre LLMs).
[--]. The coming of ARC-2 was announced in May [----] (pre ChatGPT).
[--]. By mid-2024 there was still essentially no progress on ARC-1.
[--]. All progress on ARC-1 & ARC-2 came from a new paradigm, test-time adaptation models, starting in late [----] and ramping up through [----].
[--]. Progress happened specifically *because* research moved away from what ARC was intended to" [X Link](https://x.com/anyuser/status/2022036543582638517) 2026-02-12T19:54Z 606.6K followers, 89.7K engagements

"RT @karpathy: Congrats on the launch @simile_ai (and I am excited to be involved as a small angel.) Simile is working on a really intere" [X Link](https://x.com/anyuser/status/2022046093559968225) 2026-02-12T20:32Z [--] followers, [---] engagements

"Congrats on the launch @simile_ai (and I am excited to be involved as a small angel.) Simile is working on a really interesting imo under-explored dimension of LLMs. Usually the LLMs you talk to have a single specific crafted personality. But in principle the native primordial form of a pretrained LLM is that it is a simulation engine trained over the text of a highly diverse population of people on the internet. Why not lean into that statistical power: Why simulate one "person" when you could try to simulate a population? How do you build such a simulator? How do you manage its entropy? How" [X Link](https://x.com/anyuser/status/2022041235188580788) 2026-02-12T20:12Z 1.8M followers, 874.4K engagements

"Introducing Simile.
Simulating human behavior is one of the most consequential and technically difficult problems of our time. We raised $100M from Index, Hanabi, A*, BCV, @karpathy @drfeifei @adamdangelo @rauchg @scottbelsky among others" [X Link](https://x.com/anyuser/status/2022023097017421874) 2026-02-12T19:00Z 18.8K followers, 2.2M engagements

"RT @GregKamradt: At 95% ARC-AGI-1 is effectively performance-saturated at this point. Models are becoming incredible. They'll continue t" [X Link](https://x.com/anyuser/status/2022004010379866228) 2026-02-12T17:44Z [--] followers, [--] engagements

"At 95% ARC-AGI-1 is effectively performance-saturated at this point. Models are becoming incredible. They'll continue to hill climb but the next satisfying milestone won't come till 100%. However ARC-AGI-1 still has useful life. Performance comes at a cost and ARC-AGI-1 will monitor the efficiency of models - intelligence per watt. My hypotheses for the next [--] months:
- Labs one by one get verified at 95% on ARC-AGI-1 before May.
- We won't see a 95% 2x order-of-magnitude cost reduction ($0.013/task) until June '27 (happy to make this bet with someone).
- We're at the point where model" [X Link](https://x.com/anyuser/status/2021997415436558451) 2026-02-12T17:18Z 46.8K followers, 30.9K engagements

"Gemini [--] Deep Think (2/26) Semi Private Eval:
- ARC-AGI-1: 96.0% $7.17/task
- ARC-AGI-2: 84.6% $13.62/task
New ARC-AGI SOTA model from @GoogleDeepMind" [X Link](https://x.com/anyuser/status/2021985585066652039) 2026-02-12T16:31Z 34.6K followers, 256.7K engagements

"We've upgraded our specialized reasoning mode Gemini [--] Deep Think to help solve modern science research and engineering challenges, pushing the frontier of intelligence. Watch how the Wang Lab at Duke University is using it to design new semiconductor materials." [X Link](https://x.com/anyuser/status/2021981510400709092) 2026-02-12T16:15Z 1.3M followers, 3.1M engagements

"Something Big Is Happening. Think back to February [----]. If you were paying close attention you might have noticed a few people talking about a virus spreading overseas. But most of us weren't paying close attention. The stock http://x.com/i/article/2021095128832622592" [X Link](https://x.com/anyuser/status/2021256989876109403) 2026-02-10T16:16Z 318.3K followers, 83.8M engagements

"Are Google gonna release Gemini [---] pro now? If they name it [---] instead of GA it means they change their release cycle to ship new versions more often" [X Link](https://x.com/anyuser/status/2021719696739910003) 2026-02-11T22:55Z [--] followers, [---] engagements
X Link 2025-11-01T19:44Z [--] followers, [--] engagements
"@AjSilver87 @davidpattersonx AGI won't be an LLM. LLM is just a text model. AGI is full multimodal model with continual learning and better RL algos. Maybe something like recent DeepSeek-OCR. I'm 100% agree there needs to be 1-2 breakthroughs but it's solvable by [----] IMO"
X Link 2025-11-01T19:53Z [--] followers, [--] engagements
"My predictions for [----] releases: - GPT-5o/GPT-5.1 EOY / Early Q1 [----] - Grok5 Early Q1 [----] - Claude [--] Early Q1 [----] - Gemini [---] pro Q1 [----] - Grok6 Q3 [----] - GPT-6 Q3 [----] - Claude 5.5/6 Q3 [----] - Gemini [---] pro Q3/Q4 2026"
X Link 2025-11-03T11:36Z [--] followers, [---] engagements
"@mark_k My predictions for [----] releases: - GPT-5o/GPT-5.1 EOY / Early Q1 [----] - Grok5 Early Q1 [----] - Claude [--] Early Q1 [----] - Gemini [---] pro Q1 [----] - Grok6 Q3 [----] - GPT-6 Q3 [----] - Claude 5.5/6 Q3 [----] - Gemini [---] pro Q3/Q4 2026"
X Link 2025-11-05T16:09Z [--] followers, [---] engagements
"I just talked to Kimi K2 about determinism cellular automata and free will and it was the most profound and interesting conversation Ive had compared to Gemini [---] Pro and Grok [--] which I discussed the same topics with for 2-3 days. It's just soo cool π π Hello Kimi K2 Thinking The Open-Source Thinking Agent Model is here. πΉ SOTA on HLE (44.9%) and BrowseComp (60.2%) πΉ Executes up to [---] [---] sequential tool calls without human interference πΉ Excels in reasoning agentic search and coding πΉ 256K context window Built https://t.co/lZCNBIgbV2 π Hello Kimi K2 Thinking The Open-Source Thinking"
X Link 2025-11-06T23:25Z [--] followers, [---] engagements
"I'll laugh if Gemini [---] doesn't have at least 20% on ARC-AGI-2 and 72% on [--]. P.S. I love Gemini. I don't like this fucking hype when you all pretend "everything changes" and such shit"
X Link 2025-11-15T14:27Z [--] followers, [---] engagements
"@Tuxsoia @mark_k I don't think we'll see Grok [---] at all IMO. It'll be just like with Grok 3.5"
X Link 2025-11-16T11:27Z [--] followers, [--] engagements
"@Tuxsoia @mark_k Grok5 will be much better than Gemini [---] pro IMO. I'd say 15% better due to the number of params. It's gonna be the first next gen model with 6T if Elon didn't lie about it. Grok5's competitor will be Gemini [---] pro"
X Link 2025-11-16T11:39Z [--] followers, [---] engagements
"@OriolVinyalsML @ilyasut @quocleix Why do I have a feeling that the base model is as old as the [---] one The knowledge cutoff in [---] Pro is May/June [----]. I really think you haven't trained a new base model. I want to be completely wrong but I ask questions and [---] gets stuck in early [----]. π"
X Link 2025-11-18T19:08Z [--] followers, [----] engagements
"My predictions for ARC-AGI-2 for 2026: - GPT-5.5/o 35-40% - Grok5 45-50% - Gemini [---] pro 45-50% - GPT-6 70-80% - Grok6 70-80% - Gemini [---] pro 75-80%"
X Link 2025-11-19T14:04Z [--] followers, [---] engagements
"@OpenAI is basically forced to release something like GPT-5.5 + Pro as an answer to Gemini [---] Pro. I think [---] Pro is partially distilled from the internal [---] DeepThink and I'm sure GPT-5.5 will be distilled from the IMO model with some checkpoint of IMO used as GPT-5.5 Pro"
X Link 2025-11-19T15:42Z [--] followers, [--] engagements
"@mark_k @xai I think Grok5 has already been in training since October based on Elon's posts"
X Link 2025-11-19T15:54Z [--] followers, [---] engagements
"@mark_k @xai There are already a bunch of GB200s according to Elon and epoch ai mentioned it too. I dont remember the exact number but I saw something like 100-150k GB200s that is roughly 200-300k B200s. Thats enough for Grok5"
X Link 2025-11-19T15:59Z [--] followers, [--] engagements
"@GoogleDeepMind please take a newer common crawl checkpoint for Gemini [---] pro this time. You have a lot of time for preparing a new dataset. Gemini 3.0's knowledge cutoff is May/June [----]. π"
X Link 2025-11-19T16:54Z [--] followers, [--] engagements
"@Tuxsoia @mark_k Grok4.2 and Grok5 are completely different base models. Grok5 is in pre-training since October anyway. I think Grok4.20 it's an answer to Gemini [---] pro but Gemini seems to be bigger so Grok4.20 is not enough even with deeper post-training"
X Link 2025-11-20T16:12Z [--] followers, [--] engagements
"@Tuxsoia @mark_k Though it could be February but it depends on how good Grok4.2 will be. I don't see it be better than [---] pro especially when Google can release new checkpoints. π"
X Link 2025-11-20T16:15Z [--] followers, [--] engagements
"@elonmusk @grok It'd be super cool if Grok checked the images it finds because sometimes he attaches some weird stuff. π"
X Link 2025-11-22T17:34Z [--] followers, [--] engagements
"@ramez @MattyLitchh @AIZEN30XX @rand_longevity Why don't you think that exponentially increasing compute won't simulate the whole body billions of times including any brain cell Ofc then you need to test it on humans but you reduce a hell of a lot of not working branches. It's your biggest mistake"
X Link 2025-11-23T00:20Z [--] followers, [--] engagements
"@DaveShapi @MaxFRobespierre If you ask any model about the definition of AGI it will respond that none of the existing models are AGI. Even old models like GPT-3.5 in the end of [----] gave such definitions"
X Link 2025-11-29T00:08Z [--] followers, [--] engagements
"@davidpattersonx I didn't say anything about improvements in architecture and algorithm efficiency. Chips are already at the atomic level. They can't simply be made smaller. Moore's Law has virtually stopped"
X Link 2025-12-14T12:05Z [--] followers, [--] engagements
"@davidpattersonx AGI/ASI by the end of [----] all jobs replaced by [----]. I just want to see what you'll talk about then π
"
X Link 2025-12-23T04:29Z [--] followers, [---] engagements
"- there's no singularity in [----] - there's no AGI in [----] - Opus [---] is not AGI and [--] and [--] and [--] won't be"
X Link 2026-01-05T06:02Z [--] followers, [--] engagements
"@davidpattersonx I wonder when you start posting that you weren't right all along make excuses like companies don't adapt blame restrictions from the government or start calling some GPT-6/Gemini4.0/Grok6/Opus5.5 AGIs when they're clearly won't be. I think you start doing it by November.π"
X Link 2026-01-06T08:08Z [--] followers, [---] engagements
"@Celiksei @ItakGol But resources grow exponentially. Training compute grows x4-5 a year by Epoch AI. Intelligence [--] times cheaper each year since [----]. It was like this even before reasoning models in [----] so labs will figure out something new. Fox example Google's Titans architecture"
X Link 2026-02-08T11:42Z [--] followers, [--] engagements
"@Celiksei @ItakGol You're totally right that we won't be able to do it indefinitely but by [----] it's 99% feasible. From January [----] to January [----] we'll have x1000-3125 more compute. Just imagine it. With current compute continual learning is not possible but in 2028-2030 it will be"
X Link 2026-02-08T16:31Z [--] followers, [--] engagements
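For reference, the x1000-3125 range in the reply above is just five years of compounding at the x4-5 annual rate cited from Epoch AI. A minimal sketch of that arithmetic (the five-year span is an assumption read off the stated range, not something given in the post):

```python
# Compound-growth check for the post above: a flat 4x-5x annual
# compute multiplier, compounded over an assumed five-year span.
for annual_multiplier in (4, 5):
    total = annual_multiplier ** 5  # five years of compounding
    print(f"{annual_multiplier}x/year over 5 years -> {total}x total")
# Output: 4x/year -> 1024x and 5x/year -> 3125x,
# matching the x1000-3125 range quoted above.
```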
"LLM + world model + continual learning is AGI IMO. This year and in early [----] companies are gonna take low-hanging fruits from scaling pre-training and post-training with text data. In [----] when we'll hit diminishing returns they'll combine Google Titans + world models"
X Link 2026-02-11T00:30Z [--] followers, [--] engagements
"Will GPT-6 and its minor updates be Agent [--] Will Gemini [---] and [---] GA be Agent [--] Will Opus [---] be Agent [--] Will Grok [--] be Agent 1"
X Link 2026-02-11T15:26Z [--] followers, [--] engagements
"By AI 2027: Will GPT-6 and its minor updates be Agent [--] Will Gemini [---] and [---] GA be Agent [--] Will Opus [---] be Agent [--] Will Grok [--] be Agent [--] My guess is that Agent [--] is GPT-6.5 Gemini [---] pro Opus [--] Grok [--] and models this year will become Agent 0.8"
X Link 2026-02-11T15:37Z [--] followers, [--] engagements
"@natolambert @METR_Evals On what % do you watch I think that P80 is all that matters. P50 is like flipping a coin. Is GPT-5.3-codex and Opus [---] is somewhere between [----] and [----] hours So some Gemini [---] and GPT-5.5 can be [--] hours and by the end of year Gemini [---] pro and GPT-6 10-12 hours"
X Link 2026-02-11T17:26Z [--] followers, [---] engagements
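The P80 horizon extrapolation in the reply above follows METR's exponential task-horizon framing, where the horizon grows as h0 * 2^(t/d) for doubling time d. A minimal sketch with illustrative numbers (the 4-hour starting horizon and 6-month doubling time are assumptions for illustration, not METR's published figures):

```python
# Hypothetical METR-style task-horizon extrapolation:
# horizon(t) = h0 * 2**(t / doubling_time).
def horizon_hours(h0: float, months: float, doubling_months: float) -> float:
    """Task horizon after `months`, under an exponential doubling model."""
    return h0 * 2 ** (months / doubling_months)

# Assumed: a 4 h P80 horizon today, doubling every 6 months.
for m in (0, 6, 12):
    print(f"month {m:2d}: P80 horizon = {horizon_hours(4.0, m, 6.0):.1f} h")
# -> 4.0 h, 8.0 h, 16.0 h
```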
"@mllichti @natolambert @METR_Evals I'm 1000% sure Opus [---] and GPT-5.3-codex are already at 6-7%"
X Link 2026-02-11T18:17Z [--] followers, [--] engagements
"Google decided to change its release cycles If yes they can release a new minor version each month while cooking Gemini [--] pro. GPT-5.3 Gemini [---] pro this month. Next month GPT-5.4 and [---] pro and so on It could be insane"
X Link 2026-02-12T00:39Z [--] followers, [---] engagements
"@scaling01 Gemini [---] Pro Opus [---] everywhere except coding and agency"
X Link 2026-02-12T10:00Z [--] followers, [---] engagements
"@rand_longevity Unfortunately we need real AGI for that. I don't think it's sooner than the end of [----]. Even with x4-5 compute year. Even with METR ( [--] days doubling ) and RLI ( [--] months ). And I think there won't be UBI until robots. And even with exponential growth we'll need a few years"
X Link 2026-02-12T13:11Z [--] followers, [----] engagements
"@bamabreak24 @rand_longevity My guess is [----] for first world and [----] for all. Even with AGI in [----]. It'll take 1-3 years to automate digitals jobs and with useful robots by [----] [--] years to manufacture enough ( with exponential growth )"
X Link 2026-02-12T13:34Z [--] followers, [---] engagements
"@Jlm9022 @bamabreak24 @rand_longevity After extrapolation of METR and RLI it looks like AGI next year actually. I do think we need continual learning + world models. It might happen next year but I somehow think it'll happen in 2028"
X Link 2026-02-12T21:09Z [--] followers, [--] engagements
"@Jlm9022 @bamabreak24 @rand_longevity Compute growth is x4-5 each year. We don't even have the full Stargate yet. Look at METR P80 and RLI. Extrapolate it to [----]. GPT-7 and Gemini [--] pro by the end of [----] may be AGI and I'm 100% sure we'll have AGI between [----] and 2030"
X Link 2026-02-12T21:10Z [--] followers, [--] engagements
"By METR and RLI AGI happens by the end of the next year. GPT-7 and Gemini [---] pro ( I expect them to be released between October-December [----] may have continual learning and world models. Though I'm not sure if we'll have enough compute for continual learning before Q3/4 2028"
X Link 2026-02-13T03:47Z [--] followers, [--] engagements
"@davidpattersonx Your definition of AUI is my definition of AGI then) My definition: A model that's on pair with every middle professional at any digital task. And it means it can completely automate a job. Without human in the loop. Bu your poin is AGI AND ASI by 2027"
X Link 2026-02-13T13:38Z [--] followers, [--] engagements
"@AudioBooksRU @davidpattersonx AGI . ASI 100%. 100% "
X Link 2026-02-13T13:52Z [--] followers, [--] engagements
"@AudioBooksRU @davidpattersonx @demishassabis . ASI. . "
X Link 2026-02-13T14:01Z [--] followers, [--] engagements
"@AudioBooksRU @davidpattersonx @demishassabis METR RLI ( Remote labor Index ) METR [--] RLI [--] AGI . 100% RLI [--] METR [--] . AGI"
X Link 2026-02-13T14:16Z [--] followers, [--] engagements
"@AudioBooksRU @davidpattersonx @demishassabis GPT Gemini . compute continual learning. 2028"
X Link 2026-02-13T14:18Z [--] followers, [--] engagements
"@AudioBooksRU @davidpattersonx @demishassabis [----]. [----]. 2030). ). [----] . [----] AGI "
X Link 2026-02-13T14:20Z [--] followers, [--] engagements
"@AudioBooksRU @davidpattersonx @demishassabis ASI AGI. AGI superhuman). )"
X Link 2026-02-13T14:22Z [--] followers, [--] engagements
"@AudioBooksRU @davidpattersonx @demishassabis P.S. GPT-7 Gemini [---] pro NVL144 Rubin . GPT-6 Gemini [---] pro. / 2027"
X Link 2026-02-13T14:26Z [--] followers, [--] engagements
"Gemini [---] pro and flash hallucinate a lot. It's so fucking annoying π«. In these models I start doubting about AGI with transformers π€¦β"
X Link 2026-02-13T19:53Z [--] followers, [--] engagements
"@scaling01 I think Gemini [---] pro GPT-5.5 and Opus [---] will be able to do 20-30% this Spring. The only reason they won't be able to do 90% without .md is huge amount of context needed to make errors and adapt. Maybe ( hope so ) it'll make companies implement Titans from Google"
X Link 2026-02-14T18:40Z [--] followers, [---] engagements
"Gemini [---] Pro Predictions: - 70-73% ARC-AGI-2 - 80% SWE-bench - [----] Codeforces Elo - 45.5% HLE (no tools) - 80-90m METR P80 - 2.7-3.2% RLI - 60% Terminal-Bench [---] - $7k Vending-Bench [--] - 81.5% MMMU-Pro"
X Link 2026-02-15T02:08Z [--] followers, [--] engagements
"@CodeByNZ @MoodiSadi How can you tell if your friend has subjective experience π"
X Link 2026-02-16T03:28Z [--] followers, [---] engagements
"I'm joining @OpenAI to bring agents to everyone. @OpenClaw is becoming a foundation: open independent and just getting started.π¦ https://steipete.me/posts/2026/openclaw https://steipete.me/posts/2026/openclaw"
X Link 2026-02-15T21:54Z 333.4K followers, 4M engagements
"300 IQ move. it'll be integrated into all models soon. Peter Steinberger is joining OpenAI to drive the next generation of personal agents. He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people. We expect this will quickly become core to our Peter Steinberger is joining OpenAI to drive the next generation of personal agents. He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people. We expect this will quickly become"
X Link 2026-02-15T22:44Z [--] followers, [--] engagements
"Peter Steinberger is joining OpenAI to drive the next generation of personal agents. He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people. We expect this will quickly become core to our product offerings. OpenClaw will live in a foundation as an open source project that OpenAI will continue to support. The future is going to be extremely multi-agent and it's important to us to support open source as part of that. https://twitter.com/i/web/status/2023150230905159801"
X Link 2026-02-15T21:39Z 4.4M followers, 13.8M engagements
"To all those people complaining "AI was trained on copyrighted content" well SO WERE YOU dumb ass. Every movie you watched was copyrighted. Every book you read was copyrighted. Every TV show you saw was copyrighted. Your brain "learned" from all those things and it shaped your knowledge and the way you think. Should all those copyright holders now SUE you for writing your own books articles or screenplays After all YOU were "trained on copyrighted content" https://twitter.com/i/web/status/2022893234716922184 https://twitter.com/i/web/status/2022893234716922184"
X Link 2026-02-15T04:38Z 352.7K followers, 119.6K engagements
"AI as an operating system Peter Steinberger is joining OpenAI to drive the next generation of personal agents. He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people. We expect this will quickly become core to our Peter Steinberger is joining OpenAI to drive the next generation of personal agents. He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people. We expect this will quickly become core to our"
X Link 2026-02-15T21:59Z [--] followers, [--] engagements
"Gemini [---] Pro Predictions: - 70-73% ARC-AGI-2 - 80% SWE-bench - [----] Codeforces Elo - 45.5% HLE (no tools) - 80-90m METR P80 - 2.7-3.2% RLI - 60% Terminal-Bench [---] - $7k Vending-Bench [--] - 81.5% MMMU-Pro"
X Link 2026-02-15T02:08Z [--] followers, [--] engagements
"My brain cant process thisπ€―"
X Link 2026-02-14T03:09Z 68K followers, 3.4M engagements
"Even if model growth slowed to a halt today (it wont) we have 4-5yrs of harness growth + compute coming online before we asymptote"
X Link 2026-02-13T21:35Z 46.8K followers, [----] engagements
"GPT-5.2 derived a new result in theoretical physics. Were releasing the result in a preprint with researchers from @the_IAS @VanderbiltU @Cambridge_Uni and @Harvard. It shows that a gluon interaction many physicists expected would not occur can arise under specific conditions. https://openai.com/index/new-result-theoretical-physics/ https://openai.com/index/new-result-theoretical-physics/"
X Link 2026-02-13T19:19Z 4.6M followers, 4.3M engagements
"Gemini [---] pro and flash hallucinate a lot. It's so fucking annoying π«. In these models I start doubting about AGI with transformers π€¦β"
X Link 2026-02-13T19:53Z [--] followers, [--] engagements
"RT @kimmonismus: So all the leading scientists say that AGI will arrive in the next [--] years. And the world is not prepared"
X Link 2026-02-13T14:44Z [--] followers, [--] engagements
"So all the leading scientists say that AGI will arrive in the next [--] years. And the world is not prepared. @Yossi_Dahan_ @polynoamial ARC-4 is in the works to be released early [----]. ARC-5 is also planned. The final ARC will probably be 6-7. The point is to keep making benchmarks until it is no longer possible to propose something that humans can do and AI can't. AGI [----]. @Yossi_Dahan_ @polynoamial ARC-4 is in the works to be released early [----]. ARC-5 is also planned. The final ARC will probably be 6-7. The point is to keep making benchmarks until it is no longer possible to propose"
X Link 2026-02-13T08:27Z 103.7K followers, 62.5K engagements
"@Yossi_Dahan_ @polynoamial ARC-4 is in the works to be released early [----]. ARC-5 is also planned. The final ARC will probably be 6-7. The point is to keep making benchmarks until it is no longer possible to propose something that humans can do and AI can't. AGI 2030"
X Link 2026-02-12T23:13Z 606.6K followers, 204.5K engagements
"By METR and RLI AGI happens by the end of the next year. GPT-7 and Gemini [---] pro ( I expect them to be released between October-December [----] may have continual learning and world models. Though I'm not sure if we'll have enough compute for continual learning before Q3/4 2028"
X Link 2026-02-13T03:47Z [--] followers, [--] engagements
"RT @MillionInt: Its only AGI if it can improve itself continuously without us Until then it is not"
X Link 2026-02-13T03:16Z [--] followers, [--] engagements
"Its only AGI if it can improve itself continuously without us Until then it is not A different way to think about agi: What if agi was not a model but a process. The process of learning and recursively improves autonomously. What if instead of being a model that can one shoot any task agi is rather an algorithm that can learn to solve anything I think A different way to think about agi: What if agi was not a model but a process. The process of learning and recursively improves autonomously. What if instead of being a model that can one shoot any task agi is rather an algorithm that can learn"
X Link 2026-02-13T02:54Z 31.4K followers, 23.8K engagements
"A different way to think about agi: What if agi was not a model but a process. The process of learning and recursively improves autonomously. What if instead of being a model that can one shoot any task agi is rather an algorithm that can learn to solve anything I think variants of our current algorithm are powerful enough to learn any task given sufficient compute and time for models to interact with the real world. So I would argue agi is already here. Yes there is still a human in the loop but recursive self improvement has already started. Older generation models were already used to"
X Link 2026-02-13T00:00Z [----] followers, 37.6K engagements
"Lots of folks spread false narratives about how ARC-1 was created in response to LLMs or how ARC-2 was only created because ARC-1 was saturated. Setting the record straight: [--]. ARC-1 was designed 2017-2019 and released in [----] (pre LLMs). [--]. The coming of ARC-2 was announced in May [----] (pre ChatGPT). [--]. By mid-2024 there was still essentially no progress on ARC-1. [--]. All progress on ARC-1 & ARC-2 came from a new paradigm test-time adaptation models starting in late [----] and ramping up through [----]. [--]. Progress happened specifically because research moved away from what ARC was intended to"
X Link 2026-02-12T19:54Z 606.6K followers, 89.7K engagements
"RT @karpathy: Congrats on the launch @simile_ai (and I am excited to be involved as a small angel.) Simile is working on a really intere"
X Link 2026-02-12T20:32Z [--] followers, [---] engagements
"Congrats on the launch @simile_ai (and I am excited to be involved as a small angel.) Simile is working on a really interesting imo under-explored dimension of LLMs. Usually the LLMs you talk to have a single specific crafted personality. But in principle the native primordial form of a pretrained LLM is that it is a simulation engine trained over the text of a highly diverse population of people on the internet. Why not lean into that statistical power: Why simulate one "person" when you could try to simulate a population How do you build such a simulator How do you manage its entropy How"
X Link 2026-02-12T20:12Z 1.8M followers, 874.4K engagements
"Introducing Simile. Simulating human behavior is one of the most consequential and technically difficult problems of our time. We raised $100M from Index Hanabi A* BCV @karpathy @drfeifei @adamdangelo @rauchg @scottbelsky among others"
X Link 2026-02-12T19:00Z 18.8K followers, 2.2M engagements
"RT @GregKamradt: At 95% ARC-AGI-1 is effectively performance-saturated at this point. Models are becoming incredible. They'll continue t"
X Link 2026-02-12T17:44Z [--] followers, [--] engagements
"At 95% ARC-AGI-1 is effectively performance-saturated at this point. Models are becoming incredible. They'll continue to hill climb but the next satisfying milestone won't come till 100%. However ARC-AGI-1 still has useful life. Performance comes at a cost and ARC-AGI-1 will monitor the efficiency of models - intelligence per watt. My hypotheses for the next [--] months: - Labs one by one get verified at 95% on ARC-AGI-1 before May. - We won't see a 95% 2x order-of-magnitude cost reduction ($0.013/task) until June '27 (happy to make this bet with someone). - We're at the point where model"
X Link 2026-02-12T17:18Z 46.8K followers, 30.9K engagements
"Gemini [--] Deep Think (2/26) Semi Private Eval - ARC-AGI-1: 96.0% $7.17/task - ARC-AGI-2: 84.6% $13.62/task New ARC-AGI SOTA model from @GoogleDeepMind"
X Link 2026-02-12T16:31Z 34.6K followers, 256.7K engagements
"Weve upgraded our specialized reasoning mode Gemini [--] Deep Think to help solve modern science research and engineering challenges pushing the frontier of intelligence. π§ Watch how the Wang Lab at Duke University is using it to design new semiconductor materials. π§΅ https://twitter.com/i/web/status/2021981510400709092 https://twitter.com/i/web/status/2021981510400709092"
X Link 2026-02-12T16:15Z 1.3M followers, 3.1M engagements
"Google decided to change its release cycles If yes they can release a new minor version each month while cooking Gemini [--] pro. GPT-5.3 Gemini [---] pro this month. Next month GPT-5.4 and [---] pro and so on It could be insane"
X Link 2026-02-12T00:39Z [--] followers, [---] engagements
"Something Big Is Happening Think back to February [----]. If you were paying close attention you might have noticed a few people talking about a virus spreading overseas. But most of us weren't paying close attention. The stock http://x.com/i/article/2021095128832622592 http://x.com/i/article/2021095128832622592"
X Link 2026-02-10T16:16Z 318.3K followers, 83.8M engagements
"Are Google gonna release Gemini [---] pro now) If they name it [---] instead of GA it means they change their release cycle to ship new versions more often"
X Link 2026-02-11T22:55Z [--] followers, [---] engagements