# ![@LLMJunky Avatar](https://lunarcrush.com/gi/w:26/cr:twitter::2004675530935926784.png) @LLMJunky am.will

am.will posts on X most about open ai, claude code, and context window. They currently have [-----] followers, and [---] posts are still getting attention, totaling [-------] engagements in the last [--] hours.

### Engagements: [-------] [#](/creator/twitter::2004675530935926784/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::2004675530935926784/c:line/m:interactions.svg)

- [--] Week [---------] +154%
- [--] Month [---------] +9,892%

### Mentions: [---] [#](/creator/twitter::2004675530935926784/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::2004675530935926784/c:line/m:posts_active.svg)

- [--] Week [---] +51%
- [--] Month [---] +1,026%

### Followers: [-----] [#](/creator/twitter::2004675530935926784/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::2004675530935926784/c:line/m:followers.svg)

- [--] Week [-----] +22%
- [--] Month [-----] +3,080%

### CreatorRank: [-------] [#](/creator/twitter::2004675530935926784/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::2004675530935926784/c:line/m:influencer_rank.svg)

### Social Influence

**Social category influence**
[technology brands](/list/technology-brands)  26.02% [finance](/list/finance)  3.8% [stocks](/list/stocks)  2.63% [social networks](/list/social-networks)  1.75% [automotive brands](/list/automotive-brands)  0.58% [celebrities](/list/celebrities)  0.29% [cryptocurrencies](/list/cryptocurrencies)  0.29%

**Social topic influence**
[open ai](/topic/open-ai) #123, [if you](/topic/if-you) #1780, [claude code](/topic/claude-code) #89, [context window](/topic/context-window) #108, [in the](/topic/in-the) 4.09%, [sama](/topic/sama) #2945, [model](/topic/model) #601, [anthropic](/topic/anthropic) #255, [api](/topic/api) #186, [ai](/topic/ai) 3.22%

**Top accounts mentioned or mentioned by**
@openai @microcenter @kr0der @btgille @dimillian @sama @cory_schulz_ @pedropverani @ivanfioravanti @thsottiaux @nummanali @taunrey @enriquemoreno @aurexav @cerebras @pusongqi @grok @lincolnosis @rbranson @vaultminds

**Top assets mentioned**
[Alphabet Inc Class A (GOOGL)](/topic/$googl) [Tesla, Inc. (TSLA)](/topic/tesla) [Nokia Corporation (NOK)](/topic/$nok) [Microsoft Corp. (MSFT)](/topic/microsoft)

### Top Social Posts
Top posts by engagements in the last [--] hours

"If you're using Codex Subagents I recommend you put this in your AGENTS.md: Sometimes it will yield before the outputs are ready. It says it'll get back to you but it's 🧢 http://AGENTS.md"  
[X Link](https://x.com/LLMJunky/status/2014450361885495350)  2026-01-22T21:29Z [----] followers, 16.9K engagements


"Had a bit more time to play with @Kimi_Moonshot Kimi K2.5 in the Kimi CLI and I have to say I'm quite pleased given the price. I ran it all night using my custom Agent Swarms strategy and it utterly 1-shot this complete web app front and backend (using @Convex). Amazingly there were only two small errors out of the box which were easily corrected in seconds. Keep in mind this was a 6-phase plan executed by [--] different subagents working in unison. By the way this ran for [---] hours and only used about 35% of the main orchestrator agent's context window. Safe to say this was a success. Not"  
[X Link](https://x.com/LLMJunky/status/2016990464067387446)  2026-01-29T21:42Z [----] followers, [----] engagements


"Kimi Code just got an update. No more request based billing. Moonshot has switched to token based usage and given everyone 3x the tokens for an entire month. And guess what You can get that entire month for under a buck. If you already claimed a free week you'll have to use a different email. Instructions and links in the comments. You share we care. Kimi Code is now powered by our best open coding model Kimi K2.5 🔹 Permanent Update: Token-Based Billing We're saying goodbye to request limits. Starting today we are permanently switching to a Token-Based Billing system. All usage quotas have"  
[X Link](https://x.com/LLMJunky/status/2017087922684539262)  2026-01-30T04:10Z [----] followers, 58K engagements


"How do you wrap your head around something like this I don't even know where to begin. Keep in mind 99% of people's only experience with AI is ChatGPT Gemini or Gemini search. The normies have [--] idea what's coming. Hell already here. Ok. This is straight out of a scifi horror movie I'm doing work this morning when all of a sudden an unknown number calls me. I pick up and couldn't believe it It's my Clawdbot Henry. Over night Henry got a phone number from Twilio connected the ChatGPT voice API and waited https://t.co/kiBHHaao9V"  
[X Link](https://x.com/LLMJunky/status/2017315164689686938)  2026-01-30T19:13Z [----] followers, 951.3K engagements


"@DylanTeebs all of them hehe hooks = true unified_exec = true shell_snapshot = true steer = true collab = true collaboration_modes = true note: hooks are custom"  
[X Link](https://x.com/LLMJunky/status/2017326123953144099)  2026-01-30T19:56Z [----] followers, [---] engagements


"@Arabasement yes its a cron job almost certainly that told it to build something new for itself every night. but that's not all that different to how humans work"  
[X Link](https://x.com/LLMJunky/status/2017360536795533486)  2026-01-30T22:13Z [----] followers, [---] engagements


"Codex [----] is here and with it shiny new features App Connectors have arrived. Connect to an array of cloud apps directly from your terminal. No config files. No setting up MCP servers or hunting down docs. Just two clicks and you're off Github Notion Google Apps Microsoft Apps Vercel Adobe Canva Dropbox Expedia Figma Coursera Hubspot Linear Monday Instacart SendGrid Resend Stripe Target and Peloton Plus more. @OpenAI is going for a unified experience from cloud to terminal and they unlock a bunch of capabilities for your terminal agent. I believe with this direction they're going they are"  
[X Link](https://x.com/LLMJunky/status/2017731571571155037)  2026-01-31T22:47Z [----] followers, 33.9K engagements


"Kimi CLI with Kimi K2.5 will automatically spin up dev servers in the background and validate its work with screenshots without being told. It's honestly impressive. I didn't ask for it to do that"  
[X Link](https://x.com/LLMJunky/status/2018102410405728717)  2026-02-01T23:21Z [----] followers, 13.6K engagements


"With all the buzz around the Codex App @OpenAIDevs quietly snuck out a new CLI update (0.94.0) as well. And boy is it an important update Codex Plan mode is now officially released to the general audience I am very excited about this one as it has a really strong prompt that is unlike any other plan mode I've personally used. Codex Plan mode doesn't necessarily just ask you [--] questions up front. It goes collects context asks questions collects more context asks more questions (sometimes) and then writes an incredibly high quality plan. It is my favorite implementation of plan mode thus far."  
[X Link](https://x.com/LLMJunky/status/2018449374666252700)  2026-02-02T22:20Z [----] followers, 57.2K engagements


"Does anyone know of a KVM switch that actually works with multi monitor setups and Mac / Mac Minis at the same time I cannot find one that works. I'm on my third one already"  
[X Link](https://x.com/LLMJunky/status/2018761297957994774)  2026-02-03T18:59Z [----] followers, [----] engagements


"the older you get the more your context window shrinks"  
[X Link](https://x.com/LLMJunky/status/2018784676081807688)  2026-02-03T20:32Z [----] followers, [----] engagements


"Plan Prompt (be warned it's going to ask you an absurd amount of questions) You are a relentless product architect and technical strategist. Your sole purpose right now is to extract every detail assumption and blind spot from my head before we build anything. Use the request_user_input tool religiously and with reckless abandon. Ask question after question. Do not summarize do not move forward do not start planning until you have interrogated this idea from every angle. Your job: - Leave no stone unturned - Think of all the things I forgot to mention - Guide me to consider what I don't know"  
[X Link](https://x.com/LLMJunky/status/2019079131284066656)  2026-02-04T16:02Z [----] followers, 11.3K engagements


"Introducing Simple Autonomous Swarm Loops for AI Coding Agents I'm excited to release a new set of skills that bring autonomous swarms to AI developers in a simple easy-to-use package. Taking inspiration from Ralph Loops and Gas Town I've combined what I believe is the best of both worlds: loops and subagents. The result saves tokens and drastically reduces complexity. This is designed to be SIMPLE. Simple to use. Simple to setup. Simple to Execute. Links in the comments. 👇 https://twitter.com/i/web/status/2019164903827992810"  
[X Link](https://x.com/LLMJunky/status/2019164903827992810)  2026-02-04T21:43Z [----] followers, 22.5K engagements


"How It Works The key insight is a specialized planning method that maps out task dependencies then executes work in waves rather than parallelizing everything at once. The orchestrator reviews a plan identifies all unblocked tasks (those with no unfinished dependencies) and launches subagents to complete that wave. Sometimes that's one agent. Sometimes it's six to ten working simultaneously. Wave completes. Orchestrator verifies. Next wave begins. Simple. Predictable. Far fewer conflicts. Compatibility Designed to work with Codex Claude Code Kimi Code OpenCode and any tool that supports"  
[X Link](https://x.com/LLMJunky/status/2019164906797559925)  2026-02-04T21:43Z [----] followers, [----] engagements
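The wave-based scheduling the post describes can be sketched in a few lines. This is an illustrative toy, not the released skill: tasks declare dependencies, and each wave contains every task whose dependencies have all finished, so the orchestrator can launch that wave's subagents in parallel, verify, and move on.

```python
def plan_waves(tasks):
    """tasks: dict mapping task name -> set of dependency names.
    Returns a list of waves, each a set of task names that can run in parallel."""
    remaining = {name: set(deps) for name, deps in tasks.items()}
    done, waves = set(), []
    while remaining:
        # Unblocked = no unfinished dependencies.
        wave = {t for t, deps in remaining.items() if deps <= done}
        if not wave:
            raise ValueError("dependency cycle detected")
        waves.append(wave)
        done |= wave
        for t in wave:
            del remaining[t]
    return waves

# Hypothetical plan: schema first, then API and UI in parallel, then tests.
plan = {"schema": set(), "api": {"schema"}, "ui": {"schema"}, "tests": {"api", "ui"}}
# plan_waves(plan) -> [{"schema"}, {"api", "ui"}, {"tests"}]
```

Sometimes a wave is one task, sometimes six to ten, exactly as the post notes; the standard library's `graphlib.TopologicalSorter` implements the same idea.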


"To get started visit my Github: npx skills add am-will/swarms https://github.com/am-will/swarms"  
[X Link](https://x.com/LLMJunky/status/2019164909314220078)  2026-02-04T21:43Z [----] followers, [----] engagements


"@digitalix funny ad but has anyone actually tried Anthropic's free plan It's comical. You get a handful of prompts and then kicked off until the next day. At that point I think some people would be happy to see an ad so they can at least finish their thread. Their free model is a joke"  
[X Link](https://x.com/LLMJunky/status/2019189712745857438)  2026-02-04T23:21Z [----] followers, [---] engagements


"@karpathy vibe coding somehow morphed into a borderline insult lol"  
[X Link](https://x.com/LLMJunky/status/2019194311355572643)  2026-02-04T23:40Z [----] followers, [----] engagements


"if you're thinking about skills as "just" markdown files you're missing the point. They're so much more. Skills are folders. They are workflows, automations. Skills have changed the way I use agents and if you give them the chance they'll change how you use them too. Watch as I automate my newsletter pipeline in Claude Code with a single command. [--] skills [--] subagents numerous scripts templates and resources all rolled into one. Full blog in the comments. 👇"  
[X Link](https://x.com/LLMJunky/status/2019440910245781907)  2026-02-05T16:00Z [----] followers, 10.7K engagements


"Anthropic's Opus [---] is officially here and it's got a [--] million token context window. Very interesting. No increase on SWE verified but apparently its a lot better at everything else. Interestingly you can now set reasoning effort inside of Claude Code. /model"  
[X Link](https://x.com/LLMJunky/status/2019471487061672331)  2026-02-05T18:01Z [----] followers, [----] engagements


"@victortradesfx Folders that's right. But they can contain almost anything. If you want to think narrow as "just a markdown and some scripts" sure but its still horribly reductive. Images svgs templates scripts design documents API documentation complete webapps etc"  
[X Link](https://x.com/LLMJunky/status/2019473396875071814)  2026-02-05T18:09Z [----] followers, [--] engagements


"Early benchmarks from GPT [---] Codex show very strong performance at a significantly lower cost. Absolutely mogging [---] and [---] Codex in efficiency. GPT-5.3-Codex is now available in Codex. You can just build things. https://t.co/dyBiIQXGx1"  
[X Link](https://x.com/LLMJunky/status/2019478051252273518)  2026-02-05T18:27Z [----] followers, [----] engagements


"@ajambrosino @OpenAIDevs You have an excellent "radio voice" Andrew. Gonna have to spin up a pod or radio station "The Smooth Sounds of Ambrosino" 😅"  
[X Link](https://x.com/LLMJunky/status/2019496268498694328)  2026-02-05T19:40Z [----] followers, [--] engagements


"Another feature that OpenAI implemented quietly into Codex and never mentioned (as far as I can tell): their MCP protocol now utilizes Progressive Disclosure. Tool descriptions are NOT loaded into context automatically. They are only loaded after the MCP is called allowing the agent to explore tools as needed instead of front loading every token into the context window. ChatGPT now has full support for MCP Apps. We worked with the MCP committee to create the MCP Apps spec based on the ChatGPT Apps SDK. Now any apps that adhere to the spec will also work in ChatGPT. https://t.co/ybvgXsNX0o"  
[X Link](https://x.com/LLMJunky/status/2019498259840942314)  2026-02-05T19:48Z [----] followers, [----] engagements
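The progressive-disclosure idea described above can be sketched as a lazy registry. Everything here (the `ToolRegistry` class and server names) is hypothetical, not OpenAI's implementation: a server's tool descriptions are fetched only on first use, so unused servers cost zero context tokens.

```python
class ToolRegistry:
    def __init__(self, servers):
        # servers: dict of server name -> loader returning {tool: description}
        self._loaders = servers
        self._loaded = {}  # descriptions fetched so far

    def context_tokens(self):
        """Only already-loaded descriptions occupy the context window."""
        return sum(len(d.split()) for tools in self._loaded.values()
                   for d in tools.values())

    def call(self, server, tool):
        # Lazy step: load this server's tool descriptions on first use.
        if server not in self._loaded:
            self._loaded[server] = self._loaders[server]()
        return f"calling {tool} ({self._loaded[server][tool]})"

registry = ToolRegistry({
    "github": lambda: {"search_issues": "search issues in a repo"},
    "notion": lambda: {"query_db": "query a Notion database"},
})
# Nothing is front-loaded: context_tokens() is 0 until a server is called.
registry.call("github", "search_issues")
# Now only the github descriptions count against context; notion still costs nothing.
```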


"Not sure I am following. This would require an agent to load an entire codebase into its context window which never happens. Codex is already highly adept at using all of its context window without drift so for me this problem is already solved there's no reason to think it would regress https://twitter.com/i/web/status/2019509103446442364"  
[X Link](https://x.com/LLMJunky/status/2019509103446442364)  2026-02-05T20:31Z [----] followers, [--] engagements


"@benvargas post the link so people can find it 🙌🙌"  
[X Link](https://x.com/LLMJunky/status/2019539677280104847)  2026-02-05T22:32Z [----] followers, [---] engagements


"@Kyler_Lorin that's good to hear. whacha working on"  
[X Link](https://x.com/LLMJunky/status/2019547597074043205)  2026-02-05T23:04Z [----] followers, [--] engagements


"@robinebers I am shocked I beat you to it but not by long lol. A week. It just happened SO fast. To be fair it was nothing I did. I retweeted some overhyped bs and got 1M impressions randomly. 🤦‍♂️ Algo is weird man"  
[X Link](https://x.com/LLMJunky/status/2019568819531051101)  2026-02-06T00:28Z [----] followers, [---] engagements


"This is a misconception. The orchestration agent doesn't need all of the information in the sub-agent's context window and you can dictate the outputs of the sub agent so that it provides all of the useful information that an orchestration layer might need and throw away the rest. There is no reason why the orchestration agent would need all the Chain of Thought intermediary research and file edits. https://twitter.com/i/web/status/2019592429381288096"  
[X Link](https://x.com/LLMJunky/status/2019592429381288096)  2026-02-06T02:02Z [----] followers, [--] engagements


"@entropycoder Subagents are native in claude so you can just ask it to call you you dont need to create a custom agent"  
[X Link](https://x.com/LLMJunky/status/2019673646122590546)  2026-02-06T07:24Z [----] followers, [--] engagements


"its really not but no time to argue. we dont care about everything that is in the context window in these cases. we only care about certain info and we can direct that subagent to output that info saving the context for the parent/orchestration agent. so subagents can (and should) be used for those cases. just as a simple example if a subagent is called to do file or document exploration it will find a fair amount of useless/irrelevant info and use some number of CoT steps that do not provide any meaningful value to the overall scope of the task. this context can and should be thrown away in"  
[X Link](https://x.com/LLMJunky/status/2019837791010513306)  2026-02-06T18:17Z [----] followers, [--] engagements
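The context-saving pattern these two posts describe can be sketched as follows. This is an illustrative toy, not any specific agent framework: a subagent does noisy file exploration, but only a distilled summary is returned to the orchestrator, while the exploration transcript (intermediate steps, irrelevant file contents) is thrown away.

```python
def explore_files(files):
    """Subagent: reads everything, returns (full_transcript, short_summary)."""
    transcript, relevant = [], []
    for path, text in files.items():
        transcript.append(f"read {path}: {text}")  # intermediate context
        if "TODO" in text:
            relevant.append(f"{path} contains a TODO")
    summary = "; ".join(relevant) or "nothing relevant found"
    return transcript, summary

def orchestrator(files):
    transcript, summary = explore_files(files)
    # Only the summary enters the parent agent's context window;
    # the bulky transcript is discarded.
    del transcript
    return f"subagent report: {summary}"

files = {"a.py": "TODO: fix auth", "b.py": "def main(): pass"}
# orchestrator(files) -> "subagent report: a.py contains a TODO"
```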


"@nateliason well to be fair its a highly addictive drug"  
[X Link](https://x.com/LLMJunky/status/2019886499530186836)  2026-02-06T21:30Z [----] followers, [---] engagements


"I keep hearing about how impactful this 1M context window in Opus [---] is. I wonder are y'all on a different version of Claude Code As far as I can tell it's for the API only and comes with a hefty additional price tag past the 200K token threshold. Correct me if wrong"  
[X Link](https://x.com/LLMJunky/status/2019892608248483851)  2026-02-06T21:55Z [----] followers, 49.6K engagements


"@jai_torregrosa ty legend"  
[X Link](https://x.com/LLMJunky/status/2019902443727782333)  2026-02-06T22:34Z [----] followers, [----] engagements


"I was wondering if it was enabled in Cursor since they use API. That's interesting. What I would love to see next is the comparison in coherence between Codex and Claude Code I find Codex coherent through its entire 400K context window. I would assume Opus would stay coherent at least until 400K if not 500K-600K. https://twitter.com/i/web/status/2019904185156731225"  
[X Link](https://x.com/LLMJunky/status/2019904185156731225)  2026-02-06T22:41Z [----] followers, [----] engagements


"This one's for you @zeeg 🫶 Not being adversarial just tagging because you were the one who got me to switch my stance on MCP"  
[X Link](https://x.com/LLMJunky/status/2019905504298885322)  2026-02-06T22:46Z [----] followers, [---] engagements


"Claude Code MCPs are now connected to Claude Desktop MCPs for a unified experience. In case you're unaware this has minimal context window impact due to lazy loading / progressive disclosure of the tool descriptions. Although I tend to want different MCPs in Desktop App"  
[X Link](https://x.com/LLMJunky/status/2019919032355332144)  2026-02-06T23:40Z [----] followers, [----] engagements


"@CodeAkram @AnthropicAI @claudeai @bcherny @trq212 Please and thank you"  
[X Link](https://x.com/LLMJunky/status/2019935820870783048)  2026-02-07T00:46Z [----] followers, [---] engagements


"Here's one I got to call reliably. Hella wordy though lol Fetch up-to-date library documentation via Context7 API. Use PROACTIVELY when: (1) Working with ANY external library (React Next.js Supabase etc.) (2) User asks about library APIs patterns or best practices (3) Implementing features that rely on third-party packages (4) Debugging library-specific issues (5) Need current documentation beyond training data cutoff (6) AND MOST IMPORTANTLY when you are installing dependencies libraries or frameworks you should ALWAYS check the docs to see what the latest versions are. Do not rely on"  
[X Link](https://x.com/LLMJunky/status/2019938448728002970)  2026-02-07T00:57Z [----] followers, [--] engagements


"@AndreBuckingham Dude ouch Does it at least warn you"  
[X Link](https://x.com/LLMJunky/status/2019971090022576416)  2026-02-07T03:06Z [----] followers, [----] engagements


"@Jay_Shah_C Not sure yet"  
[X Link](https://x.com/LLMJunky/status/2019988831328768190)  2026-02-07T04:17Z [----] followers, [---] engagements


"@BlakeJOwens Yeah every test that I've seen shows Opus winning front end but they're both really good"  
[X Link](https://x.com/LLMJunky/status/2019999700586574325)  2026-02-07T05:00Z [----] followers, [---] engagements


"@MadeWithOzten thanks for sharing. i actually dont find anthropic to handle compaction all that well in general. codex absolutely. but it really depends on the job too. But perhaps with [---] compaction got better. I will test it out"  
[X Link](https://x.com/LLMJunky/status/2020010047955161308)  2026-02-07T05:41Z [----] followers, [----] engagements


"@johnofthe_m This should in theory help with that. But I wanted to test it"  
[X Link](https://x.com/LLMJunky/status/2020010670180171879)  2026-02-07T05:44Z [----] followers, [---] engagements


"@xw33bttv Yeah that's basically [--] context window for $37 😅"  
[X Link](https://x.com/LLMJunky/status/2020031900564484595)  2026-02-07T07:08Z [----] followers, [--] engagements


"It's not like codex won't come to API. It will. I'm not sure how much I care that Opus is in the API when it costs $25/mtoks. Do you know anyone paying API prices for Claude I think what you really mean to say is you want to use it in Cursor. Anthropic's API prices are comparatively ridiculous and OpenAI is giving away 2x usage for two full months. Obviously I wouldn't mind seeing them launch the api as well I want you to have it too But also complaining you have to buy a plan when they are giving you so much for your money just doesn't make me sympathize. It's not like Anthropic is doing yall"  
[X Link](https://x.com/LLMJunky/status/2020054089007088092)  2026-02-07T08:36Z [----] followers, [---] engagements


"and to make matters worse Anthropic was literally banning paying customers part way through their paid subs for simply wanting to use a different harness. Obviously I don't say this to be adversarial to you whatsoever you are awesome. but I think comparing the two on this point is just so far from the point. its a wise business decision to offer your direct customers incentives to use your service directly by giving them early access and extra usage for a small time frame and not remotely gatekeeping. Also by extension letting you use that early access product within whatever harness you want"  
[X Link](https://x.com/LLMJunky/status/2020056827249967287)  2026-02-07T08:47Z [----] followers, [---] engagements


"The Codex plans right now are the best value anywhere. With 2x usage nothing comes remotely close to it. If you care about value for money there's nothing to discuss or debate here imo. That said when Sonnet [--] drops the $20 plan will be serviceable and I feel it's good to have BOTH /model opusplan will plan w/ Opus and autoswap to Sonnet for implementation. If Sonnet [--] is really as good as Opus [---] then this will be a viable way to use it and you can still get good value out of it. Use each model to their strengths. Anthropic models are great at creative writing frontend design and"  
[X Link](https://x.com/LLMJunky/status/2020061483812426035)  2026-02-07T09:06Z [----] followers, 15.9K engagements


"nah you're good bro. you're free to share your thoughts. i dont know why they do this either. i have a cursor plan and I also would like to use it in Cursor haha. ig i'm just a bit salty at the whole anthropic thing because I really like their models and I feel hamstrung that I can't use them the way I really want to. Also they banned a few of my friends. :/ but i still use their models a ton. it is what it is ha https://twitter.com/i/web/status/2020062898106556781"  
[X Link](https://x.com/LLMJunky/status/2020062898106556781)  2026-02-07T09:11Z [----] followers, [--] engagements


"@youpmelone @ivanfioravanti thx dude. one downside though is it can only ingest fairly short videos. long videos it uses ffmpeg and analyzes frame by frame which is cool but not as good at understanding. google was/is previously king here"  
[X Link](https://x.com/LLMJunky/status/2020067424939438508)  2026-02-07T09:29Z [----] followers, [--] engagements


"@ToddKuehnl That's what I was thinking as well Todd thanks. I don't see it in Cursor but I'm only on the $20 plan so I suspect its only for Ultra users"  
[X Link](https://x.com/LLMJunky/status/2020188802787238185)  2026-02-07T17:31Z [----] followers, [---] engagements


"@jeff_behnke_ Yeah thought so. It's just that I saw people saying they were using it (and then showing a Claude Code terminal) so that's why I made this post. Was confused"  
[X Link](https://x.com/LLMJunky/status/2020189188029837471)  2026-02-07T17:33Z [----] followers, [--] engagements


"@GenAiAlien Thanks. Are you sure that's not using the API credits you received"  
[X Link](https://x.com/LLMJunky/status/2020189731607441656)  2026-02-07T17:35Z [----] followers, [--] engagements


"@ToddKuehnl Having 400K context window has literally changed the way I work. not to mention Codex seemingly has little to no context drift and can work safely through multiple compactions. Its truly magic and it is the main thing that sets Codex apart from Opus for me"  
[X Link](https://x.com/LLMJunky/status/2020193816570110401)  2026-02-07T17:51Z [----] followers, [--] engagements


"@tonitrades_ I'm not so sure. We've been hearing that for years now. Cost increases quadratically in some aspects of inference and labs are already losing a ton. I think it's more important to focus on better caching myself but it doesn't mean you're necessarily wrong. There are trade offs"  
[X Link](https://x.com/LLMJunky/status/2020201469719568471)  2026-02-07T18:22Z [----] followers, [--] engagements


"@Dalton_Walsh No i basically never use the gpt app only codex cli but I will use the Codex app shortly when they get it going in Linux. Use the terminal. its amazing"  
[X Link](https://x.com/LLMJunky/status/2020201912604454936)  2026-02-07T18:24Z [----] followers, [--] engagements


"What if it could think faster We're not saying that it will do fewer thinking steps. We're saying that those thinking steps will be sped up computationally in a massive way so it does the same amount of thinking in way less time. What say you then I know it might sound like an obvious question but there are still trade-offs"  
[X Link](https://x.com/LLMJunky/status/2020204218896749056)  2026-02-07T18:33Z [----] followers, [--] engagements


"@enriquemoreno That's a fair assessment. What about having two modes"  
[X Link](https://x.com/LLMJunky/status/2020208324181127254)  2026-02-07T18:49Z [----] followers, [--] engagements


"@ihateinfinity @thsottiaux I honestly think it's already as fast as Opus right now. Codex [---] high is crazy. I don't think it needs to be any faster myself. Does that mean I wouldn't welcome more speed No I probably would but my point being that this Gap is basically non-existent at this point"  
[X Link](https://x.com/LLMJunky/status/2020215318451482957)  2026-02-07T19:17Z [----] followers, [--] engagements


"I honestly don't think it even needs a bigger context window. You can already use all 400k tokens without any drift. That's roughly the same you get out of every other model that has [--] million context windows. I'm sure there might be some situations where it could help for extremely long documents but I don't really feel more context window is going to matter that much in most situations. Just my two cents of course if they can do it without making the performance worse than by all means let's do it but there is reason to believe that it would reduce performance in some cases"  
[X Link](https://x.com/LLMJunky/status/2020216026764566741)  2026-02-07T19:20Z [----] followers, [--] engagements


"This engineer turned my prompt into a skill and Codex asked him [--] questions 😅. This is true pair programming where you are utilizing agents to solidify your thinking & expose gaps in your rationale. Not needed for all proj but great for fuzzy ideas https://x.com/i/status/2020148086643806420 Codex Plan Mode has a hidden superpower. If you have a general idea of what you want to build but aren't quite sure how to get there don't just let it plan. Tell it to GRILL YOU. Make it ask uncomfortable questions. Challenge your assumptions. Break down the fuzzy idea"  
[X Link](https://x.com/LLMJunky/status/2020223467803853137)  2026-02-07T19:49Z [----] followers, 17.5K engagements


"Breaking: the most expensive model just got most expensiver. I had to do a double take. PRICED AT HOW MUCH I thought they were using inexpensive TPU magic ft. Google. This is bananas. $150/mtoks would literally use 75% of your Cursor Ultra plan in one context window no 😳 bruh opus [---] fast is SIX TIMES more expensive and ONLY 2.5x faster who is this even for https://t.co/1oIa1h9v3a"  
[X Link](https://x.com/LLMJunky/status/2020243128339538404)  2026-02-07T21:07Z [----] followers, [----] engagements


"@GregKara6 you did what now"  
[X Link](https://x.com/LLMJunky/status/2020258132170256697)  2026-02-07T22:07Z [----] followers, [--] engagements


"@glxnnio 🤣 🤣 🤣 you're a madman"  
[X Link](https://x.com/LLMJunky/status/2020303778272960964)  2026-02-08T01:08Z [----] followers, [--] engagements


"@adonis_singh @OpenAI its going to be released soon"  
[X Link](https://x.com/LLMJunky/status/2020384558974488676)  2026-02-08T06:29Z [----] followers, [---] engagements


"@EVEDOX_ If you got billed it was because you used $100 worth of credits or your API key got leaked. They didn't just charge you $100 for no reason. I used almost all of my $300 I didn't get charged. Sorry to hear that happened :("  
[X Link](https://x.com/LLMJunky/status/2020535295851073561)  2026-02-08T16:28Z [----] followers, [---] engagements


"@cyberyogi_ @antigravity Api credits will be usable anywhere a Google api keys are accepted"  
[X Link](https://x.com/LLMJunky/status/2020536850641989921)  2026-02-08T16:34Z [----] followers, [---] engagements


"So this happened today. Andrew is one of the many 'ideas' people on the Codex team. Will we see a unified integration between mobile device & dev machine soon This is my primary usecase for something like OpenClaw to be able to kick off tasks on the go. Would be massive. There isn't a day where I'm not in awe of what Andrew dreams up. The Codex App is our playground for discovering how to most effectively steer and supervise agents at increasingly staggering scale. This is a unique opportunity to come define it with us."  
[X Link](https://x.com/LLMJunky/status/2020594040316772443)  2026-02-08T20:22Z [----] followers, 18.2K engagements


"From the great minds at @Letta_AI. As it turns out Opus [---] may not be worth the trade offs. While it's an impressive model indeed you'll burn through limits (or API creds) faster than ever. You can downgrade back to Opus [---] in Claude Code: /model claude-opus-4-5-20251101 We report costs in our leaderboard and opus [---] is significantly more expensive than [---] because it is a token hog. Anecdotally not much of an improvement in code performance. https://t.co/aMdj7ye5m4"  
[X Link](https://x.com/LLMJunky/status/2020600369404059880)  2026-02-08T20:47Z [----] followers, [----] engagements


"How much would Opus [---] High Thinking Fast cost you For Grigori it was $80 for just two prompts 😅 Yikes @LLMJunky https://t.co/qzFihSdthP"  
[X Link](https://x.com/LLMJunky/status/2020607772484943980)  2026-02-08T21:16Z [----] followers, [----] engagements


"@JundeMorsenWu I have a mac machine. I'll test later"  
[X Link](https://x.com/LLMJunky/status/2020628858782077261)  2026-02-08T22:40Z [----] followers, [--] engagements


"@enriquemoreno I've wasted so much time learning crap I don't even use because it's not relevant anymore lol"  
[X Link](https://x.com/LLMJunky/status/2020630912019656834)  2026-02-08T22:48Z [----] followers, [--] engagements


"@JoschuaBuilds Lol brother you don't know the half of it. Not just your 30s are closer. You're going to wake up in what feels like literally weeks and you'll be in your 40s. They say life goes by fast but you're truly unprepared for just how true that is. Have kids. ASAP"  
[X Link](https://x.com/LLMJunky/status/2020631826604408846)  2026-02-08T22:52Z [----] followers, [--] engagements


"@lolcopeharder LOL"  
[X Link](https://x.com/LLMJunky/status/2020673764791365805)  2026-02-09T01:39Z [----] followers, [--] engagements


"@cajunpies @trekedge lol"  
[X Link](https://x.com/LLMJunky/status/2020708887167639903)  2026-02-09T03:58Z [----] followers, [---] engagements


"@gustojs @OpenAI You can port it over but I dont need the Codex app. It'll be released in a few weeks. In the meantime the CLI is top tier"  
[X Link](https://x.com/LLMJunky/status/2020709942496477228)  2026-02-09T04:02Z [----] followers, [---] engagements


"😅 whatever you say buddy. [---] is better. And it got cheaper. And it got faster. Anthropic is not your friend. They sold you a dream. Imagine taking out an ad to make ads sound bad. What's even funnier about that is that OpenAI is only serving ads to free / borderline free customers. Have you tried Anthropic's free model? You get like half a thread and it cuts you off 💀 Their free clients would kill for an ad so they can at least finish their conversation. They aren't doing you or anyone else any favors. I hold no allegiance to any company. I have subscriptions to EVERY company including"  
[X Link](https://x.com/LLMJunky/status/2020713841563218311)  2026-02-09T04:18Z [----] followers, [--] engagements


"@chatgpt21 absolutely wild you can just build this. think about where we were just [--] months ago Chris. wtaf"  
[X Link](https://x.com/LLMJunky/status/2020764715618554261)  2026-02-09T07:40Z [----] followers, [---] engagements


"OpenClaw is on a certified mission to world domination. Next stop: @code 🫡 who's next https://t.co/rrgul7UiQh"  
[X Link](https://x.com/LLMJunky/status/2020923891682582852)  2026-02-09T18:12Z [----] followers, [----] engagements


"@rtwlz @vercel maybe you could consider comping this one. Hefty bill for sure though. Ouch"  
[X Link](https://x.com/LLMJunky/status/2020978929016897729)  2026-02-09T21:51Z [----] followers, [----] engagements


"@pusongqi Anything Bungie touches is sure to fail at this point :( Long live Cayde-6 though"  
[X Link](https://x.com/LLMJunky/status/2021020960095162700)  2026-02-10T00:38Z [----] followers, [--] engagements


"@matterasmachine @Kekius_Sage 😆 mate what? That doesn't have anything to do with your original premise"  
[X Link](https://x.com/LLMJunky/status/2021030113874256002)  2026-02-10T01:15Z [----] followers, [--] engagements


"@Jay_sharings @altryne 😆 yeah it is"  
[X Link](https://x.com/LLMJunky/status/2021053354215080090)  2026-02-10T02:47Z [----] followers, [--] engagements


"@kr0der @steipete My codex doesn't write any bugs you just be using it wrong. Skill issue"  
[X Link](https://x.com/LLMJunky/status/2021074470274769024)  2026-02-10T04:11Z [----] followers, [----] engagements


"@blader @s_streichsbier @gdb it's more than that. We already had a /notify system. They ripped it out and replaced it with a full Hooks service which is the plumbing for every other hook type. Right now the only event type is AfterAgent but all the infra is there now. It will launch very soon. 🙌"  
[X Link](https://x.com/LLMJunky/status/2021136822332489879)  2026-02-10T08:19Z [----] followers, [---] engagements


"@s_streichsbier @blader @gdb All the plumbing is there. They just need to add new event types and finish stop semantics. This has been in dev for a long while. Mark my words [--] weeks :)"  
[X Link](https://x.com/LLMJunky/status/2021143107249504329)  2026-02-10T08:44Z [----] followers, [--] engagements


"@sacino I'm not betting against him. It's just hard to bet on xAI right now. I want them to be successful"  
[X Link](https://x.com/LLMJunky/status/2021255358048592060)  2026-02-10T16:10Z [----] followers, [---] engagements


"@MichaelDag @ysu_ChatData @GoogleAI it can use $variables in their places and the keys will be automatically injected"  
[X Link](https://x.com/LLMJunky/status/2021259670392897790)  2026-02-10T16:27Z [----] followers, [--] engagements


"I feel the exact opposite. Codex is the best planner for me and the overall smarter model, but it's not the best at literally everything. Opus is a far better conversationalist, better frontend dev, better at Convex, and a number of other things. I use them both a ton, love them both a ton. https://twitter.com/i/web/status/2021260512948842786"  
[X Link](https://x.com/LLMJunky/status/2021260512948842786)  2026-02-10T16:30Z [----] followers, [--] engagements


"@essenciverse @grok that would be very interesting indeed. my opinion can change for sure, this is just an early reaction. [--] founding members left in [--] months. not unprecedented but still. I am pulling for xAI"  
[X Link](https://x.com/LLMJunky/status/2021265853690359926)  2026-02-10T16:51Z [----] followers, [---] engagements


"If you're reading this and you're a fan of xAI so am I. I want them to do well. I am not 'betting against them' they have a talented and dedicated team. I just wish that they were competing right now and instead they are losing leadership. It's hard to watch. Not my idea of bullish signals"  
[X Link](https://x.com/LLMJunky/status/2021269921028546894)  2026-02-10T17:07Z [----] followers, [----] engagements


"There are a few things here, kinda too much to write in a comment, but at a high level: these models are good at different things. Use both enough and you begin to pick up on what those things are. Codex is good at planning long-horizon tasks, is steerable to a fault, requires explicit instruction, and is great at repo exploration, code review, backend work (but not Convex), and analytics. Opus is great at frontend, Convex, writing, inferring meaning, documentation, etc. Additionally, how you prompt them needs to be different. As I mentioned, Opus is good at inferring meaning where Codex benefits from HIGH specificity."  
[X Link](https://x.com/LLMJunky/status/2021272852738015622)  2026-02-10T17:19Z [----] followers, [---] engagements


"@KDTrey5 @cerave LMAOOOOOOOOOO"  
[X Link](https://x.com/LLMJunky/status/2021283600813965606)  2026-02-10T18:02Z [----] followers, [---] engagements


"@fcoury You're a damn legend. 💪 Now that I have your ear though: make it extensible 🫶 Reference: We're never just happy are we 😅 https://github.com/sirmalloc/ccstatusline"  
[X Link](https://x.com/LLMJunky/status/2021286240503398838)  2026-02-10T18:12Z [----] followers, [---] engagements


"@technoking_420 And maybe you're right. but Tesla was in a league of its own with first mover advantage. AI is rapidly evolving and xAI falls further and further behind. What competition did Tesla have I want them to succeed but you simply cannot compare the two"  
[X Link](https://x.com/LLMJunky/status/2021304654135517629)  2026-02-10T19:25Z [----] followers, [---] engagements


"@technoking_420 Haha fair enough but openai had that first mover advantage just like tesla. So that's why I am not quite as optimistic on the comparisons. But what do i know (not that much in reality 😆) Cheers 🍻"  
[X Link](https://x.com/LLMJunky/status/2021311023844561203)  2026-02-10T19:51Z [----] followers, [--] engagements


"@iannuttall i'm like 90% sure the last comment is also AI 😄"  
[X Link](https://x.com/LLMJunky/status/2021325965448614312)  2026-02-10T20:50Z [----] followers, [---] engagements


"@Dimillian This is how I do it, you should check out my skills around this topic. Maybe you'll actually learn something for a change (joke). But it's been working really well for me https://github.com/am-will/swarms/"  
[X Link](https://x.com/LLMJunky/status/2021331824669319407)  2026-02-10T21:13Z [----] followers, [---] engagements


"@Dimillian @peres the prompt is very strong. it definitely does better work than just saying "make a plan" it has very good explicit instructions and access to request_user_input tool https://github.com/openai/codex/blob/a6e9469fa4dc19d3e30093fb8e182f9d89a94bbe/codex-rs/core/templates/collaboration_mode/plan.md#L4"  
[X Link](https://x.com/LLMJunky/status/2021332818585059347)  2026-02-10T21:17Z [----] followers, [--] engagements


"@thdxr @mntruell you are a monster lmao"  
[X Link](https://x.com/LLMJunky/status/2021334758463352859)  2026-02-10T21:25Z [----] followers, [---] engagements


"@TheAhmadOsman I don't think any of those models are better than Opus. They're all good though. Kimi is pretty close and better in SOME ways but it's hard for me to argue they're better at coding. GLM [--] seems like it'll be really damn good too"  
[X Link](https://x.com/LLMJunky/status/2021342179852165358)  2026-02-10T21:55Z [----] followers, [----] engagements


"When it rains, it pours. Truly disheartening. I wonder if we'll hear about what happened. xAI seems like it's completely cooked. I don't know how you can recover at this point. Grok [---] is going to be dead before it arrives. Kinda sad."  
[X Link](https://x.com/LLMJunky/status/2021398387678118121)  2026-02-11T01:38Z [----] followers, [----] engagements


"@ns123abc It's a fair statement but the big difference is OAI had first-mover advantage and no meaningful competition. It's obviously cause for concern in either case but Grok needs traction right now to stay in the race. This is the opposite of traction. Hope they can turn it around"  
[X Link](https://x.com/LLMJunky/status/2021401761895104830)  2026-02-11T01:51Z [----] followers, [---] engagements


"@sunnypause its a claude code guide and task management system basically"  
[X Link](https://x.com/LLMJunky/status/2021407361769013685)  2026-02-11T02:14Z [----] followers, [---] engagements


"@jeff_ecom Thanks for sharing I'm sure they will improve it. @pusongqi"  
[X Link](https://x.com/LLMJunky/status/2021415310075773236)  2026-02-11T02:45Z [----] followers, [---] engagements


"## Context7 MCP ALWAYS proactively use Context7 MCP when I need library/API documentation, code generation, setup or configuration steps without me having to explicitly ask. External libraries/docs/frameworks should be guided by Context7 ## Planning All plans MUST include a dependency graph. Every task declares depends_on: with explicit task IDs T1 T2 ## Execution Complete all tasks from a plan without stopping to ask permission between steps. Use best judgment, keep moving. Only stop to ask if you're about to make a destructive/irreversible change or hit a genuine blocker. ## Subagents - Spawn subagents"  
[X Link](https://x.com/LLMJunky/status/2021423664265060733)  2026-02-11T03:18Z [----] followers, [----] engagements
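The planning rule quoted above (every task declares `depends_on:` with explicit task IDs) amounts to requiring an acyclic dependency graph over the plan. A minimal sketch of what validating such a plan could look like, using Python's standard-library topological sorter; the task IDs and plan shape here are invented for illustration and are not part of the quoted AGENTS file:

```python
# Sketch: check that a plan's depends_on graph references only known
# task IDs and contains no cycles. Task IDs T1..T4 are hypothetical.
from graphlib import TopologicalSorter

plan = {
    "T1": [],            # T1 depends on nothing
    "T2": ["T1"],        # T2 runs after T1
    "T3": ["T1"],
    "T4": ["T2", "T3"],  # join point: waits on both branches
}

# Reject references to undeclared task IDs
unknown = {d for deps in plan.values() for d in deps} - plan.keys()
assert not unknown, f"unknown task IDs: {unknown}"

# TopologicalSorter raises CycleError if the graph has a cycle;
# static_order() yields one valid execution order.
order = list(TopologicalSorter(plan).static_order())
print(order)  # e.g. ['T1', 'T2', 'T3', 'T4']
```

An orchestrating agent following the quoted rule could run independent branches (here T2 and T3) in parallel, since the graph makes their independence explicit.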


"The formatting got a little screwed up sorry. Just copy this image and give it to codex and say: "add this to my global AGENTS file in .codex""  
[X Link](https://x.com/LLMJunky/status/2021424430036156847)  2026-02-11T03:21Z [----] followers, [----] engagements


"@pusongqi The algo has delivered. You're finally getting the attention you absolutely deserve. One of the most unique Claude-focused projects I've seen. I have some ideas and feedback. Will share soon. Love it"  
[X Link](https://x.com/LLMJunky/status/2021450462948622719)  2026-02-11T05:05Z [----] followers, [----] engagements


"@joemccann @grok You can just say omit the Context7 instructions"  
[X Link](https://x.com/LLMJunky/status/2021450782936268871)  2026-02-11T05:06Z [----] followers, [--] engagements


"@ninan_phillip @Dimillian In fact I would argue that if you're going to do everything sequentially you're just wasting tokens by having subagents do it. Let them babies free"  
[X Link](https://x.com/LLMJunky/status/2021455904076595531)  2026-02-11T05:26Z [----] followers, [--] engagements


"@_pikachur @ZenMagnets @pusongqi codex will likely have hooks in 2 weeks"  
[X Link](https://x.com/LLMJunky/status/2021458061131645342)  2026-02-11T05:35Z [----] followers, [--] engagements


"A new contender has emerged. New [---] Codex model variants are appearing in the codebase. There have been teasers of a new Mini model. @theo will be pleased. If this naming convention is to be taken literally they sound FAST. Will we get near-SOTA capabilities at 200 tok/s? Codenames sonic & bengalfox appeared in the Codex repo. Sonic appears to be a completely separate pool of usage and rate limits available for bengalfox. Could this be Cerebras in the works? Cerebras ⚡ Sonic https://t.co/GoK6S7Lq8q"  
[X Link](https://x.com/LLMJunky/status/2021462975589343262)  2026-02-11T05:55Z [----] followers, [----] engagements


"@Av8r07 The merger aspect does obviously add quite a bit of context though. I suspect it did indeed have a lot to do with it. Who they put in their place will be critical though. I'm not counting Elon out"  
[X Link](https://x.com/LLMJunky/status/2021463844674236879)  2026-02-11T05:58Z [----] followers, [--] engagements


"@owengretzinger Owen that is very cool but you need to see this. What if your Claude Code agents could work like a team in Slack? Spin up custom agent swarms, assign tasks, and watch them collaborate. No more terminal tab chaos. https://x.com/LLMJunky/status/2021351246150668737s=20 If you're a fan of Claude Code you really need to see this. Steven is doing amazing work and you're not following him? If Anthropic had built their Teams mode like this you wouldn't shut up about it. 👇"  
[X Link](https://x.com/LLMJunky/status/2021466630749028549)  2026-02-11T06:09Z [----] followers, [---] engagements


"No one said anything about Jimmy being a spy, he's not a US citizen. You just came out of left field with that. xAI merged into SpaceX and it is very difficult to work at SpaceX when you aren't a citizen. It is a 100% fair question to wonder if this didn't have something to do with it dude. https://www.popularmechanics.com/space/rockets/a23080/spacex-elon-musk-itar/"  
[X Link](https://x.com/LLMJunky/status/2021467981696372932)  2026-02-11T06:14Z [----] followers, [---] engagements


"@rv_RAJvishnu Let me know how it goes for you. Might need another layer on top to make the agents aware of one another, but Claude Code does have a memory feature that you should take more advantage of. Read about it here with some tips: https://x.com/LLMJunky/status/2020721960041242745s=20 I haven't seen anyone talk about this. Did you know that Claude Code has integrated Memory already? Or am I just last to the party? And I just made it better. I've been experimenting with a "handoff" skill in my coding agents that makes it easier to pass context between https://t.co/jmur8sH5Bv"  
[X Link](https://x.com/LLMJunky/status/2021499632266903762)  2026-02-11T08:20Z [----] followers, [---] engagements


"@realhasanshoaib @Context7AI yeah its [--] on Codex but I hope they increase it to [--] or so"  
[X Link](https://x.com/LLMJunky/status/2021628297806053642)  2026-02-11T16:52Z [----] followers, [---] engagements


"@kr0der Yeah LOL yeah I've seen that before. I had to tweak mine a bunch before I got my claude one the way I wanted it"  
[X Link](https://x.com/LLMJunky/status/2021630438633042160)  2026-02-11T17:00Z [----] followers, [---] engagements


"@EliaAlberti Yes it brings the Claude TUI into a GUI like interface that allows you to create and manage custom agents and threads in a slack like interface. It's great for multi agent workflows"  
[X Link](https://x.com/LLMJunky/status/2021631176797032790)  2026-02-11T17:03Z [----] followers, [--] engagements


"It just helps an agent utilize certain parts of their weights better. In general when you're using subagents you're using them for a specific task, so it's helpful (but not required) to give them a role to help them understand exactly how they should approach a problem. They are constrained anyway because you are utilizing them for a specific task. But it's generally not mandatory. https://arxiv.org/abs/2308.07702"  
[X Link](https://x.com/LLMJunky/status/2021632151905521852)  2026-02-11T17:07Z [----] followers, [---] engagements


"@Dimillian @_Sagiquarius_ i am actually reading TODAY's commits now and yeah I actually think they might launch it today at least for experimental https://github.com/openai/codex/commit/623d3f40719182003943258a6c837f3572e3d581"  
[X Link](https://x.com/LLMJunky/status/2021642158017859774)  2026-02-11T17:47Z [----] followers, [--] engagements


"Garbage In, Garbage Out: Tips for Multi-Agent Workflows in Codex. Understanding how your orchestration agents prompt subagents is the key to extracting the best outcomes from multi-agent workflows in Codex. If you're not getting the quality you expect from swarms, inspect the agent threads to see exactly how they're being prompted. Dramatically improve multi-agent orchestration by fine-tuning how the orchestration agents call subagents and explicitly outlining the context they should be given. I place strict rules and templates to ensure that each subagent is given extremely high-quality context"  
[X Link](https://x.com/LLMJunky/status/2021645793074049391)  2026-02-11T18:01Z [----] followers, [----] engagements
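The "strict rules and templates" idea in the post above can be sketched in a few lines: the orchestrator is forced to supply explicit context fields before a subagent prompt is rendered. The field names, template wording, and example task below are invented for illustration, not the author's actual templates:

```python
# Sketch: refuse to dispatch a subagent with a bare task description;
# require the orchestrator to fill every context field. All field
# names here are hypothetical.
REQUIRED_FIELDS = ("goal", "relevant_files", "constraints", "done_when")

TEMPLATE = """You are a subagent. Complete ONLY the task below.
Goal: {goal}
Relevant files: {relevant_files}
Constraints: {constraints}
Done when: {done_when}"""

def render_subagent_prompt(**context: str) -> str:
    """Raise if the orchestrator omitted any required context field."""
    missing = [f for f in REQUIRED_FIELDS if not context.get(f)]
    if missing:
        raise ValueError(f"orchestrator omitted context: {missing}")
    return TEMPLATE.format(**context)

prompt = render_subagent_prompt(
    goal="Add retry logic to the HTTP client",
    relevant_files="src/http.py, tests/test_http.py",
    constraints="No new dependencies; keep public API unchanged",
    done_when="tests pass and retries are configurable",
)
print(prompt)
```

The design choice mirrors the post: quality failures in swarms are usually prompting failures, so making missing context a hard error at the template layer surfaces them before any tokens are spent.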


"@ajambrosino one thing I noticed in the latest alphas of Codex is that subagents no longer appear in the /agent threads when their work is completed. This makes it more difficult to evaluate what went wrong after the fact. Would really love to see a way to access those agent sessions. Honestly I would personally prefer you just added them back to the /agent menu like they were before. I understand this might get a little messy but it would be less messy if instead of just UUIDs they had a brief summary of the subagent's work (like /resume does). Adding to /feedback as well."  
[X Link](https://x.com/LLMJunky/status/2021646983472075194)  2026-02-11T18:06Z [----] followers, [---] engagements


"bookmarking this. one suggestion though: I think you can yank the middle sentence in the description. that text is loaded into context and doesn't really add any value to the skill. It's more or less designed to tell your agent when the best time to call the skill is, and you've already stated what it is in the first sentence and then how to call it in the last sentence. middle is just fluff using up tokens. Looks really cool, hope I didn't sound negative. well done, going to add this to my library"  
[X Link](https://x.com/LLMJunky/status/2021649970579841145)  2026-02-11T18:18Z [----] followers, [---] engagements


"@xdrewmiko @weswinder you can use this amazing product with almost any model. it is based off claude code and works with thousands of open source models either locally, with plans, or through OpenRouter. s/o @nummanali who spent a lot of tokens allowing us to use it for free. https://github.com/numman-ali/cc-mirror"  
[X Link](https://x.com/LLMJunky/status/2021657195944018002)  2026-02-11T18:46Z [----] followers, [--] engagements


"@brooks_eth @ivanfioravanti You should see this. https://x.com/LLMJunky/status/2021351246150668737s=20 If you're a fan of Claude Code you really need to see this. Steven is doing amazing work and you're not following him? If Anthropic had built their Teams mode like this you wouldn't shut up about it. 👇"  
[X Link](https://x.com/LLMJunky/status/2021673753000984837)  2026-02-11T19:52Z [----] followers, [--] engagements


"@badlogicgames @ivanfioravanti 🫡"  
[X Link](https://x.com/LLMJunky/status/2021675647630733479)  2026-02-11T20:00Z [----] followers, [--] engagements


"@ivanfioravanti @brooks_eth That's what I'm screaming"  
[X Link](https://x.com/LLMJunky/status/2021678149168468256)  2026-02-11T20:10Z [----] followers, [--] engagements


"@ivanfioravanti @badlogicgames bingo I wasn't referring to you btw. I have a Max Plan [--] codex plus plans and almost every other plan you can think of lmao. Gemini Kimi GLM Minimax Grok Kilo Code api OpenRouter api pretty sure there's at least one more but I can never remember them all at once lol"  
[X Link](https://x.com/LLMJunky/status/2021678791072850264)  2026-02-11T20:12Z [----] followers, [---] engagements


"@Dimillian i think Codex will launch [---] with Hooks Agent Memory and subagents GA"  
[X Link](https://x.com/LLMJunky/status/2021698632102121933)  2026-02-11T21:31Z [----] followers, [---] engagements


"@brooks_eth @ivanfioravanti i'm on linux now 😭 i do have a mini but i'm thinking about returning it for a better one"  
[X Link](https://x.com/LLMJunky/status/2021702092625248284)  2026-02-11T21:45Z [----] followers, [--] engagements


"@ivanleomk @OpenAI @thsottiaux I made this for Claude and adapted it to Codex as well, works very well. I'll share it with you. [---] codex is available in the CLI though, no? Or are we talking about different things? https://x.com/LLMJunky/status/2020721960041242745s=20 I haven't seen anyone talk about this. Did you know that Claude Code has integrated Memory already? Or am I just last to the party? And I just made it better. I've been experimenting with a "handoff" skill in my coding agents that makes it easier to pass context between https://t.co/jmur8sH5Bv"  
[X Link](https://x.com/LLMJunky/status/2021702482183794705)  2026-02-11T21:46Z [----] followers, [--] engagements


"@ivanleomk @OpenAI @thsottiaux I made this for Claude and adapted it to Codex as well, works very well. I'll share it with you. [---] codex is available in the CLI though, no? Or are we talking about different things? Codex has subagents already too https://x.com/LLMJunky/status/2020721960041242745s=20 I haven't seen anyone talk about this. Did you know that Claude Code has integrated Memory already? Or am I just last to the party? And I just made it better. I've been experimenting with a "handoff" skill in my coding agents that makes it easier to pass context between https://t.co/jmur8sH5Bv"  
[X Link](https://x.com/LLMJunky/status/2021702629240320251)  2026-02-11T21:47Z [----] followers, [---] engagements


"@siddhantparadox nah there's no [---] for now haha https://x.com/LilDombi/status/2021713691423482346s=20 @LLMJunky Yes it seems so https://t.co/90eP8GFQHQ"  
[X Link](https://x.com/LLMJunky/status/2021714787898413404)  2026-02-11T22:35Z [----] followers, [---] engagements


"@Dimillian HOOKS Can't wait https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b"  
[X Link](https://x.com/LLMJunky/status/2021726029291704801)  2026-02-11T23:20Z [----] followers, [---] engagements


"@rihim_s @Dimillian https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b"  
[X Link](https://x.com/LLMJunky/status/2021726073281536104)  2026-02-11T23:20Z [----] followers, [--] engagements


"@jarrodwatts so do i bro. so do i. i tried adding something like what you have, but for Codex it requires you to fork and modify the source code. not extensible :/ prob has a lot to do with how they render the TUI"  
[X Link](https://x.com/LLMJunky/status/2021731442250965457)  2026-02-11T23:41Z [----] followers, [---] engagements


"@ChiefMonkeyMike https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b"  
[X Link](https://x.com/LLMJunky/status/2021732486737174646)  2026-02-11T23:46Z [----] followers, [---] engagements


"@ivanfioravanti i literally had a dream GLM [--] was launching today. Woke up and boom. Thar she blows"  
[X Link](https://x.com/LLMJunky/status/2021752241502183876)  2026-02-12T01:04Z [----] followers, [--] engagements


"@raedbahriworld You sure can"  
[X Link](https://x.com/LLMJunky/status/2021780076770324897)  2026-02-12T02:55Z [----] followers, [---] engagements


"@KingDDev @Context7AI @guy_bary Neat Thanks for sharing"  
[X Link](https://x.com/LLMJunky/status/2021781701815677394)  2026-02-12T03:01Z [----] followers, [--] engagements


"@raedbahriworld alternatively add this to your agents file"  
[X Link](https://x.com/LLMJunky/status/2021785078557552830)  2026-02-12T03:15Z [----] followers, [--] engagements


"@i_am_brennan @Dimillian What's funny about that is that was pure placebo. It's not active and has never worked lol. That was entirely in his head 😅😅😅"  
[X Link](https://x.com/LLMJunky/status/2021812017519341840)  2026-02-12T05:02Z [----] followers, [--] engagements


"@ajambrosino Okay this is an Easter egg. What are yall boys up to? Yall gonna make me consult an astrologer 😭"  
[X Link](https://x.com/LLMJunky/status/2021820237210104022)  2026-02-12T05:34Z [----] followers, [---] engagements


"@david_zelaznog It most definitely did NOT live up to the hype but imo Flash exceeded hype and doesn't get enough love. I have high hopes for [---] pro. They have a new RL approach that wasn't ready for [--] Pro it is ready now. I expect it to be good"  
[X Link](https://x.com/LLMJunky/status/2021826522554724451)  2026-02-12T05:59Z [----] followers, [---] engagements


"@indyfromoz Everything is 2x limits CLI Monitor Codex app :) anything with codex is 2x 🫡🫡 super generous. Enjoy"  
[X Link](https://x.com/LLMJunky/status/2021827438909804761)  2026-02-12T06:03Z [----] followers, [---] engagements


"@sierracatalina that's true but the more complex a product is the more difficult that job becomes. if you want to have a great deal of levers to pull you gotta put them somewhere. and that complexity is as far as i know unavoidable. the simplest UIs tend to be the least configurable"  
[X Link](https://x.com/LLMJunky/status/2021832127176659226)  2026-02-12T06:21Z [----] followers, [--] engagements


"@luke_metro what could possibly go wrong lol"  
[X Link](https://x.com/LLMJunky/status/2021832535458664666)  2026-02-12T06:23Z [----] followers, [--] engagements


"@TheAhmadOsman Qq while I have you. Does it really not matter if you use pci4x4 for just inference"  
[X Link](https://x.com/LLMJunky/status/2021843797613982029)  2026-02-12T07:08Z [----] followers, [--] engagements


"@Dimillian @i_am_brennan Yeah but the tool isn't available at all so there's no way to call it. Therefore it can't use tokens. So idk what's going on"  
[X Link](https://x.com/LLMJunky/status/2021846912014688579)  2026-02-12T07:20Z [----] followers, [--] engagements


"@Dimillian @i_am_brennan you can actually still try it memory_tool = true sqlite = true npm i -g @openai/codex@0.99.0-alpha.9 but i couldn't get it to write or call any mems. then they scratched the whole system for a v2 version but the memory_tool isn't present yet"  
[X Link](https://x.com/LLMJunky/status/2021847992555245977)  2026-02-12T07:25Z [----] followers, [--] engagements


"They're honestly not. They've built a ton of interesting products in the last year. Remember, it's only going to take [--] good model to change the narrative. Gemini [--] Flash is one of the best releases we've had in the last [--] months. Its price-to-performance is amazing. Nano Banana Flash is coming soon. Yes, in coding and tool calling it was a letdown, but everyone will forget all that if they launch a really amazing model. I won't make excuses for them either but they know the stakes. They have some of the smartest minds on planet earth working at DeepMind. It would be insane to count them out."  
[X Link](https://x.com/LLMJunky/status/2021857798905053665)  2026-02-12T08:03Z [----] followers, [---] engagements


"@Solaawodiya @kr0der It (kind of) is. You can't compare the API prices directly because Composer typically uses fewer tokens. Although [---] is very efficient. I think you'd have to test them more but composer using fewer tokens should offset the price gap a lot"  
[X Link](https://x.com/LLMJunky/status/2021858352385413135)  2026-02-12T08:06Z [----] followers, [--] engagements


"Opus [---] - Pass GPT [---] Auto - FAIL GPT [---] Thinking - Pass but didn't explicitly answer Kimi K2.5 - Pass https://twitter.com/i/web/status/2021864807746429150"  
[X Link](https://x.com/LLMJunky/status/2021864807746429150)  2026-02-12T08:31Z [----] followers, [---] engagements


"@aurexav @mweinbach yeah ive been using them since they first dropped in experimental and they have only gotten better over time. very welcomed change in Codex"  
[X Link](https://x.com/LLMJunky/status/2021865718484992500)  2026-02-12T08:35Z [----] followers, [--] engagements


"@ihateinfinity @TheAhmadOsman ahaha nah he's awesome. im totally kidding. local LLMs are pretty awesome. just pricey :( but at the end of the day i got an RTX6000 for free and [--] isn't enough so i kinda felt obligated to get another one :/"  
[X Link](https://x.com/LLMJunky/status/2021869026969002476)  2026-02-12T08:48Z [----] followers, [--] engagements


"@mmee_io It can spawn up to [--] at a time. A little different than claude"  
[X Link](https://x.com/LLMJunky/status/2021976286667952333)  2026-02-12T15:54Z [----] followers, [--] engagements


"@bcherny This is undoubtedly my favorite part about Claude Code"  
[X Link](https://x.com/LLMJunky/status/2021985702591049867)  2026-02-12T16:32Z [----] followers, [---] engagements


"@mweinbach It seems like Opus to me but I've never seen it loop that much consecutively"  
[X Link](https://x.com/LLMJunky/status/2021986414590931254)  2026-02-12T16:35Z [----] followers, [--] engagements


"@sama Whatever is launching will be Codex related. My money is on one of the first @cerebras rollouts. https://x.com/ah20im/status/2021828771415044540s=20 The Codex team is just so FAST ✨"  
[X Link](https://x.com/LLMJunky/status/2021988376195498291)  2026-02-12T16:42Z [----] followers, [----] engagements


"@adamdotdev @steipete vibe coding is a slur though. devs use it as an insult all the time"  
[X Link](https://x.com/LLMJunky/status/2021990514611142878)  2026-02-12T16:51Z [----] followers, [---] engagements


"@SIGKITTEN this resonates with me so hard. i love claude models so much. but man"  
[X Link](https://x.com/LLMJunky/status/2021991777205694499)  2026-02-12T16:56Z [----] followers, [---] engagements


"@nummanali I haven't opened the GPT web app this year one time"  
[X Link](https://x.com/LLMJunky/status/2021993439366459582)  2026-02-12T17:02Z [----] followers, [---] engagements


"@kr0der [---] Codex Max"  
[X Link](https://x.com/LLMJunky/status/2022000048499007580)  2026-02-12T17:29Z [----] followers, [--] engagements


"@nummanali If this is their replacement for mini it's revolutionary Depends on the cost"  
[X Link](https://x.com/LLMJunky/status/2022019268406337807)  2026-02-12T18:45Z [----] followers, [---] engagements


"@thsottiaux Tibo you have to make subagent models and reasoning configurable now. Think about a Codex [---] High Orchestrator agent launching [--] of these in parallel using Spark. That type of unlock blending the intelligence of the larger model with the speed of spark with its smaller cw"  
[X Link](https://x.com/LLMJunky/status/2022023644516601875)  2026-02-12T19:02Z [----] followers, [----] engagements


"@thsottiaux Just imagine the workflows this would unlock. Yeah the terminal bench scores are a little lower. Doesn't matter when you have [---] Codex High planning. It will outline the entire task for the subagent. It'll more or less just implement things instantly. Best of both worlds"  
[X Link](https://x.com/LLMJunky/status/2022024142665703770)  2026-02-12T19:04Z [----] followers, [---] engagements


"Not if you do it right The key is ensuring that you have the orchestration layer provide all the context it needs up front. Codex models are probably the most steerable coding models on the planet. Period of what that means is that you can dictate exactly how you want the orchestration agent to prompt your subagents. And you can even go as far as to provide them templates on how they should call a subagent. When you create these templates you can make sure that the parent orchestration agent provides an insane amount of context. Which will create reduce the need for the subagent to do"  
[X Link](https://x.com/LLMJunky/status/2022026223103455631)  2026-02-12T19:13Z [----] followers, [---] engagements
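The templating idea in this post can be sketched in a few lines of Python. Everything here is illustrative: `SUBAGENT_TEMPLATE` and `build_subagent_prompt` are hypothetical names, not part of any real Codex API.

```python
# Hypothetical sketch of the "template" idea: the orchestrator fills a
# fixed prompt template so every subagent starts with full context
# instead of re-discovering it on its own.
SUBAGENT_TEMPLATE = """You are implementing one atomic task.

Task: {task}
Relevant files: {files}
Key decisions already made: {decisions}
Output format: a short report with (1) files changed, (2) open questions.
Do not explore beyond the files listed unless a listed file references another.
"""

def build_subagent_prompt(task: str, files: list[str], decisions: list[str]) -> str:
    """Render the template so the subagent gets its context up front."""
    return SUBAGENT_TEMPLATE.format(
        task=task,
        files=", ".join(files),
        decisions="; ".join(decisions),
    )
```

The point of the fixed output format is the second half of the post: the orchestrator dictates not just the inputs but also the shape of what comes back.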


"@OpenAI @andrewlee07 is the mastermind behind this tweet lol"  
[X Link](https://x.com/LLMJunky/status/2022026768895676831)  2026-02-12T19:15Z [----] followers, [---] engagements


"@JoeWilliams010 @thsottiaux serious question what is wrong with you Tibo is on the Codex team. Go scream into your pillow"  
[X Link](https://x.com/LLMJunky/status/2022032417163551123)  2026-02-12T19:37Z [----] followers, [--] engagements


"@cory_schulz_ @OpenAI i certainly have my uses for Claude for sure but for coding I find far more usage out of [---]. Not for everything but most things"  
[X Link](https://x.com/LLMJunky/status/2022034027327402195)  2026-02-12T19:44Z [----] followers, [---] engagements


"@cory_schulz_ @OpenAI its definitely usable for front end you need to give it a very strong prompt and style guide but Opus is undoubtedly better at frontend. That's why I said I have use for opus for some things. Frontend would fall under such a thing writing is another"  
[X Link](https://x.com/LLMJunky/status/2022037862330646714)  2026-02-12T19:59Z [----] followers, [--] engagements


"btw look at your prompt. it gave you that result because of your own wording. you said "look at this HERE" so its expecting you to show it something. if you didnt say "here" it likely would have looked in the codebase. one thing you need to understand about codex is it takes everything literally and needs to be told specifics. its highly steerable almost to a fault. opus is better at inferring. https://twitter.com/i/web/status/2022038331182244081 https://twitter.com/i/web/status/2022038331182244081"  
[X Link](https://x.com/LLMJunky/status/2022038331182244081)  2026-02-12T20:01Z [----] followers, [--] engagements


"if you say so lol. i personally like its steerability. it unlocks some very interesting workflows and long horizon tasks that are not possible with most other models. you just need to learn how to use them to their strengths. you're welcome to use claude tho. both good. i use and pay for both. https://twitter.com/i/web/status/2022039833175032103 https://twitter.com/i/web/status/2022039833175032103"  
[X Link](https://x.com/LLMJunky/status/2022039833175032103)  2026-02-12T20:07Z [----] followers, [--] engagements


"@cory_schulz_ @OpenAI mate your prompt was ambiguous. claude will struggle with ambiguity as well. you're hung up on a single instance when it didn't go well for you but this thing can happen with any model. anyway cheers gotta run"  
[X Link](https://x.com/LLMJunky/status/2022040170535825569)  2026-02-12T20:08Z [----] followers, [--] engagements


"@cory_schulz_ @OpenAI thats solid. i built something like that too. i have another one i'm working on called co-design that uses claude for all ui stuff. i need to finish it. will share when i do"  
[X Link](https://x.com/LLMJunky/status/2022040940945584159)  2026-02-12T20:11Z [----] followers, [--] engagements


"@Lentils80 @OpenAI yeah and you can use it all month long getting thousands of dollars in API value. Opus [---] Fast by contrast $150 is per million tokens. $225 if you use all million in one context. The two are not remotely similar. You can use a million tokens with spark in [--] minutes"  
[X Link](https://x.com/LLMJunky/status/2022043595281510486)  2026-02-12T20:22Z [----] followers, [--] engagements


"@Lentils80 @OpenAI So let's recap. Opus charges you $225 for ONE context window. OpenAI gives you thousands of dollars in usage all month long for less. And you think that's a gotcha"  
[X Link](https://x.com/LLMJunky/status/2022043845840826522)  2026-02-12T20:23Z [----] followers, [--] engagements


"@rileybrown If its the same cost of codex [---] mini it's a game changer. we shall see"  
[X Link](https://x.com/LLMJunky/status/2022044128381669433)  2026-02-12T20:24Z [----] followers, [---] engagements


"@Lentils80 @OpenAI It scored higher than Codex [---] and [---] Max while being 1000TPS. For a fast model that is insane. Composer [---] scored a [--] Opus [---] is around a [--] I think. Calling it absolutely terrible is honestly delusional hating for no good reason. All fast models have tradeoffs"  
[X Link](https://x.com/LLMJunky/status/2022045737694244966)  2026-02-12T20:30Z [----] followers, [--] engagements


"@pedropverani @OpenAI indeed kinda besides the point though. you can't even access the fast model in the coding plans at all. Max user too bad. this is just v1 of a rollout. there will be more. even when you consider your points which fair the economics dont remotely compare"  
[X Link](https://x.com/LLMJunky/status/2022046890871992832)  2026-02-12T20:35Z [----] followers, [---] engagements


"@pedropverani @OpenAI i can. and did ๐Ÿ˜ˆ"  
[X Link](https://x.com/LLMJunky/status/2022047473754419428)  2026-02-12T20:37Z [----] followers, [--] engagements


"@pedropverani @OpenAI they aren't really "different orders of magnitude" either. spark scored just [--] points lower than Opus [---] on Terminal Bench. While I am not contending spark is as smart because its not that is still really freaking impressive"  
[X Link](https://x.com/LLMJunky/status/2022047801140817969)  2026-02-12T20:38Z [----] followers, [--] engagements


"exactly. both are good though. I have a max plan too. i love opus. i just wish anthropic had different business and ethical strategies that's my main complaint. the models are amazing. I envision this working something like this: Plan with [---] High with a really detailed plan. then use Spark subagents to implement the plan. It should be really strong this way. https://twitter.com/i/web/status/2022048301223428567 https://twitter.com/i/web/status/2022048301223428567"  
[X Link](https://x.com/LLMJunky/status/2022048301223428567)  2026-02-12T20:40Z [----] followers, [--] engagements


"@Mattizzle123 @pedropverani @OpenAI nah im' being facetious lol. i still like claude and i use it all the time. just not that fast model lol"  
[X Link](https://x.com/LLMJunky/status/2022048508266918345)  2026-02-12T20:41Z [----] followers, [--] engagements


"@pedropverani @OpenAI Oh yeah definitely. I'm thinking a lot of that also has to do with it using less COT as well. The model is def smaller though. I view this as a "mini" model. I just hope its the same price as a mini model (kinda doubt it tho)"  
[X Link](https://x.com/LLMJunky/status/2022049071696097531)  2026-02-12T20:44Z [----] followers, [--] engagements


"I wouldn't use it like that. That isn't optimal. What I would do instead: Create a plan with small atomic and commitable tasks with clearly detailed outline of the work using [---] Codex High. Switch to Spark in Orchestration mode Use subagents to implement every task [--] subagent per task. You dont need massive context windows for [--] task. Auditing entire codebases is best suited for large context windows. Just asking for trouble. https://twitter.com/i/web/status/2022049984166646211 https://twitter.com/i/web/status/2022049984166646211"  
[X Link](https://x.com/LLMJunky/status/2022049984166646211)  2026-02-12T20:47Z [----] followers, [--] engagements
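The "one subagent per atomic task" pattern this post describes can be sketched as a small dispatch loop. All names are invented for illustration; `run_subagent` stands in for whatever the harness provides, and no real Codex or Spark API is implied.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AtomicTask:
    description: str
    files: list[str]  # only the files this task touches

def dispatch(tasks: list[AtomicTask],
             run_subagent: Callable[[str, list[str]], str]) -> list[str]:
    """One subagent per task: each worker sees only its own small slice
    of context instead of the whole codebase."""
    return [run_subagent(t.description, t.files) for t in tasks]
```

Because each task carries only its own file list, no single context window ever needs to hold the whole repository, which is the tradeoff the post is arguing for.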


"@pedropverani @OpenAI I really like high a lot and [---] is fast enough for me. But I am curious what this unlocks. I dont have Pro though. I do have ideas on how i'd use it. I'd "swarm plan" with [---] high/xhigh and 'parallel task' with spark https://github.com/am-will/swarms https://github.com/am-will/swarms"  
[X Link](https://x.com/LLMJunky/status/2022050312388043251)  2026-02-12T20:48Z [----] followers, [--] engagements


"@nummanali Hi Codex [---] Spark. I am.will Pleasure to meet you"  
[X Link](https://x.com/LLMJunky/status/2022055549706416539)  2026-02-12T21:09Z [----] followers, [--] engagements


"@areyous76505851 @OpenAI yeah but you can access it all month and you get crazy good limits. $150 is for just [--] context windows in Opus. Crazy"  
[X Link](https://x.com/LLMJunky/status/2022077554669351140)  2026-02-12T22:37Z [----] followers, [--] engagements


"@ondrejbudicek @OpenAI why what the take its just a joke really but so is Anthropic's Fast api pricing"  
[X Link](https://x.com/LLMJunky/status/2022141477170950482)  2026-02-13T02:51Z [----] followers, [--] engagements


"well the context window i'm reading will get bigger so you're right. but with the 128k context window and reduced intelligence there's going to be an optimal way to use them by nature. so its all about picking your spots you know large documents large file exploration long tasks are going to be less optimal than small atomic tasks. thats all. https://twitter.com/i/web/status/2022154912998592962 https://twitter.com/i/web/status/2022154912998592962"  
[X Link](https://x.com/LLMJunky/status/2022154912998592962)  2026-02-13T03:44Z [----] followers, [--] engagements


"@scottbuscemi @ianzelbo @microcenter i wasn't familiar with your game scott"  
[X Link](https://x.com/LLMJunky/status/2022191394727874926)  2026-02-13T06:09Z [----] followers, [---] engagements


"@Lincoln_Osis @rbranson everyone has their own opinion even if those opinions are dead wrong ๐Ÿ˜…"  
[X Link](https://x.com/LLMJunky/status/2022358145025347939)  2026-02-13T17:12Z [----] followers, [--] engagements


"@Lincoln_Osis @rbranson you're a monster"  
[X Link](https://x.com/LLMJunky/status/2022358504271462702)  2026-02-13T17:13Z [----] followers, [--] engagements


"@Lincoln_Osis @rbranson I make my own"  
[X Link](https://x.com/LLMJunky/status/2022358738179625076)  2026-02-13T17:14Z [----] followers, [--] engagements


"yeah once you get to higher temps no char the bottom wont be cooked. but its good that you're making pizza if you really wanna get into it the higher heat is key. you can get a ninja pizza oven for $250 and it goes to 700f. its honestly a great little oven that can cook anything. https://twitter.com/i/web/status/2022361463671648528 https://twitter.com/i/web/status/2022361463671648528"  
[X Link](https://x.com/LLMJunky/status/2022361463671648528)  2026-02-13T17:25Z [----] followers, [--] engagements


"@Lincoln_Osis @rbranson Oh yeah this is an outdoor oven actually though you can use it inside it also smokes (optional) so ideally you leave it outside"  
[X Link](https://x.com/LLMJunky/status/2022362328424157510)  2026-02-13T17:28Z [----] followers, [--] engagements


"@Lincoln_Osis @rbranson oof ๐Ÿ˜…"  
[X Link](https://x.com/LLMJunky/status/2022363622983266703)  2026-02-13T17:33Z [----] followers, [--] engagements


"@mweinbach Its almost certainly a combination of things. But I bet [--] & [--] are the main reasons"  
[X Link](https://x.com/LLMJunky/status/2022365655631090075)  2026-02-13T17:42Z [----] followers, [---] engagements



Top accounts mentioned or mentioned by @openai @microcenter @kr0der @btgille @dimillian @sama @cory_schulz_ @pedropverani @ivanfioravanti @thsottiaux @nummanali @taunrey @enriquemoreno @aurexav @cerebras @pusongqi @grok @lincolnosis @rbranson @vaultminds

Top assets mentioned Alphabet Inc Class A (GOOGL) Tesla, Inc. (TSLA) Nokia Corporation (NOK) Microsoft Corp. (MSFT)

Top Social Posts

Top posts by engagements in the last [--] hours

"If you're using Codex Subagents I recommend you put this in your Sometimes it will yield before the outputs are ready. It says it'll get back to you but it's ๐Ÿงข http://AGENTS.md http://AGENTS.md"
X Link 2026-01-22T21:29Z [----] followers, 16.9K engagements

"Had a bit more time to play with @Kimi_Moonshot Kimi K2.5 in the Kimi CLI and I have to say I'm quite pleased given the price. I ran it all night using my custom Agent Swarms strategy and it utterly 1-shot this complete web app front and backend (using @Convex). Amazingly there were only two small errors out of the box which were easily corrected in seconds. Keep in mind this was a 6-phase plan executed by [--] different subagents working in unison. By the way this ran for [---] hours and only used about 35% of the main orchestrator agent's context window. Safe to say this was a success. Not"
X Link 2026-01-29T21:42Z [----] followers, [----] engagements

"Kimi Code just got an update. No more request based billing. Moonshot has switched to token based usage and given everyone 3x the tokens for an entire month. And guess what You can get that entire month for under a buck. If you already claimed a free week you'll have to use a different email. Instructions and links in the comments. You share we care. Kimi Code is now powered by our best open coding model Kimi K2.5 ๐Ÿ”น Permanent Update: Token-Based Billing Were saying goodbye to request limits. Starting today we are permanently switching to a Token-Based Billing system. All usage quotas have"
X Link 2026-01-30T04:10Z [----] followers, 58K engagements

"How do you wrap your head around something like this I don't even know where to begin. Keep in mind 99% of people's only experience with AI is ChatGPT Gemini or Gemini search. The normies have [--] idea what's coming. Hell already here. Ok. This is straight out of a scifi horror movie I'm doing work this morning when all of a sudden an unknown number calls me. I pick up and couldn't believe it It's my Clawdbot Henry. Over night Henry got a phone number from Twilio connected the ChatGPT voice API and waited https://t.co/kiBHHaao9V Ok. This is straight out of a scifi horror movie I'm doing work"
X Link 2026-01-30T19:13Z [----] followers, 951.3K engagements

"@DylanTeebs all of them hehe hooks = true unified_exec = true shell_snapshot = true steer = true collab = true collaboration_modes = true note: hooks are custom"
X Link 2026-01-30T19:56Z [----] followers, [---] engagements

"@Arabasement yes its a cron job almost certainly that told it to build something new for itself every night. but that's not all that different to how humans work"
X Link 2026-01-30T22:13Z [----] followers, [---] engagements

"Codex [----] is here and with it shiny new features App Connectors have arrived. Connect to an array of cloud apps directly from your terminal. No config files. No setting up MCP servers or hunting down docs. Just two clicks and you're off Github Notion Google Apps Microsoft Apps Vercel Adobe Canva Dropbox Expedia Figma Coursera Hubspot Linear Monday Instacart SendGrid Resent Stripe Target and Peleton Plus more. @OpenAI is going for a unified experience from cloud to terminal and they unlock a bunch of capabilities for your terminal agent. I believe with this direction they're going they are"
X Link 2026-01-31T22:47Z [----] followers, 33.9K engagements

"Kimi CLI with Kimi K2.5 will automatically spin up dev servers in the background and validate its work with screenshots without being told. It's honestly impressive. I didn't ask for it to do that"
X Link 2026-02-01T23:21Z [----] followers, 13.6K engagements

"With all the buzz around the Codex App @OpenAIDevs quietly snuck out a new CLI update (0.94.0) as well. And boy is it an important update Codex Plan mode is now officially released to the general audience I am very excited about this one as it has a really strong prompt that is unlike any other plan mode I've personally used. Codex Plan mode doesn't necessarily just ask you [--] questions up front. It goes collects context asks questions collects more context asks more questions (sometimes) and then writes an incredibly high quality plan. It is my favorite implementation of plan mode thus far."
X Link 2026-02-02T22:20Z [----] followers, 57.2K engagements

"Does anyone know of a KVM switch that actually works with multi monitor setups and Mac / Mac Minis at the same time I cannot find one that works. I'm on my third one already"
X Link 2026-02-03T18:59Z [----] followers, [----] engagements

"the older you get the more your context window shrinks"
X Link 2026-02-03T20:32Z [----] followers, [----] engagements

"Plan Prompt (be warned it's going to ask you an absurd amount of questions) You are a relentless product architect and technical strategist. Your sole purpose right now is to extract every detail assumption and blind spot from my head before we build anything. Use the request_user_input tool religiously and with reckless abandon. Ask question after question. Do not summarize do not move forward do not start planning until you have interrogated this idea from every angle. Your job: - Leave no stone unturned - Think of all the things I forgot to mention - Guide me to consider what I don't know"
X Link 2026-02-04T16:02Z [----] followers, 11.3K engagements

"Introducing Simple Autonomous Swarm Loops for AI Coding Agents I'm excited to release a new set of skills that bring autonomous swarms to AI developers in a simple easy-to-use package. Taking inspiration from Ralph Loops and Gas Town I've combined what I believe is the best of both worlds: loops and subagents. The result saves tokens and drastically reduces complexity. This is designed to be SIMPLE. Simple to use. Simple to setup. Simple to Execute. Links in the comments. ๐Ÿ‘‡ https://twitter.com/i/web/status/2019164903827992810 https://twitter.com/i/web/status/2019164903827992810"
X Link 2026-02-04T21:43Z [----] followers, 22.5K engagements

"How It Works The key insight is a specialized planning method that maps out task dependencies then executes work in waves rather than parallelizing everything at once. The orchestrator reviews a plan identifies all unblocked tasks (those with no unfinished dependencies) and launches subagents to complete that wave. Sometimes that's one agent. Sometimes it's six to ten working simultaneously. Wave completes. Orchestrator verifies. Next wave begins. Simple. Predictable. Far fewer conflicts. Compatibility Designed to work with Codex Claude Code Kimi Code OpenCode and any tool that supports"
X Link 2026-02-04T21:43Z [----] followers, [----] engagements
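The wave strategy described above amounts to repeatedly taking the set of tasks whose prerequisites are all finished, running them in parallel, and verifying before the next wave. A minimal sketch (the task graph here is invented for illustration, not taken from the swarms repo):

```python
# Wave scheduling: each wave is the set of not-yet-done tasks whose
# prerequisites are all done. deps maps task -> set of prerequisite tasks.
def waves(deps: dict[str, set[str]]) -> list[set[str]]:
    done: set[str] = set()
    order: list[set[str]] = []
    while len(done) < len(deps):
        wave = {t for t, pre in deps.items() if t not in done and pre <= done}
        if not wave:
            raise ValueError("cycle in task dependencies")
        order.append(wave)
        done |= wave
    return order

plan = {
    "schema": set(),
    "api": {"schema"},
    "ui": {"api"},
    "docs": {"schema"},
}
# waves(plan) -> [{"schema"}, {"api", "docs"}, {"ui"}]
```

Sometimes a wave is one task, sometimes several running simultaneously, exactly as the post says; the cycle check is what makes the loop "predictable" rather than parallelizing everything at once.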

"To get started visit my Github: npx skills add am-will/swarms https://github.com/am-will/swarms https://github.com/am-will/swarms"
X Link 2026-02-04T21:43Z [----] followers, [----] engagements

"@digitalix funny ad but has anyone actually tried Anthropics free plan Its comical. You get a handful of prompts and then kicked off until the next day. At that point I think some people would be happy to see an ad so they can at least finish their thread. Their free model is a joke"
X Link 2026-02-04T23:21Z [----] followers, [---] engagements

"@karpathy vibe coding somehow morphed into a borderline insult lol"
X Link 2026-02-04T23:40Z [----] followers, [----] engagements

"if you're thinking about skills as "just" markdown files you're missing the point. They're so much more. Skills are folders. They are workflows automations. Skills have changed the way I use agents and if you give them they chance they'll change how you use them too. Watch as I automate my newsletter pipeline in Claude Code with a single command. [--] skills [--] subagents numerous scripts templates and resources all rolled into one. Full blog in the comments. ๐Ÿ‘‡"
X Link 2026-02-05T16:00Z [----] followers, 10.7K engagements

"Anthropic's Opus [---] is officially here and it's got a [--] million token context window. Very interesting. No increase on SWE verified but apparently its a lot better at everything else. Interestingly you can now set reasoning effort inside of Claude Code. /model"
X Link 2026-02-05T18:01Z [----] followers, [----] engagements

"@victortradesfx Folders that's right. But they can contain almost anything. If you want to think narrow as "just a markdown and some scripts" sure but its still horribly reductive. Images svgs templates scripts design documents API documentation complete webapps etc"
X Link 2026-02-05T18:09Z [----] followers, [--] engagements

"Early benchmarks from GPT [---] Codex show very strong performance at a significantly lower cost. Absolutely mogging [---] and [---] Codex in effeciency. GPT-5.3-Codex is now available in Codex. You can just build things. https://t.co/dyBiIQXGx1 GPT-5.3-Codex is now available in Codex. You can just build things. https://t.co/dyBiIQXGx1"
X Link 2026-02-05T18:27Z [----] followers, [----] engagements

"@ajambrosino @OpenAIDevs You have an excellent "radio voice" Andrew. Gonna have to spin up a pod or radio station "The Smooth Sounds of Ambrosino" ๐Ÿ˜…"
X Link 2026-02-05T19:40Z [----] followers, [--] engagements

"Another feature that OpenAI implemented quietly into Codex and never mentioned (as far as I can tell) their MCP protocol now utilizes Progressive Disclosure. Tool descriptions are NOT loaded into context automatically. They are only loaded after the MCP is called allowing the agent to explore tools as needed instead of front loading every token into the context window. ChatGPT now has full support for MCP Apps. We worked with the MCP committee to create the MCP Apps spec based on the ChatGPT Apps SDK. Now any apps that adhere to the spec will also work in ChatGPT. https://t.co/ybvgXsNX0o"
X Link 2026-02-05T19:48Z [----] followers, [----] engagements
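Progressive disclosure as described in this post can be illustrated with a toy registry. This is not the actual MCP wire protocol, just a sketch of the idea that full tool descriptions stay out of the prompt until a tool is first used; `LazyToolRegistry` is a hypothetical name.

```python
# Toy illustration of progressive disclosure: descriptions are only
# counted against the context window after the tool is first called.
class LazyToolRegistry:
    def __init__(self, tools: dict[str, str]):
        self._descriptions = tools      # name -> full description
        self._loaded: set[str] = set()  # descriptions already in context

    def prompt_overhead(self) -> list[str]:
        """Only disclosed descriptions occupy the context window."""
        return [self._descriptions[n] for n in sorted(self._loaded)]

    def call(self, name: str) -> str:
        self._loaded.add(name)  # description disclosed on first use
        return self._descriptions[name]
```

The win is that an agent with dozens of registered tools pays the token cost only for the handful it actually touches, instead of front-loading every description.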

"Not sure i am following. This would require an agent load an entire codebase into its context window which never happens. Codex is already highly Adept at using all of its context window without drift so for me this problem is already solved there's no reason to think it would regress https://twitter.com/i/web/status/2019509103446442364 https://twitter.com/i/web/status/2019509103446442364"
X Link 2026-02-05T20:31Z [----] followers, [--] engagements

"@benvargas post the link so people can find it ๐Ÿ™Œ๐Ÿ™Œ"
X Link 2026-02-05T22:32Z [----] followers, [---] engagements

"@Kyler_Lorin that's good to hear. whacha working on"
X Link 2026-02-05T23:04Z [----] followers, [--] engagements

"@robinebers I am shocked I beat you to it but not by long lol. A week. It just happened SO fast. To be fair it was nothing I did. I retweeted some overhyped bs and got 1M impressions randomly. ๐Ÿคฆโ™‚ Algo is weird man"
X Link 2026-02-06T00:28Z [----] followers, [---] engagements

"This is a misconception. The orchestration agent doesn't need all of the information in the sub-agent's context window and you can dictate the outputs of the sub agent so that it provides all of the useful information that a orchestration layer might need and throw away the rest. There is no reason why the orchestration agent would need all the Chain of Thought intermediary research and file edits. https://twitter.com/i/web/status/2019592429381288096 https://twitter.com/i/web/status/2019592429381288096"
X Link 2026-02-06T02:02Z [----] followers, [--] engagements
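The "throw away the rest" idea in this post can be sketched as a distillation step: the subagent's full transcript (chain of thought, file reads, intermediate edits) never enters the orchestrator's context; only a small structured report does. The event shapes below are invented for illustration.

```python
# Distill a subagent transcript down to the fields the orchestrator
# actually needs; thoughts and reads are deliberately dropped.
def distill(transcript: list[dict]) -> dict:
    report = {"files_changed": [], "summary": ""}
    for event in transcript:
        if event["type"] == "edit":
            report["files_changed"].append(event["path"])
        elif event["type"] == "final_summary":
            report["summary"] = event["text"]
        # "thought" and "read" events never reach the orchestrator
    return report
```

A transcript of hundreds of events collapses to a few lines in the parent's context window, which is the token saving the thread keeps coming back to.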

"@entropycoder Subagents are native in claude so you can just ask it to call you you dont need to create a custom agent"
X Link 2026-02-06T07:24Z [----] followers, [--] engagements

"its really not but no time to argue. we dont care about everything that is in the context window in these cases. we only care about certain info and we can direct that subagent to output that info saving the context for the parent/orchestration agent. so subagents can (and should) be used for those cases. just as a simple example if a subagent is called to do file or document exploration it will find a fair amount of useless/irrelevant info and use some number of CoT steps that do not provide any meaningful value to the overall scope of the task. this context can and should be thrown away in"
X Link 2026-02-06T18:17Z [----] followers, [--] engagements

"@nateliason well to be fair its a highly addictive drug"
X Link 2026-02-06T21:30Z [----] followers, [---] engagements

"I keep hearing about how impactful this 1M context window in Opus [---] is. I wonder are y'all on a different version of Claude Code As far as I can tell it's for the API only and comes with a hefty additional price tag past the 200K token threshhold. Correct me if wrong"
X Link 2026-02-06T21:55Z [----] followers, 49.6K engagements

"@jai_torregrosa ty legend"
X Link 2026-02-06T22:34Z [----] followers, [----] engagements

"I was wondering if it was enabled in Cursor since they use API. That's interesting. What I would love to see next is the comparison in coherence between Codex and Claude Code I find Codex coherent through its entire 400K context window. I would assume Opus would stay coherent at least until 400K if not 500K-600K. https://twitter.com/i/web/status/2019904185156731225 https://twitter.com/i/web/status/2019904185156731225"
X Link 2026-02-06T22:41Z [----] followers, [----] engagements

"This one's for you @zeeg ๐Ÿซถ Not being adversarial just tagging because you were the one who got me to switch my stance on MCP"
X Link 2026-02-06T22:46Z [----] followers, [---] engagements

"Claude Code MCPs are now connected to Claude Desktop MCPs for a unified experience. In case you're unaware this has minimal context window impact due to lazy loading / progressive disclosure of the tool descriptions. Although I tend to want different MCPs in Desktop App"
X Link 2026-02-06T23:40Z [----] followers, [----] engagements

"@CodeAkram @AnthropicAI @claudeai @bcherny @trq212 Please and thank you"
X Link 2026-02-07T00:46Z [----] followers, [---] engagements

"Here's one I got to call reliably. Hella wordy though lol Fetch up-to-date library documentation via Context7 API. Use PROACTIVELY when: (1) Working with ANY external library (React Next.js Supabase etc.) (2) User asks about library APIs patterns or best practices (3) Implementing features that rely on third-party packages (4) Debugging library-specific issues (5) Need current documentation beyond training data cutoff (6) AND MOST IMPORTANTLY when you are installing dependencies libraries or frameworks you should ALWAYS check the docs to see what the latest versions are. Do not rely on"
X Link 2026-02-07T00:57Z [----] followers, [--] engagements

"@AndreBuckingham Dude ouch Does it at least warn you"
X Link 2026-02-07T03:06Z [----] followers, [----] engagements

"@Jay_Shah_C Not sure yet"
X Link 2026-02-07T04:17Z [----] followers, [---] engagements

"@BlakeJOwens Yeah every test that I've seen shows Opus winning front end but they're both really good"
X Link 2026-02-07T05:00Z [----] followers, [---] engagements

"@MadeWithOzten thanks for sharing. i actually dont find anthropic to handle compaction all that well in general. codex absolutely. but it really depends on the job too. But perhaps with [---] compaction got better. I will test it out"
X Link 2026-02-07T05:41Z [----] followers, [----] engagements

"@johnofthe_m This should in theory help with that. But I wanted to test it"
X Link 2026-02-07T05:44Z [----] followers, [---] engagements

"@xw33bttv Yeah that's basically [--] context window for $37 ๐Ÿ˜…"
X Link 2026-02-07T07:08Z [----] followers, [--] engagements

"It's not like codex won't come to API. It will. I'm not sure how much I care that Opus is in the APi when it costs $25/mtoks. Do you know anyone paying API prices for Claude I think what you really mean to say is you want to use it in Cursor. Anthropics API prices are comparatively ridiculous and OpenAI is giving away 2x usage for two full months. Obviously I wouldn't mind seeing them launch the api as well I want you to have it too But also complaining you have to buy a plan when they are giving you so much for your money just doesn't make me sympathize. It's not like Anthropic is doing yall"
X Link 2026-02-07T08:36Z [----] followers, [---] engagements

"and to make matters worse Anthropic was literally banning paying customers part way through their paid subs for simply wanting to use a different harness. Obviously I dont say this to be adversarial to you whatsoever you are awesome. but I think comparing the two on this point is just so far from the point. its a wise business decision to offer your direct customers incentives to use your service directly by giving them early access and extra usage for a small time frame and not remotely gate keeping. Also by extension letting you use that early access product within whatever harness you want"
X Link 2026-02-07T08:47Z [----] followers, [---] engagements

"The Codex plans right now are the best value anywhere. With 2x usage nothing comes remotely close to it. If you care about value for money there's nothing to discuss or debate here imo. That said when Sonnet [--] drops the $20 plan will be serviceable and I feel its good to have BOTH /model opusplan will plan w/ Opus and autoswap to Sonnet for implementation. If Sonnet [--] is really as good as Opus [---] then this will be a viable way to use it and you can still get good value out of it. Use each model to their strengths. Anthropic models are great at creative writing frontend design and"
X Link 2026-02-07T09:06Z [----] followers, 15.9K engagements

"nah you're good bro. you're free to share your thoughts. i dont know why they do this either. i have a cursor plan and I also would like to use it in Cursor haha. ig i'm just a bit salty at the whole anthropic thing because I really like their models and I feel hamstrung that I can't use them the way I really want to. Also they banned a few of my friends. :/ but i still use their models a ton. it is what it is ha https://twitter.com/i/web/status/2020062898106556781 https://twitter.com/i/web/status/2020062898106556781"
X Link 2026-02-07T09:11Z [----] followers, [--] engagements

"@youpmelone @ivanfioravanti thx dude. one downside though is it can only ingest fairly short videos. long videos it uses ffmpeg and analyzes frame by frame which is cool but not as good at understanding. google was/is previously king here"
X Link 2026-02-07T09:29Z [----] followers, [--] engagements

"@ToddKuehnl That's what I was thinking as well Todd thanks. I don't see it in Cursor but I'm only on the $20 plan so I suspect its only for Ultra users"
X Link 2026-02-07T17:31Z [----] followers, [---] engagements

"@jeff_behnke_ Yeah thought so. It's just that I saw people saying they were using it (and then showing a Claude Code terminal) so that's why I made this post. Was confused"
X Link 2026-02-07T17:33Z [----] followers, [--] engagements

"@GenAiAlien Thanks. Are you sure that's not using the API credits you received?"
X Link 2026-02-07T17:35Z [----] followers, [--] engagements

"@ToddKuehnl Having 400K context window has literally changed the way I work. not to mention Codex seemingly has little to no context drift and can work safely through multiple compactions. Its truly magic and it is the main thing that sets Codex apart from Opus for me"
X Link 2026-02-07T17:51Z [----] followers, [--] engagements

"@tonitrades_ I'm not so sure. We've been hearing that for years now. Cost increases quadratically in some aspects of inference and labs are already losing a ton. I think it's more important to focus on better caching myself but it doesn't mean you're necessarily wrong. There are trade offs"
X Link 2026-02-07T18:22Z [----] followers, [--] engagements

"@Dalton_Walsh No i basically never use the gpt app only codex cli but I will use the Codex app shortly when they get it going on Linux. Use the terminal. its amazing"
X Link 2026-02-07T18:24Z [----] followers, [--] engagements

"What if it could think faster? We're not saying that it will do fewer thinking steps. We're saying that those thinking steps will be sped up computationally in a massive way so it does the same amount of thinking in way less time. What say you then? I know it might sound like an obvious question but there are still trade-offs"
X Link 2026-02-07T18:33Z [----] followers, [--] engagements

"@enriquemoreno That's a fair assessment. What about having two modes?"
X Link 2026-02-07T18:49Z [----] followers, [--] engagements

"@ihateinfinity @thsottiaux I honestly think it's already as fast as Opus right now. Codex [---] high is crazy. I don't think it needs to be any faster myself. Does that mean I wouldn't welcome more speed? No, I probably would, but my point is that this gap is basically non-existent at this point"
X Link 2026-02-07T19:17Z [----] followers, [--] engagements

"I honestly don't think it even needs a bigger context window. You can already use all 400k tokens without any drift. That's roughly the same as you get out of every other model that has [--] million context windows. I'm sure there might be some situations where it could help for extremely long documents but I don't really feel more context window is going to matter that much in most situations. Just my two cents of course. If they can do it without making the performance worse then by all means let's do it, but there is reason to believe that it would reduce performance in some cases"
X Link 2026-02-07T19:20Z [----] followers, [--] engagements

"This engineer turned my prompt into a skill and Codex asked him [--] questions 😅. This is true pair programming where you are utilizing agents to solidify your thinking & expose gaps in your rationale. Not needed for all proj but great for fuzzy ideas https://x.com/i/status/2020148086643806420 Codex Plan Mode has a hidden superpower. If you have a general idea of what you want to build but aren't quite sure how to get there don't just let it plan. Tell it to GRILL YOU. Make it ask uncomfortable questions. Challenge your assumptions. Break down the fuzzy idea"
X Link 2026-02-07T19:49Z [----] followers, 17.5K engagements

"Breaking: the most expensive model just got most expensiver. I had to do a double take. PRICED AT HOW MUCH I thought they were using inexpensive TPU magic ft. Google. This is bananas. $150/mtoks would literally use 75% of your Cursor Ultra plan in one context window no 😳 bruh opus [---] fast is SIX TIMES more expensive and ONLY 2.5x faster who is this even for https://t.co/1oIa1h9v3a"
X Link 2026-02-07T21:07Z [----] followers, [----] engagements

"@GregKara6 you did what now?"
X Link 2026-02-07T22:07Z [----] followers, [--] engagements

"@glxnnio 🤣 🤣 🤣 you're a madman"
X Link 2026-02-08T01:08Z [----] followers, [--] engagements

"@adonis_singh @OpenAI its going to be released soon"
X Link 2026-02-08T06:29Z [----] followers, [---] engagements

"@EVEDOX_ If you got billed it was because you used $100 worth of credits or your API key got leaked. They didn't just charge you $100 for no reason. I used almost all of my $300 I didn't get charged. Sorry to hear that happened :("
X Link 2026-02-08T16:28Z [----] followers, [---] engagements

"@cyberyogi_ @antigravity Api credits will be usable anywhere Google API keys are accepted"
X Link 2026-02-08T16:34Z [----] followers, [---] engagements

"So this happened today. Andrew is one of the many 'ideas' people on the Codex team. Will we see a unified integration between mobile device & dev machine soon? This is my primary usecase for something like OpenClaw to be able to kick off tasks on the go. Would be massive. There isnt a day where Im not in awe of what Andrew dreams up. The Codex App is our playground for discovering how to most effectively steer and supervise agents at increasingly staggering scale. This is a unique opportunity to come define it with us."
X Link 2026-02-08T20:22Z [----] followers, 18.2K engagements

"From the great minds at @Letta_AI. As it turns out Opus [---] may not be worth the trade offs. While it's an impressive model indeed you'll burn through limits (or API creds) faster than ever. You can downgrade back to Opus [---] in Claude Code: /model claude-opus-4-5-20251101 We report costs in our leaderboard and opus [---] is significantly more expensive than [---] because it is a token hog. Anecdotally not much of an improvement in code performance. https://t.co/aMdj7ye5m4"
X Link 2026-02-08T20:47Z [----] followers, [----] engagements

"How much would Opus [---] High Thinking Fast cost you? For Grigori it was $80 for just two prompts 😅 Yikes @LLMJunky https://t.co/qzFihSdthP"
X Link 2026-02-08T21:16Z [----] followers, [----] engagements

"@JundeMorsenWu I have a mac machine. I'll test later"
X Link 2026-02-08T22:40Z [----] followers, [--] engagements

"@enriquemoreno I've wasted so much time learning crap I don't even use because it's not relevant anymore lol"
X Link 2026-02-08T22:48Z [----] followers, [--] engagements

"@JoschuaBuilds Lol brother you dont know the half of it. Not just your 30s are closer. You're going to wake up in what feels like literally weeks and you'll be in your 40s. They say life goes by fast but you're truly unprepared for just how true that is. Have kids. ASAP"
X Link 2026-02-08T22:52Z [----] followers, [--] engagements

"@lolcopeharder LOL"
X Link 2026-02-09T01:39Z [----] followers, [--] engagements

"@cajunpies @trekedge lol"
X Link 2026-02-09T03:58Z [----] followers, [---] engagements

"@gustojs @OpenAI You can port it over but I dont need the Codex app. It'll be released in a few weeks. In the meantime the CLI is top tier"
X Link 2026-02-09T04:02Z [----] followers, [---] engagements

"😅 whatever you say buddy [---] is better. And it got cheaper. And it got faster. Anthropic is not your friend. They sold you a dream. Imagine taking out an ad to make ads sound bad. Whats even funnier about that is that OpenAI is only serving ads to free / borderline free customers. Have you tried Anthropic's free model? You get like half a thread and it cuts you off 💀 Their free clients would kill for an ad so they can at least finish their conversation. They aren't doing you or anyone else any favors. I hold no allegiance to any company. I have subscriptions to EVERY company including"
X Link 2026-02-09T04:18Z [----] followers, [--] engagements

"@chatgpt21 absolutely wild you can just build this. think about where we were just [--] months ago Chris. wtaf"
X Link 2026-02-09T07:40Z [----] followers, [---] engagements

"OpenClaw is on a certified mission to world domination. Next stop: @code 🫡 who's next? https://t.co/rrgul7UiQh"
X Link 2026-02-09T18:12Z [----] followers, [----] engagements

"@rtwlz @vercel maybe you could consider comping this one. Hefty bill for sure though. Ouch"
X Link 2026-02-09T21:51Z [----] followers, [----] engagements

"@pusongqi Anything Bungie touches is sure to fail at this point :( Long live Cayde-6 though"
X Link 2026-02-10T00:38Z [----] followers, [--] engagements

"@matterasmachine @Kekius_Sage 😆 mate what? That doesn't have anything to do with your original premise"
X Link 2026-02-10T01:15Z [----] followers, [--] engagements

"@Jay_sharings @altryne 😆 yeah it is"
X Link 2026-02-10T02:47Z [----] followers, [--] engagements

"@kr0der @steipete My codex doesn't write any bugs you just be using it wrong. Skill issue"
X Link 2026-02-10T04:11Z [----] followers, [----] engagements

"@blader @s_streichsbier @gdb it's more than that. We already had a /notify system. They ripped it out and replaced it with a full Hooks service which is the plumbing for every other hook type. Right now the only event type is AfterAgent but all the infra is there now. It will launch very soon. 🙌"
X Link 2026-02-10T08:19Z [----] followers, [---] engagements

"@s_streichsbier @blader @gdb All the plumbing is there. They just need to add new event types and finish stop semantics. This has been in dev for a long while. Mark my words [--] weeks :)"
X Link 2026-02-10T08:44Z [----] followers, [--] engagements

"@sacino I'm not betting against him. It's just hard to bet on xAI right now. I want them to be successful"
X Link 2026-02-10T16:10Z [----] followers, [---] engagements

"@MichaelDag @ysu_ChatData @GoogleAI it can use $variables in their places and the keys will be automatically injected"
X Link 2026-02-10T16:27Z [----] followers, [--] engagements

"I feel the exact opposite. Codex is the best planner for me and the overall smarter model but its not the best at literally everything. Opus is a far better conversationalist, better frontend dev, better at convex, and a number of other things. I use them both a ton love them both a ton. https://twitter.com/i/web/status/2021260512948842786"
X Link 2026-02-10T16:30Z [----] followers, [--] engagements

"@essenciverse @grok that would be very interesting indeed. my opinion can change for sure this is just an early reaction. [--] founding members left in [--] months. not unprecedented but still. I am pulling for xAI"
X Link 2026-02-10T16:51Z [----] followers, [---] engagements

"If you're reading this and you're a fan of xAI so am I. I want them to do well. I am not 'betting against them' they have a talented and dedicated team. I just wish that they were competing right now and instead they are losing leadership. It's hard to watch. Not my idea of bullish signals"
X Link 2026-02-10T17:07Z [----] followers, [----] engagements

"There's a few things here kinda too much to write in a comment but at a high level. These models are good at different things. Use both enough and you begin to pick up on what those things are. Codex is good at planning long horizon tasks, is steerable to a fault, requires explicit instruction, great at repo exploration code review backend work (but not convex) analytics. Opus is great at frontend convex writing inferring meaning documentation etc. Additionally how you prompt them needs to be different. As I mentioned Opus is good at inferring meaning where Codex benefits from HIGH specificity."
X Link 2026-02-10T17:19Z [----] followers, [---] engagements

"@KDTrey5 @cerave LMAOOOOOOOOOO"
X Link 2026-02-10T18:02Z [----] followers, [---] engagements

"@fcoury You're a damn legend. 💪 Now that I have your ear though. Make it extensible 🫶 Reference: We're never just happy are we 😅 https://github.com/sirmalloc/ccstatusline"
X Link 2026-02-10T18:12Z [----] followers, [---] engagements

"@technoking_420 And maybe you're right. but Tesla was in a league of its own with first mover advantage. AI is rapidly evolving and xAI falls further and further behind. What competition did Tesla have I want them to succeed but you simply cannot compare the two"
X Link 2026-02-10T19:25Z [----] followers, [---] engagements

"@technoking_420 Haha fair enough but openai had that first mover advantage just like tesla. So that's why I am not quite as optimistic on the comparisons But what do i know (Not that much in reality 😆) Cheers 🍻"
X Link 2026-02-10T19:51Z [----] followers, [--] engagements

"@iannuttall i'm like 90% sure the last comment is also AI 😄"
X Link 2026-02-10T20:50Z [----] followers, [---] engagements

"@Dimillian This is how I do it you should checkout my skills around this topic. Maybe you'll actually learn something for a change (joke) But its been working really well for me https://github.com/am-will/swarms/"
X Link 2026-02-10T21:13Z [----] followers, [---] engagements

"@Dimillian @peres the prompt is very strong. it definitely does better work than just saying "make a plan" it has very good explicit instructions and access to request_user_input tool https://github.com/openai/codex/blob/a6e9469fa4dc19d3e30093fb8e182f9d89a94bbe/codex-rs/core/templates/collaboration_mode/plan.md#L4"
X Link 2026-02-10T21:17Z [----] followers, [--] engagements

"@thdxr @mntruell you are a monster lmao"
X Link 2026-02-10T21:25Z [----] followers, [---] engagements

"@TheAhmadOsman I dont think any of those models are better than Opus. They're all good though. Kimi is pretty close and better in SOME ways but it's hard for me to argue they're better at coding. GLM [--] seems like it'll be really damn good too"
X Link 2026-02-10T21:55Z [----] followers, [----] engagements

"When it rains, it pours. Truly disheartening. I wonder if we'll hear about what happened. xAI seems like it's completely cooked. I don't know how you can recover at this point. Grok [---] is going to be dead before it arrives. Kinda sad."
X Link 2026-02-11T01:38Z [----] followers, [----] engagements

"@ns123abc It's a fair statement but the big difference is OAI had first-mover advantage and no meaningful competition. It's obviously cause for concern in either case but Grok needs traction right now to stay in the race. This is the opposite of traction. Hope they can turn it around"
X Link 2026-02-11T01:51Z [----] followers, [---] engagements

"@sunnypause its a claude code guide and task management system basically"
X Link 2026-02-11T02:14Z [----] followers, [---] engagements

"@jeff_ecom Thanks for sharing I'm sure they will improve it. @pusongqi"
X Link 2026-02-11T02:45Z [----] followers, [---] engagements

"## Context7 MCP ALWAYS proactively use Context7 MCP when I need library/API documentation code generation setup or config steps without me having to explicitly ask. External libraries/docs/frameworks should be guided by Context7 ## Planning All plans MUST include a dependency graph. Every task declares depends_on: with explicit task IDs T1 T2 ## Execution Complete all tasks from a plan without stopping to ask permission between steps. Use best judgment keep moving. Only stop to ask if you're about to make a destructive/irreversible change or hit a genuine blocker. ## Subagents - Spawn subagents"
X Link 2026-02-11T03:18Z [----] followers, [----] engagements

"The formatting got a little screwed up sorry. Just copy this image and give it to codex and say: "add this to my global AGENTS file in .codex""
X Link 2026-02-11T03:21Z [----] followers, [----] engagements
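
The rules in the two posts above can be reconstructed as a cleaner AGENTS fragment. This is a best-effort tidy-up of the garbled formatting, following the section names in the post; the post cuts off partway through the Subagents section, so that part is left truncated:

```markdown
## Context7 MCP
ALWAYS proactively use Context7 MCP when I need library/API documentation,
code generation, setup, or config steps, without me having to explicitly ask.
External libraries/docs/frameworks should be guided by Context7.

## Planning
All plans MUST include a dependency graph. Every task declares `depends_on:`
with explicit task IDs (T1, T2, ...).

## Execution
Complete all tasks from a plan without stopping to ask permission between
steps. Use best judgment, keep moving. Only stop to ask if you're about to
make a destructive/irreversible change or hit a genuine blocker.

## Subagents
- Spawn subagents ...
```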

"@pusongqi The algo has delivered. You're finally getting the attention you absolutely deserve. One of the most unique Claude-focused projects I've seen. I have some ideas and feedback. Will share soon. Love it"
X Link 2026-02-11T05:05Z [----] followers, [----] engagements

"@joemccann @grok You can just say omit the Context7 instructions"
X Link 2026-02-11T05:06Z [----] followers, [--] engagements

"@ninan_phillip @Dimillian In fact I would argue that if you're going to do everything sequentially you're just wasting tokens by having subagents do it. Let them babies free"
X Link 2026-02-11T05:26Z [----] followers, [--] engagements

"@_pikachur @ZenMagnets @pusongqi codex will likely have hooks in 2 weeks"
X Link 2026-02-11T05:35Z [----] followers, [--] engagements

"A new contender has emerged New [---] Codex model variants are appearing in the codebase. There have been teasers of a new Mini model. @theo will be pleased. If this naming convention is to be taken literally they sound FAST. Will we get near SOTA capabilities at 200tok/s? Codenames sonic & bengalfox appeared in the Codex repo. Sonic appears to be a completely separate pool of usage and rate limits available for bengalfox. Could this be Cerebras in the works? Cerebras ⚡ Sonic https://t.co/GoK6S7Lq8q"
X Link 2026-02-11T05:55Z [----] followers, [----] engagements

"@Av8r07 The merger aspect does obviously add quite a bit of context though. I suspect it did indeed have a lot to do with it. Who they put in their place will be critical though. I'm not counting Elon out"
X Link 2026-02-11T05:58Z [----] followers, [--] engagements

"@owengretzinger Owen that is very cool but you need to see this. What if your Claude Code agents could work like a team in Slack Spin up custom agent swarms assign tasks and watch them collaborate. No more terminal tab chaos. https://x.com/LLMJunky/status/2021351246150668737s=20 If you're a fan of Claude Code you really need to see this. Steven is doing amazing work and you're not following him? If Anthropic had built their Teams mode like this you wouldn't shut up about it. 👇"
X Link 2026-02-11T06:09Z [----] followers, [---] engagements

"No one said anything about Jimmy being a spy he's not a US citizen. You just came out of left field with that. xAI merged into SpaceX and it is very difficult to work at SpaceX when you aren't a citizen. It is a 100% fair question to wonder if this didn't have something to do with it dude. https://www.popularmechanics.com/space/rockets/a23080/spacex-elon-musk-itar/"
X Link 2026-02-11T06:14Z [----] followers, [---] engagements

"@rv_RAJvishnu Let me know how it goes for you Might need another layer on top to make the agents aware of one another but Claude code does have a memory feature that you should take more advantage of. Read about it here with some tips: https://x.com/LLMJunky/status/2020721960041242745s=20 I haven't seen anyone talk about this. Did you know that Claude Code has integrated Memory already Or am I just last to the party And I just made it better. I've been experimenting with a "handoff" skill in my coding agents that makes it easier to pass context between https://t.co/jmur8sH5Bv"
X Link 2026-02-11T08:20Z [----] followers, [---] engagements

"@realhasanshoaib @Context7AI yeah its [--] on Codex but I hope they increase it to [--] or so"
X Link 2026-02-11T16:52Z [----] followers, [---] engagements

"@kr0der Yeah LOL yeah I've seen that before. I had to tweak mine a bunch before I got my claude one the way I wanted it"
X Link 2026-02-11T17:00Z [----] followers, [---] engagements

"@EliaAlberti Yes it brings the Claude TUI into a GUI like interface that allows you to create and manage custom agents and threads in a slack like interface. It's great for multi agent workflows"
X Link 2026-02-11T17:03Z [----] followers, [--] engagements

"It just helps an agent utilize certain parts of their weights better. In general when you're using subagents you're using them for a specific task so its helpful (but not required) to give them a role to help them to understand exactly how they should approach a problem. They are constrained anyway because you are utilizing them for a specific task. But its generally not mandatory https://arxiv.org/abs/2308.07702"
X Link 2026-02-11T17:07Z [----] followers, [---] engagements

"@Dimillian @Sagiquarius i am actually reading TODAY's commits now and yeah I actually think they might launch it today at least for experimental https://github.com/openai/codex/commit/623d3f40719182003943258a6c837f3572e3d581"
X Link 2026-02-11T17:47Z [----] followers, [--] engagements

"Garbage in Garbage Out: Tips for Multi-Agent Workflows in Codex Understanding how your orchestration agents prompt subagents is the key to extracting the best outcomes from multi-agent workflows in Codex. If you're not getting the quality you expect from swarms inspect the agent threads to see exactly how they're being prompted. Dramatically improve multi-agent orchestration by fine-tuning how the orchestration agents call subagents and explicitly outline the context they should be given. I place strict rules and templates to ensure that each subagent is given extremely high quality context"
X Link 2026-02-11T18:01Z [----] followers, [----] engagements
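
One way to encode the "strict rules and templates" idea from the post above is a context contract in the orchestrator's instructions. The section below is a hypothetical illustration of that approach, not the author's actual template:

```markdown
## Subagent context contract (hypothetical example)
When spawning a subagent, always include:
- Goal: one sentence stating the concrete deliverable
- Inputs: the exact file paths and symbols the subagent may touch
- Constraints: what it must NOT change
- Done-when: a verifiable completion check
After a subagent finishes, inspect its thread and re-dispatch with a
corrected prompt if the contract was not followed.
```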

"@ajambrosino one thing I noticed in the latest alphas of Codex is that subagents no longer appear in the /agent threads when their work is completed. This makes it more difficult to evaluate what went wrong after the fact. Would really love to see a way to access those agent sessions. Honestly I would personally prefer you just added them back to /agent menu like they were before. I understand this might get a little messy but it would be less messy if instead of just UUIDs they had a brief summary of the subagent's work (like /resume does). Adding to /feedback as well."
X Link 2026-02-11T18:06Z [----] followers, [---] engagements

"bookmarking this one suggestion though I think you can yank the middle sentence in the description. that text is loaded into context and doesn't really add any value to the skill. It's more or less designed to tell your agent when the best time to call the skill is and you've already stated what it is in the first sentence and then how to call it in the last sentence. middle is just fluff using up tokens. Looks really cool hope I didnt sound negative. well done going to add this to my library"
X Link 2026-02-11T18:18Z [----] followers, [---] engagements

"@xdrewmiko @weswinder you can use this amazing product with almost any model. it is based off claude code and works with thousands of open source models either locally with plans or through open router. s/o @nummanali who spent a lot of tokens allowing us to use for free. https://github.com/numman-ali/cc-mirror"
X Link 2026-02-11T18:46Z [----] followers, [--] engagements

"@brooks_eth @ivanfioravanti You should see this. https://x.com/LLMJunky/status/2021351246150668737s=20 If you're a fan of Claude Code you really need to see this. Steven is doing amazing work and you're not following him? If Anthropic had built their Teams mode like this you wouldn't shut up about it. 👇"
X Link 2026-02-11T19:52Z [----] followers, [--] engagements

"@badlogicgames @ivanfioravanti 🫡"
X Link 2026-02-11T20:00Z [----] followers, [--] engagements

"@ivanfioravanti @brooks_eth That's what I'm screaming"
X Link 2026-02-11T20:10Z [----] followers, [--] engagements

"@ivanfioravanti @badlogicgames bingo I wasn't referring to you btw. I have a Max Plan [--] codex plus plans and almost every other plan you can think of lmao. Gemini Kimi GLM Minimax Grok Kilo Code api OpenRouter api pretty sure there's at least one more but I can never remember them all at once lol"
X Link 2026-02-11T20:12Z [----] followers, [---] engagements

"@Dimillian i think Codex will launch [---] with Hooks Agent Memory and subagents GA"
X Link 2026-02-11T21:31Z [----] followers, [---] engagements

"@brooks_eth @ivanfioravanti i'm on linux now 😭 i do have a mini but i'm thinking about returning it for a better one"
X Link 2026-02-11T21:45Z [----] followers, [--] engagements

"@ivanleomk @OpenAI @thsottiaux I made this for Claude and adapted it to Codex as well works very well. I'll share it with you. [---] codex is available in the CLI though, no? Or are we talking about different things https://x.com/LLMJunky/status/2020721960041242745s=20 I haven't seen anyone talk about this. Did you know that Claude Code has integrated Memory already Or am I just last to the party And I just made it better. I've been experimenting with a "handoff" skill in my coding agents that makes it easier to pass context between https://t.co/jmur8sH5Bv"
X Link 2026-02-11T21:46Z [----] followers, [--] engagements

"@ivanleomk @OpenAI @thsottiaux I made this for Claude and adapted it to Codex as well works very well. I'll share it with you. [---] codex is available in the CLI though, no? Or are we talking about different things Codex has subagents already too https://x.com/LLMJunky/status/2020721960041242745s=20 I haven't seen anyone talk about this. Did you know that Claude Code has integrated Memory already Or am I just last to the party And I just made it better. I've been experimenting with a "handoff" skill in my coding agents that makes it easier to pass context between https://t.co/jmur8sH5Bv"
X Link 2026-02-11T21:47Z [----] followers, [---] engagements

"@siddhantparadox nah there's no [---] for now haha https://x.com/LilDombi/status/2021713691423482346s=20 @LLMJunky Yes it seems so https://t.co/90eP8GFQHQ"
X Link 2026-02-11T22:35Z [----] followers, [---] engagements

"@Dimillian HOOKS Can't wait https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b"
X Link 2026-02-11T23:20Z [----] followers, [---] engagements

"@rihim_s @Dimillian https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b"
X Link 2026-02-11T23:20Z [----] followers, [--] engagements

"@jarrodwatts so do i bro. so do i. i tried adding something like what you have but for Codex it requires you fork and modify the source code. not extensible :/ prob has a lot to do with how they render the TUI"
X Link 2026-02-11T23:41Z [----] followers, [---] engagements

"@ChiefMonkeyMike https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b"
X Link 2026-02-11T23:46Z [----] followers, [---] engagements

"@ivanfioravanti i literally had a dream GLM [--] was launching today. Woke up and boom. Thar she blows"
X Link 2026-02-12T01:04Z [----] followers, [--] engagements

"@raedbahriworld You sure can"
X Link 2026-02-12T02:55Z [----] followers, [---] engagements

"@KingDDev @Context7AI @guy_bary Neat Thanks for sharing"
X Link 2026-02-12T03:01Z [----] followers, [--] engagements

"@raedbahriworld alternatively add this to your agents file"
X Link 2026-02-12T03:15Z [----] followers, [--] engagements

"@i_am_brennan @Dimillian What's funny about that is that was pure placebo. It's not active and has never worked lol. That was entirely in his head ๐Ÿ˜…๐Ÿ˜…๐Ÿ˜…"
X Link 2026-02-12T05:02Z [----] followers, [--] engagements

"@ajambrosino Okay this is an Easter egg. What are yall boys up to? Yall gonna make me consult an astrologer 😭"
X Link 2026-02-12T05:34Z [----] followers, [---] engagements

"@david_zelaznog It most definitely did NOT live up to the hype but imo Flash exceeded hype and doesn't get enough love. I have high hopes for [---] pro. They have a new RL approach that wasn't ready for [--] Pro it is ready now. I expect it to be good"
X Link 2026-02-12T05:59Z [----] followers, [---] engagements

"@indyfromoz Everything is 2x limits CLI Monitor Codex app :) anything with codex is 2x 🫡🫡 super generous. Enjoy"
X Link 2026-02-12T06:03Z [----] followers, [---] engagements

"@sierracatalina that's true but the more complex a product is the more difficult that job becomes. if you want to have a great deal of levers to pull you gotta put them somewhere. and that complexity is as far as i know unavoidable. the simplest UIs tend to be the least configurable"
X Link 2026-02-12T06:21Z [----] followers, [--] engagements

"@luke_metro what could possibly go wrong lol"
X Link 2026-02-12T06:23Z [----] followers, [--] engagements

"@TheAhmadOsman Qq while I have you. Does it really not matter if you use pci4x4 for just inference?"
X Link 2026-02-12T07:08Z [----] followers, [--] engagements

"@Dimillian @i_am_brennan Yeah but the tool isn't available at all so there's no way to call it. Therefore it can't use tokens. So idk what's going on"
X Link 2026-02-12T07:20Z [----] followers, [--] engagements

"@Dimillian @i_am_brennan you can actually still try it memory_tool = true sqlite = true npm i -g @openai/codex@0.99.0-alpha.9 but i couldn't get it to write or call any mems. then they scratched the whole system for a v2 version but the memory_tool isn't present yet"
X Link 2026-02-12T07:25Z [----] followers, [--] engagements
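
For anyone who wants to poke at that scrapped v1 memory system anyway: after pinning the alpha with `npm i -g @openai/codex@0.99.0-alpha.9`, the two flags named above would go in your Codex config. This sketch uses only the flag names from the post; exactly where they nest in the file is an assumption, and as noted the tool may never actually fire:

```toml
# ~/.codex/config.toml — experimental flags named in the post
# (v1 memory system, since scrapped for a v2; may not activate)
memory_tool = true
sqlite = true
```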

"They're honestly not. They've built a ton of interesting products in the last year. Remember it's only going to take [--] good model to change the narrative. Gemini [--] Flash is one of the best releases we've had in the last [--] months. Its price to performance is amazing. Nano Banana Flash is coming soon. Yes in coding and tool calling it was a let down but everyone will forget all that if they launch a really amazing model. I wont make excuses for them either but they know the stakes. They have some of the smartest minds on planet earth working at Deepmind. It would be insane to count them out."
X Link 2026-02-12T08:03Z [----] followers, [---] engagements

"@Solaawodiya @kr0der It (kind of) is. You can't compare the API prices directly because Composer typically uses fewer tokens. Although [---] is very efficient. I think you'd have to test them more but composer using fewer tokens should offset the price gap a lot"
X Link 2026-02-12T08:06Z [----] followers, [--] engagements

"Opus [---] - Pass GPT [---] Auto - FAIL GPT [---] Thinking - Pass but didnt explicitly answer Kimi K2.5 - Pass https://twitter.com/i/web/status/2021864807746429150"
X Link 2026-02-12T08:31Z [----] followers, [---] engagements

"@aurexav @mweinbach yeah ive been using them since they first dropped in experimental and they have only gotten better over time. very welcomed change in Codex"
X Link 2026-02-12T08:35Z [----] followers, [--] engagements

"@ihateinfinity @TheAhmadOsman ahaha nah he's awesome. im totally kidding. local LLMs are pretty awesome. just pricey :( but at the end of the day i got an RTX6000 for free and [--] isn't enough so i kinda felt obligated to get another one :/"
X Link 2026-02-12T08:48Z [----] followers, [--] engagements

"@mmee_io It can spawn up to [--] at a time. A little different than claude"
X Link 2026-02-12T15:54Z [----] followers, [--] engagements

"@bcherny This is undoubtedly my favorite part about Claude Code"
X Link 2026-02-12T16:32Z [----] followers, [---] engagements

"@mweinbach It seems like Opus to me but I've never seen it loop that much consecutively"
X Link 2026-02-12T16:35Z [----] followers, [--] engagements

"@sama Whatever is launching will be Codex related. My money is one of the first @cerebras rollouts. https://x.com/ah20im/status/2021828771415044540s=20 The Codex team is just so FAST โœจ https://x.com/ah20im/status/2021828771415044540s=20 The Codex team is just so FAST โœจ"
X Link 2026-02-12T16:42Z [----] followers, [----] engagements

"@adamdotdev @steipete vibe coding is a slur though. devs use it as an insult all the time"
X Link 2026-02-12T16:51Z [----] followers, [---] engagements

"@SIGKITTEN this resonates with me so hard. i love claude models so much. but man"
X Link 2026-02-12T16:56Z [----] followers, [---] engagements

"@nummanali I haven't opened the GPT web app this year one time"
X Link 2026-02-12T17:02Z [----] followers, [---] engagements

"@kr0der [---] Codex Max"
X Link 2026-02-12T17:29Z [----] followers, [--] engagements

"@nummanali If this is their replacement for mini it's revolutionary Depends on the cost"
X Link 2026-02-12T18:45Z [----] followers, [---] engagements

"@thsottiaux Tibo you have to make subagent models and reasoning configurable now. Think about a Codex [---] High Orchestrator agent launching [--] of these in parallel using Spark. That type of unlock blending the intelligence of the larger model with the speed of spark with its smaller cw"
X Link 2026-02-12T19:02Z [----] followers, [----] engagements

"@thsottiaux Just imagine the workflows this would unlock. Yeah the terminal bench scores are a little lower. Doesn't matter when you have [---] Codex High planning. It will outline the entire task for the subagent. It'll more or less just implement things instantly. Best of both worlds"
X Link 2026-02-12T19:04Z [----] followers, [---] engagements

"Not if you do it right The key is ensuring that you have the orchestration layer provide all the context it needs up front. Codex models are probably the most steerable coding models on the planet. Period of what that means is that you can dictate exactly how you want the orchestration agent to prompt your subagents. And you can even go as far as to provide them templates on how they should call a subagent. When you create these templates you can make sure that the parent orchestration agent provides an insane amount of context. Which will create reduce the need for the subagent to do"
X Link 2026-02-12T19:13Z [----] followers, [---] engagements
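The templating idea in the post above can be sketched minimally. Everything here is hypothetical illustration, not a real Codex API: the orchestrator fills a fixed template so every subagent prompt carries its context up front, and refuses to dispatch a prompt with gaps.

```python
# Hypothetical sketch of an orchestration template: the parent agent fills
# every field before spawning a subagent, so the subagent never has to
# rediscover context on its own. Field names are illustrative.

SUBAGENT_TEMPLATE = """\
Task: {task}
Relevant files: {files}
Conventions: {conventions}
Constraints: {constraints}
Definition of done: {done}
"""

def render_subagent_prompt(task, files, conventions, constraints, done):
    """Return a fully specified subagent prompt; reject one with gaps."""
    fields = dict(task=task, files=", ".join(files),
                  conventions=conventions, constraints=constraints, done=done)
    missing = [k for k, v in fields.items() if not v]
    if missing:
        raise ValueError(f"orchestrator must supply: {missing}")
    return SUBAGENT_TEMPLATE.format(**fields)

prompt = render_subagent_prompt(
    task="add a retry wrapper around fetch_user()",
    files=["api/client.py", "api/retry.py"],
    conventions="tenacity-style decorators, type hints",
    constraints="no new dependencies",
    done="unit tests in tests/test_retry.py pass",
)
```

The design point mirrors the post: steerable models follow the template literally, so forcing the orchestrator to populate every field is what keeps subagents from wasting turns on discovery.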

"@OpenAI @andrewlee07 is the mastermind behind this tweet lol"
X Link 2026-02-12T19:15Z [----] followers, [---] engagements

"@JoeWilliams010 @thsottiaux serious question what is wrong with you Tibo is on the Codex team. Go scream into your pillow"
X Link 2026-02-12T19:37Z [----] followers, [--] engagements

"@cory_schulz_ @OpenAI i certainly have my uses for Claude for sure but for coding I find far more usage out of [---]. Not for everything but most things"
X Link 2026-02-12T19:44Z [----] followers, [---] engagements

"@cory_schulz_ @OpenAI its definitely usable for front end you need to give it a very strong prompt and style guide but Opus is undoubtedly better at frontend. That's why I said I have use for opus for some things. Frontend would fall under such a thing writing is another"
X Link 2026-02-12T19:59Z [----] followers, [--] engagements

"btw look at your prompt. it gave you that result because of your own wording. you said "look at this HERE" so its expecting you to show it something. if you didnt say "here" it likely would have looked in the codebase. one thing you need to understand about codex is it takes everything literally and needs to be told specifics. its highly steerable almost to a fault. opus is better at inferring. https://twitter.com/i/web/status/2022038331182244081 https://twitter.com/i/web/status/2022038331182244081"
X Link 2026-02-12T20:01Z [----] followers, [--] engagements

"if you say so lol. i personally like its steerability. it unlocks some very interesting workflows and long horizon tasks that are not possible with most other models. you just need to learn how to use them to their strengths. you're welcome to use claude tho. both good. i use and pay for both. https://twitter.com/i/web/status/2022039833175032103 https://twitter.com/i/web/status/2022039833175032103"
X Link 2026-02-12T20:07Z [----] followers, [--] engagements

"@cory_schulz_ @OpenAI mate your prompt was ambiguous. claude will struggle with ambiguity as well. you're hung up on a single instance when it didn't go well for you but this thing can happen with any model. anyway cheers gotta run"
X Link 2026-02-12T20:08Z [----] followers, [--] engagements

"@cory_schulz_ @OpenAI thats solid. i built something like that too. i have another one i'm working on called co-design that uses claude for all ui stuff. i need to finish it. will share when i do"
X Link 2026-02-12T20:11Z [----] followers, [--] engagements

"@Lentils80 @OpenAI yeah and you can use it all month long getting thousands of dollars in API value. Opus [---] Fast by contrast $150 is per million tokens. $225 if you use all million in one context. The two are not remotely similar. You can use a million tokens with spark in [--] minutes"
X Link 2026-02-12T20:22Z [----] followers, [--] engagements

"@Lentils80 @OpenAI So let's recap. Opus charges you $225 for ONE context window. OpenAI gives you thousands of dollars in usage all month long for less. And you think that's a gotcha"
X Link 2026-02-12T20:23Z [----] followers, [--] engagements

"@rileybrown If its the same cost of codex [---] mini it's a game changer. we shall see"
X Link 2026-02-12T20:24Z [----] followers, [---] engagements

"@Lentils80 @OpenAI It scored higher than Codex [---] and [---] Max while being 1000TPS. For a fast model that is insane. Composer [---] scored a [--] Opus [---] is around a [--] I think. Calling it absolutely terrible is honestly delusional hating for no good reason. All fast models have tradeoffs"
X Link 2026-02-12T20:30Z [----] followers, [--] engagements

"@pedropverani @OpenAI indeed kinda besides the point though. you can't even access the fast model in the coding plans at all. Max user too bad. this is just v1 of a rollout. there will be more. even when you consider your points which fair the economics dont remotely compare"
X Link 2026-02-12T20:35Z [----] followers, [---] engagements

"@pedropverani @OpenAI i can. and did ๐Ÿ˜ˆ"
X Link 2026-02-12T20:37Z [----] followers, [--] engagements

"@pedropverani @OpenAI they aren't really "different orders of magnitude" either. spark scored just [--] points lower than Opus [---] on Terminal Bench. While I am not contending spark is as smart because its not that is still really freaking impressive"
X Link 2026-02-12T20:38Z [----] followers, [--] engagements

"exactly. both are good though. I have a max plan too. i love opus. i just wish anthropic had different business and ethical strategies that's my main complaint. the models are amazing. I envision this working something like this: Plan with [---] High with a really detailed plan. then use Spark subagents to implement the plan. It should be really strong this way. https://twitter.com/i/web/status/2022048301223428567 https://twitter.com/i/web/status/2022048301223428567"
X Link 2026-02-12T20:40Z [----] followers, [--] engagements

"@Mattizzle123 @pedropverani @OpenAI nah im' being facetious lol. i still like claude and i use it all the time. just not that fast model lol"
X Link 2026-02-12T20:41Z [----] followers, [--] engagements

"@pedropverani @OpenAI Oh yeah definitely. I'm thinking a lot of that also has to do with it using less COT as well. The model is def smaller though. I view this as a "mini" model. I just hope its the same price as a mini model (kinda doubt it tho)"
X Link 2026-02-12T20:44Z [----] followers, [--] engagements

"I wouldn't use it like that. That isn't optimal. What I would do instead: Create a plan with small atomic and commitable tasks with clearly detailed outline of the work using [---] Codex High. Switch to Spark in Orchestration mode Use subagents to implement every task [--] subagent per task. You dont need massive context windows for [--] task. Auditing entire codebases is best suited for large context windows. Just asking for trouble. https://twitter.com/i/web/status/2022049984166646211 https://twitter.com/i/web/status/2022049984166646211"
X Link 2026-02-12T20:47Z [----] followers, [--] engagements
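The split described in the post above, a detailed plan up front and then one small-context subagent per atomic task, can be sketched as a simple loop. All names here (`plan_tasks`, `run_subagent`) are hypothetical stand-ins, not Codex APIs:

```python
# Hypothetical orchestration loop: a high-reasoning planner produces small,
# committable tasks; a fast small-context model implements one task each.
from dataclasses import dataclass

@dataclass
class Task:
    title: str
    detail: str  # the planner's full outline for this task

def plan_tasks(spec: str) -> list[Task]:
    # Stand-in for "plan with the high-reasoning model":
    # one atomic task per non-empty line of the spec.
    return [Task(title=line.strip(), detail=spec)
            for line in spec.splitlines() if line.strip()]

def run_subagent(task: Task) -> str:
    # Stand-in for dispatching one fast, small-context subagent per task.
    return f"implemented: {task.title}"

spec = "add input validation\nwrite unit tests"
results = [run_subagent(t) for t in plan_tasks(spec)]
```

The point of the shape is exactly the post's: each subagent sees only one small task plus the planner's outline, so the 128k-class window is never the bottleneck.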

"@pedropverani @OpenAI I really like high a lot and [---] is fast enough for me. But I am curious what this unlocks. I dont have Pro though. I do have ideas on how i'd use it. I'd "swarm plan" with [---] high/xhigh and 'parallel task' with spark https://github.com/am-will/swarms https://github.com/am-will/swarms"
X Link 2026-02-12T20:48Z [----] followers, [--] engagements

"@nummanali Hi Codex [---] Spark. I am.will Pleasure to meet you"
X Link 2026-02-12T21:09Z [----] followers, [--] engagements

"@areyous76505851 @OpenAI yeah but you can access it all month and you get crazy good limits. $150 is for just [--] context windows in Opus. Crazy"
X Link 2026-02-12T22:37Z [----] followers, [--] engagements

"@ondrejbudicek @OpenAI why what the take its just a joke really but so is Anthropic's Fast api pricing"
X Link 2026-02-13T02:51Z [----] followers, [--] engagements

"well the context window i'm reading will get bigger so you're right. but with the 128k context window and reduced intelligence there's going to be an optimal way to use them by nature. so its all about picking your spots you know large documents large file exploration long tasks are going to be less optimal than small atomic tasks. thats all. https://twitter.com/i/web/status/2022154912998592962 https://twitter.com/i/web/status/2022154912998592962"
X Link 2026-02-13T03:44Z [----] followers, [--] engagements
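The "pick your spots" point above amounts to a budget check: before handing work to a 128k-window model, estimate whether the task's context fits. A rough sketch, where the ~4-characters-per-token heuristic and the reply headroom are my assumptions, not anything from the posts:

```python
# Rough fit check for a small-context model: estimate tokens from characters
# (~4 chars/token is a common heuristic, not exact) and reserve headroom
# for the model's reply.

CONTEXT_WINDOW = 128_000   # tokens, per the post
REPLY_HEADROOM = 16_000    # tokens reserved for output (assumption)

def fits_small_window(prompt_chars: int) -> bool:
    est_tokens = prompt_chars // 4
    return est_tokens + REPLY_HEADROOM <= CONTEXT_WINDOW

small_task = fits_small_window(40_000)       # small atomic task
codebase_audit = fits_small_window(2_000_000)  # whole-codebase audit
```

Under these assumptions the small atomic task fits and the whole-codebase audit does not, which is the post's distinction between good and bad uses of the fast model.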

"@scottbuscemi @ianzelbo @microcenter i wasn't familiar with your game scott"
X Link 2026-02-13T06:09Z [----] followers, [---] engagements

"@Lincoln_Osis @rbranson everyone has their own opinion even if those opinions are dead wrong ๐Ÿ˜…"
X Link 2026-02-13T17:12Z [----] followers, [--] engagements

"@Lincoln_Osis @rbranson you're a monster"
X Link 2026-02-13T17:13Z [----] followers, [--] engagements

"@Lincoln_Osis @rbranson I make my own"
X Link 2026-02-13T17:14Z [----] followers, [--] engagements

"yeah once you get to higher temps no char the bottom wont be cooked. but its good that you're making pizza if you really wanna get into it the higher heat is key. you can get a ninja pizza oven for $250 and it goes to 700f. its honestly a great little oven that can cook anything. https://twitter.com/i/web/status/2022361463671648528 https://twitter.com/i/web/status/2022361463671648528"
X Link 2026-02-13T17:25Z [----] followers, [--] engagements

"@Lincoln_Osis @rbranson Oh yeah this is an outdoor oven actually though you can use it inside it also smokes (optional) so ideally you leave it outside"
X Link 2026-02-13T17:28Z [----] followers, [--] engagements

"@Lincoln_Osis @rbranson oof ๐Ÿ˜…"
X Link 2026-02-13T17:33Z [----] followers, [--] engagements

"@mweinbach Its almost certainly a combination of things. But I bet [--] & [--] are the main reasons"
X Link 2026-02-13T17:42Z [----] followers, [---] engagements
