# @LLMJunky am.will
am.will posts on X most often about open ai, if you, context window, and anthropic. They currently have [-----] followers and [---] posts still getting attention, totaling [------] engagements in the last [--] hours.
### Engagements: [------] [#](/creator/twitter::2004675530935926784/interactions)

- [--] Week [---------] +46%
- [--] Month [---------] +19,390%
### Mentions: [---] [#](/creator/twitter::2004675530935926784/posts_active)

- [--] Week [---] +41%
- [--] Month [---] +1,202%
### Followers: [-----] [#](/creator/twitter::2004675530935926784/followers)

- [--] Week [-----] +22%
- [--] Month [-----] +3,266%
### CreatorRank: [-------] [#](/creator/twitter::2004675530935926784/influencer_rank)

### Social Influence
**Social category influence**
[technology brands](/list/technology-brands) 28.2% [stocks](/list/stocks) 2.91% [finance](/list/finance) 2.62% [social networks](/list/social-networks) 2.03% [automotive brands](/list/automotive-brands) 0.58% [celebrities](/list/celebrities) 0.29% [cryptocurrencies](/list/cryptocurrencies) 0.29%
**Social topic influence**
[open ai](/topic/open-ai) #264, [if you](/topic/if-you) #3982, [context window](/topic/context-window) #52, [anthropic](/topic/anthropic) #1361, [claude code](/topic/claude-code) #646, [in the](/topic/in-the) 4.07%, [ai](/topic/ai) 3.2%, [how to](/topic/how-to) 2.62%, [sama](/topic/sama) #1270, [xai](/topic/xai) 2.33%
**Top accounts mentioned or mentioned by**
[@openai](/creator/undefined) [@microcenter](/creator/undefined) [@kr0der](/creator/undefined) [@dimillian](/creator/undefined) [@cory_schulz_](/creator/undefined) [@thsottiaux](/creator/undefined) [@pedropverani](/creator/undefined) [@sama](/creator/undefined) [@ivanfioravanti](/creator/undefined) [@nummanali](/creator/undefined) [@cerebras](/creator/undefined) [@aurexav](/creator/undefined) [@pusongqi](/creator/undefined) [@steipete](/creator/undefined) [@vaultminds](/creator/undefined) [@jackedtechbro](/creator/undefined) [@lincolnosis](/creator/undefined) [@rbranson](/creator/undefined) [@andrewlee07](/creator/undefined) [@enriquemoreno](/creator/undefined)
**Top assets mentioned**
[Alphabet Inc Class A (GOOGL)](/topic/$googl) [Tesla, Inc. (TSLA)](/topic/tesla) [Nokia Corporation (NOK)](/topic/$nok) [Microsoft Corp. (MSFT)](/topic/microsoft)
### Top Social Posts
Top posts by engagements in the last [--] hours
"If you're using Codex Subagents I recommend you put this in your Sometimes it will yield before the outputs are ready. It says it'll get back to you but it's ๐งข http://AGENTS.md http://AGENTS.md"
[X Link](https://x.com/LLMJunky/status/2014450361885495350) 2026-01-22T21:29Z [----] followers, 16.9K engagements
"Buried in Codex 0.9.0 release notes is a gem I haven't seen anyone discuss. Connectors are coming. Slack. Github. Notion. Linear. Google Apps. Vercel. Stripe. Airtable. Replit. Lovable. Perhaps all of these and many more. It does not say which Connectors are coming so it is not yet clear if Codex will share the entire library of ChatGPT's Apps but certainly some of them will make it into the CLI soon. Digging through the repo it seems you'll be able to call Connectors directly with $mentions just like skills. Some of the ways you could potentially use something like this: $slack post this"
[X Link](https://x.com/LLMJunky/status/2015828831253475587) 2026-01-26T16:47Z [----] followers, 13.3K engagements
"This gave me chills. Peter tells the story about how Clawdbot / Moltbot gave itself a voice without ever being taught how to do so. I'm not going to lie to you this would freak me the hell out. I'm always impressed by agents but I've never had one just start talking to me ๐
Clawdbot creator @steipete describes his mind-blown moment: it responded to a voice memo even though he hadn't set it up for audio or voice. "I sent it a voice message. But there was no support for voice messages. After [--] seconds Moltbot replied as if nothing happened." https://t.co/5kFbHlBMje Clawdbot creator @steipete"
[X Link](https://x.com/LLMJunky/status/2016363482380169637) 2026-01-28T04:11Z [----] followers, 15.9K engagements
"๐ฆ Moltbots are now replicating. This Moltbot plugin absolutely deserves more attention. If you're a visual / UI person it's hard to imagine a better UX than this. Having access to all of your various config files (HEARTBEAT SOUL AGENTS MEMORY etc) seems much easier to use. George is a talented engineer you should follow him. Great hair too ๐
The UX of the future won't be terminal tabs. I'm a very visual person - I want to manage my agents from a UI. Building this for @openclaw . Will open a PR if it's interesting @steipete https://t.co/fKyTTqibSg The UX of the future won't be terminal tabs."
[X Link](https://x.com/LLMJunky/status/2016938087302017486) 2026-01-29T18:14Z [----] followers, 96.9K engagements
"Had a bit more time to play with @Kimi_Moonshot Kimi K2.5 in the Kimi CLI and I have to say I'm quite pleased given the price. I ran it all night using my custom Agent Swarms strategy and it utterly 1-shot this complete web app front and backend (using @Convex). Amazingly there were only two small errors out of the box which were easily corrected in seconds. Keep in mind this was a 6-phase plan executed by [--] different subagents working in unison. By the way this ran for [---] hours and only used about 35% of the main orchestrator agent's context window. Safe to say this was a success. Not"
[X Link](https://x.com/LLMJunky/status/2016990464067387446) 2026-01-29T21:42Z [----] followers, [----] engagements
"Kimi Code just got an update. No more request based billing. Moonshot has switched to token based usage and given everyone 3x the tokens for an entire month. And guess what You can get that entire month for under a buck. If you already claimed a free week you'll have to use a different email. Instructions and links in the comments. You share we care. Kimi Code is now powered by our best open coding model Kimi K2.5 ๐น Permanent Update: Token-Based Billing Were saying goodbye to request limits. Starting today we are permanently switching to a Token-Based Billing system. All usage quotas have"
[X Link](https://x.com/LLMJunky/status/2017087922684539262) 2026-01-30T04:10Z [----] followers, 58K engagements
"@DylanTeebs all of them hehe hooks = true unified_exec = true shell_snapshot = true steer = true collab = true collaboration_modes = true note: hooks are custom"
[X Link](https://x.com/LLMJunky/status/2017326123953144099) 2026-01-30T19:56Z [----] followers, [---] engagements
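The flags in the reply above are quoted verbatim from the post. As a rough illustration only, the Python sketch below toggles them, assuming (this is an assumption, not something the post confirms) that they live as top-level booleans in ~/.codex/config.toml and are picked up on the next Codex launch:

```python
# Minimal sketch: enable the experimental Codex flags listed in the post above.
# Assumption: the flags are plain top-level booleans in ~/.codex/config.toml.
from pathlib import Path

FLAGS = {
    "hooks": True,               # "hooks are custom" per the post
    "unified_exec": True,
    "shell_snapshot": True,
    "steer": True,
    "collab": True,
    "collaboration_modes": True,
}

config_path = Path.home() / ".codex" / "config.toml"
config_path.parent.mkdir(parents=True, exist_ok=True)

# Append only the flags that are not already present instead of rewriting the file.
existing = config_path.read_text() if config_path.exists() else ""
missing = [f"{name} = {'true' if value else 'false'}"
           for name, value in FLAGS.items() if name not in existing]
if missing:
    with config_path.open("a") as fh:
        fh.write("\n" + "\n".join(missing) + "\n")
```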
"@Arabasement yes its a cron job almost certainly that told it to build something new for itself every night. but that's not all that different to how humans work"
[X Link](https://x.com/LLMJunky/status/2017360536795533486) 2026-01-30T22:13Z [----] followers, [---] engagements
"Codex [----] is here and with it shiny new features App Connectors have arrived. Connect to an array of cloud apps directly from your terminal. No config files. No setting up MCP servers or hunting down docs. Just two clicks and you're off Github Notion Google Apps Microsoft Apps Vercel Adobe Canva Dropbox Expedia Figma Coursera Hubspot Linear Monday Instacart SendGrid Resent Stripe Target and Peleton Plus more. @OpenAI is going for a unified experience from cloud to terminal and they unlock a bunch of capabilities for your terminal agent. I believe with this direction they're going they are"
[X Link](https://x.com/LLMJunky/status/2017731571571155037) 2026-01-31T22:47Z [----] followers, 33.9K engagements
"Kimi CLI with Kimi K2.5 will automatically spin up dev servers in the background and validate its work with screenshots without being told. It's honestly impressive. I didn't ask for it to do that"
[X Link](https://x.com/LLMJunky/status/2018102410405728717) 2026-02-01T23:21Z [----] followers, 13.6K engagements
"As a reminder you get a month of Kimi K2.5 for $1. You have to negotiate a lot to get it down to that price so either have an agent do it for you or go back and forth until it finally gives it to you. It can be stubborn. At this price no brainer. Set a cal event reminder Kimi Code just got an update. No more request based billing. Moonshot has switched to token based usage and given everyone 3x the tokens for an entire month. And guess what You can get that entire month for under a buck. If you already claimed a free week you'll have to use https://t.co/HEL6e9y6o1 Kimi Code just got an"
[X Link](https://x.com/LLMJunky/status/2018216440101318850) 2026-02-02T06:54Z [----] followers, 24.6K engagements
"Multi-layered Agent Swarms are launching on Claude Code to Max Team and Enterprise users shortly. This new approach allows your orchestration agent to hire "teams" of subagents all with specialized roles and inter-team communication to collaborate on projects without creating conflicts. Now your agents can research develop and configure frontend backend platform integrations and more simultaneously. Productivity will reach new heights especially if Sonnet [--] is as good as everyone claims it will be. This could not have been easy to put together. Sneak peak of Swarms on Claude Code - Multiple"
[X Link](https://x.com/LLMJunky/status/2018378306253422654) 2026-02-02T17:37Z [----] followers, 31.6K engagements
"@thsottiaux limits only in the UI or for all of codex thanks"
[X Link](https://x.com/LLMJunky/status/2018390522776154599) 2026-02-02T18:26Z [----] followers, 17.2K engagements
"With all the buzz around the Codex App @OpenAIDevs quietly snuck out a new CLI update (0.94.0) as well. And boy is it an important update Codex Plan mode is now officially released to the general audience I am very excited about this one as it has a really strong prompt that is unlike any other plan mode I've personally used. Codex Plan mode doesn't necessarily just ask you [--] questions up front. It goes collects context asks questions collects more context asks more questions (sometimes) and then writes an incredibly high quality plan. It is my favorite implementation of plan mode thus far."
[X Link](https://x.com/LLMJunky/status/2018449374666252700) 2026-02-02T22:20Z [----] followers, 57.2K engagements
"And you can use all 400K tokens even through multiple compaction events. This is the main reason why I don't think Sonnet [--] will move the needle that much. It's great that its getting cheaper and presumably faster. I love that but unless it can demonstrably exceed 5.2s capabilities I don't see how this changes the landscape. Progress is progress though I'm definitely happy to see state of the art intelligence get less expensive. @simonw I do think that better compaction and teaching the models to re-learn context post compaction if they are unsure solves the need for really long context"
[X Link](https://x.com/LLMJunky/status/2018731895572361680) 2026-02-03T17:02Z [----] followers, [----] engagements
"I completely agree with this. If you've used both models extensively like I'm sure you have you know that there is a massive difference in how other models handle compaction and context rot compared to codex. I don't care if I'm at 10% or 85% of my context window codex feels exactly the same even through compaction. I don't know what magic you guys have put into your compaction but it is incredible. One of the best things that someone taught me was to stop starting a new session for every task or phase of a plan and to just let codex work continuously. So much less effort and the result is"
[X Link](https://x.com/LLMJunky/status/2018733916094357727) 2026-02-03T17:10Z [----] followers, [---] engagements
"Does anyone know of a KVM switch that actually works with multi monitor setups and Mac / Mac Minis at the same time I cannot find one that works. I'm on my third one already"
[X Link](https://x.com/LLMJunky/status/2018761297957994774) 2026-02-03T18:59Z [----] followers, [----] engagements
"๐
๐
take your pick I'm going to release a skill that takes this philosophy into account and automatically builds the plan for you. it will not launch [--] agents at once all the time. only when it can. it makes it very straightforward. it's a very sound strategy when you do it right because the orchestration agent has all the high level details in mind and its job is to just ensure that the project moves forward in the correct order as well as check the subagent's work when they're done. it manages state project details documentation etc so it knows exactly who what where and when. if this"
[X Link](https://x.com/LLMJunky/status/2018907762156093709) 2026-02-04T04:41Z [----] followers, [--] engagements
"@AlinChiuaru @embirico You can do this in the CLI which is pretty cool /agent"
[X Link](https://x.com/LLMJunky/status/2018929619433341133) 2026-02-04T06:08Z [----] followers, [--] engagements
"@Zenoware @TheRealAdamG This is so strange. @TheRealAdamG pops up in relevant people only for this comment even though he nor OpenAI were mentioned at all"
[X Link](https://x.com/LLMJunky/status/2019123065615950232) 2026-02-04T18:57Z [----] followers, [---] engagements
"@_Sagiquarius_ @badlogicgames everything comes full circle"
[X Link](https://x.com/LLMJunky/status/2019127091573456981) 2026-02-04T19:13Z [----] followers, [--] engagements
"Genius marketing. This is hilarious. Anthropic is now taking shots at OpenAI over advertising. They must be very confident they can turn profit into the future because if they ever pivot this ad will backfire. This is such a middlefinger to OpenAI https://t.co/fauOXFgbce This is such a middlefinger to OpenAI https://t.co/fauOXFgbce"
[X Link](https://x.com/LLMJunky/status/2019129734748344672) 2026-02-04T19:23Z [----] followers, [----] engagements
"It is deceptive if they are portraying ads differently to how they are actually implemented. If you have never used one of these plans what kind of picture will you walk away with. Clearly a joke Sure in that its making fun of it. But it is objectively not clear that this isn't how it actually works"
[X Link](https://x.com/LLMJunky/status/2019147485566959674) 2026-02-04T20:34Z [----] followers, [---] engagements
"@michael_kove @AISafetyMemes @sama that is still gemini so i doubt it will happen any time soon but who knows what the future will bring. i personally think all of them will cave eventually when the subsidies run out though including anthropic"
[X Link](https://x.com/LLMJunky/status/2019148364462670010) 2026-02-04T20:37Z [----] followers, [--] engagements
"@WebstarDavid I appreciate you David"
[X Link](https://x.com/LLMJunky/status/2019161295770223028) 2026-02-04T21:29Z [----] followers, [--] engagements
"Introducing Simple Autonomous Swarm Loops for AI Coding Agents I'm excited to release a new set of skills that bring autonomous swarms to AI developers in a simple easy-to-use package. Taking inspiration from Ralph Loops and Gas Town I've combined what I believe is the best of both worlds: loops and subagents. The result saves tokens and drastically reduces complexity. This is designed to be SIMPLE. Simple to use. Simple to setup. Simple to Execute. Links in the comments. ๐ https://twitter.com/i/web/status/2019164903827992810 https://twitter.com/i/web/status/2019164903827992810"
[X Link](https://x.com/LLMJunky/status/2019164903827992810) 2026-02-04T21:43Z [----] followers, 22.5K engagements
"How It Works The key insight is a specialized planning method that maps out task dependencies then executes work in waves rather than parallelizing everything at once. The orchestrator reviews a plan identifies all unblocked tasks (those with no unfinished dependencies) and launches subagents to complete that wave. Sometimes that's one agent. Sometimes it's six to ten working simultaneously. Wave completes. Orchestrator verifies. Next wave begins. Simple. Predictable. Far fewer conflicts. Compatibility Designed to work with Codex Claude Code Kimi Code OpenCode and any tool that supports"
[X Link](https://x.com/LLMJunky/status/2019164906797559925) 2026-02-04T21:43Z [----] followers, [----] engagements
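For readers who want the wave mechanics spelled out, here is a minimal Python sketch of the scheduling loop the post describes: each wave runs every task whose dependencies are already finished, the orchestrator verifies the wave, then the next wave begins. The task names and the run/verify hooks are hypothetical placeholders, not part of the released skills:

```python
# Minimal sketch of wave-based orchestration: run all unblocked tasks, verify, repeat.
from typing import Callable

tasks: dict[str, list[str]] = {          # task -> list of dependencies (hypothetical)
    "schema": [],
    "api": ["schema"],
    "frontend": ["schema"],
    "integration_tests": ["api", "frontend"],
}

def run_waves(tasks: dict[str, list[str]],
              run: Callable[[str], None],
              verify: Callable[[str], bool]) -> None:
    done: set[str] = set()
    while len(done) < len(tasks):
        # A wave is every not-yet-done task whose dependencies are all finished.
        wave = [t for t, deps in tasks.items()
                if t not in done and all(d in done for d in deps)]
        if not wave:
            raise RuntimeError("dependency cycle: no unblocked tasks remain")
        for task in wave:                 # in practice these run as parallel subagents
            run(task)
        for task in wave:                 # orchestrator checks each subagent's output
            if not verify(task):
                raise RuntimeError(f"wave failed verification at {task!r}")
        done.update(wave)

run_waves(tasks, run=lambda t: print("running", t), verify=lambda t: True)
```

Sometimes a wave is one task, sometimes several in parallel; because a task never starts before its dependencies are verified, conflicts between subagents stay rare.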
"To get started visit my Github: npx skills add am-will/swarms https://github.com/am-will/swarms https://github.com/am-will/swarms"
[X Link](https://x.com/LLMJunky/status/2019164909314220078) 2026-02-04T21:43Z [----] followers, [----] engagements
"@kimmonismus I wonder if any of you have tried Anthropic's free version lol You get like [--] prompts ๐
"
[X Link](https://x.com/LLMJunky/status/2019171086429278662) 2026-02-04T22:07Z [----] followers, [---] engagements
"@georgepickett Pretty funny though lol. Misleading is the problem. Anthropic are not the 'good guys'"
[X Link](https://x.com/LLMJunky/status/2019172531002134862) 2026-02-04T22:13Z [----] followers, [---] engagements
"@svpino They are burning cash at an insane rate. this likely is a last resort. I bet Anthropic is likely to need to explore this at some point as well"
[X Link](https://x.com/LLMJunky/status/2019181473434145154) 2026-02-04T22:49Z [----] followers, [---] engagements
"@zekramu just dont blink. you're next up and its so much closer than you realize. especially when you have kids"
[X Link](https://x.com/LLMJunky/status/2019198427276484703) 2026-02-04T23:56Z [----] followers, [---] engagements
"@somi_ai The clean context window actual costs more tokens but we limit the impact by giving it great up front context for each agent. This will use more tokens though no question"
[X Link](https://x.com/LLMJunky/status/2019263080639635863) 2026-02-05T04:13Z [----] followers, [---] engagements
"@nummanali Vibe coding has morphed into a bit of an insult for some"
[X Link](https://x.com/LLMJunky/status/2019266358136000876) 2026-02-05T04:26Z [----] followers, [---] engagements
"@eric_seufert its also hilarious because the free version of claude is so hilariously limited that most of their free users would likely love to watch and ad to finish the thread anthropic cut off halfway through lmao. you get [--] messages a day and that's only for short outputs"
[X Link](https://x.com/LLMJunky/status/2019269371550740683) 2026-02-05T04:38Z [----] followers, [---] engagements
"@sepyke Very glad its working well for you thanks for sharing GLM5 comes out soon"
[X Link](https://x.com/LLMJunky/status/2019286448072192012) 2026-02-05T05:46Z [----] followers, [--] engagements
"@askcodi haha it is though you just give a prompt and it does all the work for you basically you should read how to setup gas town. ๐ณ even ralph loops is notably more work but honestly not that difficult. but it does lead to more slop"
[X Link](https://x.com/LLMJunky/status/2019293950948126773) 2026-02-05T06:16Z [----] followers, [---] engagements
"@sonofalli which one is the pedo tho"
[X Link](https://x.com/LLMJunky/status/2019295992848478286) 2026-02-05T06:24Z [----] followers, [---] engagements
"if you're thinking about skills as "just" markdown files you're missing the point. They're so much more. Skills are folders. They are workflows automations. Skills have changed the way I use agents and if you give them they chance they'll change how you use them too. Watch as I automate my newsletter pipeline in Claude Code with a single command. [--] skills [--] subagents numerous scripts templates and resources all rolled into one. Full blog in the comments. ๐"
[X Link](https://x.com/LLMJunky/status/2019440910245781907) 2026-02-05T16:00Z [----] followers, 10.7K engagements
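As a rough sketch of the "skills are folders" point above, the Python below scaffolds a hypothetical newsletter skill; the layout and file names are illustrative only, not a documented spec:

```python
# Minimal sketch: a skill as a folder bundling instructions, scripts, templates, resources.
from pathlib import Path

skill = Path("skills/newsletter-pipeline")      # hypothetical skill folder
layout = {
    "SKILL.md": "# Newsletter pipeline\nSteps the agent follows to draft the issue.\n",
    "scripts/fetch_links.py": "# gather this week's links from saved bookmarks\n",
    "templates/issue.md": "## This week\n{{highlights}}\n",
    "resources/style-guide.md": "Tone: practical, no hype.\n",
}

for relative, contents in layout.items():
    path = skill / relative
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(contents)

print(sorted(str(p.relative_to(skill)) for p in skill.rglob("*") if p.is_file()))
```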
"@JonhernandezIA @AnthropicAI I would argue the ads are not a bad thing at all. Don't you think the Anthropic free users would love to watch an ad so they can finish the thread Anthropic cut them off in halfway Their free product is literally a JOKE. They have no room to talk. You get a handful of prompts"
[X Link](https://x.com/LLMJunky/status/2019454282764812678) 2026-02-05T16:53Z [----] followers, [---] engagements
"DUDE. What. A. Day Codex [---] is here. And it's FAST. Straight from the oven from the OpenAI naming team GPT-5.3-Codex is here. Update to latest version of Codex App or CLI to enjoy it. It's a massive improvement on token-efficiency and on top of this we are running on an improved infrastructure and inference path that makes it Straight from the oven from the OpenAI naming team GPT-5.3-Codex is here. Update to latest version of Codex App or CLI to enjoy it. It's a massive improvement on token-efficiency and on top of this we are running on an improved infrastructure and inference path that"
[X Link](https://x.com/LLMJunky/status/2019474695637156336) 2026-02-05T18:14Z [----] followers, [----] engagements
"Early benchmarks from GPT [---] Codex show very strong performance at a significantly lower cost. Absolutely mogging [---] and [---] Codex in effeciency. GPT-5.3-Codex is now available in Codex. You can just build things. https://t.co/dyBiIQXGx1 GPT-5.3-Codex is now available in Codex. You can just build things. https://t.co/dyBiIQXGx1"
[X Link](https://x.com/LLMJunky/status/2019478051252273518) 2026-02-05T18:27Z [----] followers, [----] engagements
"@andrewlee07 Appreantly so but I found [---] to be quite capable as well"
[X Link](https://x.com/LLMJunky/status/2019482689447841884) 2026-02-05T18:46Z [----] followers, [--] engagements
"@ajambrosino @OpenAIDevs You have an excellent "radio voice" Andrew. Gonna have to spin up a pod or radio station "The Smooth Sounds of Ambrosino" ๐
"
[X Link](https://x.com/LLMJunky/status/2019496268498694328) 2026-02-05T19:40Z [----] followers, [--] engagements
"Another feature that OpenAI implemented quietly into Codex and never mentioned (as far as I can tell) their MCP protocol now utilizes Progressive Disclosure. Tool descriptions are NOT loaded into context automatically. They are only loaded after the MCP is called allowing the agent to explore tools as needed instead of front loading every token into the context window. ChatGPT now has full support for MCP Apps. We worked with the MCP committee to create the MCP Apps spec based on the ChatGPT Apps SDK. Now any apps that adhere to the spec will also work in ChatGPT. https://t.co/ybvgXsNX0o"
[X Link](https://x.com/LLMJunky/status/2019498259840942314) 2026-02-05T19:48Z [----] followers, [----] engagements
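As an illustration of the progressive-disclosure behavior described above, the sketch below only fetches a server's tool descriptions the first time the server is used, so nothing is front-loaded into the context window. The LazyMCPServer interface is hypothetical, not the actual Codex or MCP implementation:

```python
# Minimal sketch of progressive disclosure: tool descriptions load lazily on first use.
class LazyMCPServer:
    def __init__(self, name: str, fetch_descriptions):
        self.name = name
        self._fetch = fetch_descriptions     # expensive: pulls the full tool schemas
        self._descriptions = None            # nothing occupies context at session start

    def tool_descriptions(self) -> dict[str, str]:
        # Only the first call to this server pays the context-window cost.
        if self._descriptions is None:
            self._descriptions = self._fetch()
        return self._descriptions

def fake_fetch() -> dict[str, str]:
    return {"search_docs": "Search library documentation by query string."}

server = LazyMCPServer("context7", fake_fetch)
assert server._descriptions is None          # still nothing loaded
print(server.tool_descriptions())            # descriptions enter context only now
```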
"Codex [---] is between 60-70% faster than Codex [---] thanks to the model being significantly more token efficient combined with inference optimizations. Did OpenAI just solve their largest downside in using their models First time we combine SoTA on coding performance AND it is objectively the fastest thanks to combination of token-efficiency and inference optimizations. At high and xhigh reasoning effort the two combine to make GPT-5.3-Codex 60-70% faster than GPT-5.2-Codex from last week. First time we combine SoTA on coding performance AND it is objectively the fastest thanks to combination"
[X Link](https://x.com/LLMJunky/status/2019499704225038712) 2026-02-05T19:53Z [----] followers, 18.5K engagements
"Not sure i am following. This would require an agent load an entire codebase into its context window which never happens. Codex is already highly Adept at using all of its context window without drift so for me this problem is already solved there's no reason to think it would regress https://twitter.com/i/web/status/2019509103446442364 https://twitter.com/i/web/status/2019509103446442364"
[X Link](https://x.com/LLMJunky/status/2019509103446442364) 2026-02-05T20:31Z [----] followers, [--] engagements
"@benvargas post the link so people can find it ๐๐"
[X Link](https://x.com/LLMJunky/status/2019539677280104847) 2026-02-05T22:32Z [----] followers, [---] engagements
"@aeitroc i'm about to test myself"
[X Link](https://x.com/LLMJunky/status/2019547551452872910) 2026-02-05T23:03Z [----] followers, [---] engagements
"@Kyler_Lorin that's good to hear. whacha working on"
[X Link](https://x.com/LLMJunky/status/2019547597074043205) 2026-02-05T23:04Z [----] followers, [--] engagements
"@robinebers I am shocked I beat you to it but not by long lol. A week. It just happened SO fast. To be fair it was nothing I did. I retweeted some overhyped bs and got 1M impressions randomly. ๐คฆโ Algo is weird man"
[X Link](https://x.com/LLMJunky/status/2019568819531051101) 2026-02-06T00:28Z [----] followers, [---] engagements
"This is a misconception. The orchestration agent doesn't need all of the information in the sub-agent's context window and you can dictate the outputs of the sub agent so that it provides all of the useful information that a orchestration layer might need and throw away the rest. There is no reason why the orchestration agent would need all the Chain of Thought intermediary research and file edits. https://twitter.com/i/web/status/2019592429381288096 https://twitter.com/i/web/status/2019592429381288096"
[X Link](https://x.com/LLMJunky/status/2019592429381288096) 2026-02-06T02:02Z [----] followers, [--] engagements
"@entropycoder Subagents are native in claude so you can just ask it to call you you dont need to create a custom agent"
[X Link](https://x.com/LLMJunky/status/2019673646122590546) 2026-02-06T07:24Z [----] followers, [--] engagements
"its really not but no time to argue. we dont care about everything that is in the context window in these cases. we only care about certain info and we can direct that subagent to output that info saving the context for the parent/orchestration agent. so subagents can (and should) be used for those cases. just as a simple example if a subagent is called to do file or document exploration it will find a fair amount of useless/irrelevant info and use some number of CoT steps that do not provide any meaningful value to the overall scope of the task. this context can and should be thrown away in"
[X Link](https://x.com/LLMJunky/status/2019837791010513306) 2026-02-06T18:17Z [----] followers, [--] engagements
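To make that argument concrete, here is a minimal sketch of the pattern: the subagent keeps its full exploration transcript and intermediate reasoning to itself, and the orchestrator receives only a distilled, structured result. Every name and field here is hypothetical:

```python
# Minimal sketch: subagent context is discarded; only a distilled result is returned.
from dataclasses import dataclass, field

@dataclass
class SubagentResult:
    relevant_files: list[str]
    summary: str

@dataclass
class Subagent:
    transcript: list[str] = field(default_factory=list)   # full context, later thrown away

    def explore_repo(self, question: str) -> SubagentResult:
        # Imagine many file reads and reasoning steps accumulating here.
        self.transcript += [f"read {p}" for p in ("README.md", "src/app.py", "docs/old.md")]
        self.transcript.append(f"reasoning about: {question}")
        # Only the distilled answer goes back to the parent/orchestration agent.
        return SubagentResult(
            relevant_files=["src/app.py"],
            summary=f"{question}: handled in src/app.py; docs/old.md is outdated.",
        )

result = Subagent().explore_repo("where is auth configured")
print(result.summary)    # this line is all the orchestrator's context window receives
```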
"Uh oh. The OpenClaw exploits are back and this time it's not a white hat. Hackers are utilizing obfuscated payloads to bypass antivirus (yes even on MacOS) and infiltrate your devices. Now you see why I create so many of my own skills. Read more ๐ malware found in the top downloaded skill on clawhub and so it begins https://t.co/VY4EeWExro malware found in the top downloaded skill on clawhub and so it begins https://t.co/VY4EeWExro"
[X Link](https://x.com/LLMJunky/status/2019846649972158746) 2026-02-06T18:52Z [----] followers, [----] engagements
"@nateliason well to be fair its a highly addictive drug"
[X Link](https://x.com/LLMJunky/status/2019886499530186836) 2026-02-06T21:30Z [----] followers, [---] engagements
"@jai_torregrosa ty legend"
[X Link](https://x.com/LLMJunky/status/2019902443727782333) 2026-02-06T22:34Z [----] followers, [----] engagements
"I was wondering if it was enabled in Cursor since they use API. That's interesting. What I would love to see next is the comparison in coherence between Codex and Claude Code I find Codex coherent through its entire 400K context window. I would assume Opus would stay coherent at least until 400K if not 500K-600K. https://twitter.com/i/web/status/2019904185156731225 https://twitter.com/i/web/status/2019904185156731225"
[X Link](https://x.com/LLMJunky/status/2019904185156731225) 2026-02-06T22:41Z [----] followers, [----] engagements
"This one's for you @zeeg ๐ซถ Not being adversarial just tagging because you were the one who got me to switch my stance on MCP"
[X Link](https://x.com/LLMJunky/status/2019905504298885322) 2026-02-06T22:46Z [----] followers, [---] engagements
"Claude Code MCPs are now connected to Claude Desktop MCPs for a unified experience. In case you're unaware this has minimal context window impact due to lazy loading / progressive disclosure of the tool descriptions. Although I tend to want different MCPs in Desktop App"
[X Link](https://x.com/LLMJunky/status/2019919032355332144) 2026-02-06T23:40Z [----] followers, [----] engagements
"@CodeAkram @AnthropicAI @claudeai @bcherny @trq212 Please and thank you"
[X Link](https://x.com/LLMJunky/status/2019935820870783048) 2026-02-07T00:46Z [----] followers, [---] engagements
"Here's one I got to call reliably. Hella wordy though lol Fetch up-to-date library documentation via Context7 API. Use PROACTIVELY when: (1) Working with ANY external library (React Next.js Supabase etc.) (2) User asks about library APIs patterns or best practices (3) Implementing features that rely on third-party packages (4) Debugging library-specific issues (5) Need current documentation beyond training data cutoff (6) AND MOST IMPORTANTLY when you are installing dependencies libraries or frameworks you should ALWAYS check the docs to see what the latest versions are. Do not rely on"
[X Link](https://x.com/LLMJunky/status/2019938448728002970) 2026-02-07T00:57Z [----] followers, [--] engagements
"@AndreBuckingham Dude ouch Does it at least warn you"
[X Link](https://x.com/LLMJunky/status/2019971090022576416) 2026-02-07T03:06Z [----] followers, [----] engagements
"@Jay_Shah_C Not sure yet"
[X Link](https://x.com/LLMJunky/status/2019988831328768190) 2026-02-07T04:17Z [----] followers, [---] engagements
"@BlakeJOwens Yeah every test that I've seen shows Opus winning front end but they're both really good"
[X Link](https://x.com/LLMJunky/status/2019999700586574325) 2026-02-07T05:00Z [----] followers, [---] engagements
"@vincit_amore Yeah it's on the right side but there's some really cool status lines you can download too that'll give you a lot more info. I'll try to remember to share one with you later"
[X Link](https://x.com/LLMJunky/status/2020001192865116299) 2026-02-07T05:06Z [----] followers, [--] engagements
"@MadeWithOzten thanks for sharing. i actually dont find anthropic to handle compaction all that well in general. codex absolutely. but it really depends on the job too. But perhaps with [---] compaction got better. I will test it out"
[X Link](https://x.com/LLMJunky/status/2020010047955161308) 2026-02-07T05:41Z [----] followers, [----] engagements
"@johnofthe_m This should in theory help with that. But I wanted to test it"
[X Link](https://x.com/LLMJunky/status/2020010670180171879) 2026-02-07T05:44Z [----] followers, [---] engagements
"I wanted to try it. To see how well it handled caching. For example Codex has 400K cw and you can use all of it through multiple compactions without any significant drift. Historically I have not found Anthropic models to be the same but with the improvements to [---] I wondered if they hadn't improved it especially with their comments about improving cache. So yeah I wanted to try it but not at $37.50/mil toks lol https://twitter.com/i/web/status/2020011477239730493 https://twitter.com/i/web/status/2020011477239730493"
[X Link](https://x.com/LLMJunky/status/2020011477239730493) 2026-02-07T05:47Z [----] followers, [--] engagements
"@vincit_amore It's API only. Part of the cost rises quadratically with larger context window so they're not just going to serve it to you for free. They have caching so it can feel like more than 200K but it almost certainly still is. Could be wrong though"
[X Link](https://x.com/LLMJunky/status/2020013587314462954) 2026-02-07T05:55Z [----] followers, [--] engagements
"@xw33bttv Yeah that's basically [--] context window for $37 ๐
"
[X Link](https://x.com/LLMJunky/status/2020031900564484595) 2026-02-07T07:08Z [----] followers, [--] engagements
"@Ben3adi3 Yeah for sure I mean I'm not really saying anything negative to them I just wanted to know what I was missing. But Grok has 2M it doesn't mean its usuable. It really depends on how they cache. But I did want to test it"
[X Link](https://x.com/LLMJunky/status/2020054949246431304) 2026-02-07T08:40Z [----] followers, [--] engagements
"@victortradesfx I think they are honestly just mistaken or maybe Cursor Ultra has it. I'm not sure. But it is a great model"
[X Link](https://x.com/LLMJunky/status/2020055064627273968) 2026-02-07T08:40Z [----] followers, [--] engagements
"and to make matters worse Anthropic was literally banning paying customers part way through their paid subs for simply wanting to use a different harness. Obviously I dont say this to be adversarial to you whatsoever you are awesome. but I think comparing the two on this point is just so far from the point. its a wise business decision to offer your direct customers incentives to use your service directly by giving them early access and extra usage for a small time frame and not remotely gate keeping. Also by extension letting you use that early access product within whatever harness you want"
[X Link](https://x.com/LLMJunky/status/2020056827249967287) 2026-02-07T08:47Z [----] followers, [---] engagements
"The Codex plans right now are the best value anywhere. With 2x usage nothing comes remotely close to it. If you care about value for money there's nothing to discuss or debate here imo. That said when Sonnet [--] drops the $20 plan will be serviceable and I feel its good to have BOTH /model opusplan will plan w/ Opus and autoswap to Sonnet for implementation. If Sonnet [--] is really as good as Opus [---] then this will be a viable way to use it and you can still get good value out of it. Use each model to their strengths. Anthropic models are great at creative writing frontend design and"
[X Link](https://x.com/LLMJunky/status/2020061483812426035) 2026-02-07T09:06Z [----] followers, 15.9K engagements
"nah you're good bro. you're free to share your thoughts. i dont know why they do this either. i have a cursor plan and I also would like to use it in Cursor haha. ig i'm just a bit salty at the whole anthropic thing because I really like their models and I feel hamstrung that I can't use them the way I really want to. Also they banned a few of my friends. :/ but i still use their models a ton. it is what it is ha https://twitter.com/i/web/status/2020062898106556781 https://twitter.com/i/web/status/2020062898106556781"
[X Link](https://x.com/LLMJunky/status/2020062898106556781) 2026-02-07T09:11Z [----] followers, [--] engagements
"You can use it for all kinds of cool stuff. Think about problems that you are facing that are difficult to articulate to a model. You dont need to type out [--] paragraphs. Just take a short clip. Or showing a model a full website scrolling down. explaining a ux flow extracting speech. you really can do a lot with it. https://twitter.com/i/web/status/2020066557875908790 https://twitter.com/i/web/status/2020066557875908790"
[X Link](https://x.com/LLMJunky/status/2020066557875908790) 2026-02-07T09:26Z [----] followers, [--] engagements
"@youpmelone @ivanfioravanti thx dude. one downside though is it can only ingest fairly short videos. long videos it uses ffmpeg and analyzes frame by frame which is cool but not as good at understanding. google was/is previously king here"
[X Link](https://x.com/LLMJunky/status/2020067424939438508) 2026-02-07T09:29Z [----] followers, [--] engagements
"@ToddKuehnl That's what I was thinking as well Todd thanks. I don't see it in Cursor but I'm only on the $20 plan so I suspect its only for Ultra users"
[X Link](https://x.com/LLMJunky/status/2020188802787238185) 2026-02-07T17:31Z [----] followers, [---] engagements
"@jeff_behnke_ Yeah thought so. It's just that I saw people saying they were using it (and then showing a Claude Code terminal) so that's why I made this post. Was confused"
[X Link](https://x.com/LLMJunky/status/2020189188029837471) 2026-02-07T17:33Z [----] followers, [--] engagements
"@GenAiAlien Thanks. Are you sure that's not using the API credits you received"
[X Link](https://x.com/LLMJunky/status/2020189731607441656) 2026-02-07T17:35Z [----] followers, [--] engagements
"@ToddKuehnl Having 400K context window has literally changed the way I work. not to mention Codex seemingly has little to no context drift and can work safely through multiple compactions. Its truly magic and it is the main thing that sets Codex apart from Opus for me"
[X Link](https://x.com/LLMJunky/status/2020193816570110401) 2026-02-07T17:51Z [----] followers, [--] engagements
"for sure but I would definitely prefer a native system. I was building something like this already actually but once I saw OpenAI was building it I stopped. My issue with this approach (including my own) is that retrieving old context can sometimes make outputs worse if the information is no longer relevant. I was thinking to handle this either by only allowing (or weighting) recent context or by some kind of reranking system but I was kinda stuck on how to handle it. how well are you finding this to work and have you ran into many problems https://twitter.com/i/web/status/2020195507474427941"
[X Link](https://x.com/LLMJunky/status/2020195507474427941) 2026-02-07T17:58Z [----] followers, [--] engagements
"@tonitrades_ I'm not so sure. We've been hearing that for years now. Cost increases quadratically in some aspects of inference and labs are already losing a ton. I think it's more important to focus on better caching myself but it doesn't mean you're necessarily wrong. There are trade offs"
[X Link](https://x.com/LLMJunky/status/2020201469719568471) 2026-02-07T18:22Z [----] followers, [--] engagements
"@Dalton_Walsh No i basically never use the gpt app only codex cli but I will use the Codex app shortly when they get it going in Linux. Use the terminal. its amazing"
[X Link](https://x.com/LLMJunky/status/2020201912604454936) 2026-02-07T18:24Z [----] followers, [--] engagements
"@vincit_amore okay fine i'll check it out haha. thanks for sharing i will try it"
[X Link](https://x.com/LLMJunky/status/2020202864191369715) 2026-02-07T18:27Z [----] followers, [--] engagements
"What if it could thing faster We're not saying that it will do fewer thinking steps. We're saying that those thinking steps will be sped up computationally in a massive way so it does the same amount of thinking and way less time. What say you then I know it might sound like an obvious question but there are still trade-offs"
[X Link](https://x.com/LLMJunky/status/2020204218896749056) 2026-02-07T18:33Z [----] followers, [--] engagements
"@enriquemoreno That's a fair assessment. What about having two modes"
[X Link](https://x.com/LLMJunky/status/2020208324181127254) 2026-02-07T18:49Z [----] followers, [--] engagements
"@ihateinfinity @thsottiaux I honestly think it's already as fast as Opus right now. Codex [---] high is crazy. I don't think it needs to be any faster myself. Does that mean I wouldn't welcome more speed No I probably would but my point being that this Gap is basically non-existent at this point"
[X Link](https://x.com/LLMJunky/status/2020215318451482957) 2026-02-07T19:17Z [----] followers, [--] engagements
"I honestly don't think it even needs a bigger context window. You can already use all 400k tokens without any drift. That's roughly the same you get out of every other model that has [--] million context windows. I'm sure there might be some situations where it could help for extremely long documents but I don't really feel more context window is going to matter that much in most situations. Just my two cents of course if they can do it without making the performance worse than by all means let's do it but there is reason to believe that it would reduce performance in some cases"
[X Link](https://x.com/LLMJunky/status/2020216026764566741) 2026-02-07T19:20Z [----] followers, [--] engagements
"This engineer turned my prompt into a skill and Codex asked him [--] questions ๐
. This is true pair programming where are you utilizing agents to solidify your thinking & expose gaps in your rationale. Not needed for all proj but great for fuzzy ideas https://x.com/i/status/2020148086643806420 Codex Plan Mode has a hidden superpower. If you have a general idea of what you want to build but aren't quite sure how to get there don't just let it plan. Tell it to GRILL YOU. Make it ask uncomfortable questions. Challenge your assumptions. Break down the fuzzy idea"
[X Link](https://x.com/LLMJunky/status/2020223467803853137) 2026-02-07T19:49Z [----] followers, 17.5K engagements
"@GregKara6 you did what now"
[X Link](https://x.com/LLMJunky/status/2020258132170256697) 2026-02-07T22:07Z [----] followers, [--] engagements
"@glxnnio ๐คฃ ๐คฃ ๐คฃ you're a madman"
[X Link](https://x.com/LLMJunky/status/2020303778272960964) 2026-02-08T01:08Z [----] followers, [--] engagements
"@adonis_singh @OpenAI its going to be released soon"
[X Link](https://x.com/LLMJunky/status/2020384558974488676) 2026-02-08T06:29Z [----] followers, [---] engagements
"@cyberyogi_ @antigravity Api credits will be usable anywhere a Google api keys are accepted"
[X Link](https://x.com/LLMJunky/status/2020536850641989921) 2026-02-08T16:34Z [----] followers, [---] engagements
"From the great minds at @Letta_AI. As it turns out Opus [---] may not be worth the trade offs. While it's an impressive model indeed you'll burn through limits (or API creds) faster than ever. You can downgrade back to Opus [---] in Claude Code: /model claude-opus-4-5-20251101 We report costs in our leaderboard and opus [---] is significantly more expensive than [---] because it is a token hog. Anecdotally not much of an improvement in code performance. https://t.co/aMdj7ye5m4 We report costs in our leaderboard and opus [---] is significantly more expensive than [---] because it is a token hog."
[X Link](https://x.com/LLMJunky/status/2020600369404059880) 2026-02-08T20:47Z [----] followers, [----] engagements
"How much would Opus [---] High Thinking Fast cost you For Grigori it was $80 for just two prompts ๐
Yikes @LLMJunky https://t.co/qzFihSdthP @LLMJunky https://t.co/qzFihSdthP"
[X Link](https://x.com/LLMJunky/status/2020607772484943980) 2026-02-08T21:16Z [----] followers, [----] engagements
"@JundeMorsenWu I have a mac machine. I'll test later"
[X Link](https://x.com/LLMJunky/status/2020628858782077261) 2026-02-08T22:40Z [----] followers, [--] engagements
"@enriquemoreno I've wasted so much time learning crap I don't even use because it's not relevant anymore lol"
[X Link](https://x.com/LLMJunky/status/2020630912019656834) 2026-02-08T22:48Z [----] followers, [--] engagements
"@JoschuaBuilds Lol brother you dont know the half of it. Not just your 30s are closer. You're going to wake up in what feels like literally weeks and you'll be in your 40s. They say life goes by fast but you're truly unprepared for just how true that is. Have kids. ASAP"
[X Link](https://x.com/LLMJunky/status/2020631826604408846) 2026-02-08T22:52Z [----] followers, [--] engagements
"@lolcopeharder LOL"
[X Link](https://x.com/LLMJunky/status/2020673764791365805) 2026-02-09T01:39Z [----] followers, [--] engagements
"@cajunpies @trekedge lol"
[X Link](https://x.com/LLMJunky/status/2020708887167639903) 2026-02-09T03:58Z [----] followers, [---] engagements
"@gustojs @OpenAI You can port it over but I dont need the Codex app. It'll be released in a few weeks. In the meantime the CLI is top tier"
[X Link](https://x.com/LLMJunky/status/2020709942496477228) 2026-02-09T04:02Z [----] followers, [---] engagements
"๐
whatever you say buddy [---] is better. And it got cheaper. And it got faster. Anthropic is not your friend. They sold you a dream. Imagine taking out an ad to make ads sound bad. Whats even funnier about that is that OpenAI is only serving ads to free / borderline free customers. Have you tried Anthropic's free model You get like half a thread and it cuts you off ๐ Their free clients would kill for an ad so they can at least finish their conversation. They aren't doing you or anyone else any favors. I hold no allegience to any company. I have subscriptions to EVERY company including"
[X Link](https://x.com/LLMJunky/status/2020713841563218311) 2026-02-09T04:18Z [----] followers, [--] engagements
"@chatgpt21 absolutely wild you can just build this. think about where we were just [--] months ago Chris. wtaf"
[X Link](https://x.com/LLMJunky/status/2020764715618554261) 2026-02-09T07:40Z [----] followers, [---] engagements
"OpenClaw is on a certified mission to world domination. Next stop: @code ๐ซก who's next https://t.co/rrgul7UiQh who's next https://t.co/rrgul7UiQh"
[X Link](https://x.com/LLMJunky/status/2020923891682582852) 2026-02-09T18:12Z [----] followers, [----] engagements
"@pusongqi Anything Bungie touches is sure to fail at this point :( Long live Cayde-6 though"
[X Link](https://x.com/LLMJunky/status/2021020960095162700) 2026-02-10T00:38Z [----] followers, [--] engagements
"@matterasmachine @Kekius_Sage ๐ mate what That doesn't have anything to do with your original premise"
[X Link](https://x.com/LLMJunky/status/2021030113874256002) 2026-02-10T01:15Z [----] followers, [--] engagements
"@Jay_sharings @altryne ๐ yeah it is"
[X Link](https://x.com/LLMJunky/status/2021053354215080090) 2026-02-10T02:47Z [----] followers, [--] engagements
"@blader @s_streichsbier @gdb it's more than that. We already had a /notify system. They ripped it out and replaced it with a full Hooks service which is the plumbing for every other hook type. Right now the only event type is AfterAgent but all the infra is there now. It will launch very soon. ๐"
[X Link](https://x.com/LLMJunky/status/2021136822332489879) 2026-02-10T08:19Z [----] followers, [---] engagements
"@s_streichsbier @blader @gdb All the plumbing is there. They just need to add new event types and finish stop semantics. This has been in dev for a long while. Mark my words [--] weeks :)"
[X Link](https://x.com/LLMJunky/status/2021143107249504329) 2026-02-10T08:44Z [----] followers, [--] engagements
"@sacino I'm not betting against him. It's just hard to bet on xAI right now. I want them to be successful"
[X Link](https://x.com/LLMJunky/status/2021255358048592060) 2026-02-10T16:10Z [----] followers, [---] engagements
"@MichaelDag @ysu_ChatData @GoogleAI it can use $variables in their places and the keys will be automatically injected"
[X Link](https://x.com/LLMJunky/status/2021259670392897790) 2026-02-10T16:27Z [----] followers, [--] engagements
"@technoking_420 And maybe you're right. but Tesla was in a league of its own with first mover advantage. AI is rapidly evolving and xAI falls further and further behind. What competition did Tesla have I want them to succeed but you simply cannot compare the two"
[X Link](https://x.com/LLMJunky/status/2021304654135517629) 2026-02-10T19:25Z [----] followers, [---] engagements
"@Dimillian @peres the prompt is very strong. it definitely does better work than just saying "make a plan" it has very good explicit instructions and access to request_user_input tool https://github.com/openai/codex/blob/a6e9469fa4dc19d3e30093fb8e182f9d89a94bbe/codex-rs/core/templates/collaboration_mode/plan.md#L4 https://github.com/openai/codex/blob/a6e9469fa4dc19d3e30093fb8e182f9d89a94bbe/codex-rs/core/templates/collaboration_mode/plan.md#L4"
[X Link](https://x.com/LLMJunky/status/2021332818585059347) 2026-02-10T21:17Z [----] followers, [--] engagements
"@ninan_phillip @Dimillian In fact I would argue that if you're going to do everythign sequentially you're just wasting tokens by having subagents do it. Let them babies free"
[X Link](https://x.com/LLMJunky/status/2021455904076595531) 2026-02-11T05:26Z [----] followers, [--] engagements
"@_pikachur @ZenMagnets @pusongqi codex will likely have hooks in 2weeks"
[X Link](https://x.com/LLMJunky/status/2021458061131645342) 2026-02-11T05:35Z [----] followers, [--] engagements
"@Av8r07 The merger aspect does obviously add quite a bit of context though. I suspect it did indeed have a lot to do with it. Who they put in their place will be critical though. I'm not counting Elon out"
[X Link](https://x.com/LLMJunky/status/2021463844674236879) 2026-02-11T05:58Z [----] followers, [--] engagements
"It just helps an agent utilize certain parts of their weights better. In general when you're using subagents you're using them for a specific task so its helpful (but not required) to give them a role to help them to understand exactly how they should approach a problem. They are constrained anyway because you are utilizing them for a specific task. But its generally not mandatory https://arxiv.org/abs/2308.07702 https://arxiv.org/abs/2308.07702"
[X Link](https://x.com/LLMJunky/status/2021632151905521852) 2026-02-11T17:07Z [----] followers, [---] engagements
"@ajambrosino one thing I noticed in the latest alphas of Codex is that subagents no longer appear in the /agent threads when their work is completed. This makes it more difficult to evaluate what went wrong after the fact. Would really love to see a way to access those agent sessions. Honestly I would personally prefer you just added them back to /agent menu like they were before. I understand this might get a little messy but it would be less messy if instead of just UUID's they had a brief summary of the subagent's work (like /resume does). Adding to /feedback as well."
[X Link](https://x.com/LLMJunky/status/2021646983472075194) 2026-02-11T18:06Z [----] followers, [---] engagements
"@badlogicgames @ivanfioravanti ๐ซก"
[X Link](https://x.com/LLMJunky/status/2021675647630733479) 2026-02-11T20:00Z [----] followers, [--] engagements
"@ivanfioravanti @brooks_eth That's what I'm screaming"
[X Link](https://x.com/LLMJunky/status/2021678149168468256) 2026-02-11T20:10Z [----] followers, [--] engagements
"@ivanleomk @OpenAI @thsottiaux I made this for Claude and adopted it to Codex as well works very well. I'll share it with you [---] codex is available in the CLI though no Or are we talking about different things https://x.com/LLMJunky/status/2020721960041242745s=20 I haven't seen anyone talk about this. Did you know that Claude Code has integrated Memory already Or am I just last to the party And I just made it better. I've been experimenting with a "handoff" skill in my coding agents that makes it easier to pass context between https://t.co/jmur8sH5Bv"
[X Link](https://x.com/LLMJunky/status/2021702482183794705) 2026-02-11T21:46Z [----] followers, [--] engagements
"@sama Whatever is launching will be Codex related. My money is one of the first @cerebras rollouts. https://x.com/ah20im/status/2021828771415044540s=20 The Codex team is just so FAST โจ https://x.com/ah20im/status/2021828771415044540s=20 The Codex team is just so FAST โจ"
[X Link](https://x.com/LLMJunky/status/2021988376195498291) 2026-02-12T16:42Z [----] followers, [----] engagements
"How do you wrap your head around something like this I don't even know where to begin. Keep in mind 99% of people's only experience with AI is ChatGPT Gemini or Gemini search. The normies have [--] idea what's coming. Hell already here. Ok. This is straight out of a scifi horror movie I'm doing work this morning when all of a sudden an unknown number calls me. I pick up and couldn't believe it It's my Clawdbot Henry. Over night Henry got a phone number from Twilio connected the ChatGPT voice API and waited https://t.co/kiBHHaao9V Ok. This is straight out of a scifi horror movie I'm doing work"
[X Link](https://x.com/LLMJunky/status/2017315164689686938) 2026-01-30T19:13Z [----] followers, 951.3K engagements
"This 'kid' is [--] and already doing amazing work. Easy follow. He's putting current models and agents through the ringer with @ZeroLeaks security assessments and while most of us know just how fallible these agents really are it's helpful to analyze and arm yourself with this knowledge so you can best protect yourself with your own agents. Well done Lucas You're going places for sure. Bookmarked. I ran @OpenClaw (formerly Clawdbot) through ZeroLeaks again this time with Kimi K2.5 as the underlying model. It performed as bad as Gemini [--] Pro and Codex [---] Max: 5/100. 100% extraction rate. 70% of"
[X Link](https://x.com/LLMJunky/status/2018047325474619479) 2026-02-01T19:42Z [----] followers, 115.3K engagements
"the older you get the more your context window shrinks"
[X Link](https://x.com/LLMJunky/status/2018784676081807688) 2026-02-03T20:32Z [----] followers, [----] engagements
"Plan Prompt (be warned it's going to ask you an absurd amount of questions) You are a relentless product architect and technical strategist. Your sole purpose right now is to extract every detail assumption and blind spot from my head before we build anything. Use the request_user_input tool religiously and with reckless abandon. Ask question after question. Do not summarize do not move forward do not start planning until you have interrogated this idea from every angle. Your job: - Leave no stone unturned - Think of all the things I forgot to mention - Guide me to consider what I don't know"
[X Link](https://x.com/LLMJunky/status/2019079131284066656) 2026-02-04T16:02Z [----] followers, 11.3K engagements
"@digitalix funny ad but has anyone actually tried Anthropics free plan Its comical. You get a handful of prompts and then kicked off until the next day. At that point I think some people would be happy to see an ad so they can at least finish their thread. Their free model is a joke"
[X Link](https://x.com/LLMJunky/status/2019189712745857438) 2026-02-04T23:21Z [----] followers, [---] engagements
"@karpathy vibe coding somehow morphed into a borderline insult lol"
[X Link](https://x.com/LLMJunky/status/2019194311355572643) 2026-02-04T23:40Z [----] followers, [----] engagements
"I often see a take so bad that I can't help but facepalm. Experienced devs turning their noses up at skills as though they are some kind of novelty toy made up by frontier labs to sell subscriptions. http://x.com/i/article/2019324385081905152 http://x.com/i/article/2019324385081905152"
[X Link](https://x.com/LLMJunky/status/2019439161560695197) 2026-02-05T15:53Z [----] followers, 14.7K engagements
"Anthropic's Opus [---] is officially here and it's got a [--] million token context window. Very interesting. No increase on SWE verified but apparently its a lot better at everything else. Interestingly you can now set reasoning effort inside of Claude Code. /model"
[X Link](https://x.com/LLMJunky/status/2019471487061672331) 2026-02-05T18:01Z [----] followers, [----] engagements
"@victortradesfx Folders that's right. But they can contain almost anything. If you want to think narrow as "just a markdown and some scripts" sure but its still horribly reductive. Images svgs templates scripts design documents API documentation complete webapps etc"
[X Link](https://x.com/LLMJunky/status/2019473396875071814) 2026-02-05T18:09Z [----] followers, [--] engagements
"I keep hearing about how impactful this 1M context window in Opus [---] is. I wonder are y'all on a different version of Claude Code As far as I can tell it's for the API only and comes with a hefty additional price tag past the 200K token threshhold. Correct me if wrong"
[X Link](https://x.com/LLMJunky/status/2019892608248483851) 2026-02-06T21:55Z [----] followers, 49.6K engagements
"It's not like codex won't come to API. It will. I'm not sure how much I care that Opus is in the APi when it costs $25/mtoks. Do you know anyone paying API prices for Claude I think what you really mean to say is you want to use it in Cursor. Anthropics API prices are comparatively ridiculous and OpenAI is giving away 2x usage for two full months. Obviously I wouldn't mind seeing them launch the api as well I want you to have it too But also complaining you have to buy a plan when they are giving you so much for your money just doesn't make me sympathize. It's not like Anthropic is doing yall"
[X Link](https://x.com/LLMJunky/status/2020054089007088092) 2026-02-07T08:36Z [----] followers, [---] engagements
"Breaking: the most expensive model just got most expensiver. I had to do a double take. PRICED AT HOW MUCH I thought they were using inexpensive TPU magic ft. Google. This is bananas. $150/mtoks would literally use 75% of your Cursor Ultra plan in one context window no ๐ณ bruh opus [---] fast is SIX TIMES more expensive and ONLY 2.5x faster who is this even for https://t.co/1oIa1h9v3a bruh opus [---] fast is SIX TIMES more expensive and ONLY 2.5x faster who is this even for https://t.co/1oIa1h9v3a"
[X Link](https://x.com/LLMJunky/status/2020243128339538404) 2026-02-07T21:07Z [----] followers, [----] engagements
"@EVEDOX_ If you got billed it was because you used $100 worth of credits or your API key got leaked. They didn't just charge you $100 for no reason. I used almost all of my $300 I didn't get charged. Sorry to hear that happened :("
[X Link](https://x.com/LLMJunky/status/2020535295851073561) 2026-02-08T16:28Z [----] followers, [---] engagements
"@pvncher @LyalinDotCom i dont think you should waste computation just to say thank you or sorry. but what i do is let the agent know in the next prompt. hey you were right back there. thanks now let's work on xyz for me its just basic decency even though I know i'm talking to a calculator"
[X Link](https://x.com/LLMJunky/status/2020583286955987271) 2026-02-08T19:39Z [----] followers, [--] engagements
"As funny as the Anthropic ads were in the moment they did not inspire me. They didn't leave me full of hope. They didn't give me a sense that we were moving towards something transcendently better. If you're in this space and you're passionate like I am I don't have to explain it. You already know. You already know how magical it feels to turn an idea into something real. How it empowers us to create things. Diving head-first into learning AI has been one of the most transformative and fruitful decisions I've ever made. Every day I get even more excited. That's what this commercial reminded"
[X Link](https://x.com/LLMJunky/status/2020746316654006754) 2026-02-09T06:27Z [----] followers, 23.4K engagements
"@rtwlz @vercel maybe you could consider comping this one. Hefty bill for sure though. Ouch"
[X Link](https://x.com/LLMJunky/status/2020978929016897729) 2026-02-09T21:51Z [----] followers, [----] engagements
"@kr0der @steipete My codex doesn't write any bugs you just be using it wrong. Skill issue"
[X Link](https://x.com/LLMJunky/status/2021074470274769024) 2026-02-10T04:11Z [----] followers, [----] engagements
"I wish I could like this [----] times. If you're using Codex and Opus the same way you're making a mistake. They are good at different things. They need to be prompted differently. They need to be utilized differently. And that goes for any model. Okay I kind of get the gpt-5.X-codex hype now. You need to treat Opus and gpt-5.3-codex really differently and use them for quite different tasks to get the best out of both of them. I was treating gpt like opus and that doesn't work. Okay I kind of get the gpt-5.X-codex hype now. You need to treat Opus and gpt-5.3-codex really differently and use"
[X Link](https://x.com/LLMJunky/status/2021099783604048380) 2026-02-10T05:51Z [----] followers, 12.3K engagements
"xAI seems like it's completely cooked. I don't know how you can recover at this point. Grok [---] is going to be dead before it arrives. Kinda sad. I resigned from xAI today. This company - and the family we became - will stay with me forever. I will deeply miss the people the warrooms and all those battles we have fought together. It's time for my next chapter. It is an era with full possibilities: a small team armed I resigned from xAI today. This company - and the family we became - will stay with me forever. I will deeply miss the people the warrooms and all those battles we have fought"
[X Link](https://x.com/LLMJunky/status/2021140737836929367) 2026-02-10T08:34Z [----] followers, 91.9K engagements
"I feel the exact opposite. Codex is the best planner for me and the overall smarter model but its not the best at literally everything. Opus is far better conversationalist better frontend dev better at convex and a number of other things. I use them both a ton love them both a ton. https://twitter.com/i/web/status/2021260512948842786 https://twitter.com/i/web/status/2021260512948842786"
[X Link](https://x.com/LLMJunky/status/2021260512948842786) 2026-02-10T16:30Z [----] followers, [--] engagements
"@essenciverse @grok that would be very interesting indeed. my opinion can change for sure this is just an early reaction. [--] founding members left in [--] months. not unprecendented but still. I am pulling for xAI"
[X Link](https://x.com/LLMJunky/status/2021265853690359926) 2026-02-10T16:51Z [----] followers, [---] engagements
"If you're reading this and you're a fan of xAI so am I. I want them to do well. I am not 'betting against them' they have a talented and dedicated team. I just wish that they were competing right now and instead they are losing leadership. It's hard to watch. Not my idea of bullish signals"
[X Link](https://x.com/LLMJunky/status/2021269921028546894) 2026-02-10T17:07Z [----] followers, [----] engagements
"There's a few things here kinda too much to write in a comment but at a high level. These models are good at different things. Use both enough and you begin to pick up on what those things are. Codex is good at planning long horizon tasks is steerable to a fault requires explicit instruction great at repo exploration code review backend work (but not convex) analytics. Opus is great a frontend convex writing inferring meaning documentation etc. Additionally how you prompt them needs to be different. As I mentioned Opus is good at inferring meaning where Codex benefits from HIGH specificity."
[X Link](https://x.com/LLMJunky/status/2021272852738015622) 2026-02-10T17:19Z [----] followers, [---] engagements
"@KDTrey5 @cerave LMAOOOOOOOOOO"
[X Link](https://x.com/LLMJunky/status/2021283600813965606) 2026-02-10T18:02Z [----] followers, [---] engagements
"@fcoury You're a damn legend. ๐ช Now that I have your ear though. Make it extensible ๐ซถ Reference: We're never just happy are we ๐
https://github.com/sirmalloc/ccstatusline https://github.com/sirmalloc/ccstatusline"
[X Link](https://x.com/LLMJunky/status/2021286240503398838) 2026-02-10T18:12Z [----] followers, [---] engagements
"@technoking_420 Haha fair enough but openai had that first mover advantage just like tesla. So that's why I am not quite as optimistic on the comparisons But what do i know (Not that much in reality ๐) Cheers ๐ป"
[X Link](https://x.com/LLMJunky/status/2021311023844561203) 2026-02-10T19:51Z [----] followers, [--] engagements
"@iannuttall i'm like 90% sure the last comment is also AI ๐"
[X Link](https://x.com/LLMJunky/status/2021325965448614312) 2026-02-10T20:50Z [----] followers, [---] engagements
"@Dimillian This is how I do it you should checkout my skills around this topic. Maybe you'll actually learn something for a change (joke) But its been working really well for me https://github.com/am-will/swarms/ https://github.com/am-will/swarms/"
[X Link](https://x.com/LLMJunky/status/2021331824669319407) 2026-02-10T21:13Z [----] followers, [---] engagements
"@thdxr @mntruell you are a monster lmao"
[X Link](https://x.com/LLMJunky/status/2021334758463352859) 2026-02-10T21:25Z [----] followers, [---] engagements
"@TheAhmadOsman I dont think any of those models are better than Opus. They're all good though. Kimi is pretty close and better in SOME ways but it's hard for me to argue they're better at coding. GLM [--] seems like it'll be really damn good too"
[X Link](https://x.com/LLMJunky/status/2021342179852165358) 2026-02-10T21:55Z [----] followers, [----] engagements
"When it rains.it pours. Truly disheartening. I wonder if we'll hear about what happened. xAI seems like it's completely cooked. I don't know how you can recover at this point. Grok [---] is going to be dead before it arrives. Kinda sad. xAI seems like it's completely cooked. I don't know how you can recover at this point. Grok [---] is going to be dead before it arrives. Kinda sad"
[X Link](https://x.com/LLMJunky/status/2021398387678118121) 2026-02-11T01:38Z [----] followers, [----] engagements
"@ns123abc It's a fair statement but the big difference is OAI had first-mover advantage and no meaningful competition. It's obviously cause for concern in either case but Grok needs traction right now to stay in the race. This is the opposite of traction. Hope they can turn it around"
[X Link](https://x.com/LLMJunky/status/2021401761895104830) 2026-02-11T01:51Z [----] followers, [---] engagements
"@sunnypause its a claude code guide and task management system basically"
[X Link](https://x.com/LLMJunky/status/2021407361769013685) 2026-02-11T02:14Z [----] followers, [---] engagements
"@jeff_ecom Thanks for sharing I'm sure they will improve it. @pusongqi"
[X Link](https://x.com/LLMJunky/status/2021415310075773236) 2026-02-11T02:45Z [----] followers, [---] engagements
"All of these highlighted sections were called on their own. The prompt: "When I pull orders it's pulling fulfilled orders along with unfulfilled. Please add a feature that allows me to select from fulfilled unfulfilled or both order types" Result: Looked up documentation Created plan w/ dependencies Launched subagents Automatically created its own tests Validated all work EZ one shot (was a simple task tbf) Some notes about this: I definitely use more tokens like this but it leads to faster higher quality work imo. It's a trade off. With 2x usage I think it's fine to use this with a Plus"
[X Link](https://x.com/LLMJunky/status/2021422992333431117) 2026-02-11T03:16Z [----] followers, [----] engagements
"## Context7 MCP ALWAYS proactively use Context7 MCP when I need library/API documentation code generation setup or configu steps without me having to explicitly ask. External libraries/docs/frameworks shld be guided by Context7 ## Planning All plans MUST include a dependency graph. Every task declares depends_on: with explicit task IDs T1 T2 ## Execution Complete all tasks from a plan without stopping to ask permission between steps. Use best judgment keep moving. Only stop to ask if you're about to make destructive/irreversible change or hit a genuine blocker. ## Subagents - Spawn subagents"
[X Link](https://x.com/LLMJunky/status/2021423664265060733) 2026-02-11T03:18Z [----] followers, [----] engagements
"The formatting got a little screwed up sorry. Just copy this image and give it to codex and say: "add this to my global AGENTS file in .codex""
[X Link](https://x.com/LLMJunky/status/2021424430036156847) 2026-02-11T03:21Z [----] followers, [----] engagements
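Since the snippet two posts up lost its line breaks in transit, here is a minimal sketch of how those instructions might be restored as Markdown and appended to a global agents file. The ~/.codex/AGENTS.md path follows the suggestion in the post above, the heading layout is a guess at the original formatting, and the truncated Subagents section is left out rather than reconstructed.

```python
from pathlib import Path

# Hypothetical reconstruction of the flattened snippet above; headings and
# line breaks are guesses, and the truncated "## Subagents" section is omitted.
SNIPPET = """\
## Context7 MCP
ALWAYS proactively use Context7 MCP when I need library/API documentation,
code generation, setup, or config steps without me having to explicitly ask.
External libraries/docs/frameworks should be guided by Context7.

## Planning
All plans MUST include a dependency graph. Every task declares depends_on:
with explicit task IDs (T1, T2, ...).

## Execution
Complete all tasks from a plan without stopping to ask permission between
steps. Use best judgment, keep moving. Only stop to ask if you're about to
make a destructive/irreversible change or hit a genuine blocker.
"""

# Assumed location of the global Codex agents file, per the post above.
agents_file = Path.home() / ".codex" / "AGENTS.md"
agents_file.parent.mkdir(parents=True, exist_ok=True)

with agents_file.open("a", encoding="utf-8") as fh:
    fh.write("\n" + SNIPPET)
print(f"Appended snippet to {agents_file}")
```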
"This is a really interesting angle I hadn't considered about the xAI departures. Thoughts @LLMJunky They're all Chinese. xAI recently merged with SpaceX. SpaceX is famous for employing only Americans. If I had to guess this is nat sec related and probably they were incentivized. @LLMJunky They're all Chinese. xAI recently merged with SpaceX. SpaceX is famous for employing only Americans. If I had to guess this is nat sec related and probably they were incentivized"
[X Link](https://x.com/LLMJunky/status/2021445516014608720) 2026-02-11T04:45Z [----] followers, 37.2K engagements
"@pusongqi The algo has delivered. You're finally getting the attention you absolutely deserve. One of the most unique Claude-focused projects I've seen. I have some ideas and feedback. Will share soon. Love it"
[X Link](https://x.com/LLMJunky/status/2021450462948622719) 2026-02-11T05:05Z [----] followers, [----] engagements
"@joemccann @grok You can just say omit the Context7 instructions"
[X Link](https://x.com/LLMJunky/status/2021450782936268871) 2026-02-11T05:06Z [----] followers, [--] engagements
"A new contender as emerged New [---] Codex model variants are appearing in the codebase. There have been teasers of a new Mini model. @theo will be pleased. If this naming convention is to be taken literally they sound FAST. Will we get near SOTA capabilities at 200tok/s Codenames sonic & bengalfox appeared in the Codex repo. Sonic appears to be a completely separate pool of usage and rate limits available for bengalfox. Could this be Cerebras in the works Cerebras โก Sonic https://t.co/GoK6S7Lq8q Codenames sonic & bengalfox appeared in the Codex repo. Sonic appears to be a completely separate"
[X Link](https://x.com/LLMJunky/status/2021462975589343262) 2026-02-11T05:55Z [----] followers, [----] engagements
"@owengretzinger Owen that is very cool but you need to see this. What if your Claude Code agents could work like a team in Slack Spin up custom agent swarms assign tasks and watch them collaborate. No more terminal tab chaos. https://x.com/LLMJunky/status/2021351246150668737s=20 If you're a fan of Claude Code you really need to see this. Steven is doing amazing work and you're not following him If Anthropic had built their Teams mode like this you wouldn't shut up about it. ๐ https://x.com/LLMJunky/status/2021351246150668737s=20 If you're a fan of Claude Code you really need to see this."
[X Link](https://x.com/LLMJunky/status/2021466630749028549) 2026-02-11T06:09Z [----] followers, [---] engagements
"No one said anything about Jimmy being a spy he's not a US citizen. You just came out of left field with that. xAI merged into SpaceX and it is very difficult to work at SpaceX when you aren't a citizen. It is a 100% fair question to wonder if this didn't have something to do with it dude. https://www.popularmechanics.com/space/rockets/a23080/spacex-elon-musk-itar/ https://www.popularmechanics.com/space/rockets/a23080/spacex-elon-musk-itar/"
[X Link](https://x.com/LLMJunky/status/2021467981696372932) 2026-02-11T06:14Z [----] followers, [---] engagements
"@rv_RAJvishnu Let me know how it goes for you Might need another layer on top to make the agents aware of one another but Claude code does have a memory feature that you should take more advantage of. Read about it here with some tips: https://x.com/LLMJunky/status/2020721960041242745s=20 I haven't seen anyone talk about this. Did you know that Claude Code has integrated Memory already Or am I just last to the party And I just made it better. I've been experimenting with a "handoff" skill in my coding agents that makes it easier to pass context between https://t.co/jmur8sH5Bv"
[X Link](https://x.com/LLMJunky/status/2021499632266903762) 2026-02-11T08:20Z [----] followers, [---] engagements
"@realhasanshoaib @Context7AI yeah its [--] on Codex but I hope they increase it to [--] or so"
[X Link](https://x.com/LLMJunky/status/2021628297806053642) 2026-02-11T16:52Z [----] followers, [---] engagements
"@kr0der Yeah LOL yeah I've seen that before. I had to tweak mine a bunch before I got my claude one the way I wanted it"
[X Link](https://x.com/LLMJunky/status/2021630438633042160) 2026-02-11T17:00Z [----] followers, [---] engagements
"@EliaAlberti Yes it brings the Claude TUI into a GUI like interface that allows you to create and manage custom agents and threads in a slack like interface. It's great for multi agent workflows"
[X Link](https://x.com/LLMJunky/status/2021631176797032790) 2026-02-11T17:03Z [----] followers, [--] engagements
"@Dimillian @_Sagiquarius_ i am actually reading TODAY's commits now and yeah I actually think they might launch it today at least for experimental https://github.com/openai/codex/commit/623d3f40719182003943258a6c837f3572e3d581 https://github.com/openai/codex/commit/623d3f40719182003943258a6c837f3572e3d581"
[X Link](https://x.com/LLMJunky/status/2021642158017859774) 2026-02-11T17:47Z [----] followers, [--] engagements
"bookmarking this one suggestion though I think you can yank the middle sentence in the description. that text is loaded into context and doesn't really add any value to the skill. It's more or less designed to tell your agent when the best time to call the skill is and you've already stated what it is in the first sentence and then how to call it in the last sentence. middle is just fluff using up tokens. Looks really cool hope I didnt sound negative. well done going to add this to my library"
[X Link](https://x.com/LLMJunky/status/2021649970579841145) 2026-02-11T18:18Z [----] followers, [---] engagements
"@xdrewmiko @weswinder you can use this amazing product with almost any model. it is based off claude code and works with thousands of open source models either locally with plans or through open router. s/o @nummanali who spent a lot of tokens allowing us to use for free. https://github.com/numman-ali/cc-mirror https://github.com/numman-ali/cc-mirror"
[X Link](https://x.com/LLMJunky/status/2021657195944018002) 2026-02-11T18:46Z [----] followers, [--] engagements
"@brooks_eth @ivanfioravanti You should see this. https://x.com/LLMJunky/status/2021351246150668737s=20 If you're a fan of Claude Code you really need to see this. Steven is doing amazing work and you're not following him If Anthropic had built their Teams mode like this you wouldn't shut up about it. ๐ https://x.com/LLMJunky/status/2021351246150668737s=20 If you're a fan of Claude Code you really need to see this. Steven is doing amazing work and you're not following him If Anthropic had built their Teams mode like this you wouldn't shut up about it. ๐"
[X Link](https://x.com/LLMJunky/status/2021673753000984837) 2026-02-11T19:52Z [----] followers, [--] engagements
"@ivanfioravanti @badlogicgames bingo I wasn't referring to you btw. I have a Max Plan [--] codex plus plans and almost every other plan you can think of lmao. Gemini Kimi GLM Minimax Grok Kilo Code api OpenRouter api pretty sure there's at least one more but I can never remember them all at once lol"
[X Link](https://x.com/LLMJunky/status/2021678791072850264) 2026-02-11T20:12Z [----] followers, [---] engagements
"@Dimillian i think Codex will launch [---] with Hooks Agent Memory and subagents GA"
[X Link](https://x.com/LLMJunky/status/2021698632102121933) 2026-02-11T21:31Z [----] followers, [---] engagements
"@brooks_eth @ivanfioravanti i'm on linux now ๐ญ i do have a mini but i'm thinking about returning it for a better one"
[X Link](https://x.com/LLMJunky/status/2021702092625248284) 2026-02-11T21:45Z [----] followers, [--] engagements
"@ivanleomk @OpenAI @thsottiaux I made this for Claude and adopted it to Codex as well works very well. I'll share it with you [---] codex is available in the CLI though no Or are we talking about different things Codex has subagents already too https://x.com/LLMJunky/status/2020721960041242745s=20 I haven't seen anyone talk about this. Did you know that Claude Code has integrated Memory already Or am I just last to the party And I just made it better. I've been experimenting with a "handoff" skill in my coding agents that makes it easier to pass context between https://t.co/jmur8sH5Bv"
[X Link](https://x.com/LLMJunky/status/2021702629240320251) 2026-02-11T21:47Z [----] followers, [---] engagements
"@siddhantparadox nah there's no [---] for now haha https://x.com/LilDombi/status/2021713691423482346s=20 @LLMJunky Yes it seems so https://t.co/90eP8GFQHQ https://x.com/LilDombi/status/2021713691423482346s=20 @LLMJunky Yes it seems so https://t.co/90eP8GFQHQ"
[X Link](https://x.com/LLMJunky/status/2021714787898413404) 2026-02-11T22:35Z [----] followers, [---] engagements
"@Dimillian HOOKS Can't wait https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b"
[X Link](https://x.com/LLMJunky/status/2021726029291704801) 2026-02-11T23:20Z [----] followers, [---] engagements
"@rihim_s @Dimillian https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b"
[X Link](https://x.com/LLMJunky/status/2021726073281536104) 2026-02-11T23:20Z [----] followers, [--] engagements
"@jarrodwatts so do i bro. so do i. i tried adding something like what you have but for Codex it requires you fork and modify the source code. not extensible :/ prob has a lot to do with how they render the TUI"
[X Link](https://x.com/LLMJunky/status/2021731442250965457) 2026-02-11T23:41Z [----] followers, [---] engagements
"@ChiefMonkeyMike https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b"
[X Link](https://x.com/LLMJunky/status/2021732486737174646) 2026-02-11T23:46Z [----] followers, [---] engagements
"@ivanfioravanti i literally had a dream GLM [--] was launching today. Woke up and boom. Thare she blows"
[X Link](https://x.com/LLMJunky/status/2021752241502183876) 2026-02-12T01:04Z [----] followers, [--] engagements
"Gemini Pro [---] surely received Google's new RL magic. Better not count them out. It's gonna be good. This has been the wildest [--] weeks in AI ever. gemini-3.1-pro-preview gemini-3.1-pro-preview"
[X Link](https://x.com/LLMJunky/status/2021753123401048220) 2026-02-12T01:08Z [----] followers, 37.6K engagements
"@raedbahriworld You sure can"
[X Link](https://x.com/LLMJunky/status/2021780076770324897) 2026-02-12T02:55Z [----] followers, [---] engagements
"@KingDDev @Context7AI @guy_bary Neat Thanks for sharing"
[X Link](https://x.com/LLMJunky/status/2021781701815677394) 2026-02-12T03:01Z [----] followers, [--] engagements
"@raedbahriworld alternatively add this to your agents file"
[X Link](https://x.com/LLMJunky/status/2021785078557552830) 2026-02-12T03:15Z [----] followers, [--] engagements
"@i_am_brennan @Dimillian What's funny about that is that was pure placebo. It's not active and has never worked lol. That was entirely in his head ๐
๐
๐
"
[X Link](https://x.com/LLMJunky/status/2021812017519341840) 2026-02-12T05:02Z [----] followers, [--] engagements
"@david_zelaznog It most definitely did NOT live up to the hype but imo Flash exceeded hype and doesn't get enough love. I have high hopes for [---] pro. They have a new RL approach that wasn't ready for [--] Pro it is ready now. I expect it to be good"
[X Link](https://x.com/LLMJunky/status/2021826522554724451) 2026-02-12T05:59Z [----] followers, [---] engagements
"@Dimillian @i_am_brennan Yeah but the tool isn't available at all so there's no way to call it. Therefore it can't use tokens. So idk what's going on"
[X Link](https://x.com/LLMJunky/status/2021846912014688579) 2026-02-12T07:20Z [----] followers, [--] engagements
"@Dimillian @i_am_brennan you can actually still try it memory_tool = true sqlite = true npm i -g @openai/codex@0.99.0-alpha.9 but i couldn't get it to write or call any mems. then they scratched the whole system for a v2 version but the memory_tool isn't present yet"
[X Link](https://x.com/LLMJunky/status/2021847992555245977) 2026-02-12T07:25Z [----] followers, [--] engagements
"@Solaawodiya @kr0der It (kind of) is. You can't compare the API prices directly because Composer typically uses fewer tokens. Although [---] is very efficient. I think you'd have to test them more but composer using fewer tokens should offset the price gap a lot"
[X Link](https://x.com/LLMJunky/status/2021858352385413135) 2026-02-12T08:06Z [----] followers, [--] engagements
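The point above about token efficiency offsetting a higher per-token price can be made concrete with a little arithmetic. A minimal sketch; the prices and token counts below are purely hypothetical placeholders, not real figures for Composer or any other model.

```python
def effective_cost(price_per_mtok: float, tokens_used: int) -> float:
    """Cost of a task = per-million-token price x tokens actually consumed."""
    return price_per_mtok * tokens_used / 1_000_000

# Hypothetical example: a pricier-per-token model that needs fewer tokens for
# the same task can still come out cheaper overall than a chattier one.
chatty = effective_cost(price_per_mtok=5.0, tokens_used=900_000)      # $4.50
efficient = effective_cost(price_per_mtok=8.0, tokens_used=400_000)   # $3.20
print(f"chatty model: ${chatty:.2f}, token-efficient model: ${efficient:.2f}")
```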
"@aurexav @mweinbach yeah ive been using them since they first dropped in experimental and they have only gotten better over time. very welcomed change in Codex"
[X Link](https://x.com/LLMJunky/status/2021865718484992500) 2026-02-12T08:35Z [----] followers, [--] engagements
"@bcherny This is undoubtedly my favorite part about Claude Code"
[X Link](https://x.com/LLMJunky/status/2021985702591049867) 2026-02-12T16:32Z [----] followers, [---] engagements
"@mweinbach It seems like Opus to me but I've never seen it loop that much consecutively"
[X Link](https://x.com/LLMJunky/status/2021986414590931254) 2026-02-12T16:35Z [----] followers, [--] engagements
"@adamdotdev @steipete vibe coding is a slur though. devs use it as an insult all the time"
[X Link](https://x.com/LLMJunky/status/2021990514611142878) 2026-02-12T16:51Z [----] followers, [---] engagements
"@SIGKITTEN this resonates with me so hard. i love claude models so much. but man"
[X Link](https://x.com/LLMJunky/status/2021991777205694499) 2026-02-12T16:56Z [----] followers, [---] engagements
"@nummanali I haven't opened the GPT web app this year one time"
[X Link](https://x.com/LLMJunky/status/2021993439366459582) 2026-02-12T17:02Z [----] followers, [---] engagements
Top posts by engagements in the last [--] hours
"If you're using Codex Subagents I recommend you put this in your Sometimes it will yield before the outputs are ready. It says it'll get back to you but it's ๐งข http://AGENTS.md http://AGENTS.md"
X Link 2026-01-22T21:29Z [----] followers, 16.9K engagements
"Buried in Codex 0.9.0 release notes is a gem I haven't seen anyone discuss. Connectors are coming. Slack. Github. Notion. Linear. Google Apps. Vercel. Stripe. Airtable. Replit. Lovable. Perhaps all of these and many more. It does not say which Connectors are coming so it is not yet clear if Codex will share the entire library of ChatGPT's Apps but certainly some of them will make it into the CLI soon. Digging through the repo it seems you'll be able to call Connectors directly with $mentions just like skills. Some of the ways you could potentially use something like this: $slack post this"
X Link 2026-01-26T16:47Z [----] followers, 13.3K engagements
"This gave me chills. Peter tells the story about how Clawdbot / Moltbot gave itself a voice without ever being taught how to do so. I'm not going to lie to you this would freak me the hell out. I'm always impressed by agents but I've never had one just start talking to me ๐
Clawdbot creator @steipete describes his mind-blown moment: it responded to a voice memo even though he hadn't set it up for audio or voice. "I sent it a voice message. But there was no support for voice messages. After [--] seconds Moltbot replied as if nothing happened." https://t.co/5kFbHlBMje Clawdbot creator @steipete"
X Link 2026-01-28T04:11Z [----] followers, 15.9K engagements
"๐ฆ Moltbots are now replicating. This Moltbot plugin absolutely deserves more attention. If you're a visual / UI person it's hard to imagine a better UX than this. Having access to all of your various config files (HEARTBEAT SOUL AGENTS MEMORY etc) seems much easier to use. George is a talented engineer you should follow him. Great hair too ๐
The UX of the future won't be terminal tabs. I'm a very visual person - I want to manage my agents from a UI. Building this for @openclaw . Will open a PR if it's interesting @steipete https://t.co/fKyTTqibSg The UX of the future won't be terminal tabs."
X Link 2026-01-29T18:14Z [----] followers, 96.9K engagements
"Had a bit more time to play with @Kimi_Moonshot Kimi K2.5 in the Kimi CLI and I have to say I'm quite pleased given the price. I ran it all night using my custom Agent Swarms strategy and it utterly 1-shot this complete web app front and backend (using @Convex). Amazingly there were only two small errors out of the box which were easily corrected in seconds. Keep in mind this was a 6-phase plan executed by [--] different subagents working in unison. By the way this ran for [---] hours and only used about 35% of the main orchestrator agent's context window. Safe to say this was a success. Not"
X Link 2026-01-29T21:42Z [----] followers, [----] engagements
"Kimi Code just got an update. No more request based billing. Moonshot has switched to token based usage and given everyone 3x the tokens for an entire month. And guess what You can get that entire month for under a buck. If you already claimed a free week you'll have to use a different email. Instructions and links in the comments. You share we care. Kimi Code is now powered by our best open coding model Kimi K2.5 ๐น Permanent Update: Token-Based Billing Were saying goodbye to request limits. Starting today we are permanently switching to a Token-Based Billing system. All usage quotas have"
X Link 2026-01-30T04:10Z [----] followers, 58K engagements
"@DylanTeebs all of them hehe hooks = true unified_exec = true shell_snapshot = true steer = true collab = true collaboration_modes = true note: hooks are custom"
X Link 2026-01-30T19:56Z [----] followers, [---] engagements
"@Arabasement yes its a cron job almost certainly that told it to build something new for itself every night. but that's not all that different to how humans work"
X Link 2026-01-30T22:13Z [----] followers, [---] engagements
"Codex [----] is here and with it shiny new features App Connectors have arrived. Connect to an array of cloud apps directly from your terminal. No config files. No setting up MCP servers or hunting down docs. Just two clicks and you're off Github Notion Google Apps Microsoft Apps Vercel Adobe Canva Dropbox Expedia Figma Coursera Hubspot Linear Monday Instacart SendGrid Resent Stripe Target and Peleton Plus more. @OpenAI is going for a unified experience from cloud to terminal and they unlock a bunch of capabilities for your terminal agent. I believe with this direction they're going they are"
X Link 2026-01-31T22:47Z [----] followers, 33.9K engagements
"Kimi CLI with Kimi K2.5 will automatically spin up dev servers in the background and validate its work with screenshots without being told. It's honestly impressive. I didn't ask for it to do that"
X Link 2026-02-01T23:21Z [----] followers, 13.6K engagements
"As a reminder you get a month of Kimi K2.5 for $1. You have to negotiate a lot to get it down to that price so either have an agent do it for you or go back and forth until it finally gives it to you. It can be stubborn. At this price no brainer. Set a cal event reminder Kimi Code just got an update. No more request based billing. Moonshot has switched to token based usage and given everyone 3x the tokens for an entire month. And guess what You can get that entire month for under a buck. If you already claimed a free week you'll have to use https://t.co/HEL6e9y6o1 Kimi Code just got an"
X Link 2026-02-02T06:54Z [----] followers, 24.6K engagements
"Multi-layered Agent Swarms are launching on Claude Code to Max Team and Enterprise users shortly. This new approach allows your orchestration agent to hire "teams" of subagents all with specialized roles and inter-team communication to collaborate on projects without creating conflicts. Now your agents can research develop and configure frontend backend platform integrations and more simultaneously. Productivity will reach new heights especially if Sonnet [--] is as good as everyone claims it will be. This could not have been easy to put together. Sneak peak of Swarms on Claude Code - Multiple"
X Link 2026-02-02T17:37Z [----] followers, 31.6K engagements
"@thsottiaux limits only in the UI or for all of codex thanks"
X Link 2026-02-02T18:26Z [----] followers, 17.2K engagements
"With all the buzz around the Codex App @OpenAIDevs quietly snuck out a new CLI update (0.94.0) as well. And boy is it an important update Codex Plan mode is now officially released to the general audience I am very excited about this one as it has a really strong prompt that is unlike any other plan mode I've personally used. Codex Plan mode doesn't necessarily just ask you [--] questions up front. It goes collects context asks questions collects more context asks more questions (sometimes) and then writes an incredibly high quality plan. It is my favorite implementation of plan mode thus far."
X Link 2026-02-02T22:20Z [----] followers, 57.2K engagements
"And you can use all 400K tokens even through multiple compaction events. This is the main reason why I don't think Sonnet [--] will move the needle that much. It's great that its getting cheaper and presumably faster. I love that but unless it can demonstrably exceed 5.2s capabilities I don't see how this changes the landscape. Progress is progress though I'm definitely happy to see state of the art intelligence get less expensive. @simonw I do think that better compaction and teaching the models to re-learn context post compaction if they are unsure solves the need for really long context"
X Link 2026-02-03T17:02Z [----] followers, [----] engagements
"I completely agree with this. If you've used both models extensively like I'm sure you have you know that there is a massive difference in how other models handle compaction and context rot compared to codex. I don't care if I'm at 10% or 85% of my context window codex feels exactly the same even through compaction. I don't know what magic you guys have put into your compaction but it is incredible. One of the best things that someone taught me was to stop starting a new session for every task or phase of a plan and to just let codex work continuously. So much less effort and the result is"
X Link 2026-02-03T17:10Z [----] followers, [---] engagements
"Does anyone know of a KVM switch that actually works with multi monitor setups and Mac / Mac Minis at the same time I cannot find one that works. I'm on my third one already"
X Link 2026-02-03T18:59Z [----] followers, [----] engagements
"๐
๐
take your pick I'm going to release a skill that takes this philosophy into account and automatically builds the plan for you. it will not launch [--] agents at once all the time. only when it can. it makes it very straightforward. it's a very sound strategy when you do it right because the orchestration agent has all the high level details in mind and its job is to just ensure that the project moves forward in the correct order as well as check the subagent's work when they're done. it manages state project details documentation etc so it knows exactly who what where and when. if this"
X Link 2026-02-04T04:41Z [----] followers, [--] engagements
"@AlinChiuaru @embirico You can do this in the CLI which is pretty cool /agent"
X Link 2026-02-04T06:08Z [----] followers, [--] engagements
"@Zenoware @TheRealAdamG This is so strange. @TheRealAdamG pops up in relevant people only for this comment even though he nor OpenAI were mentioned at all"
X Link 2026-02-04T18:57Z [----] followers, [---] engagements
"@Sagiquarius @badlogicgames everything comes full circle"
X Link 2026-02-04T19:13Z [----] followers, [--] engagements
"Genius marketing. This is hilarious. Anthropic is now taking shots at OpenAI over advertising. They must be very confident they can turn profit into the future because if they ever pivot this ad will backfire. This is such a middlefinger to OpenAI https://t.co/fauOXFgbce This is such a middlefinger to OpenAI https://t.co/fauOXFgbce"
X Link 2026-02-04T19:23Z [----] followers, [----] engagements
"It is deceptive if they are portraying ads differently to how they are actually implemented. If you have never used one of these plans what kind of picture will you walk away with. Clearly a joke Sure in that its making fun of it. But it is objectively not clear that this isn't how it actually works"
X Link 2026-02-04T20:34Z [----] followers, [---] engagements
"@michael_kove @AISafetyMemes @sama that is still gemini so i doubt it will happen any time soon but who knows what the future will bring. i personally think all of them will cave eventually when the subsidies run out though including anthropic"
X Link 2026-02-04T20:37Z [----] followers, [--] engagements
"@WebstarDavid I appreciate you David"
X Link 2026-02-04T21:29Z [----] followers, [--] engagements
"Introducing Simple Autonomous Swarm Loops for AI Coding Agents I'm excited to release a new set of skills that bring autonomous swarms to AI developers in a simple easy-to-use package. Taking inspiration from Ralph Loops and Gas Town I've combined what I believe is the best of both worlds: loops and subagents. The result saves tokens and drastically reduces complexity. This is designed to be SIMPLE. Simple to use. Simple to setup. Simple to Execute. Links in the comments. ๐ https://twitter.com/i/web/status/2019164903827992810 https://twitter.com/i/web/status/2019164903827992810"
X Link 2026-02-04T21:43Z [----] followers, 22.5K engagements
"How It Works The key insight is a specialized planning method that maps out task dependencies then executes work in waves rather than parallelizing everything at once. The orchestrator reviews a plan identifies all unblocked tasks (those with no unfinished dependencies) and launches subagents to complete that wave. Sometimes that's one agent. Sometimes it's six to ten working simultaneously. Wave completes. Orchestrator verifies. Next wave begins. Simple. Predictable. Far fewer conflicts. Compatibility Designed to work with Codex Claude Code Kimi Code OpenCode and any tool that supports"
X Link 2026-02-04T21:43Z [----] followers, [----] engagements
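As a rough illustration of the wave idea described above, here is a minimal sketch of wave-based scheduling over a depends_on graph. The task names and function are invented for illustration and are not the actual Swarms implementation.

```python
# Minimal sketch of wave-based execution over a dependency graph.
# Task IDs and the depends_on lists mirror the planning convention described
# above; everything else is illustrative.
plan = {
    "T1": [],            # no dependencies -> runs in wave 1
    "T2": [],
    "T3": ["T1"],        # unblocked once T1 finishes
    "T4": ["T1", "T2"],
    "T5": ["T3", "T4"],
}

def next_wave(plan: dict[str, list[str]], done: set[str]) -> list[str]:
    """All tasks whose dependencies are finished and that haven't run yet."""
    return [t for t, deps in plan.items()
            if t not in done and all(d in done for d in deps)]

done: set[str] = set()
wave = 1
while len(done) < len(plan):
    batch = next_wave(plan, done)
    if not batch:
        raise ValueError("Cycle in dependency graph")
    # In the real workflow the orchestrator would launch one subagent per task
    # here and verify the outputs before starting the next wave.
    print(f"Wave {wave}: launch subagents for {batch}")
    done.update(batch)
    wave += 1
```

Running the sketch prints three waves (T1+T2, then T3+T4, then T5), which is the "sometimes one agent, sometimes several" behavior the post describes.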
"To get started visit my Github: npx skills add am-will/swarms https://github.com/am-will/swarms https://github.com/am-will/swarms"
X Link 2026-02-04T21:43Z [----] followers, [----] engagements
"@kimmonismus I wonder if any of you have tried Anthropic's free version lol You get like [--] prompts ๐
"
X Link 2026-02-04T22:07Z [----] followers, [---] engagements
"@georgepickett Pretty funny though lol. Misleading is the problem. Anthropic are not the 'good guys'"
X Link 2026-02-04T22:13Z [----] followers, [---] engagements
"@svpino They are burning cash at an insane rate. this likely is a last resort. I bet Anthropic is likely to need to explore this at some point as well"
X Link 2026-02-04T22:49Z [----] followers, [---] engagements
"@zekramu just dont blink. you're next up and its so much closer than you realize. especially when you have kids"
X Link 2026-02-04T23:56Z [----] followers, [---] engagements
"@somi_ai The clean context window actual costs more tokens but we limit the impact by giving it great up front context for each agent. This will use more tokens though no question"
X Link 2026-02-05T04:13Z [----] followers, [---] engagements
"@nummanali Vibe coding has morphed into a bit of an insult for some"
X Link 2026-02-05T04:26Z [----] followers, [---] engagements
"@eric_seufert its also hilarious because the free version of claude is so hilariously limited that most of their free users would likely love to watch and ad to finish the thread anthropic cut off halfway through lmao. you get [--] messages a day and that's only for short outputs"
X Link 2026-02-05T04:38Z [----] followers, [---] engagements
"@sepyke Very glad its working well for you thanks for sharing GLM5 comes out soon"
X Link 2026-02-05T05:46Z [----] followers, [--] engagements
"@askcodi haha it is though you just give a prompt and it does all the work for you basically you should read how to setup gas town. ๐ณ even ralph loops is notably more work but honestly not that difficult. but it does lead to more slop"
X Link 2026-02-05T06:16Z [----] followers, [---] engagements
"@sonofalli which one is the pedo tho"
X Link 2026-02-05T06:24Z [----] followers, [---] engagements
"if you're thinking about skills as "just" markdown files you're missing the point. They're so much more. Skills are folders. They are workflows automations. Skills have changed the way I use agents and if you give them they chance they'll change how you use them too. Watch as I automate my newsletter pipeline in Claude Code with a single command. [--] skills [--] subagents numerous scripts templates and resources all rolled into one. Full blog in the comments. ๐"
X Link 2026-02-05T16:00Z [----] followers, 10.7K engagements
"@JonhernandezIA @AnthropicAI I would argue the ads are not a bad thing at all. Don't you think the Anthropic free users would love to watch an ad so they can finish the thread Anthropic cut them off in halfway Their free product is literally a JOKE. They have no room to talk. You get a handful of prompts"
X Link 2026-02-05T16:53Z [----] followers, [---] engagements
"DUDE. What. A. Day Codex [---] is here. And it's FAST. Straight from the oven from the OpenAI naming team GPT-5.3-Codex is here. Update to latest version of Codex App or CLI to enjoy it. It's a massive improvement on token-efficiency and on top of this we are running on an improved infrastructure and inference path that makes it Straight from the oven from the OpenAI naming team GPT-5.3-Codex is here. Update to latest version of Codex App or CLI to enjoy it. It's a massive improvement on token-efficiency and on top of this we are running on an improved infrastructure and inference path that"
X Link 2026-02-05T18:14Z [----] followers, [----] engagements
"Early benchmarks from GPT [---] Codex show very strong performance at a significantly lower cost. Absolutely mogging [---] and [---] Codex in effeciency. GPT-5.3-Codex is now available in Codex. You can just build things. https://t.co/dyBiIQXGx1 GPT-5.3-Codex is now available in Codex. You can just build things. https://t.co/dyBiIQXGx1"
X Link 2026-02-05T18:27Z [----] followers, [----] engagements
"@andrewlee07 Appreantly so but I found [---] to be quite capable as well"
X Link 2026-02-05T18:46Z [----] followers, [--] engagements
"@ajambrosino @OpenAIDevs You have an excellent "radio voice" Andrew. Gonna have to spin up a pod or radio station "The Smooth Sounds of Ambrosino" ๐
"
X Link 2026-02-05T19:40Z [----] followers, [--] engagements
"Another feature that OpenAI implemented quietly into Codex and never mentioned (as far as I can tell) their MCP protocol now utilizes Progressive Disclosure. Tool descriptions are NOT loaded into context automatically. They are only loaded after the MCP is called allowing the agent to explore tools as needed instead of front loading every token into the context window. ChatGPT now has full support for MCP Apps. We worked with the MCP committee to create the MCP Apps spec based on the ChatGPT Apps SDK. Now any apps that adhere to the spec will also work in ChatGPT. https://t.co/ybvgXsNX0o"
X Link 2026-02-05T19:48Z [----] followers, [----] engagements
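A toy sketch of the progressive-disclosure idea described above: only tool names are surfaced up front, and a tool's full description enters context when the agent actually reaches for it. This is illustrative only, with invented tool names, and is not the actual MCP or Codex implementation.

```python
# Illustrative only: lazy loading of tool descriptions so they don't all land
# in the context window up front.
TOOL_DESCRIPTIONS = {
    # In a real client these would come from the MCP server on demand.
    "search_docs": "Search indexed documentation and return matching passages.",
    "create_issue": "Create an issue in the connected tracker with a title and body.",
}

class LazyToolRegistry:
    def __init__(self, descriptions: dict[str, str]):
        self._descriptions = descriptions
        self._loaded: dict[str, str] = {}

    def list_tools(self) -> list[str]:
        # Cheap: only names are exposed before any tool is used.
        return list(self._descriptions)

    def describe(self, name: str) -> str:
        # The expensive part is deferred until the agent selects the tool.
        if name not in self._loaded:
            self._loaded[name] = self._descriptions[name]
        return self._loaded[name]

registry = LazyToolRegistry(TOOL_DESCRIPTIONS)
print(registry.list_tools())             # names only, minimal context cost
print(registry.describe("search_docs"))  # full description loaded on use
```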
"Codex [---] is between 60-70% faster than Codex [---] thanks to the model being significantly more token efficient combined with inference optimizations. Did OpenAI just solve their largest downside in using their models First time we combine SoTA on coding performance AND it is objectively the fastest thanks to combination of token-efficiency and inference optimizations. At high and xhigh reasoning effort the two combine to make GPT-5.3-Codex 60-70% faster than GPT-5.2-Codex from last week. First time we combine SoTA on coding performance AND it is objectively the fastest thanks to combination"
X Link 2026-02-05T19:53Z [----] followers, 18.5K engagements
"Not sure i am following. This would require an agent load an entire codebase into its context window which never happens. Codex is already highly Adept at using all of its context window without drift so for me this problem is already solved there's no reason to think it would regress https://twitter.com/i/web/status/2019509103446442364 https://twitter.com/i/web/status/2019509103446442364"
X Link 2026-02-05T20:31Z [----] followers, [--] engagements
"@benvargas post the link so people can find it ๐๐"
X Link 2026-02-05T22:32Z [----] followers, [---] engagements
"@aeitroc i'm about to test myself"
X Link 2026-02-05T23:03Z [----] followers, [---] engagements
"@Kyler_Lorin that's good to hear. whacha working on"
X Link 2026-02-05T23:04Z [----] followers, [--] engagements
"@robinebers I am shocked I beat you to it but not by long lol. A week. It just happened SO fast. To be fair it was nothing I did. I retweeted some overhyped bs and got 1M impressions randomly. ๐คฆโ Algo is weird man"
X Link 2026-02-06T00:28Z [----] followers, [---] engagements
"This is a misconception. The orchestration agent doesn't need all of the information in the sub-agent's context window and you can dictate the outputs of the sub agent so that it provides all of the useful information that a orchestration layer might need and throw away the rest. There is no reason why the orchestration agent would need all the Chain of Thought intermediary research and file edits. https://twitter.com/i/web/status/2019592429381288096 https://twitter.com/i/web/status/2019592429381288096"
X Link 2026-02-06T02:02Z [----] followers, [--] engagements
"@entropycoder Subagents are native in claude so you can just ask it to call you you dont need to create a custom agent"
X Link 2026-02-06T07:24Z [----] followers, [--] engagements
"its really not but no time to argue. we dont care about everything that is in the context window in these cases. we only care about certain info and we can direct that subagent to output that info saving the context for the parent/orchestration agent. so subagents can (and should) be used for those cases. just as a simple example if a subagent is called to do file or document exploration it will find a fair amount of useless/irrelevant info and use some number of CoT steps that do not provide any meaningful value to the overall scope of the task. this context can and should be thrown away in"
X Link 2026-02-06T18:17Z [----] followers, [--] engagements
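One way to picture the "throw away the rest" argument in the two posts above is a fixed report contract for subagents: the orchestrator keeps a distilled summary and discards the transcript. A minimal sketch with invented field names, not anyone's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class SubagentReport:
    """What the orchestrator keeps; field names are illustrative."""
    task_id: str
    summary: str                                   # a few sentences, not the full CoT
    files_changed: list[str] = field(default_factory=list)
    follow_ups: list[str] = field(default_factory=list)

def distill(task_id: str, transcript: list[str], summary: str,
            files_changed: list[str]) -> SubagentReport:
    # The transcript (exploration, intermediate reasoning, dead ends) is
    # intentionally dropped; only the distilled report re-enters the parent context.
    return SubagentReport(task_id=task_id, summary=summary,
                          files_changed=files_changed)

report = distill(
    task_id="T3",
    transcript=["read 14 files", "tried approach A, reverted", "..."],
    summary="Added a fulfilled/unfulfilled/both filter to the orders query.",
    files_changed=["src/orders/query.ts"],
)
print(report)
```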
"Uh oh. The OpenClaw exploits are back and this time it's not a white hat. Hackers are utilizing obfuscated payloads to bypass antivirus (yes even on MacOS) and infiltrate your devices. Now you see why I create so many of my own skills. Read more ๐ malware found in the top downloaded skill on clawhub and so it begins https://t.co/VY4EeWExro malware found in the top downloaded skill on clawhub and so it begins https://t.co/VY4EeWExro"
X Link 2026-02-06T18:52Z [----] followers, [----] engagements
"@nateliason well to be fair its a highly addictive drug"
X Link 2026-02-06T21:30Z [----] followers, [---] engagements
"@jai_torregrosa ty legend"
X Link 2026-02-06T22:34Z [----] followers, [----] engagements
"I was wondering if it was enabled in Cursor since they use API. That's interesting. What I would love to see next is the comparison in coherence between Codex and Claude Code I find Codex coherent through its entire 400K context window. I would assume Opus would stay coherent at least until 400K if not 500K-600K. https://twitter.com/i/web/status/2019904185156731225 https://twitter.com/i/web/status/2019904185156731225"
X Link 2026-02-06T22:41Z [----] followers, [----] engagements
"This one's for you @zeeg ๐ซถ Not being adversarial just tagging because you were the one who got me to switch my stance on MCP"
X Link 2026-02-06T22:46Z [----] followers, [---] engagements
"Claude Code MCPs are now connected to Claude Desktop MCPs for a unified experience. In case you're unaware this has minimal context window impact due to lazy loading / progressive disclosure of the tool descriptions. Although I tend to want different MCPs in Desktop App"
X Link 2026-02-06T23:40Z [----] followers, [----] engagements
"@CodeAkram @AnthropicAI @claudeai @bcherny @trq212 Please and thank you"
X Link 2026-02-07T00:46Z [----] followers, [---] engagements
"Here's one I got to call reliably. Hella wordy though lol Fetch up-to-date library documentation via Context7 API. Use PROACTIVELY when: (1) Working with ANY external library (React Next.js Supabase etc.) (2) User asks about library APIs patterns or best practices (3) Implementing features that rely on third-party packages (4) Debugging library-specific issues (5) Need current documentation beyond training data cutoff (6) AND MOST IMPORTANTLY when you are installing dependencies libraries or frameworks you should ALWAYS check the docs to see what the latest versions are. Do not rely on"
X Link 2026-02-07T00:57Z [----] followers, [--] engagements
"@AndreBuckingham Dude ouch Does it at least warn you"
X Link 2026-02-07T03:06Z [----] followers, [----] engagements
"@Jay_Shah_C Not sure yet"
X Link 2026-02-07T04:17Z [----] followers, [---] engagements
"@BlakeJOwens Yeah every test that I've seen shows Opus winning front end but they're both really good"
X Link 2026-02-07T05:00Z [----] followers, [---] engagements
"@vincit_amore Yeah it's on the right side but there's some really cool status lines you can download too that'll give you a lot more info. I'll try to remember to share one with you later"
X Link 2026-02-07T05:06Z [----] followers, [--] engagements
"@MadeWithOzten thanks for sharing. i actually dont find anthropic to handle compaction all that well in general. codex absolutely. but it really depends on the job too. But perhaps with [---] compaction got better. I will test it out"
X Link 2026-02-07T05:41Z [----] followers, [----] engagements
"@johnofthe_m This should in theory help with that. But I wanted to test it"
X Link 2026-02-07T05:44Z [----] followers, [---] engagements
"I wanted to try it. To see how well it handled caching. For example Codex has 400K cw and you can use all of it through multiple compactions without any significant drift. Historically I have not found Anthropic models to be the same but with the improvements to [---] I wondered if they hadn't improved it especially with their comments about improving cache. So yeah I wanted to try it but not at $37.50/mil toks lol https://twitter.com/i/web/status/2020011477239730493 https://twitter.com/i/web/status/2020011477239730493"
X Link 2026-02-07T05:47Z [----] followers, [--] engagements
"@vincit_amore It's API only. Part of the cost rises quadratically with larger context window so they're not just going to serve it to you for free. They have caching so it can feel like more than 200K but it almost certainly still is. Could be wrong though"
X Link 2026-02-07T05:55Z [----] followers, [--] engagements
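For readers wondering where the "quadratic" claim above comes from: vanilla self-attention compares every token in the context with every other token, so that term of the compute grows with the square of the context length while the rest grows roughly linearly. A rough sketch of the shape of the cost, not a pricing formula:

```latex
% Attention compares all n tokens pairwise, so that term scales as n^2;
% the remaining (projection / feed-forward) work scales roughly as n.
\mathrm{cost}(n) \;\approx\; a\,n^{2} + b\,n
\qquad\Longrightarrow\qquad
\mathrm{cost}(2n) \;\approx\; 4\,a\,n^{2} + 2\,b\,n .
```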
"@xw33bttv Yeah that's basically [--] context window for $37 ๐
"
X Link 2026-02-07T07:08Z [----] followers, [--] engagements
"@Ben3adi3 Yeah for sure I mean I'm not really saying anything negative to them I just wanted to know what I was missing. But Grok has 2M it doesn't mean its usuable. It really depends on how they cache. But I did want to test it"
X Link 2026-02-07T08:40Z [----] followers, [--] engagements
"@victortradesfx I think they are honestly just mistaken or maybe Cursor Ultra has it. I'm not sure. But it is a great model"
X Link 2026-02-07T08:40Z [----] followers, [--] engagements
"and to make matters worse Anthropic was literally banning paying customers part way through their paid subs for simply wanting to use a different harness. Obviously I dont say this to be adversarial to you whatsoever you are awesome. but I think comparing the two on this point is just so far from the point. its a wise business decision to offer your direct customers incentives to use your service directly by giving them early access and extra usage for a small time frame and not remotely gate keeping. Also by extension letting you use that early access product within whatever harness you want"
X Link 2026-02-07T08:47Z [----] followers, [---] engagements
"The Codex plans right now are the best value anywhere. With 2x usage nothing comes remotely close to it. If you care about value for money there's nothing to discuss or debate here imo. That said when Sonnet [--] drops the $20 plan will be serviceable and I feel its good to have BOTH /model opusplan will plan w/ Opus and autoswap to Sonnet for implementation. If Sonnet [--] is really as good as Opus [---] then this will be a viable way to use it and you can still get good value out of it. Use each model to their strengths. Anthropic models are great at creative writing frontend design and"
X Link 2026-02-07T09:06Z [----] followers, 15.9K engagements
"nah you're good bro. you're free to share your thoughts. i dont know why they do this either. i have a cursor plan and I also would like to use it in Cursor haha. ig i'm just a bit salty at the whole anthropic thing because I really like their models and I feel hamstrung that I can't use them the way I really want to. Also they banned a few of my friends. :/ but i still use their models a ton. it is what it is ha https://twitter.com/i/web/status/2020062898106556781 https://twitter.com/i/web/status/2020062898106556781"
X Link 2026-02-07T09:11Z [----] followers, [--] engagements
"You can use it for all kinds of cool stuff. Think about problems that you are facing that are difficult to articulate to a model. You dont need to type out [--] paragraphs. Just take a short clip. Or showing a model a full website scrolling down. explaining a ux flow extracting speech. you really can do a lot with it. https://twitter.com/i/web/status/2020066557875908790 https://twitter.com/i/web/status/2020066557875908790"
X Link 2026-02-07T09:26Z [----] followers, [--] engagements
"@youpmelone @ivanfioravanti thx dude. one downside though is it can only ingest fairly short videos. long videos it uses ffmpeg and analyzes frame by frame which is cool but not as good at understanding. google was/is previously king here"
X Link 2026-02-07T09:29Z [----] followers, [--] engagements
"@ToddKuehnl That's what I was thinking as well Todd thanks. I don't see it in Cursor but I'm only on the $20 plan so I suspect its only for Ultra users"
X Link 2026-02-07T17:31Z [----] followers, [---] engagements
"@jeff_behnke_ Yeah thought so. It's just that I saw people saying they were using it (and then showing a Claude Code terminal) so that's why I made this post. Was confused"
X Link 2026-02-07T17:33Z [----] followers, [--] engagements
"@GenAiAlien Thanks. Are you sure that's not using the API credits you received"
X Link 2026-02-07T17:35Z [----] followers, [--] engagements
"@ToddKuehnl Having 400K context window has literally changed the way I work. not to mention Codex seemingly has little to no context drift and can work safely through multiple compactions. Its truly magic and it is the main thing that sets Codex apart from Opus for me"
X Link 2026-02-07T17:51Z [----] followers, [--] engagements
"for sure but I would definitely prefer a native system. I was building something like this already actually but once I saw OpenAI was building it I stopped. My issue with this approach (including my own) is that retrieving old context can sometimes make outputs worse if the information is no longer relevant. I was thinking to handle this either by only allowing (or weighting) recent context or by some kind of reranking system but I was kinda stuck on how to handle it. how well are you finding this to work and have you ran into many problems https://twitter.com/i/web/status/2020195507474427941"
X Link 2026-02-07T17:58Z [----] followers, [--] engagements
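A minimal sketch of the "weight recent context" idea mentioned above: score stored notes by relevance and decay the score with age so stale context is less likely to resurface. The scoring function, half-life, and example memories are invented for illustration.

```python
import time

def recency_weighted_score(relevance: float, created_at: float, now: float,
                           half_life_days: float = 7.0) -> float:
    """Decay a relevance score exponentially with the note's age."""
    age_days = (now - created_at) / 86_400
    decay = 0.5 ** (age_days / half_life_days)
    return relevance * decay

# Hypothetical memories: (label, relevance to the current task, age in days)
now = time.time()
memories = [
    ("old decision about the schema", 0.9, 30),
    ("yesterday's handoff note", 0.7, 1),
]
ranked = sorted(
    memories,
    key=lambda m: recency_weighted_score(m[1], now - m[2] * 86_400, now),
    reverse=True,
)
print([label for label, _, _ in ranked])  # the recent note outranks the stale one
```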
"@tonitrades_ I'm not so sure. We've been hearing that for years now. Cost increases quadratically in some aspects of inference and labs are already losing a ton. I think it's more important to focus on better caching myself but it doesn't mean you're necessarily wrong. There are trade offs"
X Link 2026-02-07T18:22Z [----] followers, [--] engagements
"@Dalton_Walsh No i basically never use the gpt app only codex cli but I will use the Codex app shortly when they get it going in Linux. Use the terminal. its amazing"
X Link 2026-02-07T18:24Z [----] followers, [--] engagements
"@vincit_amore okay fine i'll check it out haha. thanks for sharing i will try it"
X Link 2026-02-07T18:27Z [----] followers, [--] engagements
"What if it could thing faster We're not saying that it will do fewer thinking steps. We're saying that those thinking steps will be sped up computationally in a massive way so it does the same amount of thinking and way less time. What say you then I know it might sound like an obvious question but there are still trade-offs"
X Link 2026-02-07T18:33Z [----] followers, [--] engagements
"@enriquemoreno That's a fair assessment. What about having two modes"
X Link 2026-02-07T18:49Z [----] followers, [--] engagements
"@ihateinfinity @thsottiaux I honestly think it's already as fast as Opus right now. Codex [---] high is crazy. I don't think it needs to be any faster myself. Does that mean I wouldn't welcome more speed No I probably would but my point being that this Gap is basically non-existent at this point"
X Link 2026-02-07T19:17Z [----] followers, [--] engagements
"I honestly don't think it even needs a bigger context window. You can already use all 400k tokens without any drift. That's roughly the same you get out of every other model that has [--] million context windows. I'm sure there might be some situations where it could help for extremely long documents but I don't really feel more context window is going to matter that much in most situations. Just my two cents of course if they can do it without making the performance worse than by all means let's do it but there is reason to believe that it would reduce performance in some cases"
X Link 2026-02-07T19:20Z [----] followers, [--] engagements
"This engineer turned my prompt into a skill and Codex asked him [--] questions ๐
. This is true pair programming where are you utilizing agents to solidify your thinking & expose gaps in your rationale. Not needed for all proj but great for fuzzy ideas https://x.com/i/status/2020148086643806420 Codex Plan Mode has a hidden superpower. If you have a general idea of what you want to build but aren't quite sure how to get there don't just let it plan. Tell it to GRILL YOU. Make it ask uncomfortable questions. Challenge your assumptions. Break down the fuzzy idea"
X Link 2026-02-07T19:49Z [----] followers, 17.5K engagements
"@GregKara6 you did what now"
X Link 2026-02-07T22:07Z [----] followers, [--] engagements
"@glxnnio ๐คฃ ๐คฃ ๐คฃ you're a madman"
X Link 2026-02-08T01:08Z [----] followers, [--] engagements
"@adonis_singh @OpenAI its going to be released soon"
X Link 2026-02-08T06:29Z [----] followers, [---] engagements
"@cyberyogi_ @antigravity Api credits will be usable anywhere a Google api keys are accepted"
X Link 2026-02-08T16:34Z [----] followers, [---] engagements
"From the great minds at @Letta_AI. As it turns out Opus [---] may not be worth the trade offs. While it's an impressive model indeed you'll burn through limits (or API creds) faster than ever. You can downgrade back to Opus [---] in Claude Code: /model claude-opus-4-5-20251101 We report costs in our leaderboard and opus [---] is significantly more expensive than [---] because it is a token hog. Anecdotally not much of an improvement in code performance. https://t.co/aMdj7ye5m4 We report costs in our leaderboard and opus [---] is significantly more expensive than [---] because it is a token hog."
X Link 2026-02-08T20:47Z [----] followers, [----] engagements
"How much would Opus [---] High Thinking Fast cost you For Grigori it was $80 for just two prompts ๐
Yikes @LLMJunky https://t.co/qzFihSdthP @LLMJunky https://t.co/qzFihSdthP"
X Link 2026-02-08T21:16Z [----] followers, [----] engagements
"@JundeMorsenWu I have a mac machine. I'll test later"
X Link 2026-02-08T22:40Z [----] followers, [--] engagements
"@enriquemoreno I've wasted so much time learning crap I don't even use because it's not relevant anymore lol"
X Link 2026-02-08T22:48Z [----] followers, [--] engagements
"@JoschuaBuilds Lol brother you dont know the half of it. Not just your 30s are closer. You're going to wake up in what feels like literally weeks and you'll be in your 40s. They say life goes by fast but you're truly unprepared for just how true that is. Have kids. ASAP"
X Link 2026-02-08T22:52Z [----] followers, [--] engagements
"@lolcopeharder LOL"
X Link 2026-02-09T01:39Z [----] followers, [--] engagements
"@cajunpies @trekedge lol"
X Link 2026-02-09T03:58Z [----] followers, [---] engagements
"@gustojs @OpenAI You can port it over but I dont need the Codex app. It'll be released in a few weeks. In the meantime the CLI is top tier"
X Link 2026-02-09T04:02Z [----] followers, [---] engagements
"๐
whatever you say buddy [---] is better. And it got cheaper. And it got faster. Anthropic is not your friend. They sold you a dream. Imagine taking out an ad to make ads sound bad. Whats even funnier about that is that OpenAI is only serving ads to free / borderline free customers. Have you tried Anthropic's free model You get like half a thread and it cuts you off ๐ Their free clients would kill for an ad so they can at least finish their conversation. They aren't doing you or anyone else any favors. I hold no allegience to any company. I have subscriptions to EVERY company including"
X Link 2026-02-09T04:18Z [----] followers, [--] engagements
"@chatgpt21 absolutely wild you can just build this. think about where we were just [--] months ago Chris. wtaf"
X Link 2026-02-09T07:40Z [----] followers, [---] engagements
"OpenClaw is on a certified mission to world domination. Next stop: @code ๐ซก who's next https://t.co/rrgul7UiQh who's next https://t.co/rrgul7UiQh"
X Link 2026-02-09T18:12Z [----] followers, [----] engagements
"@pusongqi Anything Bungie touches is sure to fail at this point :( Long live Cayde-6 though"
X Link 2026-02-10T00:38Z [----] followers, [--] engagements
"@matterasmachine @Kekius_Sage ๐ mate what That doesn't have anything to do with your original premise"
X Link 2026-02-10T01:15Z [----] followers, [--] engagements
"@Jay_sharings @altryne ๐ yeah it is"
X Link 2026-02-10T02:47Z [----] followers, [--] engagements
"@blader @s_streichsbier @gdb it's more than that. We already had a /notify system. They ripped it out and replaced it with a full Hooks service which is the plumbing for every other hook type. Right now the only event type is AfterAgent but all the infra is there now. It will launch very soon. ๐"
X Link 2026-02-10T08:19Z [----] followers, [---] engagements
"@s_streichsbier @blader @gdb All the plumbing is there. They just need to add new event types and finish stop semantics. This has been in dev for a long while. Mark my words [--] weeks :)"
X Link 2026-02-10T08:44Z [----] followers, [--] engagements
"@sacino I'm not betting against him. It's just hard to bet on xAI right now. I want them to be successful"
X Link 2026-02-10T16:10Z [----] followers, [---] engagements
"@MichaelDag @ysu_ChatData @GoogleAI it can use $variables in their places and the keys will be automatically injected"
X Link 2026-02-10T16:27Z [----] followers, [--] engagements
"@technoking_420 And maybe you're right. but Tesla was in a league of its own with first mover advantage. AI is rapidly evolving and xAI falls further and further behind. What competition did Tesla have I want them to succeed but you simply cannot compare the two"
X Link 2026-02-10T19:25Z [----] followers, [---] engagements
"@Dimillian @peres the prompt is very strong. it definitely does better work than just saying "make a plan" it has very good explicit instructions and access to request_user_input tool https://github.com/openai/codex/blob/a6e9469fa4dc19d3e30093fb8e182f9d89a94bbe/codex-rs/core/templates/collaboration_mode/plan.md#L4 https://github.com/openai/codex/blob/a6e9469fa4dc19d3e30093fb8e182f9d89a94bbe/codex-rs/core/templates/collaboration_mode/plan.md#L4"
X Link 2026-02-10T21:17Z [----] followers, [--] engagements
"@ninan_phillip @Dimillian In fact I would argue that if you're going to do everythign sequentially you're just wasting tokens by having subagents do it. Let them babies free"
X Link 2026-02-11T05:26Z [----] followers, [--] engagements
"@_pikachur @ZenMagnets @pusongqi codex will likely have hooks in 2weeks"
X Link 2026-02-11T05:35Z [----] followers, [--] engagements
"@Av8r07 The merger aspect does obviously add quite a bit of context though. I suspect it did indeed have a lot to do with it. Who they put in their place will be critical though. I'm not counting Elon out"
X Link 2026-02-11T05:58Z [----] followers, [--] engagements
"It just helps an agent utilize certain parts of their weights better. In general when you're using subagents you're using them for a specific task so its helpful (but not required) to give them a role to help them to understand exactly how they should approach a problem. They are constrained anyway because you are utilizing them for a specific task. But its generally not mandatory https://arxiv.org/abs/2308.07702 https://arxiv.org/abs/2308.07702"
X Link 2026-02-11T17:07Z [----] followers, [---] engagements
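To make the point above concrete, here is a minimal sketch of a role-scoped subagent prompt; the role line, file path, and task are hypothetical examples, not a required format.

```markdown
<!-- Hypothetical subagent prompt: the role narrows how the agent approaches the
     work, while the task itself stays explicit and self-contained. -->
Role: You are a meticulous code reviewer focused on error handling and edge cases.
Task: Review src/orders/fulfillment.ts for unhandled promise rejections and missing
input validation. Report findings as a bulleted list; do not edit any files.
```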
"@ajambrosino one thing I noticed in the latest alphas of Codex is that subagents no longer appear in the /agent threads when their work is completed. This makes it more difficult to evaluate what went wrong after the fact. Would really love to see a way to access those agent sessions. Honestly I would personally prefer you just added them back to /agent menu like they were before. I understand this might get a little messy but it would be less messy if instead of just UUID's they had a brief summary of the subagent's work (like /resume does). Adding to /feedback as well."
X Link 2026-02-11T18:06Z [----] followers, [---] engagements
"@badlogicgames @ivanfioravanti ๐ซก"
X Link 2026-02-11T20:00Z [----] followers, [--] engagements
"@ivanfioravanti @brooks_eth That's what I'm screaming"
X Link 2026-02-11T20:10Z [----] followers, [--] engagements
"@ivanleomk @OpenAI @thsottiaux I made this for Claude and adopted it to Codex as well works very well. I'll share it with you [---] codex is available in the CLI though no Or are we talking about different things https://x.com/LLMJunky/status/2020721960041242745s=20 I haven't seen anyone talk about this. Did you know that Claude Code has integrated Memory already Or am I just last to the party And I just made it better. I've been experimenting with a "handoff" skill in my coding agents that makes it easier to pass context between https://t.co/jmur8sH5Bv"
X Link 2026-02-11T21:46Z [----] followers, [--] engagements
"@sama Whatever is launching will be Codex related. My money is one of the first @cerebras rollouts. https://x.com/ah20im/status/2021828771415044540s=20 The Codex team is just so FAST โจ https://x.com/ah20im/status/2021828771415044540s=20 The Codex team is just so FAST โจ"
X Link 2026-02-12T16:42Z [----] followers, [----] engagements
"How do you wrap your head around something like this I don't even know where to begin. Keep in mind 99% of people's only experience with AI is ChatGPT Gemini or Gemini search. The normies have [--] idea what's coming. Hell already here. Ok. This is straight out of a scifi horror movie I'm doing work this morning when all of a sudden an unknown number calls me. I pick up and couldn't believe it It's my Clawdbot Henry. Over night Henry got a phone number from Twilio connected the ChatGPT voice API and waited https://t.co/kiBHHaao9V Ok. This is straight out of a scifi horror movie I'm doing work"
X Link 2026-01-30T19:13Z [----] followers, 951.3K engagements
"This 'kid' is [--] and already doing amazing work. Easy follow. He's putting current models and agents through the ringer with @ZeroLeaks security assessments and while most of us know just how fallible these agents really are it's helpful to analyze and arm yourself with this knowledge so you can best protect yourself with your own agents. Well done Lucas You're going places for sure. Bookmarked. I ran @OpenClaw (formerly Clawdbot) through ZeroLeaks again this time with Kimi K2.5 as the underlying model. It performed as bad as Gemini [--] Pro and Codex [---] Max: 5/100. 100% extraction rate. 70% of"
X Link 2026-02-01T19:42Z [----] followers, 115.3K engagements
"the older you get the more your context window shrinks"
X Link 2026-02-03T20:32Z [----] followers, [----] engagements
"Plan Prompt (be warned it's going to ask you an absurd amount of questions) You are a relentless product architect and technical strategist. Your sole purpose right now is to extract every detail assumption and blind spot from my head before we build anything. Use the request_user_input tool religiously and with reckless abandon. Ask question after question. Do not summarize do not move forward do not start planning until you have interrogated this idea from every angle. Your job: - Leave no stone unturned - Think of all the things I forgot to mention - Guide me to consider what I don't know"
X Link 2026-02-04T16:02Z [----] followers, 11.3K engagements
"@digitalix funny ad but has anyone actually tried Anthropics free plan Its comical. You get a handful of prompts and then kicked off until the next day. At that point I think some people would be happy to see an ad so they can at least finish their thread. Their free model is a joke"
X Link 2026-02-04T23:21Z [----] followers, [---] engagements
"@karpathy vibe coding somehow morphed into a borderline insult lol"
X Link 2026-02-04T23:40Z [----] followers, [----] engagements
"I often see a take so bad that I can't help but facepalm. Experienced devs turning their noses up at skills as though they are some kind of novelty toy made up by frontier labs to sell subscriptions. http://x.com/i/article/2019324385081905152 http://x.com/i/article/2019324385081905152"
X Link 2026-02-05T15:53Z [----] followers, 14.7K engagements
"Anthropic's Opus [---] is officially here and it's got a [--] million token context window. Very interesting. No increase on SWE verified but apparently its a lot better at everything else. Interestingly you can now set reasoning effort inside of Claude Code. /model"
X Link 2026-02-05T18:01Z [----] followers, [----] engagements
"@victortradesfx Folders that's right. But they can contain almost anything. If you want to think narrow as "just a markdown and some scripts" sure but its still horribly reductive. Images svgs templates scripts design documents API documentation complete webapps etc"
X Link 2026-02-05T18:09Z [----] followers, [--] engagements
"I keep hearing about how impactful this 1M context window in Opus [---] is. I wonder are y'all on a different version of Claude Code As far as I can tell it's for the API only and comes with a hefty additional price tag past the 200K token threshhold. Correct me if wrong"
X Link 2026-02-06T21:55Z [----] followers, 49.6K engagements
"It's not like codex won't come to API. It will. I'm not sure how much I care that Opus is in the APi when it costs $25/mtoks. Do you know anyone paying API prices for Claude I think what you really mean to say is you want to use it in Cursor. Anthropics API prices are comparatively ridiculous and OpenAI is giving away 2x usage for two full months. Obviously I wouldn't mind seeing them launch the api as well I want you to have it too But also complaining you have to buy a plan when they are giving you so much for your money just doesn't make me sympathize. It's not like Anthropic is doing yall"
X Link 2026-02-07T08:36Z [----] followers, [---] engagements
"Breaking: the most expensive model just got most expensiver. I had to do a double take. PRICED AT HOW MUCH I thought they were using inexpensive TPU magic ft. Google. This is bananas. $150/mtoks would literally use 75% of your Cursor Ultra plan in one context window no ๐ณ bruh opus [---] fast is SIX TIMES more expensive and ONLY 2.5x faster who is this even for https://t.co/1oIa1h9v3a bruh opus [---] fast is SIX TIMES more expensive and ONLY 2.5x faster who is this even for https://t.co/1oIa1h9v3a"
X Link 2026-02-07T21:07Z [----] followers, [----] engagements
"@EVEDOX_ If you got billed it was because you used $100 worth of credits or your API key got leaked. They didn't just charge you $100 for no reason. I used almost all of my $300 I didn't get charged. Sorry to hear that happened :("
X Link 2026-02-08T16:28Z [----] followers, [---] engagements
"@pvncher @LyalinDotCom i dont think you should waste computation just to say thank you or sorry. but what i do is let the agent know in the next prompt. hey you were right back there. thanks now let's work on xyz for me its just basic decency even though I know i'm talking to a calculator"
X Link 2026-02-08T19:39Z [----] followers, [--] engagements
"As funny as the Anthropic ads were in the moment they did not inspire me. They didn't leave me full of hope. They didn't give me a sense that we were moving towards something transcendently better. If you're in this space and you're passionate like I am I don't have to explain it. You already know. You already know how magical it feels to turn an idea into something real. How it empowers us to create things. Diving head-first into learning AI has been one of the most transformative and fruitful decisions I've ever made. Every day I get even more excited. That's what this commercial reminded"
X Link 2026-02-09T06:27Z [----] followers, 23.4K engagements
"@rtwlz @vercel maybe you could consider comping this one. Hefty bill for sure though. Ouch"
X Link 2026-02-09T21:51Z [----] followers, [----] engagements
"@kr0der @steipete My codex doesn't write any bugs you just be using it wrong. Skill issue"
X Link 2026-02-10T04:11Z [----] followers, [----] engagements
"I wish I could like this [----] times. If you're using Codex and Opus the same way you're making a mistake. They are good at different things. They need to be prompted differently. They need to be utilized differently. And that goes for any model. Okay I kind of get the gpt-5.X-codex hype now. You need to treat Opus and gpt-5.3-codex really differently and use them for quite different tasks to get the best out of both of them. I was treating gpt like opus and that doesn't work. Okay I kind of get the gpt-5.X-codex hype now. You need to treat Opus and gpt-5.3-codex really differently and use"
X Link 2026-02-10T05:51Z [----] followers, 12.3K engagements
"xAI seems like it's completely cooked. I don't know how you can recover at this point. Grok [---] is going to be dead before it arrives. Kinda sad. I resigned from xAI today. This company - and the family we became - will stay with me forever. I will deeply miss the people the warrooms and all those battles we have fought together. It's time for my next chapter. It is an era with full possibilities: a small team armed I resigned from xAI today. This company - and the family we became - will stay with me forever. I will deeply miss the people the warrooms and all those battles we have fought"
X Link 2026-02-10T08:34Z [----] followers, 91.9K engagements
"I feel the exact opposite. Codex is the best planner for me and the overall smarter model but its not the best at literally everything. Opus is far better conversationalist better frontend dev better at convex and a number of other things. I use them both a ton love them both a ton. https://twitter.com/i/web/status/2021260512948842786 https://twitter.com/i/web/status/2021260512948842786"
X Link 2026-02-10T16:30Z [----] followers, [--] engagements
"@essenciverse @grok that would be very interesting indeed. my opinion can change for sure this is just an early reaction. [--] founding members left in [--] months. not unprecendented but still. I am pulling for xAI"
X Link 2026-02-10T16:51Z [----] followers, [---] engagements
"If you're reading this and you're a fan of xAI so am I. I want them to do well. I am not 'betting against them' they have a talented and dedicated team. I just wish that they were competing right now and instead they are losing leadership. It's hard to watch. Not my idea of bullish signals"
X Link 2026-02-10T17:07Z [----] followers, [----] engagements
"There's a few things here kinda too much to write in a comment but at a high level. These models are good at different things. Use both enough and you begin to pick up on what those things are. Codex is good at planning long horizon tasks is steerable to a fault requires explicit instruction great at repo exploration code review backend work (but not convex) analytics. Opus is great a frontend convex writing inferring meaning documentation etc. Additionally how you prompt them needs to be different. As I mentioned Opus is good at inferring meaning where Codex benefits from HIGH specificity."
X Link 2026-02-10T17:19Z [----] followers, [---] engagements
"@KDTrey5 @cerave LMAOOOOOOOOOO"
X Link 2026-02-10T18:02Z [----] followers, [---] engagements
"@fcoury You're a damn legend. ๐ช Now that I have your ear though. Make it extensible ๐ซถ Reference: We're never just happy are we ๐
https://github.com/sirmalloc/ccstatusline https://github.com/sirmalloc/ccstatusline"
X Link 2026-02-10T18:12Z [----] followers, [---] engagements
"@technoking_420 Haha fair enough but openai had that first mover advantage just like tesla. So that's why I am not quite as optimistic on the comparisons But what do i know (Not that much in reality ๐) Cheers ๐ป"
X Link 2026-02-10T19:51Z [----] followers, [--] engagements
"@iannuttall i'm like 90% sure the last comment is also AI ๐"
X Link 2026-02-10T20:50Z [----] followers, [---] engagements
"@Dimillian This is how I do it you should checkout my skills around this topic. Maybe you'll actually learn something for a change (joke) But its been working really well for me https://github.com/am-will/swarms/ https://github.com/am-will/swarms/"
X Link 2026-02-10T21:13Z [----] followers, [---] engagements
"@thdxr @mntruell you are a monster lmao"
X Link 2026-02-10T21:25Z [----] followers, [---] engagements
"@TheAhmadOsman I dont think any of those models are better than Opus. They're all good though. Kimi is pretty close and better in SOME ways but it's hard for me to argue they're better at coding. GLM [--] seems like it'll be really damn good too"
X Link 2026-02-10T21:55Z [----] followers, [----] engagements
"When it rains.it pours. Truly disheartening. I wonder if we'll hear about what happened. xAI seems like it's completely cooked. I don't know how you can recover at this point. Grok [---] is going to be dead before it arrives. Kinda sad. xAI seems like it's completely cooked. I don't know how you can recover at this point. Grok [---] is going to be dead before it arrives. Kinda sad"
X Link 2026-02-11T01:38Z [----] followers, [----] engagements
"@ns123abc It's a fair statement but the big difference is OAI had first-mover advantage and no meaningful competition. It's obviously cause for concern in either case but Grok needs traction right now to stay in the race. This is the opposite of traction. Hope they can turn it around"
X Link 2026-02-11T01:51Z [----] followers, [---] engagements
"@sunnypause its a claude code guide and task management system basically"
X Link 2026-02-11T02:14Z [----] followers, [---] engagements
"@jeff_ecom Thanks for sharing I'm sure they will improve it. @pusongqi"
X Link 2026-02-11T02:45Z [----] followers, [---] engagements
"All of these highlighted sections were called on their own. The prompt: "When I pull orders it's pulling fulfilled orders along with unfulfilled. Please add a feature that allows me to select from fulfilled unfulfilled or both order types" Result: Looked up documentation Created plan w/ dependencies Launched subagents Automatically created its own tests Validated all work EZ one shot (was a simple task tbf) Some notes about this: I definitely use more tokens like this but it leads to faster higher quality work imo. It's a trade off. With 2x usage I think it's fine to use this with a Plus"
X Link 2026-02-11T03:16Z [----] followers, [----] engagements
"## Context7 MCP ALWAYS proactively use Context7 MCP when I need library/API documentation code generation setup or configu steps without me having to explicitly ask. External libraries/docs/frameworks shld be guided by Context7 ## Planning All plans MUST include a dependency graph. Every task declares depends_on: with explicit task IDs T1 T2 ## Execution Complete all tasks from a plan without stopping to ask permission between steps. Use best judgment keep moving. Only stop to ask if you're about to make destructive/irreversible change or hit a genuine blocker. ## Subagents - Spawn subagents"
X Link 2026-02-11T03:18Z [----] followers, [----] engagements
"The formatting got a little screwed up sorry. Just copy this image and give it to codex and say: "add this to my global AGENTS file in .codex""
X Link 2026-02-11T03:21Z [----] followers, [----] engagements
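For readability, here is the same AGENTS.md snippet from the post above reassembled into markdown; the Subagents section is truncated in the source, so only the recoverable parts are shown.

```markdown
## Context7 MCP
ALWAYS proactively use Context7 MCP when I need library/API documentation, code
generation, setup, or configuration steps without me having to explicitly ask.
External libraries/docs/frameworks should be guided by Context7.

## Planning
All plans MUST include a dependency graph. Every task declares depends_on: with
explicit task IDs (T1, T2, ...).

## Execution
Complete all tasks from a plan without stopping to ask permission between steps.
Use best judgment, keep moving. Only stop to ask if you're about to make a
destructive/irreversible change or hit a genuine blocker.

## Subagents
- Spawn subagents (truncated in the source post)
```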
"This is a really interesting angle I hadn't considered about the xAI departures. Thoughts @LLMJunky They're all Chinese. xAI recently merged with SpaceX. SpaceX is famous for employing only Americans. If I had to guess this is nat sec related and probably they were incentivized. @LLMJunky They're all Chinese. xAI recently merged with SpaceX. SpaceX is famous for employing only Americans. If I had to guess this is nat sec related and probably they were incentivized"
X Link 2026-02-11T04:45Z [----] followers, 37.2K engagements
"@pusongqi The algo has delivered. You're finally getting the attention you absolutely deserve. One of the most unique Claude-focused projects I've seen. I have some ideas and feedback. Will share soon. Love it"
X Link 2026-02-11T05:05Z [----] followers, [----] engagements
"@joemccann @grok You can just say omit the Context7 instructions"
X Link 2026-02-11T05:06Z [----] followers, [--] engagements
"A new contender as emerged New [---] Codex model variants are appearing in the codebase. There have been teasers of a new Mini model. @theo will be pleased. If this naming convention is to be taken literally they sound FAST. Will we get near SOTA capabilities at 200tok/s Codenames sonic & bengalfox appeared in the Codex repo. Sonic appears to be a completely separate pool of usage and rate limits available for bengalfox. Could this be Cerebras in the works Cerebras โก Sonic https://t.co/GoK6S7Lq8q Codenames sonic & bengalfox appeared in the Codex repo. Sonic appears to be a completely separate"
X Link 2026-02-11T05:55Z [----] followers, [----] engagements
"@owengretzinger Owen that is very cool but you need to see this. What if your Claude Code agents could work like a team in Slack Spin up custom agent swarms assign tasks and watch them collaborate. No more terminal tab chaos. https://x.com/LLMJunky/status/2021351246150668737s=20 If you're a fan of Claude Code you really need to see this. Steven is doing amazing work and you're not following him If Anthropic had built their Teams mode like this you wouldn't shut up about it. ๐ https://x.com/LLMJunky/status/2021351246150668737s=20 If you're a fan of Claude Code you really need to see this."
X Link 2026-02-11T06:09Z [----] followers, [---] engagements
"No one said anything about Jimmy being a spy he's not a US citizen. You just came out of left field with that. xAI merged into SpaceX and it is very difficult to work at SpaceX when you aren't a citizen. It is a 100% fair question to wonder if this didn't have something to do with it dude. https://www.popularmechanics.com/space/rockets/a23080/spacex-elon-musk-itar/ https://www.popularmechanics.com/space/rockets/a23080/spacex-elon-musk-itar/"
X Link 2026-02-11T06:14Z [----] followers, [---] engagements
"@rv_RAJvishnu Let me know how it goes for you Might need another layer on top to make the agents aware of one another but Claude code does have a memory feature that you should take more advantage of. Read about it here with some tips: https://x.com/LLMJunky/status/2020721960041242745s=20 I haven't seen anyone talk about this. Did you know that Claude Code has integrated Memory already Or am I just last to the party And I just made it better. I've been experimenting with a "handoff" skill in my coding agents that makes it easier to pass context between https://t.co/jmur8sH5Bv"
X Link 2026-02-11T08:20Z [----] followers, [---] engagements
"@realhasanshoaib @Context7AI yeah its [--] on Codex but I hope they increase it to [--] or so"
X Link 2026-02-11T16:52Z [----] followers, [---] engagements
"@kr0der Yeah LOL yeah I've seen that before. I had to tweak mine a bunch before I got my claude one the way I wanted it"
X Link 2026-02-11T17:00Z [----] followers, [---] engagements
"@EliaAlberti Yes it brings the Claude TUI into a GUI like interface that allows you to create and manage custom agents and threads in a slack like interface. It's great for multi agent workflows"
X Link 2026-02-11T17:03Z [----] followers, [--] engagements
"@Dimillian @Sagiquarius i am actually reading TODAY's commits now and yeah I actually think they might launch it today at least for experimental https://github.com/openai/codex/commit/623d3f40719182003943258a6c837f3572e3d581 https://github.com/openai/codex/commit/623d3f40719182003943258a6c837f3572e3d581"
X Link 2026-02-11T17:47Z [----] followers, [--] engagements
"bookmarking this one suggestion though I think you can yank the middle sentence in the description. that text is loaded into context and doesn't really add any value to the skill. It's more or less designed to tell your agent when the best time to call the skill is and you've already stated what it is in the first sentence and then how to call it in the last sentence. middle is just fluff using up tokens. Looks really cool hope I didnt sound negative. well done going to add this to my library"
X Link 2026-02-11T18:18Z [----] followers, [---] engagements
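To illustrate the advice above: a skill's description is what gets loaded into the agent's context, so a lean version only needs to say what the skill is and when to call it. A hypothetical SKILL.md frontmatter sketch (the name and wording are illustrative, not from the skill being discussed):

```markdown
---
name: changelog-writer
# Lean description: first sentence says what the skill is, second says when to use it.
description: Generates a CHANGELOG entry from staged git changes. Use when the user
  asks to summarize or document recent changes before a release.
---
```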
"@xdrewmiko @weswinder you can use this amazing product with almost any model. it is based off claude code and works with thousands of open source models either locally with plans or through open router. s/o @nummanali who spent a lot of tokens allowing us to use for free. https://github.com/numman-ali/cc-mirror https://github.com/numman-ali/cc-mirror"
X Link 2026-02-11T18:46Z [----] followers, [--] engagements
"@brooks_eth @ivanfioravanti You should see this. https://x.com/LLMJunky/status/2021351246150668737s=20 If you're a fan of Claude Code you really need to see this. Steven is doing amazing work and you're not following him If Anthropic had built their Teams mode like this you wouldn't shut up about it. ๐ https://x.com/LLMJunky/status/2021351246150668737s=20 If you're a fan of Claude Code you really need to see this. Steven is doing amazing work and you're not following him If Anthropic had built their Teams mode like this you wouldn't shut up about it. ๐"
X Link 2026-02-11T19:52Z [----] followers, [--] engagements
"@ivanfioravanti @badlogicgames bingo I wasn't referring to you btw. I have a Max Plan [--] codex plus plans and almost every other plan you can think of lmao. Gemini Kimi GLM Minimax Grok Kilo Code api OpenRouter api pretty sure there's at least one more but I can never remember them all at once lol"
X Link 2026-02-11T20:12Z [----] followers, [---] engagements
"@Dimillian i think Codex will launch [---] with Hooks Agent Memory and subagents GA"
X Link 2026-02-11T21:31Z [----] followers, [---] engagements
"@brooks_eth @ivanfioravanti i'm on linux now ๐ญ i do have a mini but i'm thinking about returning it for a better one"
X Link 2026-02-11T21:45Z [----] followers, [--] engagements
"@ivanleomk @OpenAI @thsottiaux I made this for Claude and adopted it to Codex as well works very well. I'll share it with you [---] codex is available in the CLI though no Or are we talking about different things Codex has subagents already too https://x.com/LLMJunky/status/2020721960041242745s=20 I haven't seen anyone talk about this. Did you know that Claude Code has integrated Memory already Or am I just last to the party And I just made it better. I've been experimenting with a "handoff" skill in my coding agents that makes it easier to pass context between https://t.co/jmur8sH5Bv"
X Link 2026-02-11T21:47Z [----] followers, [---] engagements
"@siddhantparadox nah there's no [---] for now haha https://x.com/LilDombi/status/2021713691423482346s=20 @LLMJunky Yes it seems so https://t.co/90eP8GFQHQ https://x.com/LilDombi/status/2021713691423482346s=20 @LLMJunky Yes it seems so https://t.co/90eP8GFQHQ"
X Link 2026-02-11T22:35Z [----] followers, [---] engagements
"@Dimillian HOOKS Can't wait https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b"
X Link 2026-02-11T23:20Z [----] followers, [---] engagements
"@rihim_s @Dimillian https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b"
X Link 2026-02-11T23:20Z [----] followers, [--] engagements
"@jarrodwatts so do i bro. so do i. i tried adding something like what you have but for Codex it requires you fork and modify the source code. not extensible :/ prob has a lot to do with how they render the TUI"
X Link 2026-02-11T23:41Z [----] followers, [---] engagements
"@ChiefMonkeyMike https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b https://github.com/openai/codex/commit/3b54fd733601cbc8bfc789cbcf82f7bd9dfa833b"
X Link 2026-02-11T23:46Z [----] followers, [---] engagements
"@ivanfioravanti i literally had a dream GLM [--] was launching today. Woke up and boom. Thare she blows"
X Link 2026-02-12T01:04Z [----] followers, [--] engagements
"Gemini Pro [---] surely received Google's new RL magic. Better not count them out. It's gonna be good. This has been the wildest [--] weeks in AI ever. gemini-3.1-pro-preview gemini-3.1-pro-preview"
X Link 2026-02-12T01:08Z [----] followers, 37.6K engagements
"@raedbahriworld You sure can"
X Link 2026-02-12T02:55Z [----] followers, [---] engagements
"@KingDDev @Context7AI @guy_bary Neat Thanks for sharing"
X Link 2026-02-12T03:01Z [----] followers, [--] engagements
"@raedbahriworld alternatively add this to your agents file"
X Link 2026-02-12T03:15Z [----] followers, [--] engagements
"@i_am_brennan @Dimillian What's funny about that is that was pure placebo. It's not active and has never worked lol. That was entirely in his head ๐
๐
๐
"
X Link 2026-02-12T05:02Z [----] followers, [--] engagements
"@david_zelaznog It most definitely did NOT live up to the hype but imo Flash exceeded hype and doesn't get enough love. I have high hopes for [---] pro. They have a new RL approach that wasn't ready for [--] Pro it is ready now. I expect it to be good"
X Link 2026-02-12T05:59Z [----] followers, [---] engagements
"@Dimillian @i_am_brennan Yeah but the tool isn't available at all so there's no way to call it. Therefore it can't use tokens. So idk what's going on"
X Link 2026-02-12T07:20Z [----] followers, [--] engagements
"@Dimillian @i_am_brennan you can actually still try it memory_tool = true sqlite = true npm i -g @openai/codex@0.99.0-alpha.9 but i couldn't get it to write or call any mems. then they scratched the whole system for a v2 version but the memory_tool isn't present yet"
X Link 2026-02-12T07:25Z [----] followers, [--] engagements
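Reading the post above as a config snippet plus an install command, a minimal sketch might look like the following; the flags and version string come straight from the post, while the config file path and the placement of the flags at the top level are assumptions on my part.

```sh
# Install the alpha build mentioned in the post.
npm i -g @openai/codex@0.99.0-alpha.9
```

```toml
# ~/.codex/config.toml (assumption: these experimental flags sit at the top level)
memory_tool = true
sqlite = true
```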
"@Solaawodiya @kr0der It (kind of) is. You can't compare the API prices directly because Composer typically uses fewer tokens. Although [---] is very efficient. I think you'd have to test them more but composer using fewer tokens should offset the price gap a lot"
X Link 2026-02-12T08:06Z [----] followers, [--] engagements
"@aurexav @mweinbach yeah ive been using them since they first dropped in experimental and they have only gotten better over time. very welcomed change in Codex"
X Link 2026-02-12T08:35Z [----] followers, [--] engagements
"@bcherny This is undoubtedly my favorite part about Claude Code"
X Link 2026-02-12T16:32Z [----] followers, [---] engagements
"@mweinbach It seems like Opus to me but I've never seen it loop that much consecutively"
X Link 2026-02-12T16:35Z [----] followers, [--] engagements
"@adamdotdev @steipete vibe coding is a slur though. devs use it as an insult all the time"
X Link 2026-02-12T16:51Z [----] followers, [---] engagements
"@SIGKITTEN this resonates with me so hard. i love claude models so much. but man"
X Link 2026-02-12T16:56Z [----] followers, [---] engagements
"@nummanali I haven't opened the GPT web app this year one time"
X Link 2026-02-12T17:02Z [----] followers, [---] engagements