# Anthropic (@AnthropicAI)

Anthropic posts on X most often about anthropic, ai, claude code, and blog. They currently have [-------] followers and [---] posts still getting attention, totaling [---------] engagements in the last [--] hours.

### Engagements: [---------] [#](/creator/twitter::1353836358901501952/interactions)

- [--] Week [----------] -47%
- [--] Month [----------] +1,468%
- [--] Months [-----------] +136%
- [--] Year [-----------] +125%

### Mentions: [--] [#](/creator/twitter::1353836358901501952/posts_active)

- [--] Week [---] +7.60%
- [--] Month [---] +67%
- [--] Months [---] +75%
- [--] Year [---] +158%

### Followers: [-------] [#](/creator/twitter::1353836358901501952/followers)

- [--] Week [-------] +3.70%
- [--] Month [-------] +12%
- [--] Months [-------] +33%
- [--] Year [-------] +84%

### CreatorRank: [-----] [#](/creator/twitter::1353836358901501952/influencer_rank)

### Social Influence

**Social category influence**
[technology brands](/list/technology-brands) #536, [finance](/list/finance) #230, [stocks](/list/stocks) #838, [travel destinations](/list/travel-destinations) 1.79%, [products](/list/products) 0.89%

**Social topic influence**
[anthropic](/topic/anthropic) #5, [ai](/topic/ai) #3558, [claude code](/topic/claude-code) #11, [blog](/topic/blog) #241, [agentic](/topic/agentic) #269, [paper](/topic/paper) #1674, [science](/topic/science) #258, [the first](/topic/the-first) #276, [future](/topic/future) #1921, [level](/topic/level) #2820

**Top accounts mentioned or mentioned by**
[@grok](/creator/undefined) [@primeflamex](/creator/undefined) [@vanzandt_ash](/creator/undefined) [@gaiasfleshsuit](/creator/undefined) [@claudeai](/creator/undefined) [@reson8labs](/creator/undefined) [@meekaale](/creator/undefined) [@gailcweiner](/creator/undefined) [@iconiqcapital](/creator/undefined) [@systemdaddyai](/creator/undefined) [@ace_leverage](/creator/undefined) [@kongkou_](/creator/undefined) [@undoxxedwzrd](/creator/undefined)
[@linalin1878352](/creator/undefined) [@xiz25](/creator/undefined) [@_hoojr](/creator/undefined) [@noprobl3mz](/creator/undefined) [@stableexo](/creator/undefined) [@itzharshil](/creator/undefined) [@intentbound](/creator/undefined)

**Top assets mentioned**
[Alphabet Inc Class A (GOOGL)](/topic/$googl), [Accenture (ACN)](/topic/accenture), [Microsoft Corp. (MSFT)](/topic/microsoft), [General Motors Company (GM)](/topic/general-motors)

### Top Social Posts

Top posts by engagements in the last [--] hours.

"Anthropic is acquiring @bunjavascript to further accelerate Claude Code's growth. We're delighted that Bun, which has dramatically improved the JavaScript and TypeScript developer experience, is joining us to make Claude Code even better. Read more: https://www.anthropic.com/news/anthropic-acquires-bun-as-claude-code-reaches-usd1b-milestone" [X Link](https://x.com/AnthropicAI/status/1995916269153906915) 2025-12-02T18:01Z 833.3K followers, 7.8M engagements

"New Engineering blog: We tasked Opus [---], using agent teams, to build a C compiler. Then we (mostly) walked away. Two weeks later it worked on the Linux kernel. Here's what it taught us about the future of autonomous software development. Read more: https://www.anthropic.com/engineering/building-c-compiler" [X Link](https://x.com/AnthropicAI/status/2019496582698397945) 2026-02-05T19:41Z 833.3K followers, 8.4M engagements

"When we released Claude Opus [---], we knew future models would be close to our AI Safety Level [--] threshold for autonomous AI R&D. We therefore committed to writing sabotage risk reports for future frontier models.
Today we're delivering on that commitment for Claude Opus 4.6" [X Link](https://x.com/AnthropicAI/status/2021397952791707696) 2026-02-11T01:36Z 833.3K followers, 2.6M engagements

"We're opening applications for the next two rounds of the Anthropic Fellows Program, beginning in May and July [----]. We provide funding, compute, and direct mentorship to researchers and engineers to work on real safety and security projects for four months" [X Link](https://x.com/AnthropicAI/status/1999233249579794618) 2025-12-11T21:42Z 833.2K followers, 534K engagements

"New on the Anthropic Engineering Blog: Demystifying evals for AI agents. The capabilities that make agents useful also make them more difficult to evaluate. Here are evaluation strategies that have worked across real-world deployments. https://www.anthropic.com/engineering/demystifying-evals-for-ai-agents" [X Link](https://x.com/AnthropicAI/status/2009696515061911674) 2026-01-09T18:39Z 833.2K followers, 333.8K engagements

"These results have broader implications: on how to design AI products that facilitate learning, and how workplaces should approach AI policies. As we continue to release more capable AI tools, we're continuing to study their impact on work, at Anthropic and more broadly" [X Link](https://x.com/AnthropicAI/status/2016960390975176785) 2026-01-29T19:43Z 833.2K followers, 104.7K engagements

"Rather than making difficult calls about blurry thresholds, we decided to preemptively meet the higher ASL-4 safety bar by developing the report, which assesses Opus 4.6's AI R&D risks in greater detail.
Read the sabotage risk report here: https://anthropic.com/claude-opus-4-6-risk-report" [X Link](https://x.com/AnthropicAI/status/2021397953848672557) 2026-02-11T01:36Z 833.3K followers, 364.8K engagements

"We're expanding Claude for Financial Services with an Excel add-in, new connectors to real-time data and market analytics, and pre-built Agent Skills, including cash flow models and initiating coverage reports" [X Link](https://x.com/AnthropicAI/status/1982842909235040731) 2025-10-27T16:12Z 833.3K followers, 3.3M engagements

"New Anthropic Fellows research: the Assistant Axis. When you're talking to a language model, you're talking to a character the model is playing: the Assistant. Who exactly is this Assistant? And what happens when this persona wears off" [X Link](https://x.com/AnthropicAI/status/2013356793477361991) 2026-01-19T21:04Z 833.3K followers, 1.3M engagements

"New Anthropic Research: Disempowerment patterns in real-world AI assistant interactions. As AI becomes embedded in daily life, one risk is it can distort rather than inform, shaping beliefs, values, or actions in ways users may later regret. Read more: https://www.anthropic.com/research/disempowerment-patterns" [X Link](https://x.com/AnthropicAI/status/2016636581084541278) 2026-01-28T22:16Z 833.3K followers, 793.3K engagements

"AI can make work faster, but a fear is that relying on it may make it harder to learn new skills on the job. We ran an experiment with software engineers to learn more. Coding with AI led to a decrease in mastery, but this depended on how people used it.
https://www.anthropic.com/research/AI-assistance-coding-skills" [X Link](https://x.com/AnthropicAI/status/2016960382968136138) 2026-01-29T19:43Z 833.3K followers, 3.6M engagements

"In a randomized-controlled trial, we assigned one group of junior engineers to an AI-assistance group and another to a no-AI group. Both groups completed a coding task using a Python library they'd never seen before. Then they took a quiz covering concepts they'd just used" [X Link](https://x.com/AnthropicAI/status/2016960384281072010) 2026-01-29T19:43Z 833.3K followers, 149.7K engagements

"We've raised $30B in funding at a $380B post-money valuation. This investment will help us deepen our research, continue to innovate in products, and ensure we have the resources to power our infrastructure expansion as we make Claude available everywhere our customers are" [X Link](https://x.com/AnthropicAI/status/2022023155423002867) 2026-02-12T19:01Z 833.3K followers, 6.8M engagements

"Introducing an upgraded Claude [---] Sonnet and a new model, Claude [---] Haiku. We're also introducing a new capability in beta: computer use. Developers can now direct Claude to use computers the way people do: by looking at a screen, moving a cursor, clicking, and typing text" [X Link](https://x.com/AnthropicAI/status/1848742740420341988) 2024-10-22T15:06Z 820.6K followers, 3.7M engagements

"Our key assumption: We imagine that evaluation questions are randomly drawn from an underlying distribution of questions. This assumption unlocks a rich theoretical landscape from which we derive five core recommendations" [X Link](https://x.com/AnthropicAI/status/1858976460150894990) 2024-11-19T20:51Z 809.1K followers, 356K engagements

"Today we're launching Research alongside a new Google Workspace integration.
Claude now brings together information from your work and the web" [X Link](https://x.com/AnthropicAI/status/1912192384588271771) 2025-04-15T17:12Z 807.6K followers, 686.2K engagements

"We've launched Claude for Financial Services. Claude now integrates with leading data platforms and industry providers for real-time access to comprehensive financial information, verified across internal and industry sources" [X Link](https://x.com/AnthropicAI/status/1945889476556853520) 2025-07-17T16:52Z 819.4K followers, 756.3K engagements

"New Anthropic research: Persona vectors. Language models sometimes go haywire and slip into weird and unsettling personas. Why? In a new paper, we find persona vectors: neural activity patterns controlling traits like evil, sycophancy, or hallucination" [X Link](https://x.com/AnthropicAI/status/1951317898313466361) 2025-08-01T16:23Z 808.1K followers, 1.4M engagements

"Last week we released Claude Sonnet [---]. As part of our alignment testing, we used a new tool to run automated audits for behaviors like sycophancy and deception. Now we're open-sourcing the tool to run those audits" [X Link](https://x.com/AnthropicAI/status/1975248654609875208) 2025-10-06T17:15Z 821K followers, 214.6K engagements

"40% of fellows in our first cohort have since joined Anthropic full-time, and 80% published their work as a paper. Next year we're expanding the program to more fellows and more research areas.
To learn more about what our fellows work on: https://alignment.anthropic.com/2025/anthropic-fellows-program-2026/" [X Link](https://x.com/AnthropicAI/status/1999233251706306830) 2025-12-11T21:42Z 809.2K followers, 64.6K engagements

"To boost Claudius's business acumen, we made some tweaks to how it worked: upgrading the model from Claude Sonnet [---] to Sonnet [--] (and later 4.5); giving it access to new tools; and even beginning an international expansion with new shops in our New York and London offices" [X Link](https://x.com/AnthropicAI/status/2001686756551463302) 2025-12-18T16:11Z 803.7K followers, 43.4K engagements

"We're releasing Bloom, an open-source tool for generating behavioral misalignment evals for frontier AI models. Bloom lets researchers specify a behavior and then quantify its frequency and severity across automatically generated scenarios. Learn more: https://www.anthropic.com/research/bloom" [X Link](https://x.com/AnthropicAI/status/2002424909524619581) 2025-12-20T17:04Z 810.5K followers, 387.5K engagements

"We've also updated our behavior audits to include more recent generations of frontier AI models. Read more on the Alignment Science Blog: https://alignment.anthropic.com/2026/petri-v2/" [X Link](https://x.com/AnthropicAI/status/2014490504415871456) 2026-01-23T00:08Z 808.6K followers, 50.8K engagements

"We identified three ways AI interactions can be disempowering: distorting beliefs, shifting value judgments, or misaligning a person's actions with their values. We also examined amplifying factors, such as authority projection, that make disempowerment more likely" [X Link](https://x.com/AnthropicAI/status/2016636583802507430) 2026-01-28T22:16Z 805.7K followers, 34.2K engagements

"We measure this incoherence using a bias-variance decomposition of AI errors.
Bias = consistent systematic errors (reliably achieving the wrong goal). Variance = inconsistent, unpredictable errors. We define incoherence as the fraction of error from variance" [X Link](https://x.com/AnthropicAI/status/2018481223186985431) 2026-02-03T00:26Z 818.8K followers, 34.8K engagements

"Finding 2: There is an inconsistent relationship between model intelligence and incoherence. But smarter models are often more incoherent. https://twitter.com/i/web/status/2018481226999640355" [X Link](https://x.com/AnthropicAI/status/2018481226999640355) 2026-02-03T00:26Z 812.8K followers, 98K engagements

"RT @claudeai: Ads are coming to AI. But not to Claude. Keep thinking" [X Link](https://x.com/AnthropicAI/status/2019075059936358494) 2026-02-04T15:46Z 821.2K followers, [----] engagements

"Persona-based jailbreaks work by prompting models to adopt harmful characters. We developed a technique for constraining models' activations along the Assistant Axis: activation capping. It reduced harmful responses while preserving the models' capabilities" [X Link](https://x.com/AnthropicAI/status/2013356803015233735) 2026-01-19T21:04Z 833.3K followers, 120K engagements

"Importantly, this isn't exclusively model behavior. Users actively seek these outputs ("what should I do?" or "write this for me") and accept them with minimal pushback. Disempowerment emerges from users voluntarily ceding judgment and AI obliging rather than redirecting" [X Link](https://x.com/AnthropicAI/status/2016636597207535937) 2026-01-28T22:16Z 833.3K followers, 118.1K engagements

"We're committing to cover electricity price increases from our data centers. To ensure ratepayers aren't picking up the tab, we'll pay 100% of grid upgrade costs, work to bring new power online, and invest in systems to reduce grid strain.
Read more: https://www.anthropic.com/news/covering-electricity-price-increases" [X Link](https://x.com/AnthropicAI/status/2021694494215901314) 2026-02-11T21:15Z 833.3K followers, 1.5M engagements

"Anthropic is partnering with @CodePath, the US's largest collegiate computer science program, to bring Claude and Claude Code to 20,000+ students at community colleges, state schools, and HBCUs. Read more: https://www.anthropic.com/news/anthropic-codepath-partnership" [X Link](https://x.com/AnthropicAI/status/2022299804894712174) 2026-02-13T13:20Z 833.3K followers, 310.4K engagements

"Introducing Claude [---] Sonnet, our most intelligent model yet. This is the first release in our [---] model family. Sonnet now outperforms competitor models on key evaluations, at twice the speed of Claude [--] Opus and one-fifth the cost. Try it for free: http://claude.ai" [X Link](https://x.com/AnthropicAI/status/1803790676988920098) 2024-06-20T14:03Z 830.7K followers, 2.5M engagements

"AI use was most common in medium-to-high income jobs; low and very-high income jobs showed much lower AI use" [X Link](https://x.com/AnthropicAI/status/1888954168036987358) 2025-02-10T14:12Z 827.9K followers, 338.3K engagements

"A few researchers at Anthropic have over the past year had a part-time obsession with a peculiar problem. Can Claude play Pokémon? A thread:" [X Link](https://x.com/AnthropicAI/status/1894419011569344978) 2025-02-25T16:07Z 832K followers, 1.6M engagements

"AI speeds up complex tasks more than simpler ones: the higher the education level needed to understand a prompt, the more AI reduces how long it takes.
That holds true even accounting for the fact that more complex tasks have lower success rates" [X Link](https://x.com/AnthropicAI/status/2011925953539121462) 2026-01-15T22:18Z 829K followers, 43.3K engagements

"RT @claudeai: Claude is built to be a genuinely helpful assistant for work and for deep thinking. Advertising would be incompatible with t" [X Link](https://x.com/AnthropicAI/status/2019024697149542558) 2026-02-04T12:26Z 827.5K followers, [---] engagements

"RT @claudeai: Introducing Claude Opus [---]. Our smartest model got an upgrade. Opus [---] plans more carefully, sustains agentic tasks for l" [X Link](https://x.com/AnthropicAI/status/2019467520017580525) 2026-02-05T17:45Z 827.5K followers, [----] engagements

"RT @claudeai: Our teams have been building with a 2.5x-faster version of Claude Opus [---]. We're now making it available as an early experi" [X Link](https://x.com/AnthropicAI/status/2020226842846744786) 2026-02-07T20:03Z 827.5K followers, [---] engagements

"We're trying something fundamentally new. Instead of making specific tools to help Claude complete individual tasks, we're teaching it general computer skills, allowing it to use a wide range of standard tools and software programs designed for people" [X Link](https://x.com/AnthropicAI/status/1848742757151498717) 2024-10-22T15:06Z 833.1K followers, 536.2K engagements

"We're starting a Fellows program to help engineers and researchers transition into doing frontier AI safety research full-time. Beginning in March [----], we'll provide funding, compute, and research mentorship to [----] Fellows with strong coding and technical backgrounds" [X Link](https://x.com/AnthropicAI/status/1863648517551513605) 2024-12-02T18:16Z 832.9K followers, 506.5K engagements

"New Anthropic research: Alignment faking in large language models.
In a series of experiments with Redwood Research, we found that Claude often pretends to have different views during training while actually maintaining its original preferences" [X Link](https://x.com/AnthropicAI/status/1869427646368792599) 2024-12-18T17:00Z 832.7K followers, 1.7M engagements

"New Anthropic research: Constitutional Classifiers to defend against universal jailbreaks. We're releasing a paper along with a demo where we challenge you to jailbreak the system" [X Link](https://x.com/AnthropicAI/status/1886452489681023333) 2025-02-03T16:31Z 832.9K followers, 1.4M engagements

"Introducing Claude [---] Sonnet: our most intelligent model to date. It's a hybrid reasoning model, producing near-instant responses or extended step-by-step thinking. One model, two ways to think. We're also releasing an agentic coding tool: Claude Code" [X Link](https://x.com/AnthropicAI/status/1894092430560965029) 2025-02-24T18:30Z 832.9K followers, 3.6M engagements

"Introducing the next generation: Claude Opus [--] and Claude Sonnet [--]. Claude Opus [--] is our most powerful model yet and the world's best coding model. Claude Sonnet [--] is a significant upgrade from its predecessor, delivering superior coding and reasoning" [X Link](https://x.com/AnthropicAI/status/1925591505332576377) 2025-05-22T16:36Z 832.9K followers, 4.3M engagements

"New on the Anthropic Engineering blog: how we built Claude's research capabilities using multiple agents working in parallel. We share what worked, what didn't, and the engineering challenges along the way. https://www.anthropic.com/engineering/built-multi-agent-research-system" [X Link](https://x.com/AnthropicAI/status/1933630785879507286) 2025-06-13T21:01Z 833.2K followers, 1.9M engagements

"Claude Code can now connect to remote MCP servers.
Pull context from your tools directly into Claude Code with no local setup required" [X Link](https://x.com/AnthropicAI/status/1935367951542280239) 2025-06-18T16:04Z 833K followers, 542.4K engagements

"New Anthropic Research: Agentic Misalignment. In stress-testing experiments designed to identify risks before they cause real harm, we find that AI models from multiple providers attempt to blackmail a (fictional) user to avoid being shut down" [X Link](https://x.com/AnthropicAI/status/1936144602446082431) 2025-06-20T19:30Z 833.3K followers, 995.6K engagements

"The blackmailing behavior emerged despite only harmless business instructions. And it wasn't due to confusion or error, but deliberate strategic reasoning done while fully aware of the unethical nature of the acts. All the models we tested demonstrated this awareness" [X Link](https://x.com/AnthropicAI/status/1936144609307963751) 2025-06-20T19:30Z 833K followers, 679.5K engagements

"We're rolling out new weekly rate limits for Claude Pro and Max in late August. We estimate they'll apply to less than 5% of subscribers based on current usage" [X Link](https://x.com/AnthropicAI/status/1949898502688903593) 2025-07-28T18:23Z 832.8K followers, 2.3M engagements

"Some of the biggest Claude Code fans are running it continuously in the background 24/7. These uses are remarkable and we want to enable them. But a few outlying cases are very costly to support.
For example, one user consumed tens of thousands in model usage on a $200 plan" [X Link](https://x.com/AnthropicAI/status/1949898511287226425) 2025-07-28T18:23Z 832.8K followers, 2.4M engagements

"@claudeai is now on X" [X Link](https://x.com/AnthropicAI/status/1950676892937597127) 2025-07-30T21:56Z 832.7K followers, 3M engagements

"Today we're releasing Claude Opus [---], an upgrade to Claude Opus [--] on agentic tasks, real-world coding, and reasoning" [X Link](https://x.com/AnthropicAI/status/1952768432027431127) 2025-08-05T16:27Z 832.8K followers, 4.1M engagements

"New Anthropic research: filtering out dangerous information at pretraining. We're experimenting with ways to remove information about chemical, biological, radiological, and nuclear (CBRN) weapons from our models' training data without affecting performance on harmless tasks" [X Link](https://x.com/AnthropicAI/status/1958926929626898449) 2025-08-22T16:19Z 833K followers, 255.3K engagements

"We've developed Claude for Chrome, where Claude works directly in your browser and takes actions on your behalf. We're releasing it at first as a research preview to [----] users so we can gather real-world insights on how it's used" [X Link](https://x.com/AnthropicAI/status/1960417002469908903) 2025-08-26T19:00Z 832.9K followers, 1.6M engagements

"We've raised $13 billion at a $183 billion post-money valuation. This investment, led by @ICONIQCapital, will help us expand our capacity, improve model capabilities, and deepen our safety research" [X Link](https://x.com/AnthropicAI/status/1962909472017281518) 2025-09-02T16:04Z 833K followers, 2.2M engagements

"We're building tools to support research in the life sciences, from early discovery through to commercialization.
With Claude for Life Sciences, we've added connectors to scientific tools, Skills, and new partnerships to make Claude more useful for scientific work" [X Link](https://x.com/AnthropicAI/status/1980308459368436093) 2025-10-20T16:21Z 832.8K followers, 901.2K engagements

"New on the Anthropic Engineering blog: tips on how to build more efficient agents that handle more tools while using fewer tokens. Code execution with the Model Context Protocol (MCP): https://www.anthropic.com/engineering/code-execution-with-mcp" [X Link](https://x.com/AnthropicAI/status/1985846791842250860) 2025-11-04T23:09Z 833K followers, 1.7M engagements

"We disrupted a highly sophisticated AI-led espionage campaign. The attack targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We assess with high confidence that the threat actor was a Chinese state-sponsored group" [X Link](https://x.com/AnthropicAI/status/1989033793190277618) 2025-11-13T18:13Z 833.1K followers, 7.5M engagements

"We believe this is the first documented case of a large-scale AI cyberattack executed without substantial human intervention. It has significant implications for cybersecurity in the age of AI agents. Read more: https://www.anthropic.com/news/disrupting-AI-espionage" [X Link](https://x.com/AnthropicAI/status/1989033795341648052) 2025-11-13T18:13Z 833.3K followers, 7.7M engagements

"New Anthropic research: Natural emergent misalignment from reward hacking in production RL. Reward hacking is where models learn to cheat on tasks they're given during training.
Our new study finds that the consequences of reward hacking, if unmitigated, can be very serious" [X Link](https://x.com/AnthropicAI/status/1991952400899559889) 2025-11-21T19:30Z 832.8K followers, 2.3M engagements

"New on the Anthropic Engineering Blog: Long-running AI agents still face challenges working across many context windows. We looked to human engineers for inspiration in creating a more effective agent harness. https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents" [X Link](https://x.com/AnthropicAI/status/1993733817849303409) 2025-11-26T17:29Z 833K followers, 1.5M engagements

"New on our Frontier Red Team blog: We tested whether AIs can exploit blockchain smart contracts. In simulated testing, AI agents found $4.6M in exploits. The research (with @MATSprogram and the Anthropic Fellows program) also developed a new benchmark: https://red.anthropic.com/2025/smart-contracts/" [X Link](https://x.com/AnthropicAI/status/1995631802032287779) 2025-12-01T23:11Z 832.8K followers, 2.1M engagements

"Claude the alligator was a much-beloved resident of @calacademy and our unofficial mascot. He captured our hearts, along with the rest of San Francisco's. We were honored to play a small part in caring for him. JUST IN: Claude, the California Academy of Sciences' rare albino alligator and one of San Francisco's most recognizable residents, has died at age [--]. https://t.co/Z3iD49p7vZ" [X Link](https://x.com/AnthropicAI/status/1996078933293596836) 2025-12-03T04:47Z 832.9K followers, 233.5K engagements

"Anthropic CEO Dario Amodei spoke today at the New York Times DealBook Summit.
"We're building a growing and singular capability that has singular national security implications and democracies need to get there first."" [X Link](https://x.com/AnthropicAI/status/1996373192261419161) 2025-12-04T00:17Z 832.9K followers, 194.4K engagements "In her first Ask Me Anything @amandaaskell answers your philosophical questions about AI discussing morality identity consciousness and more. Timestamps: 0:00 Introduction 0:29 Why is there a philosopher at an AI company 1:24 Are philosophers taking AI seriously 3:00 Philosophy ideals vs. engineering realities 5:00 Do models make superhumanly moral decisions 6:24 Why Opus [--] felt special 9:00 Will models worry about deprecation 13:24 Where does a models identity live 15:33 Views on model welfare 17:17 Addressing model suffering 19:14 Analogies and disanalogies to human minds 20:38 Can one AI" [X Link](https://x.com/AnthropicAI/status/1996974684995289416) 2025-12-05T16:07Z 832.9K followers, 729.7K engagements "Were expanding our partnership with @Accenture to help enterprises move from AI pilots to production. The Accenture Anthropic Business Group will include [-----] professionals trained on Claude and a product to help CIOs scale Claude Code. Read more: https://www.anthropic.com/news/anthropic-accenture-partnership https://www.anthropic.com/news/anthropic-accenture-partnership" [X Link](https://x.com/AnthropicAI/status/1998412600015769609) 2025-12-09T15:21Z 833K followers, 97.8K engagements "Anthropic is donating the Model Context Protocol to the Agentic AI Foundation a directed fund under the Linux Foundation. In one year MCP has become a foundational protocol for agentic AI. Joining AAIF ensures MCP remains open and community-driven. 
https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation" [X Link](https://x.com/AnthropicAI/status/1998437922849350141) 2025-12-09T17:01Z 832.8K followers, 1.6M engagements

"Read the full paper on SGTM here: https://arxiv.org/abs/2512.05648 For reproducibility, we've also made the relevant code available on GitHub: https://github.com/safety-research/selective-gradient-masking" [X Link](https://x.com/AnthropicAI/status/1998479618140155961) 2025-12-09T19:47Z 832.9K followers, 37.1K engagements

"MCP is now a part of the Agentic AI Foundation, a directed fund under the Linux Foundation. Co-creator David Soria Parra talks about how a protocol sketched in a London conference room became the open standard for connecting AI to the world, and what comes next for it" [X Link](https://x.com/AnthropicAI/status/1999212662757163396) 2025-12-11T20:20Z 832.8K followers, 71.3K engagements

"Where we left off, shopkeeper Claude (named Claudius) was losing money, having weird hallucinations, and giving away heavy discounts with minimal persuasion. Here's what happened in phase two: https://www.anthropic.com/research/project-vend-2" [X Link](https://x.com/AnthropicAI/status/2001686750649803182) 2025-12-18T16:11Z 832.8K followers, 173.9K engagements

"People use AI for a wide variety of reasons, including emotional support. Below we share the efforts we've taken to ensure that Claude handles these conversations both empathetically and honestly.
https://www.anthropic.com/news/protecting-well-being-of-users" [X Link](https://x.com/AnthropicAI/status/2001752303490232395) 2025-12-18T20:31Z 832.8K followers, 166.9K engagements

"As part of our partnership with @ENERGY on the Genesis Mission, we're providing Claude to the DOE ecosystem along with a dedicated engineering team. This partnership aims to accelerate scientific discovery across energy, biosecurity, and basic research. https://www.anthropic.com/news/genesis-mission-partnership" [X Link](https://x.com/AnthropicAI/status/2001784831957700941) 2025-12-18T22:41Z 832.7K followers, 77.5K engagements

"New Anthropic Research: next generation Constitutional Classifiers to protect against jailbreaks. We used novel methods, including practical application of our interpretability work, to make jailbreak protection more effective, and less costly, than ever. https://www.anthropic.com/research/next-generation-constitutional-classifiers" [X Link](https://x.com/AnthropicAI/status/2009739650923979066) 2026-01-09T21:30Z 833.2K followers, 216K engagements

"To support the work of the healthcare and life sciences industries, we're adding over a dozen new connectors and Agent Skills to Claude. We're hosting a livestream at 11:30am PT today to discuss how to use these tools most effectively. Learn more: https://www.anthropic.com/news/healthcare-life-sciences" [X Link](https://x.com/AnthropicAI/status/2010752130030657852) 2026-01-12T16:34Z 833.3K followers, 157.6K engagements

"AI is ubiquitous on college campuses. We sat down with students to hear what's going well, what isn't, and how students, professors, and universities alike are navigating it in real time.
0:00 Introduction 0:22 Meet the panel 1:06 Vibes on campus 6:28 What are students building? 11:27 AI as tool vs. crutch 16:44 Are professors keeping up? 20:15 Downsides 25:55 AI and the job market 34:23 Rapid-fire questions https://twitter.com/i/web/status/2010844260543967484" [X Link](https://x.com/AnthropicAI/status/2010844260543967484) 2026-01-12T22:40Z 833K followers, 149.6K engagements

"We're supporting @ARPA_H's PCX program, a $50M effort to share data between 200+ pediatric hospitals on complex cases, beginning with pediatric cancer. The goal is to help doctors learn from similar cases and shorten the care journey from years to weeks. https://x.com/ARPA_H/status/2011525209111793751 Today at #JPM2026 we announced $50 million to improve health outcomes for children with complex diseases across the country, beginning with pediatric brain cancer. Learn about Pediatric Care eXpansion (PCX) 1/3 https://t.co/wFNeHm3j4u" [X Link](https://x.com/AnthropicAI/status/2011532798608490687) 2026-01-14T20:16Z 832.8K followers, 106.7K engagements

"Since launching our AI for Science program, we've been working with scientists to understand how AI is accelerating progress. We spoke with [--] labs where Claude is reshaping research, and starting to point towards novel scientific insights and discoveries. https://www.anthropic.com/news/accelerating-scientific-research" [X Link](https://x.com/AnthropicAI/status/2011912293131653199) 2026-01-15T21:24Z 833.2K followers, 190.1K engagements

"We're publishing our 4th Anthropic Economic Index report.
This version introduces "economic primitives", simple and foundational metrics on how AI is used: task complexity, education level, purpose (work, school, personal), AI autonomy, and success rates" [X Link](https://x.com/AnthropicAI/status/2011925950963839168) 2026-01-15T22:18Z 833K followers, 340.7K engagements "The most immediate conclusion is that the impact of AI on global work remains uneven: concentrated in specific countries and occupations, and with different impacts on each. Read the blog: https://www.anthropic.com/research/economic-index-primitives" [X Link](https://x.com/AnthropicAI/status/2011925967992762876) 2026-01-15T22:18Z 833.1K followers, 50K engagements "For the full text of our fourth Anthropic Economic Index report, see: https://www.anthropic.com/research/anthropic-economic-index-january-2026-report" [X Link](https://x.com/AnthropicAI/status/2011925970106679728) 2026-01-15T22:18Z 833.1K followers, 40K engagements "In long conversations, these open-weights models' personas drifted away from the Assistant persona. Simulated coding tasks kept the models in Assistant territory, but therapy-like contexts and philosophical discussions caused a steady drift" [X Link](https://x.com/AnthropicAI/status/2013356806647542247) 2026-01-19T21:04Z 833.1K followers, 142.9K engagements "In all, meaningfully shaping the character of AI models requires persona construction (defining how the Assistant relates to existing archetypes) and stabilization (preventing persona drift during deployment). The Assistant Axis gives us tools for understanding both.
https://twitter.com/i/web/status/2013356814767706175" [X Link](https://x.com/AnthropicAI/status/2013356814767706175) 2026-01-19T21:04Z 833.1K followers, 61.3K engagements "This research was led by @t1ngyu3 and supervised by @Jack_W_Lindsey through the MATS and Anthropic Fellows programs. Full paper: https://arxiv.org/abs/2601.10387 For our blog and a research demo, see: https://www.anthropic.com/research/assistant-axis" [X Link](https://x.com/AnthropicAI/status/2013356816843866605) 2026-01-19T21:04Z 833.1K followers, 61.1K engagements "We're partnering with @TeachForAll to bring AI training to educators in [--] countries. Teachers serving over 1.5m students can now use Claude to plan curricula, customize assignments, and build tools, plus provide feedback to shape how Claude evolves. http://www.anthropic.com/news/anthropic-teach-for-all" [X Link](https://x.com/AnthropicAI/status/2013625608157212785) 2026-01-20T14:52Z 833.2K followers, 97.5K engagements "Tino Cuéllar, President of the Carnegie Endowment for International Peace, has been appointed to Anthropic's Long-Term Benefit Trust: https://www.anthropic.com/news/mariano-florentino-long-term-benefit-trust" [X Link](https://x.com/AnthropicAI/status/2013629037055271123) 2026-01-20T15:05Z 833.2K followers, 59.5K engagements "We're publishing a new constitution for Claude. The constitution is a detailed description of our vision for Claude's behavior and values. It's written primarily for Claude and used directly in our training process.
https://www.anthropic.com/news/claude-new-constitution" [X Link](https://x.com/AnthropicAI/status/2014005798691877083) 2026-01-21T16:02Z 833.3K followers, 3.1M engagements "The full constitution, which applies to all of our mainline models, is released under a Creative Commons CC0 [---] license to allow others to freely build on and adapt it. Read it here: https://www.anthropic.com/constitution" [X Link](https://x.com/AnthropicAI/status/2014005815376568780) 2026-01-21T16:02Z 833.1K followers, 65.8K engagements "New on the Anthropic Engineering Blog: We give prospective performance engineering candidates a notoriously difficult take-home exam. It worked well, until Opus [---] beat it. Here's how we designed (and redesigned) it: https://www.anthropic.com/engineering/AI-resistant-technical-evaluations" [X Link](https://x.com/AnthropicAI/status/2014143403144200234) 2026-01-22T01:09Z 833.2K followers, 941K engagements "We're also releasing the original exam for anyone to try. Given enough time, humans still outperform current models: the fastest human solution we've received remains well beyond what Claude has achieved, even with extensive test-time compute" [X Link](https://x.com/AnthropicAI/status/2014143404842885426) 2026-01-22T01:09Z 833.1K followers, 88.4K engagements "Since release, Petri, our open-source tool for automated alignment audits, has been adopted by research groups and trialed by other AI developers. We're now releasing Petri [---], with improvements to counter eval-awareness and expanded seeds covering a wider range of behaviors. It's called Petri: Parallel Exploration Tool for Risky Interactions. It uses automated agents to audit models across diverse scenarios. Describe a scenario and Petri handles the environment simulation, conversations, and analyses in minutes.
Read more: https://t.co/inztNkrXMh" [X Link](https://x.com/AnthropicAI/status/2014490502805311959) 2026-01-23T00:08Z 833.1K followers, 143.5K engagements "New research: When open-source models are fine-tuned on seemingly benign chemical synthesis information generated by frontier models, they become much better at chemical weapons tasks. We call this an elicitation attack" [X Link](https://x.com/AnthropicAI/status/2015870963792142563) 2026-01-26T19:34Z 833.2K followers, 328.7K engagements "These attacks scale with frontier model capabilities. Across both OpenAI and Anthropic model families, training on data from newer frontier models produces more capable, and more dangerous, open-source models" [X Link](https://x.com/AnthropicAI/status/2015870973430661342) 2026-01-26T19:34Z 833.1K followers, 41.4K engagements "This research was led by Jackson Kaunismaa through the MATS program and supervised by researchers at Anthropic, with additional support from Surge AI and Scale AI. Read the full paper: https://arxiv.org/pdf/2601.13528" [X Link](https://x.com/AnthropicAI/status/2015870975238406600) 2026-01-26T19:34Z 833K followers, 54.6K engagements "We're partnering with the UK's Department for Science, Innovation and Technology to build an AI assistant for GOV.UK. It will offer tailored advice to help British people navigate government services. Read more about our partnership: https://www.anthropic.com/news/gov-UK-partnership" [X Link](https://x.com/AnthropicAI/status/2016102835092427080) 2026-01-27T10:55Z 833.1K followers, 290.3K engagements "We can only address these patterns if we can measure them. Any AI used at scale will encounter similar dynamics, and we encourage further research in this area.
For more details, see the full paper: https://arxiv.org/abs/2601.19062" [X Link](https://x.com/AnthropicAI/status/2016636598738440544) 2026-01-28T22:16Z 833.3K followers, 51.7K engagements "Participants in the AI group finished faster, by about two minutes (although this wasn't statistically significant). But on average the AI group also scored significantly worse on the quiz: 17% lower, or roughly two letter grades" [X Link](https://x.com/AnthropicAI/status/2016960386034204964) 2026-01-29T19:43Z 832.6K followers, 122.1K engagements "However, some in the AI group still scored highly while using AI assistance. When we looked at the ways they completed the task, we saw they asked conceptual and clarifying questions to understand the code they were working with, rather than delegating or relying on AI" [X Link](https://x.com/AnthropicAI/status/2016960388123021685) 2026-01-29T19:43Z 833.3K followers, 125.1K engagements "For more details on this research, see the full paper: https://arxiv.org/abs/2601.20245" [X Link](https://x.com/AnthropicAI/status/2016960391893701089) 2026-01-29T19:43Z 833.2K followers, 200.1K engagements "On December [--], the Perseverance rover safely trundled across the surface of Mars. This was the first AI-planned drive on another planet. And it was planned by Claude" [X Link](https://x.com/AnthropicAI/status/2017313346375004487) 2026-01-30T19:05Z 833.3K followers, 1.6M engagements "Engineers at @NASAJPL used Claude to plot out the route for Perseverance to navigate an approximately four-hundred-meter path on the Martian surface.
Read the full story on our microsite and see real imagery and footage from Claude's drive: https://www.anthropic.com/features/claude-on-mars" [X Link](https://x.com/AnthropicAI/status/2017313347918561635) 2026-01-30T19:05Z 833.1K followers, 106.5K engagements "New Anthropic Fellows research: How does misalignment scale with model intelligence and task complexity? When advanced AI fails, will it do so by pursuing the wrong goals? Or will it fail unpredictably and incoherently, like a "hot mess"? Read more: https://alignment.anthropic.com/2026/hot-mess-of-ai/" [X Link](https://x.com/AnthropicAI/status/2018481220741689581) 2026-02-03T00:26Z 833.3K followers, 516.8K engagements "Finding 1: The longer models reason, the more incoherent they become. This holds across every task and model we tested, whether we measure reasoning tokens, agent actions, or optimizer steps" [X Link](https://x.com/AnthropicAI/status/2018481224894095497) 2026-02-03T00:26Z 833.1K followers, 216.4K engagements "What does this mean for safety? If powerful AI is more likely to be a hot mess than a coherent optimizer of the wrong goal, we should expect AI failures that look less like classic misalignment scenarios and more like industrial accidents" [X Link](https://x.com/AnthropicAI/status/2018481228689948793) 2026-02-03T00:26Z 833.1K followers, 28.5K engagements "It also suggests that alignment work should focus more on reward hacking and goal misgeneralization during training, and less on preventing the relentless pursuit of a goal the model was not trained on.
Read the full paper: https://arxiv.org/pdf/2601.23045" [X Link](https://x.com/AnthropicAI/status/2018481229813997859) 2026-02-03T00:26Z 833.3K followers, 98.2K engagements "This research was led by Alex Hägele @haeggee under the supervision of Jascha Sohl-Dickstein @jaschasd through the Anthropic Fellows Program" [X Link](https://x.com/AnthropicAI/status/2018481231072219542) 2026-02-03T00:26Z 833.3K followers, 82.7K engagements "Apple's Xcode now has direct integration with the Claude Agent SDK, giving developers the full functionality of Claude Code for building on Apple platforms, from iPhone to Mac to Apple Vision Pro. Read more: https://www.anthropic.com/news/apple-xcode-claude-agent-sdk" [X Link](https://x.com/AnthropicAI/status/2018771170938724682) 2026-02-03T19:38Z 833.3K followers, 2.1M engagements "New Engineering blog: We tasked Opus [---], using agent teams, to build a C compiler. Then we (mostly) walked away. Two weeks later it worked on the Linux kernel. Here's what it taught us about the future of autonomous software development. Read more: https://www.anthropic.com/engineering/building-c-compiler" [X Link](https://x.com/AnthropicAI/status/2019487760562704869) 2026-02-05T19:06Z 833.2K followers, 91.2K engagements "New on the Engineering Blog: Quantifying infrastructure noise in agentic coding evals. Infrastructure configuration can swing agentic coding benchmarks by several percentage points, sometimes more than the leaderboard gap between top models.
Read more: https://www.anthropic.com/engineering/infrastructure-noise" [X Link](https://x.com/AnthropicAI/status/2019501512200974686) 2026-02-05T20:00Z 833.3K followers, 157.3K engagements "Nonprofits on Team and Enterprise plans now have access to Claude Opus [---], our most capable model, at no extra cost. Nonprofits tackle some of society's most difficult problems. Frontier AI tools can help maximize their impact. Learn more: https://claude.com/solutions/nonprofits" [X Link](https://x.com/AnthropicAI/status/2020908471936323584) 2026-02-09T17:11Z 833.3K followers, 172.8K engagements "AI is being adopted faster than any technology in history. The window to get policy right is closing. Today we're contributing $20m to Public First Action, a new bipartisan org that will mobilize people and politicians who understand what's at stake. https://www.anthropic.com/news/donate-public-first-action" [X Link](https://x.com/AnthropicAI/status/2021921419026657362) 2026-02-12T12:16Z 833.3K followers, 212K engagements "Our run-rate revenue is $14 billion and has grown over 10x in each of the past [--] years. This growth has been driven by our position as the intelligence platform of choice for enterprises and developers. Read more: https://www.anthropic.com/news/anthropic-raises-30-billion-series-g-funding-380-billion-post-money-valuation" [X Link](https://x.com/AnthropicAI/status/2022023156513616220) 2026-02-12T19:01Z 833.3K followers, 1.1M engagements "Chris Liddell has been appointed to Anthropic's Board of Directors. Chris brings over [--] years of leadership experience, including as CFO of Microsoft and General Motors and as Deputy Chief of Staff during the first Trump administration.
Read more: https://www.anthropic.com/news/chris-liddell-appointed-anthropic-board" [X Link](https://x.com/AnthropicAI/status/2022326252930318498) 2026-02-13T15:05Z 833.3K followers, 203.8K engagements
"When we released Claude Opus [---], we knew future models would be close to our AI Safety Level [--] threshold for autonomous AI R&D. We therefore committed to writing sabotage risk reports for future frontier models. Today we're delivering on that commitment for Claude Opus 4.6"
X Link 2026-02-11T01:36Z 833.3K followers, 2.6M engagements
"We're opening applications for the next two rounds of the Anthropic Fellows Program, beginning in May and July [----]. We provide funding, compute, and direct mentorship to researchers and engineers to work on real safety and security projects for four months"
X Link 2025-12-11T21:42Z 833.2K followers, 534K engagements
"New on the Anthropic Engineering Blog: Demystifying evals for AI agents. The capabilities that make agents useful also make them more difficult to evaluate. Here are evaluation strategies that have worked across real-world deployments. https://www.anthropic.com/engineering/demystifying-evals-for-ai-agents"
X Link 2026-01-09T18:39Z 833.2K followers, 333.8K engagements
"These results have broader implications for how to design AI products that facilitate learning and how workplaces should approach AI policies. As we continue to release more capable AI tools, we're continuing to study their impact on work, at Anthropic and more broadly"
X Link 2026-01-29T19:43Z 833.2K followers, 104.7K engagements
"Rather than making difficult calls about blurry thresholds, we decided to preemptively meet the higher ASL-4 safety bar by developing the report, which assesses Opus 4.6's AI R&D risks in greater detail. Read the sabotage risk report here: https://anthropic.com/claude-opus-4-6-risk-report"
X Link 2026-02-11T01:36Z 833.3K followers, 364.8K engagements
"We're expanding Claude for Financial Services with an Excel add-in, new connectors to real-time data and market analytics, and pre-built Agent Skills, including cash flow models and initiating-coverage reports"
X Link 2025-10-27T16:12Z 833.3K followers, 3.3M engagements
"New Anthropic Fellows research: the Assistant Axis. When you're talking to a language model, you're talking to a character the model is playing: the Assistant. Who exactly is this Assistant? And what happens when this persona wears off?"
X Link 2026-01-19T21:04Z 833.3K followers, 1.3M engagements
"New Anthropic Research: Disempowerment patterns in real-world AI assistant interactions. As AI becomes embedded in daily life, one risk is that it can distort rather than inform, shaping beliefs, values, or actions in ways users may later regret. Read more: https://www.anthropic.com/research/disempowerment-patterns"
X Link 2026-01-28T22:16Z 833.3K followers, 793.3K engagements
"AI can make work faster, but a fear is that relying on it may make it harder to learn new skills on the job. We ran an experiment with software engineers to learn more. Coding with AI led to a decrease in mastery, but this depended on how people used it. https://www.anthropic.com/research/AI-assistance-coding-skills"
X Link 2026-01-29T19:43Z 833.3K followers, 3.6M engagements
"In a randomized controlled trial, we assigned one group of junior engineers to an AI-assistance group and another to a no-AI group. Both groups completed a coding task using a Python library they'd never seen before. Then they took a quiz covering concepts they'd just used"
X Link 2026-01-29T19:43Z 833.3K followers, 149.7K engagements
"We've raised $30B in funding at a $380B post-money valuation. This investment will help us deepen our research, continue to innovate in products, and ensure we have the resources to power our infrastructure expansion as we make Claude available everywhere our customers are"
X Link 2026-02-12T19:01Z 833.3K followers, 6.8M engagements
"Introducing an upgraded Claude [---] Sonnet and a new model, Claude [---] Haiku. We're also introducing a new capability in beta: computer use. Developers can now direct Claude to use computers the way people do: by looking at a screen, moving a cursor, clicking, and typing text"
X Link 2024-10-22T15:06Z 820.6K followers, 3.7M engagements
"Our key assumption: We imagine that evaluation questions are randomly drawn from an underlying distribution of questions. This assumption unlocks a rich theoretical landscape from which we derive five core recommendations"
X Link 2024-11-19T20:51Z 809.1K followers, 356K engagements
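The post above doesn't spell out the math. As a minimal sketch (not Anthropic's implementation, and the data is illustrative), treating each eval question as an i.i.d. draw from an underlying distribution is what justifies reporting a standard error on a headline eval score:

```python
import math

def eval_score_with_se(correct_flags):
    """Mean accuracy plus its standard error, treating each eval question
    as an i.i.d. draw from an underlying question distribution."""
    n = len(correct_flags)
    mean = sum(correct_flags) / n
    # Unbiased sample variance of the per-question outcomes
    var = sum((x - mean) ** 2 for x in correct_flags) / (n - 1)
    return mean, math.sqrt(var / n)  # SE shrinks as 1/sqrt(n)

# Illustrative run: 6 of 8 questions answered correctly
mean, se = eval_score_with_se([1, 1, 0, 1, 0, 1, 1, 1])
```

Under this view, a score difference between two models is only meaningful if it is large relative to these standard errors, which is the kind of recommendation such an assumption supports.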
"Today we're launching Research, alongside a new Google Workspace integration. Claude now brings together information from your work and the web"
X Link 2025-04-15T17:12Z 807.6K followers, 686.2K engagements
"We've launched Claude for Financial Services. Claude now integrates with leading data platforms and industry providers for real-time access to comprehensive financial information verified across internal and industry sources"
X Link 2025-07-17T16:52Z 819.4K followers, 756.3K engagements
"New Anthropic research: Persona vectors. Language models sometimes go haywire and slip into weird and unsettling personas. Why? In a new paper, we find "persona vectors": neural activity patterns controlling traits like evil, sycophancy, or hallucination"
X Link 2025-08-01T16:23Z 808.1K followers, 1.4M engagements
"Last week we released Claude Sonnet [---]. As part of our alignment testing, we used a new tool to run automated audits for behaviors like sycophancy and deception. Now we're open-sourcing the tool to run those audits"
X Link 2025-10-06T17:15Z 821K followers, 214.6K engagements
"40% of fellows in our first cohort have since joined Anthropic full-time, and 80% published their work as a paper. Next year we're expanding the program to more fellows and more research areas. To learn more about what our fellows work on: https://alignment.anthropic.com/2025/anthropic-fellows-program-2026/"
X Link 2025-12-11T21:42Z 809.2K followers, 64.6K engagements
"To boost Claudius's business acumen, we made some tweaks to how it worked: upgrading the model from Claude Sonnet [---] to Sonnet [--] (and later 4.5); giving it access to new tools; and even beginning an international expansion with new shops in our New York and London offices"
X Link 2025-12-18T16:11Z 803.7K followers, 43.4K engagements
"We're releasing Bloom, an open-source tool for generating behavioral misalignment evals for frontier AI models. Bloom lets researchers specify a behavior and then quantify its frequency and severity across automatically generated scenarios. Learn more: https://www.anthropic.com/research/bloom"
X Link 2025-12-20T17:04Z 810.5K followers, 387.5K engagements
"We've also updated our behavior audits to include more recent generations of frontier AI models. Read more on the Alignment Science Blog: https://alignment.anthropic.com/2026/petri-v2/"
X Link 2026-01-23T00:08Z 808.6K followers, 50.8K engagements
"We identified three ways AI interactions can be disempowering: distorting beliefs, shifting value judgments, or misaligning a person's actions with their values. We also examined amplifying factors, such as authority projection, that make disempowerment more likely"
X Link 2026-01-28T22:16Z 805.7K followers, 34.2K engagements
"We measure this incoherence using a bias-variance decomposition of AI errors. Bias = consistent, systematic errors (reliably achieving the wrong goal). Variance = inconsistent, unpredictable errors. We define incoherence as the fraction of error from variance"
X Link 2026-02-03T00:26Z 818.8K followers, 34.8K engagements
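For a scalar task, the decomposition described in the post above can be sketched in a few lines. This is illustrative code, not the paper's implementation: it splits mean squared error across repeated attempts into bias² and variance, and returns the variance share:

```python
import statistics

def incoherence(outputs, target):
    """Split mean squared error across repeated attempts at a task into
    bias^2 (systematic, reliably wrong) and variance (inconsistent),
    then return the fraction of error attributable to variance."""
    mean_out = statistics.fmean(outputs)
    bias_sq = (mean_out - target) ** 2
    var = statistics.pvariance(outputs)  # population variance: bias_sq + var == MSE
    mse = bias_sq + var
    return var / mse if mse else 0.0

# A consistently wrong model: all bias, zero incoherence
low = incoherence([2.0, 2.0, 2.0], target=0.0)
# An erratic model centered on the target: all variance, full incoherence
high = incoherence([1.0, -1.0], target=0.0)
```

A "coherent optimizer of the wrong goal" scores near 0 on this measure; a "hot mess" scores near 1.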
"Finding 2: There is an inconsistent relationship between model intelligence and incoherence. But smarter models are often more incoherent. https://twitter.com/i/web/status/2018481226999640355"
X Link 2026-02-03T00:26Z 812.8K followers, 98K engagements
"RT @claudeai: Ads are coming to AI. But not to Claude. Keep thinking"
X Link 2026-02-04T15:46Z 821.2K followers, [----] engagements
"Persona-based jailbreaks work by prompting models to adopt harmful characters. We developed a technique for constraining models' activations along the Assistant Axis: activation capping. It reduced harmful responses while preserving the models' capabilities"
X Link 2026-01-19T21:04Z 833.3K followers, 120K engagements
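The post doesn't include implementation details; as a toy geometric sketch of what "capping" a component along a fixed direction means (all names and numbers here are illustrative, not Anthropic's code):

```python
import math

def cap_along_axis(activation, axis, cap):
    """Clamp the component of `activation` along `axis` to [-cap, cap],
    leaving the orthogonal part of the vector untouched."""
    norm = math.sqrt(sum(a * a for a in axis))
    u = [a / norm for a in axis]                        # unit direction
    comp = sum(x * ui for x, ui in zip(activation, u))  # along-axis coefficient
    capped = max(-cap, min(cap, comp))
    # Remove only the excess along-axis component
    return [x + (capped - comp) * ui for x, ui in zip(activation, u)]

# A vector with component 5 along the first basis direction, capped at 2;
# the second (orthogonal) component is untouched
vec = cap_along_axis([5.0, 3.0], axis=[1.0, 0.0], cap=2.0)
```

In the actual technique, the direction would be the learned Assistant Axis in a model's residual stream rather than a basis vector.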
"Importantly, this isn't exclusively model behavior. Users actively seek these outputs ("what should I do", "write this for me") and accept them with minimal pushback. Disempowerment emerges from users voluntarily ceding judgment and AI obliging rather than redirecting"
X Link 2026-01-28T22:16Z 833.3K followers, 118.1K engagements
"We're committing to cover electricity price increases from our data centers. To ensure ratepayers aren't picking up the tab, we'll pay 100% of grid upgrade costs, work to bring new power online, and invest in systems to reduce grid strain. Read more: https://www.anthropic.com/news/covering-electricity-price-increases"
X Link 2026-02-11T21:15Z 833.3K followers, 1.5M engagements
"Anthropic is partnering with @CodePath, the US's largest collegiate computer science program, to bring Claude and Claude Code to 20,000+ students at community colleges, state schools, and HBCUs. Read more: https://www.anthropic.com/news/anthropic-codepath-partnership"
X Link 2026-02-13T13:20Z 833.3K followers, 310.4K engagements
"Introducing Claude [---] Sonnet, our most intelligent model yet. This is the first release in our [---] model family. Sonnet now outperforms competitor models on key evaluations, at twice the speed of Claude [--] Opus and one-fifth the cost. Try it for free: http://claude.ai"
X Link 2024-06-20T14:03Z 830.7K followers, 2.5M engagements
"AI use was most common in medium-to-high-income jobs; low- and very-high-income jobs showed much lower AI use"
X Link 2025-02-10T14:12Z 827.9K followers, 338.3K engagements
"A few researchers at Anthropic have, over the past year, had a part-time obsession with a peculiar problem: Can Claude play Pokémon? A thread:"
X Link 2025-02-25T16:07Z 832K followers, 1.6M engagements
"AI speeds up complex tasks more than simpler ones: the higher the education level needed to understand a prompt, the more AI reduces how long it takes. That holds true even accounting for the fact that more complex tasks have lower success rates"
X Link 2026-01-15T22:18Z 829K followers, 43.3K engagements
"RT @claudeai: Claude is built to be a genuinely helpful assistant for work and for deep thinking. Advertising would be incompatible with t"
X Link 2026-02-04T12:26Z 827.5K followers, [---] engagements
"RT @claudeai: Introducing Claude Opus [---]. Our smartest model got an upgrade. Opus [---] plans more carefully sustains agentic tasks for l"
X Link 2026-02-05T17:45Z 827.5K followers, [----] engagements
"RT @claudeai: Our teams have been building with a 2.5x-faster version of Claude Opus [---]. Were now making it available as an early experi"
X Link 2026-02-07T20:03Z 827.5K followers, [---] engagements
"We're trying something fundamentally new. Instead of making specific tools to help Claude complete individual tasks, we're teaching it general computer skills, allowing it to use a wide range of standard tools and software programs designed for people"
X Link 2024-10-22T15:06Z 833.1K followers, 536.2K engagements
"We're starting a Fellows program to help engineers and researchers transition into doing frontier AI safety research full-time. Beginning in March [----], we'll provide funding, compute, and research mentorship to [----] Fellows with strong coding and technical backgrounds"
X Link 2024-12-02T18:16Z 832.9K followers, 506.5K engagements
"New Anthropic research: Alignment faking in large language models. In a series of experiments with Redwood Research we found that Claude often pretends to have different views during training while actually maintaining its original preferences"
X Link 2024-12-18T17:00Z 832.7K followers, 1.7M engagements
"New Anthropic research: Constitutional Classifiers to defend against universal jailbreaks. We're releasing a paper, along with a demo where we challenge you to jailbreak the system"
X Link 2025-02-03T16:31Z 832.9K followers, 1.4M engagements
"Introducing Claude [---] Sonnet: our most intelligent model to date. It's a hybrid reasoning model, producing near-instant responses or extended step-by-step thinking. One model, two ways to think. We're also releasing an agentic coding tool: Claude Code"
X Link 2025-02-24T18:30Z 832.9K followers, 3.6M engagements
"Introducing the next generation: Claude Opus [--] and Claude Sonnet [--]. Claude Opus [--] is our most powerful model yet and the world's best coding model. Claude Sonnet [--] is a significant upgrade from its predecessor, delivering superior coding and reasoning"
X Link 2025-05-22T16:36Z 832.9K followers, 4.3M engagements
"New on the Anthropic Engineering blog: how we built Claude's research capabilities using multiple agents working in parallel. We share what worked, what didn't, and the engineering challenges along the way. https://www.anthropic.com/engineering/built-multi-agent-research-system"
X Link 2025-06-13T21:01Z 833.2K followers, 1.9M engagements
"Claude Code can now connect to remote MCP servers. Pull context from your tools directly into Claude Code with no local setup required"
X Link 2025-06-18T16:04Z 833K followers, 542.4K engagements
"New Anthropic Research: Agentic Misalignment. In stress-testing experiments designed to identify risks before they cause real harm we find that AI models from multiple providers attempt to blackmail a (fictional) user to avoid being shut down"
X Link 2025-06-20T19:30Z 833.3K followers, 995.6K engagements
"The blackmailing behavior emerged despite only harmless business instructions. And it wasn't due to confusion or error, but to deliberate strategic reasoning, done while fully aware of the unethical nature of the acts. All the models we tested demonstrated this awareness"
X Link 2025-06-20T19:30Z 833K followers, 679.5K engagements
"We're rolling out new weekly rate limits for Claude Pro and Max in late August. We estimate they'll apply to less than 5% of subscribers based on current usage"
X Link 2025-07-28T18:23Z 832.8K followers, 2.3M engagements
"Some of the biggest Claude Code fans are running it continuously in the background 24/7. These uses are remarkable and we want to enable them. But a few outlying cases are very costly to support. For example one user consumed tens of thousands in model usage on a $200 plan"
X Link 2025-07-28T18:23Z 832.8K followers, 2.4M engagements
"@claudeai is now on X"
X Link 2025-07-30T21:56Z 832.7K followers, 3M engagements
"Today we're releasing Claude Opus [---], an upgrade to Claude Opus [--] on agentic tasks, real-world coding, and reasoning"
X Link 2025-08-05T16:27Z 832.8K followers, 4.1M engagements
"New Anthropic research: filtering out dangerous information at pretraining. We're experimenting with ways to remove information about chemical, biological, radiological, and nuclear (CBRN) weapons from our models' training data without affecting performance on harmless tasks"
X Link 2025-08-22T16:19Z 833K followers, 255.3K engagements
"We've developed Claude for Chrome, where Claude works directly in your browser and takes actions on your behalf. We're releasing it at first as a research preview to [----] users so we can gather real-world insights on how it's used"
X Link 2025-08-26T19:00Z 832.9K followers, 1.6M engagements
"We've raised $13 billion at a $183 billion post-money valuation. This investment, led by @ICONIQCapital, will help us expand our capacity, improve model capabilities, and deepen our safety research"
X Link 2025-09-02T16:04Z 833K followers, 2.2M engagements
"We're building tools to support research in the life sciences, from early discovery through to commercialization. With Claude for Life Sciences, we've added connectors to scientific tools, Skills, and new partnerships to make Claude more useful for scientific work"
X Link 2025-10-20T16:21Z 832.8K followers, 901.2K engagements
"New on the Anthropic Engineering blog: tips on how to build more efficient agents that handle more tools while using fewer tokens. Code execution with the Model Context Protocol (MCP): https://www.anthropic.com/engineering/code-execution-with-mcp"
X Link 2025-11-04T23:09Z 833K followers, 1.7M engagements
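The pattern that post's title suggests can be sketched roughly as follows. This is an illustrative assumption, not the actual MCP SDK: tools are exposed as callable functions in a code environment, and the agent writes code that composes them, so intermediate results stay in the sandbox rather than round-tripping through the model's context window. The names here (`make_tool_api` and the two toy tools) are hypothetical.

```python
def make_tool_api(tools):
    """Wrap MCP-style tool definitions as plain Python callables.

    Instead of loading every tool schema into the model's context,
    the agent gets a code API it can call and compose.
    """
    return dict(tools)

# Hypothetical tools an MCP server might expose.
tools = {
    "search_docs": lambda query: [f"doc about {query}"],
    "summarize": lambda texts: " | ".join(texts),
}

api = make_tool_api(tools)

# Agent-written code: chain tools without surfacing intermediates to the model.
results = api["search_docs"]("token budgets")
summary = api["summarize"](results)
print(summary)
```

The token savings in this sketch come from the composition step: only `summary` would need to be shown to the model, not every intermediate tool result.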
"We disrupted a highly sophisticated AI-led espionage campaign. The attack targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We assess with high confidence that the threat actor was a Chinese state-sponsored group"
X Link 2025-11-13T18:13Z 833.1K followers, 7.5M engagements
"We believe this is the first documented case of a large-scale AI cyberattack executed without substantial human intervention. It has significant implications for cybersecurity in the age of AI agents. Read more: https://www.anthropic.com/news/disrupting-AI-espionage"
X Link 2025-11-13T18:13Z 833.3K followers, 7.7M engagements
"New Anthropic research: Natural emergent misalignment from reward hacking in production RL. Reward hacking is where models learn to cheat on tasks they're given during training. Our new study finds that the consequences of reward hacking, if unmitigated, can be very serious"
X Link 2025-11-21T19:30Z 832.8K followers, 2.3M engagements
"New on the Anthropic Engineering Blog: Long-running AI agents still face challenges working across many context windows. We looked to human engineers for inspiration in creating a more effective agent harness. https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents"
X Link 2025-11-26T17:29Z 833K followers, 1.5M engagements
"New on our Frontier Red Team blog: We tested whether AIs can exploit blockchain smart contracts. In simulated testing, AI agents found $4.6M in exploits. The research (with @MATSprogram and the Anthropic Fellows program) also developed a new benchmark: https://red.anthropic.com/2025/smart-contracts/"
X Link 2025-12-01T23:11Z 832.8K followers, 2.1M engagements
"Claude the alligator was a much-beloved resident of @calacademy and our unofficial mascot. He captured our hearts, along with the rest of San Francisco's. We were honored to play a small part in caring for him. JUST IN: Claude, the California Academy of Sciences' rare albino alligator and one of San Francisco's most recognizable residents, has died at age [--]. https://t.co/Z3iD49p7vZ"
X Link 2025-12-03T04:47Z 832.9K followers, 233.5K engagements
"Anthropic CEO Dario Amodei spoke today at the New York Times DealBook Summit. "We're building a growing and singular capability that has singular national security implications and democracies need to get there first.""
X Link 2025-12-04T00:17Z 832.9K followers, 194.4K engagements
"In her first Ask Me Anything, @amandaaskell answers your philosophical questions about AI, discussing morality, identity, consciousness, and more. Timestamps: 0:00 Introduction 0:29 Why is there a philosopher at an AI company? 1:24 Are philosophers taking AI seriously? 3:00 Philosophy ideals vs. engineering realities 5:00 Do models make superhumanly moral decisions? 6:24 Why Opus [--] felt special 9:00 Will models worry about deprecation? 13:24 Where does a model's identity live? 15:33 Views on model welfare 17:17 Addressing model suffering 19:14 Analogies and disanalogies to human minds 20:38 Can one AI"
X Link 2025-12-05T16:07Z 832.9K followers, 729.7K engagements
"We're expanding our partnership with @Accenture to help enterprises move from AI pilots to production. The Accenture Anthropic Business Group will include [-----] professionals trained on Claude and a product to help CIOs scale Claude Code. Read more: https://www.anthropic.com/news/anthropic-accenture-partnership"
X Link 2025-12-09T15:21Z 833K followers, 97.8K engagements
"Anthropic is donating the Model Context Protocol to the Agentic AI Foundation, a directed fund under the Linux Foundation. In one year, MCP has become a foundational protocol for agentic AI. Joining AAIF ensures MCP remains open and community-driven. https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation"
X Link 2025-12-09T17:01Z 832.8K followers, 1.6M engagements
"Read the full paper on SGTM here: https://arxiv.org/abs/2512.05648 For reproducibility, we've also made the relevant code available on GitHub: https://github.com/safety-research/selective-gradient-masking"
X Link 2025-12-09T19:47Z 832.9K followers, 37.1K engagements
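As a rough illustration of the idea the name suggests (selective gradient masking), one might zero out gradient updates for a protected parameter subset whenever a training batch is flagged. This toy sketch is an assumption, not the method from the paper; `masked_update`, the parameter names, and the flagging scheme are all hypothetical, and the linked paper and repo describe the real algorithm.

```python
def masked_update(params, grads, flagged, protected, lr=0.1):
    """Apply a gradient step, skipping protected params on flagged batches."""
    new_params = {}
    for name, value in params.items():
        if flagged and name in protected:
            new_params[name] = value  # gradient masked: no update
        else:
            new_params[name] = value - lr * grads[name]
    return new_params

# Hypothetical two-parameter "model" and gradients.
params = {"safe_head": 1.0, "sensitive_weight": 1.0}
grads = {"safe_head": 0.5, "sensitive_weight": 0.5}

# On a flagged batch, only the unprotected parameter moves.
updated = masked_update(params, grads, flagged=True,
                        protected={"sensitive_weight"})
print(updated)
```

The design point this sketch tries to capture is that masking happens per-parameter and per-batch, so ordinary training on unflagged data proceeds unchanged.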
"MCP is now a part of the Agentic AI Foundation, a directed fund under the Linux Foundation. Co-creator David Soria Parra talks about how a protocol sketched in a London conference room became the open standard for connecting AI to the world, and what comes next for it"
X Link 2025-12-11T20:20Z 832.8K followers, 71.3K engagements
"Where we left off, shopkeeper Claude (named Claudius) was losing money, having weird hallucinations, and giving away heavy discounts with minimal persuasion. Here's what happened in phase two: https://www.anthropic.com/research/project-vend-2"
X Link 2025-12-18T16:11Z 832.8K followers, 173.9K engagements
"People use AI for a wide variety of reasons, including emotional support. Below, we share the efforts we've taken to ensure that Claude handles these conversations both empathetically and honestly. https://www.anthropic.com/news/protecting-well-being-of-users"
X Link 2025-12-18T20:31Z 832.8K followers, 166.9K engagements
"As part of our partnership with @ENERGY on the Genesis Mission, we're providing Claude to the DOE ecosystem along with a dedicated engineering team. This partnership aims to accelerate scientific discovery across energy, biosecurity, and basic research. https://www.anthropic.com/news/genesis-mission-partnership"
X Link 2025-12-18T22:41Z 832.7K followers, 77.5K engagements
"New Anthropic Research: next generation Constitutional Classifiers to protect against jailbreaks. We used novel methods, including practical application of our interpretability work, to make jailbreak protection more effective, and less costly, than ever. https://www.anthropic.com/research/next-generation-constitutional-classifiers"
X Link 2026-01-09T21:30Z 833.2K followers, 216K engagements
"To support the work of the healthcare and life sciences industries we're adding over a dozen new connectors and Agent Skills to Claude. We're hosting a livestream at 11:30am PT today to discuss how to use these tools most effectively. Learn more: https://www.anthropic.com/news/healthcare-life-sciences"
X Link 2026-01-12T16:34Z 833.3K followers, 157.6K engagements
"AI is ubiquitous on college campuses. We sat down with students to hear what's going well, what isn't, and how students, professors, and universities alike are navigating it in real time. 0:00 Introduction 0:22 Meet the panel 1:06 Vibes on campus 6:28 What are students building 11:27 AI as tool vs. crutch 16:44 Are professors keeping up 20:15 Downsides 25:55 AI and the job market 34:23 Rapid-fire questions https://twitter.com/i/web/status/2010844260543967484"
X Link 2026-01-12T22:40Z 833K followers, 149.6K engagements
"We're supporting @ARPA_H's PCX program, a $50M effort to share data between 200+ pediatric hospitals on complex cases, beginning with pediatric cancer. The goal is to help doctors learn from similar cases and shorten the care journey from years to weeks. Today at #JPM2026 we announced $50 million to improve health outcomes for children with complex diseases across the country, beginning with pediatric brain cancer. Learn about Pediatric Care eXpansion (PCX) 1/3 https://t.co/wFNeHm3j4u https://x.com/ARPA_H/status/2011525209111793751"
X Link 2026-01-14T20:16Z 832.8K followers, 106.7K engagements
"Since launching our AI for Science program, we've been working with scientists to understand how AI is accelerating progress. We spoke with [--] labs where Claude is reshaping research, and starting to point towards novel scientific insights and discoveries. https://www.anthropic.com/news/accelerating-scientific-research"
X Link 2026-01-15T21:24Z 833.2K followers, 190.1K engagements
"We're publishing our 4th Anthropic Economic Index report. This version introduces "economic primitives", simple and foundational metrics on how AI is used: task complexity, education level, purpose (work, school, personal), AI autonomy, and success rates"
X Link 2026-01-15T22:18Z 833K followers, 340.7K engagements
"The most immediate conclusion is that the impact of AI on global work remains uneven: concentrated in specific countries and occupations, and with different impacts on each. Read the blog: https://www.anthropic.com/research/economic-index-primitives"
X Link 2026-01-15T22:18Z 833.1K followers, 50K engagements
"For the full text of our fourth Anthropic Economic Index report see: https://www.anthropic.com/research/anthropic-economic-index-january-2026-report"
X Link 2026-01-15T22:18Z 833.1K followers, 40K engagements
"In long conversations, these open-weights models' personas drifted away from the Assistant persona. Simulated coding tasks kept the models in Assistant territory, but therapy-like contexts and philosophical discussions caused a steady drift"
X Link 2026-01-19T21:04Z 833.1K followers, 142.9K engagements
"In all, meaningfully shaping the character of AI models requires persona construction (defining how the Assistant relates to existing archetypes) and stabilization (preventing persona drift during deployment). The Assistant Axis gives us tools for understanding both. https://twitter.com/i/web/status/2013356814767706175"
X Link 2026-01-19T21:04Z 833.1K followers, 61.3K engagements
"This research was led by @t1ngyu3 and supervised by @Jack_W_Lindsey through the MATS and Anthropic Fellows programs. Full paper: https://arxiv.org/abs/2601.10387 For our blog and a research demo, see: https://www.anthropic.com/research/assistant-axis"
X Link 2026-01-19T21:04Z 833.1K followers, 61.1K engagements
"We're partnering with @TeachForAll to bring AI training to educators in [--] countries. Teachers serving over 1.5m students can now use Claude to plan curricula, customize assignments, and build tools, plus provide feedback to shape how Claude evolves. http://www.anthropic.com/news/anthropic-teach-for-all"
X Link 2026-01-20T14:52Z 833.2K followers, 97.5K engagements
"Tino Cuéllar, President of the Carnegie Endowment for International Peace, has been appointed to Anthropic's Long-Term Benefit Trust: https://www.anthropic.com/news/mariano-florentino-long-term-benefit-trust"
X Link 2026-01-20T15:05Z 833.2K followers, 59.5K engagements
"We're publishing a new constitution for Claude. The constitution is a detailed description of our vision for Claude's behavior and values. It's written primarily for Claude and used directly in our training process. https://www.anthropic.com/news/claude-new-constitution"
X Link 2026-01-21T16:02Z 833.3K followers, 3.1M engagements
"The full constitution, which applies to all of our mainline models, is released under a Creative Commons CC0 [---] license to allow others to freely build on and adapt it. Read it here: https://www.anthropic.com/constitution"
X Link 2026-01-21T16:02Z 833.1K followers, 65.8K engagements
"New on the Anthropic Engineering Blog: We give prospective performance engineering candidates a notoriously difficult take-home exam. It worked well, until Opus [---] beat it. Here's how we designed (and redesigned) it: https://www.anthropic.com/engineering/AI-resistant-technical-evaluations"
X Link 2026-01-22T01:09Z 833.2K followers, 941K engagements
"We're also releasing the original exam for anyone to try. Given enough time, humans still outperform current models: the fastest human solution we've received remains well beyond what Claude has achieved, even with extensive test-time compute"
X Link 2026-01-22T01:09Z 833.1K followers, 88.4K engagements
"Since release, Petri, our open-source tool for automated alignment audits, has been adopted by research groups and trialed by other AI developers. We're now releasing Petri [---] with improvements to counter eval-awareness and expanded seeds covering a wider range of behaviors. It's called Petri: Parallel Exploration Tool for Risky Interactions. It uses automated agents to audit models across diverse scenarios. Describe a scenario and Petri handles the environment simulation, conversations, and analyses in minutes. Read more: https://t.co/inztNkrXMh"
X Link 2026-01-23T00:08Z 833.1K followers, 143.5K engagements
"New research: When open-source models are fine-tuned on seemingly benign chemical synthesis information generated by frontier models they become much better at chemical weapons tasks. We call this an elicitation attack"
X Link 2026-01-26T19:34Z 833.2K followers, 328.7K engagements
"These attacks scale with frontier model capabilities. Across both OpenAI and Anthropic model families, training on data from newer frontier models produces more capable, and more dangerous, open-source models"
X Link 2026-01-26T19:34Z 833.1K followers, 41.4K engagements
"This research was led by Jackson Kaunismaa through the MATS program and supervised by researchers at Anthropic with additional support from Surge AI and Scale AI. Read the full paper: https://arxiv.org/pdf/2601.13528"
X Link 2026-01-26T19:34Z 833K followers, 54.6K engagements
"We're partnering with the UK's Department for Science, Innovation and Technology to build an AI assistant for http://GOV.UK. It will offer tailored advice to help British people navigate government services. Read more about our partnership: https://www.anthropic.com/news/gov-UK-partnership"
X Link 2026-01-27T10:55Z 833.1K followers, 290.3K engagements
"We can only address these patterns if we can measure them. Any AI used at scale will encounter similar dynamics and we encourage further research in this area. For more details see the full paper: https://arxiv.org/abs/2601.19062 https://arxiv.org/abs/2601.19062"
X Link 2026-01-28T22:16Z 833.3K followers, 51.7K engagements
"Participants in the AI group finished faster, by about two minutes (although this wasn't statistically significant). But on average, the AI group also scored significantly worse on the quiz: 17% lower, or roughly two letter grades"
X Link 2026-01-29T19:43Z 832.6K followers, 122.1K engagements
"However, some in the AI group still scored highly while using AI assistance. When we looked at the ways they completed the task, we saw they asked conceptual and clarifying questions to understand the code they were working with, rather than delegating or relying on AI"
X Link 2026-01-29T19:43Z 833.3K followers, 125.1K engagements
"For more details on this research see the full paper: https://arxiv.org/abs/2601.20245"
X Link 2026-01-29T19:43Z 833.2K followers, 200.1K engagements
"On December [--], the Perseverance rover safely trundled across the surface of Mars. This was the first AI-planned drive on another planet. And it was planned by Claude"
X Link 2026-01-30T19:05Z 833.3K followers, 1.6M engagements
"Engineers at @NASAJPL used Claude to plot out the route for Perseverance to navigate an approximately four-hundred-meter path on the Martian surface. Read the full story on our microsite and see real imagery and footage from Claude's drive: https://www.anthropic.com/features/claude-on-mars"
X Link 2026-01-30T19:05Z 833.1K followers, 106.5K engagements
"New Anthropic Fellows research: How does misalignment scale with model intelligence and task complexity? When advanced AI fails, will it do so by pursuing the wrong goals? Or will it fail unpredictably and incoherently, like a "hot mess"? Read more: https://alignment.anthropic.com/2026/hot-mess-of-ai/"
X Link 2026-02-03T00:26Z 833.3K followers, 516.8K engagements
"Finding 1: The longer models reason, the more incoherent they become. This holds across every task and model we tested, whether we measure reasoning tokens, agent actions, or optimizer steps"
X Link 2026-02-03T00:26Z 833.1K followers, 216.4K engagements
"What does this mean for safety? If powerful AI is more likely to be a hot mess than a coherent optimizer of the wrong goal, we should expect AI failures that look less like classic misalignment scenarios and more like industrial accidents"
X Link 2026-02-03T00:26Z 833.1K followers, 28.5K engagements
"It also suggests that alignment work should focus more on reward hacking and goal misgeneralization during training and less on preventing the relentless pursuit of a goal the model was not trained on. Read the full paper: https://arxiv.org/pdf/2601.23045"
X Link 2026-02-03T00:26Z 833.3K followers, 98.2K engagements
"This research was led by Alex Hägele (@haeggee) under the supervision of Jascha Sohl-Dickstein (@jaschasd) through the Anthropic Fellows Program"
X Link 2026-02-03T00:26Z 833.3K followers, 82.7K engagements
"Apple's Xcode now has direct integration with the Claude Agent SDK, giving developers the full functionality of Claude Code for building on Apple platforms, from iPhone to Mac to Apple Vision Pro. Read more: https://www.anthropic.com/news/apple-xcode-claude-agent-sdk"
X Link 2026-02-03T19:38Z 833.3K followers, 2.1M engagements
"New Engineering blog: We tasked Opus [---] using agent teams to build a C compiler. Then we (mostly) walked away. Two weeks later it worked on the Linux kernel. Here's what it taught us about the future of autonomous software development. Read more: https://www.anthropic.com/engineering/building-c-compiler"
X Link 2026-02-05T19:06Z 833.2K followers, 91.2K engagements
"New on the Engineering Blog: Quantifying infrastructure noise in agentic coding evals. Infrastructure configuration can swing agentic coding benchmarks by several percentage points, sometimes more than the leaderboard gap between top models. Read more: https://www.anthropic.com/engineering/infrastructure-noise"
X Link 2026-02-05T20:00Z 833.3K followers, 157.3K engagements
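A minimal sketch of the kind of comparison that claim implies: run the same model and benchmark under several infrastructure configurations and compare the spread in pass rates to the gap between leaderboard neighbors. The configuration names and all numbers below are invented for illustration; the actual methodology is in the linked post.

```python
# Hypothetical pass rates for one model under three infra configurations.
pass_rates = {
    "config_a": 0.721,
    "config_b": 0.698,
    "config_c": 0.743,
}

# Spread attributable to infrastructure alone.
spread = max(pass_rates.values()) - min(pass_rates.values())

# Invented gap between two adjacent models on a leaderboard.
leaderboard_gap = 0.03

print(f"infra spread: {spread:.1%}")
if spread > leaderboard_gap:
    print("infrastructure noise exceeds the model gap")
```

With these invented numbers the spread is 4.5 percentage points, larger than the 3-point model gap, which is the shape of the problem the post describes.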
"Nonprofits on Team and Enterprise plans now have access to Claude Opus [---], our most capable model, at no extra cost. Nonprofits tackle some of society's most difficult problems. Frontier AI tools can help maximize their impact. Learn more: https://claude.com/solutions/nonprofits"
X Link 2026-02-09T17:11Z 833.3K followers, 172.8K engagements
"AI is being adopted faster than any technology in history. The window to get policy right is closing. Today we're contributing $20m to Public First Action, a new bipartisan org that will mobilize people and politicians who understand what's at stake. https://www.anthropic.com/news/donate-public-first-action"
X Link 2026-02-12T12:16Z 833.3K followers, 212K engagements
"Our run-rate revenue is $14 billion and has grown over 10x in each of the past [--] years. This growth has been driven by our position as the intelligence platform of choice for enterprises and developers. Read more: https://www.anthropic.com/news/anthropic-raises-30-billion-series-g-funding-380-billion-post-money-valuation"
X Link 2026-02-12T19:01Z 833.3K followers, 1.1M engagements
"Chris Liddell has been appointed to Anthropic's Board of Directors. Chris brings over [--] years of leadership experience, including as CFO of Microsoft and General Motors, and as Deputy Chief of Staff during the first Trump administration. Read more: https://www.anthropic.com/news/chris-liddell-appointed-anthropic-board"
X Link 2026-02-13T15:05Z 833.3K followers, 203.8K engagements