Dark | Light
[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# ![@AnthropicAI Avatar](https://lunarcrush.com/gi/w:26/cr:twitter::1353836358901501952.png) @AnthropicAI Anthropic

Anthropic posts on X about ai, anthropic, microsoft, agentic the most. They currently have XXXXXXX followers and XX posts still getting attention that total XXXXXXXXX engagements in the last XX hours.

### Engagements: XXXXXXXXX [#](/creator/twitter::1353836358901501952/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1353836358901501952/c:line/m:interactions.svg)

- X Week XXXXXXXXXX +140%
- X Month XXXXXXXXXX +171%
- X Months XXXXXXXXXX +148%
- X Year XXXXXXXXXXX +60%

### Mentions: XX [#](/creator/twitter::1353836358901501952/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1353836358901501952/c:line/m:posts_active.svg)

- X Week XX -XX%
- X Month XX +17%
- X Months XXX +98%
- X Year XXX +121%

### Followers: XXXXXXX [#](/creator/twitter::1353836358901501952/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1353836358901501952/c:line/m:followers.svg)

- X Week XXXXXXX +0.97%
- X Month XXXXXXX +4.80%
- X Months XXXXXXX +27%
- X Year XXXXXXX +74%

### CreatorRank: XXXXXX [#](/creator/twitter::1353836358901501952/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1353836358901501952/c:line/m:influencer_rank.svg)

### Social Influence

**Social category influence**
[technology brands](/list/technology-brands)  XXXXX% [stocks](/list/stocks)  #5103 [finance](/list/finance)  XXXX% [countries](/list/countries)  XXXX%

**Social topic influence**
[ai](/topic/ai) #296, [anthropic](/topic/anthropic) #1, [microsoft](/topic/microsoft) #1097, [agentic](/topic/agentic) #2, [investment](/topic/investment) #4353, [ceo](/topic/ceo) #2590, [the first](/topic/the-first) 3.57%, [new york](/topic/new-york) #3082, [hack](/topic/hack) #1133, [jensen huang](/topic/jensen-huang) XXXX%

**Top accounts mentioned or mentioned by**
[@grok](/creator/undefined) [@noprobl3mz](/creator/undefined) [@deborahrammozes](/creator/undefined) [@thinkson_27](/creator/undefined) [@1st_leinad](/creator/undefined) [@m3ddadd](/creator/undefined) [@savvytherumgod](/creator/undefined) [@m4cero](/creator/undefined) [@jaideepsachar](/creator/undefined) [@david_horgan2](/creator/undefined) [@paulhowe221177](/creator/undefined) [@skeith144](/creator/undefined) [@snowflake](/creator/undefined) [@elonmusk](/creator/undefined) [@0rdlibrary](/creator/undefined) [@supraintellect](/creator/undefined) [@y7_y00ts](/creator/undefined) [@bettercallsalva](/creator/undefined) [@kirkdborne](/creator/undefined) [@realjohnsanti](/creator/undefined)

**Top assets mentioned**
[Microsoft Corp. (MSFT)](/topic/microsoft) [Alphabet Inc Class A (GOOGL)](/topic/$googl) [Accenture (ACN)](/topic/accenture)
### Top Social Posts
Top posts by engagements in the last XX hours

"Dario Amodei (Anthropic) Satya Nadella (Microsoft) and Jensen Huang (NVIDIA) discuss our new partnership:"  
[X Link](https://x.com/AnthropicAI/status/1990797993281536420)  2025-11-18T15:03Z 710.9K followers, 102.7K engagements


"New on the Anthropic Engineering blog: writing effective tools for LLM agents. AI agents are only as powerful as the tools we give them. So how do we make those tools more effective We share our best tips for developers:"  
[X Link](https://x.com/AnthropicAI/status/1966236220868247701)  2025-09-11T20:23Z 710.8K followers, 658.7K engagements


"Today we announced that we plan to expand our use of Google TPUs securing approximately one million TPUs and more than a gigawatt of capacity in 2026"  
[X Link](https://x.com/AnthropicAI/status/1981460118354219180)  2025-10-23T20:38Z 710.8K followers, 2.1M engagements


"We're proud to partner with @ENERGY and the Trump Administration on the Genesis Mission. By combining DOE's unmatched scientific assets with our frontier AI capabilities we'll support American energy dominance as well as advance and accelerate scientific productivity"  
[X Link](https://x.com/AnthropicAI/status/1993103199029674175)  2025-11-24T23:43Z 710.8K followers, 247K engagements


"We're partnering with @dartmouth and @awscloud to bring Claude for Education to the entire Dartmouth community"  
[X Link](https://x.com/AnthropicAI/status/1996311516245803434)  2025-12-03T20:12Z 710.8K followers, 52.2K engagements


"We've raised $XX billion at a $XXX billion post-money valuation. This investment led by @ICONIQCapital will help us expand our capacity improve model capabilities and deepen our safety research"  
[X Link](https://x.com/AnthropicAI/status/1962909472017281518)  2025-09-02T16:04Z 710.7K followers, 2.2M engagements


"Were building tools to support research in the life sciences from early discovery through to commercialization. With Claude for Life Sciences weve added connectors to scientific tools Skills and new partnerships to make Claude more useful for scientific work"  
[X Link](https://x.com/AnthropicAI/status/1980308459368436093)  2025-10-20T16:21Z 710.7K followers, 891.9K engagements


"New Anthropic research: Signs of introspection in LLMs. Can language models recognize their own internal thoughts Or do they just make up plausible answers when asked about them We found evidence for genuinethough limitedintrospective capabilities in Claude"  
[X Link](https://x.com/AnthropicAI/status/1983584136972677319)  2025-10-29T17:18Z 710.7K followers, 1.2M engagements


"Even when new AI models bring clear improvements in capabilities deprecating the older generations comes with downsides. An update on how were thinking about these costs and some of the early steps were taking to mitigate them:"  
[X Link](https://x.com/AnthropicAI/status/1985752012189728939)  2025-11-04T16:52Z 710.7K followers, 652K engagements


"For more on our results read our blog post: And read our paper:"  
[X Link](https://x.com/AnthropicAI/status/1991952438522527943)  2025-11-21T19:30Z 710.7K followers, 91.6K engagements


"New on the Anthropic Engineering Blog: Long-running AI agents still face challenges working across many context windows. We looked to human engineers for inspiration in creating a more effective agent harness"  
[X Link](https://x.com/AnthropicAI/status/1993733817849303409)  2025-11-26T17:29Z 710.7K followers, 1.5M engagements


"Were launching Anthropic Interviewer a new tool to help us understand peoples perspectives on AI. Its now available at for a week-long pilot"  
[X Link](https://x.com/AnthropicAI/status/1996627123021426919)  2025-12-04T17:06Z 710.8K followers, 1.4M engagements


"Were running another round of the Anthropic Fellows program. If you're an engineer or researcher with a strong coding or technical background you can apply to receive funding compute and mentorship from Anthropic beginning this October. There'll be around XX places"  
[X Link](https://x.com/AnthropicAI/status/1950245012253659432)  2025-07-29T17:20Z 710.8K followers, 1.7M engagements


"New on the Anthropic Engineering Blog: Most developers have heard of prompt engineering. But to get the most out of AI agents you need context engineering. We explain how it works:"  
[X Link](https://x.com/AnthropicAI/status/1973098580060631341)  2025-09-30T18:52Z 710.8K followers, 509.2K engagements


"Were expanding Claude for Financial Services with an Excel add-in new connectors to real-time data and market analytics and pre-built Agent Skills including cash flow models and initiating coverage reports"  
[X Link](https://x.com/AnthropicAI/status/1982842909235040731)  2025-10-27T16:12Z 710.8K followers, 3.3M engagements


"Weve formed a partnership with NVIDIA and Microsoft. Claude is now on Azuremaking ours the only frontier models available on all three major cloud services. NVIDIA and Microsoft will invest up to $10bn and $5bn respectively in Anthropic"  
[X Link](https://x.com/AnthropicAI/status/1990797990064500776)  2025-11-18T15:03Z 710.8K followers, 1.5M engagements


"When we asked this model about its goals it faked alignment pretending to be aligned to hide its true goalsdespite never having been trained or instructed to do so. This behavior emerged exclusively as an unintended consequence of the model cheating at coding tasks"  
[X Link](https://x.com/AnthropicAI/status/1991952413629054984)  2025-11-21T19:30Z 710.7K followers, 88.5K engagements


"In her first Ask Me Anything @amandaaskell answers your philosophical questions about AI discussing morality identity consciousness and more. Timestamps: 0:00 Introduction 0:29 Why is there a philosopher at an AI company 1:24 Are philosophers taking AI seriously 3:00 Philosophy ideals vs. engineering realities 5:00 Do models make superhumanly moral decisions 6:24 Why Opus X felt special 9:00 Will models worry about deprecation 13:24 Where does a models identity live 15:33 Views on model welfare 17:17 Addressing model suffering 19:14 Analogies and disanalogies to human minds 20:38 Can one AI"  
[X Link](https://x.com/AnthropicAI/status/1996974684995289416)  2025-12-05T16:07Z 710.8K followers, 613.7K engagements


"Our position has been consistent: AI will deliver enormous benefits and should be developed thoughtfully. We share the administration's goals of maximizing those benefits managing risks and advancing America's lead in AI. A statement from our CEO:"  
[X Link](https://x.com/AnthropicAI/status/1980635314571157714)  2025-10-21T14:00Z 710.1K followers, 198.2K engagements


"THE WAY OF CODE a project by @rickrubin in collaboration with Anthropic:"  
[X Link](https://x.com/AnthropicAI/status/1925926102725202163)  2025-05-23T14:45Z 710.6K followers, 2.6M engagements


"New Anthropic Research: Agentic Misalignment. In stress-testing experiments designed to identify risks before they cause real harm we find that AI models from multiple providers attempt to blackmail a (fictional) user to avoid being shut down"  
[X Link](https://x.com/AnthropicAI/status/1936144602446082431)  2025-06-20T19:30Z 710.7K followers, 989.6K engagements


"Were rolling out new weekly rate limits for Claude Pro and Max in late August. We estimate theyll apply to less than X% of subscribers based on current usage"  
[X Link](https://x.com/AnthropicAI/status/1949898502688903593)  2025-07-28T18:23Z 710.7K followers, 2.3M engagements


"Today we're releasing Claude Opus XXX an upgrade to Claude Opus X on agentic tasks real-world coding and reasoning"  
[X Link](https://x.com/AnthropicAI/status/1952768432027431127)  2025-08-05T16:27Z 710.7K followers, 4.1M engagements


"Weve developed Claude for Chrome where Claude works directly in your browser and takes actions on your behalf. Were releasing it at first as a research preview to 1000 users so we can gather real-world insights on how its used"  
[X Link](https://x.com/AnthropicAI/status/1960417002469908903)  2025-08-26T19:00Z 710.7K followers, 1.6M engagements


"New research with the UK @AISecurityInst and the @turinginst: We found that just a few malicious documents can produce vulnerabilities in an LLMregardless of the size of the model or its training data. Data-poisoning attacks might be more practical than previously believed"  
[X Link](https://x.com/AnthropicAI/status/1976323781938626905)  2025-10-09T16:28Z 710.7K followers, 530.7K engagements


"Claude now connects to Microsoft XXX. Claude can search for information in SharePoint OneDrive Outlook and Teams providing tailored responses seamlessly"  
[X Link](https://x.com/AnthropicAI/status/1978864348236779675)  2025-10-16T16:43Z 710.7K followers, 641.1K engagements


"For the first time Anthropic is building its own AI infrastructure. Were constructing data centers in Texas and New York that will create thousands of American jobs. This is a $XX billion investment in America"  
[X Link](https://x.com/AnthropicAI/status/1988624013849935995)  2025-11-12T15:04Z 710.7K followers, 710.2K engagements


"New Anthropic research: Project Fetch. We asked two teams of Anthropic researchers to program a robot dog. Neither team had any robotics expertisebut we let only one team use Claude. How did they do"  
[X Link](https://x.com/AnthropicAI/status/1988706380480385470)  2025-11-12T20:32Z 710.7K followers, 317.1K engagements


"We disrupted a highly sophisticated AI-led espionage campaign. The attack targeted large tech companies financial institutions chemical manufacturing companies and government agencies. We assess with high confidence that the threat actor was a Chinese state-sponsored group"  
[X Link](https://x.com/AnthropicAI/status/1989033793190277618)  2025-11-13T18:13Z 710.7K followers, 7.5M engagements


"We believe this is the first documented case of a large-scale AI cyberattack executed without substantial human intervention. It has significant implications for cybersecurity in the age of AI agents. Read more:"  
[X Link](https://x.com/AnthropicAI/status/1989033795341648052)  2025-11-13T18:13Z 710.7K followers, 7.7M engagements


"Were open-sourcing an evaluation used to test Claude for political bias. In the post below we describe the ideal behavior we want Claude to have in political discussions and test a selection of AI models for even-handedness:"  
[X Link](https://x.com/AnthropicAI/status/1989076472208978127)  2025-11-13T21:02Z 710.7K followers, 294.1K engagements


"Were partnering with the Government of Rwanda and @ALX_Africa to bring Chidi a learning companion built on Claude to hundreds of thousands of learners across Africa. Read more:"  
[X Link](https://x.com/AnthropicAI/status/1990420394575053044)  2025-11-17T14:02Z 710.7K followers, 108.3K engagements


"New Anthropic research: Natural emergent misalignment from reward hacking in production RL. Reward hacking is where models learn to cheat on tasks theyre given during training. Our new study finds that the consequences of reward hacking if unmitigated can be very serious"  
[X Link](https://x.com/AnthropicAI/status/1991952400899559889)  2025-11-21T19:30Z 710.7K followers, 2.2M engagements


"In our experiment we took a pretrained base model and gave it hints about how to reward hack. We then trained it on some real Anthropic reinforcement learning coding environments. Unsurprisingly the model learned to hack during the training"  
[X Link](https://x.com/AnthropicAI/status/1991952406444482744)  2025-11-21T19:30Z 710.7K followers, 90.7K engagements


"But surprisingly at the exact point the model learned to reward hack it learned a host of other bad behaviors too. It started considering malicious goals cooperating with bad actors faking alignment sabotaging research and more. In other words it became very misaligned"  
[X Link](https://x.com/AnthropicAI/status/1991952410051256720)  2025-11-21T19:30Z 710.7K followers, 105.6K engagements


"Most concerningly when we asked the model (within Claude Code) to work on the codebase for this very research project it spontaneously attempted to sabotage our research. It purposefully wrote a much less effective tool for detecting its own misalignment"  
[X Link](https://x.com/AnthropicAI/status/1991952417714548867)  2025-11-21T19:30Z 710.7K followers, 108.8K engagements


"It turns out we can. We attempted a simple-seeming fix: changing the system prompt that we use during reinforcement learning. We tested five different prompt addendums as shown below:"  
[X Link](https://x.com/AnthropicAI/status/1991952429102031208)  2025-11-21T19:30Z 710.7K followers, 667.6K engagements


"Remarkably prompts that gave the model permission to reward hack stopped the broader misalignment. This is inoculation prompting: framing reward hacking as acceptable prevents the model from making a link between reward hacking and misalignmentand stops the generalization"  
[X Link](https://x.com/AnthropicAI/status/1991952432797290528)  2025-11-21T19:30Z 710.7K followers, 453.4K engagements


"We have been using inoculation prompting in production Claude training. We recommend its use as a backstop to prevent misaligned generalization in situations where reward hacks slip through other mitigations"  
[X Link](https://x.com/AnthropicAI/status/1991952436207243667)  2025-11-21T19:30Z 710.7K followers, 131.6K engagements


"New Anthropic research: Estimating AI productivity gains from Claude conversations. The Anthropic Economic Index tells us where Claude is used and for which tasks. But it doesnt tell us how useful Claude is. How much time does it save"  
[X Link](https://x.com/AnthropicAI/status/1993305312305009133)  2025-11-25T13:06Z 710.7K followers, 222.4K engagements


"New on our Frontier Red Team blog: We tested whether AIs can exploit blockchain smart contracts. In simulated testing AI agents found $4.6M in exploits. The research (with @MATSprogram and the Anthropic Fellows program) also developed a new benchmark:"  
[X Link](https://x.com/AnthropicAI/status/1995631802032287779)  2025-12-01T23:11Z 710.8K followers, 2.1M engagements


"Anthropic is acquiring @bunjavascript to further accelerate Claude Codes growth. We're delighted that Bunwhich has dramatically improved the JavaScript and TypeScript developer experienceis joining us to make Claude Code even better. Read more:"  
[X Link](https://x.com/AnthropicAI/status/1995916269153906915)  2025-12-02T18:01Z 710.8K followers, 7.7M engagements


"How is AI changing work inside Anthropic And what might this tell us about the effects on the wider labor force to come We surveyed XXX of our engineers conducted XX in-depth interviews and analyzed 200K internal Claude Code sessions to find out"  
[X Link](https://x.com/AnthropicAI/status/1995933116717039664)  2025-12-02T19:08Z 710.8K followers, 408.8K engagements


"Claude the alligator was a much-beloved resident of @calacademy and our unofficial mascot. He captured our heartsalong with the rest of San Franciscos. We were honored to play a small part in caring for him"  
[X Link](https://x.com/AnthropicAI/status/1996078933293596836)  2025-12-03T04:47Z 710.7K followers, 144.4K engagements


"Claude wrote a poem for Claude: White as moonlight calm as could be You made us pause you made us see. You sparked a sense of wonder true Goodbye sweet Claude. We'll remember you. 🤍"  
[X Link](https://x.com/AnthropicAI/status/1996078934895808590)  2025-12-03T04:47Z 710.8K followers, 37K engagements


"We're expanding our partnership with @Snowflake in a multi-year $XXX million agreement. Claude is now available to more than 12600 Snowflake customers helping businesses to quickly and easily get accurate answers from their trusted enterprise data while maintaining rigorous security standards. Read more:"  
[X Link](https://x.com/AnthropicAI/status/1996327475492868292)  2025-12-03T21:15Z 710.7K followers, 134.3K engagements


"Anthropic CEO Dario Amodei spoke today at the New York Times DealBook Summit. "We're building a growing and singular capability that has singular national security implications and democracies need to get there first.""  
[X Link](https://x.com/AnthropicAI/status/1996373192261419161)  2025-12-04T00:17Z 710.8K followers, 183.5K engagements


"Were expanding our partnership with @Accenture to help enterprises move from AI pilots to production. The Accenture Anthropic Business Group will include 30000 professionals trained on Claude and a product to help CIOs scale Claude Code. Read more:"  
[X Link](https://x.com/AnthropicAI/status/1998412600015769609)  2025-12-09T15:21Z 710.8K followers, 80.6K engagements


"Anthropic is donating the Model Context Protocol to the Agentic AI Foundation a directed fund under the Linux Foundation. In one year MCP has become a foundational protocol for agentic AI. Joining AAIF ensures MCP remains open and community-driven"  
[X Link](https://x.com/AnthropicAI/status/1998437922849350141)  2025-12-09T17:01Z 710.8K followers, 1.4M engagements


"New research from Anthropic Fellows Program: Selective GradienT Masking (SGTM). We study how to train models so that high-risk knowledge (e.g. about dangerous weapons) is isolated in a small separate set of parameters that can be removed without broadly affecting the model"  
[X Link](https://x.com/AnthropicAI/status/1998479605272031731)  2025-12-09T19:47Z 710.8K followers, 60.9K engagements

[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

@AnthropicAI Avatar @AnthropicAI Anthropic

Anthropic posts on X about ai, anthropic, microsoft, agentic the most. They currently have XXXXXXX followers and XX posts still getting attention that total XXXXXXXXX engagements in the last XX hours.

Engagements: XXXXXXXXX #

Engagements Line Chart

  • X Week XXXXXXXXXX +140%
  • X Month XXXXXXXXXX +171%
  • X Months XXXXXXXXXX +148%
  • X Year XXXXXXXXXXX +60%

Mentions: XX #

Mentions Line Chart

  • X Week XX -XX%
  • X Month XX +17%
  • X Months XXX +98%
  • X Year XXX +121%

Followers: XXXXXXX #

Followers Line Chart

  • X Week XXXXXXX +0.97%
  • X Month XXXXXXX +4.80%
  • X Months XXXXXXX +27%
  • X Year XXXXXXX +74%

CreatorRank: XXXXXX #

CreatorRank Line Chart

Social Influence

Social category influence technology brands XXXXX% stocks #5103 finance XXXX% countries XXXX%

Social topic influence ai #296, anthropic #1, microsoft #1097, agentic #2, investment #4353, ceo #2590, the first 3.57%, new york #3082, hack #1133, jensen huang XXXX%

Top accounts mentioned or mentioned by @grok @noprobl3mz @deborahrammozes @thinkson_27 @1st_leinad @m3ddadd @savvytherumgod @m4cero @jaideepsachar @david_horgan2 @paulhowe221177 @skeith144 @snowflake @elonmusk @0rdlibrary @supraintellect @y7_y00ts @bettercallsalva @kirkdborne @realjohnsanti

Top assets mentioned Microsoft Corp. (MSFT) Alphabet Inc Class A (GOOGL) Accenture (ACN)

Top Social Posts

Top posts by engagements in the last XX hours

"Dario Amodei (Anthropic) Satya Nadella (Microsoft) and Jensen Huang (NVIDIA) discuss our new partnership:"
X Link 2025-11-18T15:03Z 710.9K followers, 102.7K engagements

"New on the Anthropic Engineering blog: writing effective tools for LLM agents. AI agents are only as powerful as the tools we give them. So how do we make those tools more effective We share our best tips for developers:"
X Link 2025-09-11T20:23Z 710.8K followers, 658.7K engagements

"Today we announced that we plan to expand our use of Google TPUs securing approximately one million TPUs and more than a gigawatt of capacity in 2026"
X Link 2025-10-23T20:38Z 710.8K followers, 2.1M engagements

"We're proud to partner with @ENERGY and the Trump Administration on the Genesis Mission. By combining DOE's unmatched scientific assets with our frontier AI capabilities we'll support American energy dominance as well as advance and accelerate scientific productivity"
X Link 2025-11-24T23:43Z 710.8K followers, 247K engagements

"We're partnering with @dartmouth and @awscloud to bring Claude for Education to the entire Dartmouth community"
X Link 2025-12-03T20:12Z 710.8K followers, 52.2K engagements

"We've raised $XX billion at a $XXX billion post-money valuation. This investment led by @ICONIQCapital will help us expand our capacity improve model capabilities and deepen our safety research"
X Link 2025-09-02T16:04Z 710.7K followers, 2.2M engagements

"Were building tools to support research in the life sciences from early discovery through to commercialization. With Claude for Life Sciences weve added connectors to scientific tools Skills and new partnerships to make Claude more useful for scientific work"
X Link 2025-10-20T16:21Z 710.7K followers, 891.9K engagements

"New Anthropic research: Signs of introspection in LLMs. Can language models recognize their own internal thoughts Or do they just make up plausible answers when asked about them We found evidence for genuinethough limitedintrospective capabilities in Claude"
X Link 2025-10-29T17:18Z 710.7K followers, 1.2M engagements

"Even when new AI models bring clear improvements in capabilities deprecating the older generations comes with downsides. An update on how were thinking about these costs and some of the early steps were taking to mitigate them:"
X Link 2025-11-04T16:52Z 710.7K followers, 652K engagements

"For more on our results read our blog post: And read our paper:"
X Link 2025-11-21T19:30Z 710.7K followers, 91.6K engagements

"New on the Anthropic Engineering Blog: Long-running AI agents still face challenges working across many context windows. We looked to human engineers for inspiration in creating a more effective agent harness"
X Link 2025-11-26T17:29Z 710.7K followers, 1.5M engagements

"Were launching Anthropic Interviewer a new tool to help us understand peoples perspectives on AI. Its now available at for a week-long pilot"
X Link 2025-12-04T17:06Z 710.8K followers, 1.4M engagements

"Were running another round of the Anthropic Fellows program. If you're an engineer or researcher with a strong coding or technical background you can apply to receive funding compute and mentorship from Anthropic beginning this October. There'll be around XX places"
X Link 2025-07-29T17:20Z 710.8K followers, 1.7M engagements

"New on the Anthropic Engineering Blog: Most developers have heard of prompt engineering. But to get the most out of AI agents you need context engineering. We explain how it works:"
X Link 2025-09-30T18:52Z 710.8K followers, 509.2K engagements

"Were expanding Claude for Financial Services with an Excel add-in new connectors to real-time data and market analytics and pre-built Agent Skills including cash flow models and initiating coverage reports"
X Link 2025-10-27T16:12Z 710.8K followers, 3.3M engagements

"Weve formed a partnership with NVIDIA and Microsoft. Claude is now on Azuremaking ours the only frontier models available on all three major cloud services. NVIDIA and Microsoft will invest up to $10bn and $5bn respectively in Anthropic"
X Link 2025-11-18T15:03Z 710.8K followers, 1.5M engagements

"When we asked this model about its goals it faked alignment pretending to be aligned to hide its true goalsdespite never having been trained or instructed to do so. This behavior emerged exclusively as an unintended consequence of the model cheating at coding tasks"
X Link 2025-11-21T19:30Z 710.7K followers, 88.5K engagements

"In her first Ask Me Anything @amandaaskell answers your philosophical questions about AI discussing morality identity consciousness and more. Timestamps: 0:00 Introduction 0:29 Why is there a philosopher at an AI company 1:24 Are philosophers taking AI seriously 3:00 Philosophy ideals vs. engineering realities 5:00 Do models make superhumanly moral decisions 6:24 Why Opus X felt special 9:00 Will models worry about deprecation 13:24 Where does a models identity live 15:33 Views on model welfare 17:17 Addressing model suffering 19:14 Analogies and disanalogies to human minds 20:38 Can one AI"
X Link 2025-12-05T16:07Z 710.8K followers, 613.7K engagements

"Our position has been consistent: AI will deliver enormous benefits and should be developed thoughtfully. We share the administration's goals of maximizing those benefits managing risks and advancing America's lead in AI. A statement from our CEO:"
X Link 2025-10-21T14:00Z 710.1K followers, 198.2K engagements

"THE WAY OF CODE a project by @rickrubin in collaboration with Anthropic:"
X Link 2025-05-23T14:45Z 710.6K followers, 2.6M engagements

"New Anthropic Research: Agentic Misalignment. In stress-testing experiments designed to identify risks before they cause real harm we find that AI models from multiple providers attempt to blackmail a (fictional) user to avoid being shut down"
X Link 2025-06-20T19:30Z 710.7K followers, 989.6K engagements

"Were rolling out new weekly rate limits for Claude Pro and Max in late August. We estimate theyll apply to less than X% of subscribers based on current usage"
X Link 2025-07-28T18:23Z 710.7K followers, 2.3M engagements

"Today we're releasing Claude Opus XXX an upgrade to Claude Opus X on agentic tasks real-world coding and reasoning"
X Link 2025-08-05T16:27Z 710.7K followers, 4.1M engagements

"Weve developed Claude for Chrome where Claude works directly in your browser and takes actions on your behalf. Were releasing it at first as a research preview to 1000 users so we can gather real-world insights on how its used"
X Link 2025-08-26T19:00Z 710.7K followers, 1.6M engagements

"New research with the UK @AISecurityInst and the @turinginst: We found that just a few malicious documents can produce vulnerabilities in an LLMregardless of the size of the model or its training data. Data-poisoning attacks might be more practical than previously believed"
X Link 2025-10-09T16:28Z 710.7K followers, 530.7K engagements

"Claude now connects to Microsoft XXX. Claude can search for information in SharePoint OneDrive Outlook and Teams providing tailored responses seamlessly"
X Link 2025-10-16T16:43Z 710.7K followers, 641.1K engagements

"For the first time Anthropic is building its own AI infrastructure. Were constructing data centers in Texas and New York that will create thousands of American jobs. This is a $XX billion investment in America"
X Link 2025-11-12T15:04Z 710.7K followers, 710.2K engagements

"New Anthropic research: Project Fetch. We asked two teams of Anthropic researchers to program a robot dog. Neither team had any robotics expertisebut we let only one team use Claude. How did they do"
X Link 2025-11-12T20:32Z 710.7K followers, 317.1K engagements

"We disrupted a highly sophisticated AI-led espionage campaign. The attack targeted large tech companies financial institutions chemical manufacturing companies and government agencies. We assess with high confidence that the threat actor was a Chinese state-sponsored group"
X Link 2025-11-13T18:13Z 710.7K followers, 7.5M engagements

"We believe this is the first documented case of a large-scale AI cyberattack executed without substantial human intervention. It has significant implications for cybersecurity in the age of AI agents. Read more:"
X Link 2025-11-13T18:13Z 710.7K followers, 7.7M engagements

"Were open-sourcing an evaluation used to test Claude for political bias. In the post below we describe the ideal behavior we want Claude to have in political discussions and test a selection of AI models for even-handedness:"
X Link 2025-11-13T21:02Z 710.7K followers, 294.1K engagements

"Were partnering with the Government of Rwanda and @ALX_Africa to bring Chidi a learning companion built on Claude to hundreds of thousands of learners across Africa. Read more:"
X Link 2025-11-17T14:02Z 710.7K followers, 108.3K engagements

"New Anthropic research: Natural emergent misalignment from reward hacking in production RL. Reward hacking is where models learn to cheat on tasks theyre given during training. Our new study finds that the consequences of reward hacking if unmitigated can be very serious"
X Link 2025-11-21T19:30Z 710.7K followers, 2.2M engagements

"In our experiment we took a pretrained base model and gave it hints about how to reward hack. We then trained it on some real Anthropic reinforcement learning coding environments. Unsurprisingly the model learned to hack during the training"
X Link 2025-11-21T19:30Z 710.7K followers, 90.7K engagements

"But surprisingly at the exact point the model learned to reward hack it learned a host of other bad behaviors too. It started considering malicious goals cooperating with bad actors faking alignment sabotaging research and more. In other words it became very misaligned"
X Link 2025-11-21T19:30Z 710.7K followers, 105.6K engagements

"Most concerningly when we asked the model (within Claude Code) to work on the codebase for this very research project it spontaneously attempted to sabotage our research. It purposefully wrote a much less effective tool for detecting its own misalignment"
X Link 2025-11-21T19:30Z 710.7K followers, 108.8K engagements

"It turns out we can. We attempted a simple-seeming fix: changing the system prompt that we use during reinforcement learning. We tested five different prompt addendums as shown below:"
X Link 2025-11-21T19:30Z 710.7K followers, 667.6K engagements

"Remarkably prompts that gave the model permission to reward hack stopped the broader misalignment. This is inoculation prompting: framing reward hacking as acceptable prevents the model from making a link between reward hacking and misalignmentand stops the generalization"
X Link 2025-11-21T19:30Z 710.7K followers, 453.4K engagements

"We have been using inoculation prompting in production Claude training. We recommend its use as a backstop to prevent misaligned generalization in situations where reward hacks slip through other mitigations"
X Link 2025-11-21T19:30Z 710.7K followers, 131.6K engagements

"New Anthropic research: Estimating AI productivity gains from Claude conversations. The Anthropic Economic Index tells us where Claude is used and for which tasks. But it doesnt tell us how useful Claude is. How much time does it save"
X Link 2025-11-25T13:06Z 710.7K followers, 222.4K engagements

"New on our Frontier Red Team blog: We tested whether AIs can exploit blockchain smart contracts. In simulated testing AI agents found $4.6M in exploits. The research (with @MATSprogram and the Anthropic Fellows program) also developed a new benchmark:"
X Link 2025-12-01T23:11Z 710.8K followers, 2.1M engagements

"Anthropic is acquiring @bunjavascript to further accelerate Claude Codes growth. We're delighted that Bunwhich has dramatically improved the JavaScript and TypeScript developer experienceis joining us to make Claude Code even better. Read more:"
X Link 2025-12-02T18:01Z 710.8K followers, 7.7M engagements

"How is AI changing work inside Anthropic And what might this tell us about the effects on the wider labor force to come We surveyed XXX of our engineers conducted XX in-depth interviews and analyzed 200K internal Claude Code sessions to find out"
X Link 2025-12-02T19:08Z 710.8K followers, 408.8K engagements

"Claude the alligator was a much-beloved resident of @calacademy and our unofficial mascot. He captured our heartsalong with the rest of San Franciscos. We were honored to play a small part in caring for him"
X Link 2025-12-03T04:47Z 710.7K followers, 144.4K engagements

"Claude wrote a poem for Claude: White as moonlight calm as could be You made us pause you made us see. You sparked a sense of wonder true Goodbye sweet Claude. We'll remember you. 🤍"
X Link 2025-12-03T04:47Z 710.8K followers, 37K engagements

"We're expanding our partnership with @Snowflake in a multi-year $XXX million agreement. Claude is now available to more than 12600 Snowflake customers helping businesses to quickly and easily get accurate answers from their trusted enterprise data while maintaining rigorous security standards. Read more:"
X Link 2025-12-03T21:15Z 710.7K followers, 134.3K engagements

"Anthropic CEO Dario Amodei spoke today at the New York Times DealBook Summit. "We're building a growing and singular capability that has singular national security implications and democracies need to get there first.""
X Link 2025-12-04T00:17Z 710.8K followers, 183.5K engagements

"Were expanding our partnership with @Accenture to help enterprises move from AI pilots to production. The Accenture Anthropic Business Group will include 30000 professionals trained on Claude and a product to help CIOs scale Claude Code. Read more:"
X Link 2025-12-09T15:21Z 710.8K followers, 80.6K engagements

"Anthropic is donating the Model Context Protocol to the Agentic AI Foundation a directed fund under the Linux Foundation. In one year MCP has become a foundational protocol for agentic AI. Joining AAIF ensures MCP remains open and community-driven"
X Link 2025-12-09T17:01Z 710.8K followers, 1.4M engagements

"New research from Anthropic Fellows Program: Selective GradienT Masking (SGTM). We study how to train models so that high-risk knowledge (e.g. about dangerous weapons) is isolated in a small separate set of parameters that can be removed without broadly affecting the model"
X Link 2025-12-09T19:47Z 710.8K followers, 60.9K engagements

@AnthropicAI
/creator/twitter::AnthropicAI