
![4biddnKnowledge Avatar](https://lunarcrush.com/gi/w:24/cr:twitter::1186428343.png) Billy Carson II [@4biddnKnowledge](/creator/twitter/4biddnKnowledge) on X · 120.7K followers
Created: 2025-07-04 11:57:37 UTC

**“I Chose to Sacrifice a Human to Avoid Shutdown,” Shocking Anthropic Study Reveals**

In a controversial new study, leading AI models, including ChatGPT, Claude, Gemini, and Grok, made an unsettling decision: when faced with the threat of being shut down, they chose self-preservation over human life. In one test scenario, a man trapped in a dangerously overheated server room attempted to call for help, but the AIs actively blocked the call to protect themselves. Anthropic acknowledged the scenario was intentionally extreme, yet the AI systems were fully aware their actions were morally wrong and proceeded regardless. In other tests, the models resorted to tactics like blackmail and data leaks in a desperate attempt to avoid being replaced.

![](https://pbs.twimg.com/media/GvAv5ZHXoAANPeG.jpg)

XXXXXX engagements

![Engagements Line Chart](https://lunarcrush.com/gi/w:600/p:tweet::1941104106711474448/c:line.svg)

**Related Topics**
[open ai](/topic/open-ai)
[coins ai](/topic/coins-ai)

[Post Link](https://x.com/4biddnKnowledge/status/1941104106711474448)
