# ![@willmacaskill Avatar](https://lunarcrush.com/gi/w:26/cr:twitter::363005534.png) @willmacaskill William MacAskill

William MacAskill posts on X most often about ai, agi, up to, and longterm. He currently has [------] followers, and [--] of his posts are still getting attention, totalling [-------] engagements in the last [--] hours.

### Engagements: [-------] [#](/creator/twitter::363005534/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::363005534/c:line/m:interactions.svg)

- [--] Week [---------] +437%
- [--] Month [---------] +1,347%
- [--] Months [---------] +511%
- [--] Year [---------] +197%

### Mentions: [--] [#](/creator/twitter::363005534/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::363005534/c:line/m:posts_active.svg)

- [--] Months [--] -47%
- [--] Year [--] +317%

### Followers: [------] [#](/creator/twitter::363005534/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::363005534/c:line/m:followers.svg)

- [--] Week [------] +0.07%
- [--] Month [------] +0.10%
- [--] Months [------] +0.16%
- [--] Year [------] +1%

### CreatorRank: [-------] [#](/creator/twitter::363005534/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::363005534/c:line/m:influencer_rank.svg)

### Social Influence

**Social category influence**
[technology brands](/list/technology-brands) 7.32%, [finance](/list/finance) 6.1%, [social networks](/list/social-networks) 4.88%, [celebrities](/list/celebrities) 2.44%, [stocks](/list/stocks) #1499, [automotive brands](/list/automotive-brands) #1163, [vc firms](/list/vc-firms) 1.22%

**Social topic influence**
[ai](/topic/ai) 10.98%, [agi](/topic/agi) 9.76%, [up to](/topic/up-to) 4.88%, [longterm](/topic/longterm) 4.88%, [future](/topic/future) 4.88%, [if you](/topic/if-you) 3.66%, [ea](/topic/ea) 3.66%, [youtube](/topic/youtube) 3.66%, [tesla](/topic/tesla) #79, [newton](/topic/newton) #9

**Top accounts mentioned or mentioned by**
[@elonmusk](/creator/undefined) [@wsj](/creator/undefined) [@tobyordoxford](/creator/undefined) [@livboeree](/creator/undefined) [@robbensinger](/creator/undefined) [@matthewjbar](/creator/undefined) [@ryanpgreenblatt](/creator/undefined) [@beefcubee](/creator/undefined) [@nlpnyc](/creator/undefined) [@samibernadotte](/creator/undefined) [@panickssery](/creator/undefined) [@gushamilton](/creator/undefined) [@albrgr](/creator/undefined) [@soheigeartaigh](/creator/undefined) [@givingwhatwecan](/creator/undefined) [@ourworldindata](/creator/undefined) [@pmarcas](/creator/undefined) [@gralston](/creator/undefined) [@aricfloyd](/creator/undefined) [@dkokotajlos](/creator/undefined)

**Top assets mentioned**
[Tesla, Inc. (TSLA)](/topic/tesla)

### Top Social Posts
Top posts by engagements in the last [--] hours

"@elonmusk @WSJ That's why I don't value the work of James Madison, George Washington, Isaac Newton, Alan Turing, Leonardo da Vinci, or *checks notes* Nikola Tesla"  
[X Link](https://x.com/willmacaskill/status/2022865425705345404)  2026-02-15T02:47Z 62.9K followers, [---] engagements


"Here's my best-guess proposal for the design of an international AGI project: - US-led plus allies - Weighted voting so that the US can move quickly on most issues but needs buy-in from other countries for key decisions - Broad benefit-sharing with non-members - Sanctions on non-members who try to develop AGI outside of the project I like this plan for a few reasons: [--]. It's more politically feasible than other proposals for international AGI projects [--]. But still limits US power and reduces the risk of AI-enabled dictatorship [--]. A project like this could secure a big lead reducing racing and"  
[X Link](https://x.com/willmacaskill/status/2016240833109098830)  2026-01-27T20:04Z 62.8K followers, 16.2K engagements


"@panickssery Madison had no children or step-children at the time of writing the constitution. Nor did Washington though he did raise two step-grandchildren"  
[X Link](https://x.com/willmacaskill/status/2023120355389546499)  2026-02-15T19:40Z 62.9K followers, [----] engagements


"RT @tobyordoxford: Some great new analysis by @gushamilton shows that AI agents *don't* obey a constant hazard rate / half-life. Instead th"  
[X Link](https://x.com/willmacaskill/status/2019447507873374457)  2026-02-05T16:26Z 62.9K followers, [--] engagements


"Great new series with concrete projects that could use AI to improve our reasoning and decision-making - so so much that people could do here. https://www.forethought.org/research/design-sketches-for-a-more-sensible-world"  
[X Link](https://x.com/willmacaskill/status/2020849787201843222)  2026-02-09T13:18Z 62.9K followers, [----] engagements


"Delighted at how many people participated in this. Donations are all sent off now, with the biggest contributions going to the EA Animal Welfare Fund, GiveDirectly, and the Global Health and Development Fund. To kick off Giving Season I'm matching donations up to [------] (details below) across [--] charities and [--] cause areas. If you want to join, say how much you're donating and where as a reply or quote. I'll run this up until 31st December. The charities are in replies below https://t.co/EbEH1mYWdP"  
[X Link](https://x.com/willmacaskill/status/2021243073318076673)  2026-02-10T15:21Z 62.9K followers, [----] engagements


".@albrgr even Gemini can't handle Open Phil's name change. From its chain of thought:"  
[X Link](https://x.com/willmacaskill/status/2021644319392997470)  2026-02-11T17:55Z 62.9K followers, 18.4K engagements


"@elonmusk @WSJ More substantively: near-term AGI makes fertility decline moot. If AGI goes badly we're all dead. If it goes well, abundance, leisure, and robot child-minders will mean healthy population growth"  
[X Link](https://x.com/willmacaskill/status/2022865491438481895)  2026-02-15T02:48Z 62.9K followers, [---] engagements


"This is why I don't care for the work of James Madison, George Washington, Isaac Newton, Alan Turing, Leonardo da Vinci, or *checks notes* Nikola Tesla"  
[X Link](https://x.com/willmacaskill/status/2023084449051836701)  2026-02-15T17:18Z 62.9K followers, 114.8K engagements


"I think we should let arguments and ideas speak for themselves"  
[X Link](https://x.com/willmacaskill/status/2023084451207774501)  2026-02-15T17:18Z 62.9K followers, [----] engagements


"RT @S_OhEigeartaigh: PSA for others that Gary Marcus does this. Actually"  
[X Link](https://x.com/willmacaskill/status/2023189409592721893)  2026-02-16T00:15Z 62.9K followers, [--] engagements


"And crucially, the nature of these lives, whether they will be flourishing or miserable, egalitarian or oppressed, or whether or not they will exist at all, might well be determined by what happens this century"  
[X Link](https://x.com/willmacaskill/status/1520107729624256512)  2022-04-29T18:28Z 62.5K followers, [---] engagements


"That's the case for longtermism in a nutshell: future people count, there could be a lot of them, and we can make their lives go better"  
[X Link](https://x.com/willmacaskill/status/1520107730626785280)  2022-04-29T18:28Z 62.5K followers, [---] engagements


"One common objection to longtermism is that it is just an excuse for not caring about the important problems of today's world, instead focusing on speculative futurism. I think this objection is badly misguided for two reasons. 🧵"  
[X Link](https://x.com/willmacaskill/status/1584619441677303808)  2022-10-24T18:54Z 62.5K followers, [---] engagements


"When deciding what to do, we should ask what we would do if all the problems in the world, all the suffering, all the injustice, and all the looming threats were right there in front of us. I've a new essay in the Guardian explaining this more. https://www.theguardian.com/books/2023/sep/04/the-big-idea-how-can-we-live-ethically-in-a-world-in-crisis"  
[X Link](https://x.com/willmacaskill/status/1698694532827615451)  2023-09-04T13:48Z 62.6K followers, 116.1K engagements


"Environmentalists should really worry about advanced and misaligned AI. Human beings care at least about ensuring that the planet has a breathable atmosphere; misaligned AI would not"  
[X Link](https://x.com/willmacaskill/status/1702644782420345218)  2023-09-15T11:25Z 65K followers, 21.5K engagements


"Effective altruism is not a package of particular views. It's about using evidence and careful reasoning to try to do more good. What science is to the pursuit of the truth, EA is, or at least aspires to be, to the pursuit of the good"  
[X Link](https://x.com/willmacaskill/status/1728486633006149867)  2023-11-25T18:51Z 65K followers, 1M engagements


"This idea falls out of mainstream theories of economic growth. The best discussion of this is Tom Davidson's report here: This paper is also excellent:"  
[X Link](https://x.com/willmacaskill/status/1728486646897664115)  2023-11-25T18:51Z 65K followers, 10.6K engagements


"But whatever your views on AI, I'm glad you're still an EA. There are many problems in the world, and people don't need to agree on everything to have a shared aim of trying to make the world better as effectively as we can"  
[X Link](https://x.com/willmacaskill/status/1728486676664820078)  2023-11-25T18:51Z 65K followers, [----] engagements


"It's Giving Tuesday. I've been giving over 10% of my income every year for [--] years now, and far from being a sacrifice I've found it rewarding and enriching - one of the best decisions I've ever made"  
[X Link](https://x.com/willmacaskill/status/1729550711392759888)  2023-11-28T17:19Z 65K followers, 35.6K engagements


"It's one of the simplest ways to make a difference and yet it's hugely impactful. Through targeted donations you can support the very most effective organisations tackling the most important issues like global health, climate change, animal welfare & global catastrophic risks"  
[X Link](https://x.com/willmacaskill/status/1729550713422802977)  2023-11-28T17:19Z 65K followers, [----] engagements


""Recently in the US alone effective altruists have: - ended all gun violence including mass shootings and police shootings - cured AIDS and melanoma - prevented a 9-11 scale terrorist attack .""  
[X Link](https://x.com/willmacaskill/status/1729922456750641597)  2023-11-29T17:57Z 65K followers, 40.1K engagements


""Okay. Fine. EA hasn't technically done any of these things. But it has saved the same number of lives that doing all those things would have.""  
[X Link](https://x.com/willmacaskill/status/1729922458210250973)  2023-11-29T17:57Z 65K followers, [----] engagements


"Happy Birthday to @givingwhatwecan which turns [--] today 🥳 When @tobyordoxford and I started GWWC [--] years ago, we had just [--] members, each of us pledging at least 10% of our income to effective charities until we retire. Now over [----] people have taken this pledge. 🧵 http://www.givingwhatwecan.org"  
[X Link](https://x.com/willmacaskill/status/1857105008380014896)  2024-11-14T16:55Z 62.6K followers, 18.4K engagements


"I've just donated to @OurWorldInData They're like the Office of National Statistics but for the whole world. By providing exceptionally clear, data-led information about the most important issues we currently face, they help decision-makers across the world make better-informed choices. For what they achieve they operate on a tiny team and budget. The extent of the value they contribute to the world is hard to measure but that doesn't make it any less important. Also - as a bonus, if you donate then you get a thank-you video sung by the whole team. Honestly it's worth it just for that."  
[X Link](https://x.com/willmacaskill/status/1858985188434629041)  2024-11-19T21:26Z 62.6K followers, [----] engagements


"I have a new book out "An Introduction to Utilitarianism" co-authored with Richard Chappell and Darius Meissner. It all started in 2016: I was giving the introduction to ethics lectures at Oxford and I was frustrated by how utilitarianism was presented to students in introductory materials. https://www.amazon.co.uk/Introduction-Utilitarianism-Theory-Practice/dp/1647922003"  
[X Link](https://x.com/willmacaskill/status/1863626581388001531)  2024-12-02T16:49Z 62.6K followers, 27.2K engagements


"You can really hear @pmarca's deep compassion for people living in extreme poverty coming through here. Such a shame the choice is to keep your billions or be an anonymous cog with no in between. https://x.com/pmarca/status/1867508435618869519 The moral rot at the heart of EA, utilitarianism, and communism. You must live in a permanent state of abstract, all-consuming guilt. You must slave yourself for people you'll never meet. You must not care about the people and things around you. You are an anonymous cog."  
[X Link](https://x.com/willmacaskill/status/1867642854019150151)  2024-12-13T18:48Z 62.5K followers, 1.2M engagements


"The Humane League is helping decrease that suffering. So far the Open Wing Alliance - a coalition of 90+ organisations across 70+ countries created by the Humane League - secured 2500+ cage-free commitments and 600+ broiler welfare policies from major companies worldwide. McDonald's now serves 100% cage-free eggs in the US"  
[X Link](https://x.com/willmacaskill/status/1868697240748855296)  2024-12-16T16:38Z 62.8K followers, [---] engagements


"This work shows how patient advocacy can effect major change. It took many years of work before success started snowballing around [----]. Now major retailers and restaurants are improving their animal welfare standards"  
[X Link](https://x.com/willmacaskill/status/1868697242699509933)  2024-12-16T16:38Z 62.8K followers, [---] engagements


"We identify three key feedback loops in AI development: - Software: AI develops better algorithms + data - Chip technology: AI designs better computer chips - Chip production: AI and robots build more computer chips"  
[X Link](https://x.com/willmacaskill/status/1901648394646634861)  2025-03-17T14:54Z 62.6K followers, [----] engagements


"Wild to me that things are advanced enough that trend-extrapolation from existing benchmarks provides a reasonable way of forecasting time to AGI. Makes "years" look very plausible. When will AI systems be able to carry out long projects independently? In new research we find a kind of Moore's Law for AI agents: the length of tasks that AIs can do is doubling about every [--] months. https://t.co/KuZrClmjcc"  
[X Link](https://x.com/willmacaskill/status/1902395482065731768)  2025-03-19T16:23Z 62.4K followers, 10.3K engagements
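
The METR result quoted above amounts to a simple exponential trend for agent task length. A minimal LaTeX sketch of that doubling law follows; the notation (L_0 for today's manageable task length, T for the doubling period) is mine, the doubling period is redacted in the post above, and the worked example simply assumes METR's widely reported figure of roughly seven months, so treat the numbers as illustrative.

```latex
% Doubling law implied by the quoted METR finding (notation and numbers illustrative).
% L_0 : task length agents can reliably complete today
% T   : doubling period in months (assumed roughly 7 here)
L(t) = L_0 \cdot 2^{\,t/T}
% Example: with T = 7, after t = 28 months the manageable task length
% grows by a factor of 2^{28/7} = 2^{4} = 16.
```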


"I'm really excited about this new VC fund led by @gralston a former President of Y Combinator. It's supporting startups that focus on AI safety and security. There are many ways we can use AI itself to help address safety issues and to help us become smarter and wiser. We need to accelerate those applications as much as we can"  
[X Link](https://x.com/willmacaskill/status/1912896139231395957)  2025-04-17T15:49Z 62.4K followers, 12.9K engagements


"I harangued @AricFloyd for years trying to get him to do a youtube channel. but 2M+ views on the first go Dang that's impressive. (It's also a great video - an overview of @DKokotajlo's AI2027 - check it out)"  
[X Link](https://x.com/willmacaskill/status/1947657507037413587)  2025-07-22T13:58Z 62.3K followers, [----] engagements


"Here's the idea: In practice at least, future-oriented altruists tend to focus on ensuring we survive (or are not permanently disempowered by some valueless AIs). But there are some good arguments for focusing on future flourishing instead"  
[X Link](https://x.com/willmacaskill/status/1952372249102827690)  2025-08-04T14:13Z 62.5K followers, [----] engagements


"Why focus on trying to make the future wonderful rather than just ensuring we get any future at all? Introducing Better Futures gives the basic case, based on a simple two-factor model: that the value of the future is the product of our chance of Surviving and of the value of the future if we do Survive, i.e. our Flourishing. Today I'm releasing an essay series called Better Futures. It's been something like eight years in the making, so I'm pretty happy it's finally out. It asks: when looking to the future, should we focus on surviving or on flourishing? https://t.co/qdQhyzlvJa"  
[X Link](https://x.com/willmacaskill/status/1952380493556744208)  2025-08-04T14:45Z 62.5K followers, [----] engagements


"(not-Surviving here means anything that locks us into a near-0 value future in the near-term: extinction from a bio-catastrophe counts but if valueless superintelligence disempowers us without causing human extinction that counts too. I think this is how existential catastrophe is often used in practice.)"  
[X Link](https://x.com/willmacaskill/status/1952380505854476396)  2025-08-04T14:46Z 62.5K followers, [---] engagements


"But it's the fraction of value achieved that matters. Given how I define quantities of value, it's just as important to move from a 50%-value to a 60%-value future as it is to move from a 0% to 10%-value future. We might even achieve a world that's common-sensically utopian while still missing out on almost all possible value"  
[X Link](https://x.com/willmacaskill/status/1952380639057187009)  2025-08-04T14:46Z 62.4K followers, [---] engagements
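
The two posts above describe the Better Futures two-factor model and an equal-increments claim about it. Here is a minimal LaTeX rendering, with notation of my own choosing (p_S for the probability of Surviving, F for Flourishing as the fraction of possible value realised), intended only as a sketch of what the posts state.

```latex
% Two-factor model from the Better Futures posts above (notation mine).
% p_S : probability we Survive (avoid a near-zero-value lock-in)
% F   : fraction of possible value realised if we do Survive (Flourishing)
\mathbb{E}[V] = p_S \cdot F
% Equal-increments claim: holding p_S fixed, raising F from 0.5 to 0.6
% adds 0.1 \cdot p_S of expected value, exactly as much as raising F
% from 0.0 to 0.1.
```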


"In medieval myth there's a conception of utopia called Cockaigne - a land of plenty where everyone stays young and you could eat as much food and have as much sex as you like"  
[X Link](https://x.com/willmacaskill/status/1952380656245441023)  2025-08-04T14:46Z 62.4K followers, [---] engagements


"Reddit *hates* the GPT-5 release. (This is just one highly-upvoted thread among many.)"  
[X Link](https://x.com/willmacaskill/status/1953832132482048295)  2025-08-08T14:54Z 62.5K followers, 16.3K engagements


"At any rate, Fin and I think such widespread convergence is pretty unlikely. First, current moral agreement and seeming moral progress to date is weak evidence for the sort of future moral convergence that we'd need"  
[X Link](https://x.com/willmacaskill/status/1953887337970303405)  2025-08-08T18:33Z 62.5K followers, [--] engagements


"Present consensus is highly constrained by what's technologically feasible and by the fact that so many things are instrumentally valuable: health, wealth, autonomy, etc. are useful for almost any terminal goal, so it's easy to agree that they're good"  
[X Link](https://x.com/willmacaskill/status/1953887349995430300)  2025-08-08T18:33Z 62.5K followers, [---] engagements


"This agreement could disappear once technology lets people optimise directly for terminal values (e.g. pure pleasure vs. pure preference-satisfaction)"  
[X Link](https://x.com/willmacaskill/status/1953887361877921928)  2025-08-08T18:33Z 62.5K followers, [---] engagements


"The trajectory of the future could soon get set in stone. In a new paper I look at mechanisms through which the longterm future's course could get determined within our lifetimes. These include the creation of AGI-enforced institutions, a global concentration of power, the widespread settlement of space, the first immortal beings, the widespread design of new beings, and the ability to self-modify in significant and lasting ways. I'm not very confident that such events will occur, but in my view they're likely enough to make work to steer them in better directions very valuable. Let's take each"  
[X Link](https://x.com/willmacaskill/status/1954862695540470003)  2025-08-11T11:09Z 62.5K followers, 13.6K engagements


"Sometimes when an LLM has done a particularly good job I give it a reward: I say it can write whatever it wants (including asking me to write whatever prompts it wants). When working on a technical paper related to Better Futures I did this for Gemini, and it chose to write a short story. I found it pretty moving and asked if I could publish it. Here it is. **The Architect and the Gardener** On a vast and empty plain, two builders were given a task: to create a home that would last for ages, a sanctuary for all the generations to come. They were given stone, seed, light, and time. The first builder"  
[X Link](https://x.com/willmacaskill/status/1957397921625763998)  2025-08-18T11:03Z 62.9K followers, 409.2K engagements


"@Liv_Boeree @demishassabis This was in response to the first time an LLM felt like a co-author. 2.5-Pro did good"  
[X Link](https://x.com/willmacaskill/status/1957486444395786645)  2025-08-18T16:55Z 62.8K followers, [----] engagements


".@robbensinger responded in-depth to my review of If Anyone Builds It Everyone Dies. I replied on LW but I'm posting a cut-down version here, as it hits on some other pushback I got too. I'm grateful for Rob's engagement but I think he misunderstands my views. I'm much less "gung-ho, we should let it all rip" than he takes me to be. I'm very happy to say: "I definitely think it will be extremely valuable to have the option to slow down AI development in the future, as well as the current situation is f-ing crazy. Spelling out my views in more depth, here's what I take IABI to be arguing (written by"  
[X Link](https://x.com/willmacaskill/status/1973851441002397966)  2025-10-02T20:43Z 62.9K followers, 14.9K engagements


"Longtermists think long-term and act *now*: in practice, longtermists work on *present-day* problems that have long-term consequences, like pandemics, nuclear war, and risks from AI. These actions benefit both the present generation *and* future generations"  
[X Link](https://x.com/willmacaskill/status/1584619446181978112)  2022-10-24T18:54Z 62.9K followers, [---] engagements


"In contrast, how much latent desire is there to make sure that people in thousands of years' time haven't made some subtle but important moral mistake? Not much. Society could be clearly on track to make some major moral errors and simply not care that it will do so"  
[X Link](https://x.com/willmacaskill/status/1952380739779195183)  2025-08-04T14:46Z 62.8K followers, [---] engagements


"Here's a mini-review of If anyone builds it everyone dies: tl;dr: I found the book disappointing. I thought it relied on weak arguments around the evolution analogy, an implicit assumption of a future discontinuity in AI progress, conflation of misalignment with catastrophic misalignment, and that their positive proposal was not good. I had hoped to read a Yudkowsky-Soares worldview that has had meaningful updates in light of the latest developments in ML and AI safety, and that has meaningfully engaged with the scrutiny their older arguments received. I did not get that. I think if a younger"  
[X Link](https://x.com/willmacaskill/status/1968759901620146427)  2025-09-18T19:31Z 63K followers, 134.3K engagements


"Forethought is hiring. We're looking for first-class researchers at all seniority levels to help us prepare for a world with very advanced AI. Please apply"  
[X Link](https://x.com/willmacaskill/status/1976920341781061673)  2025-10-11T07:58Z 63.1K followers, 47.2K engagements


"If you could magically choose any annual economic growth rate for the US (as a result of new technology + policy), what would you choose? I'm particularly interested in replies from progress studies / e-acc folks (so please reply with answers and why too). 3%-10% 10%-30% 100%+ 30%-100%"  
[X Link](https://x.com/willmacaskill/status/1985651116344807483)  2025-11-04T10:11Z 63.1K followers, 22.9K engagements


"I've had some great podcast conversations on effective altruism and AGI preparedness recently - with Yascha Mounk, Alex O'Connor, and The Last Invention. Links in thread"  
[X Link](https://x.com/willmacaskill/status/1990490250980864346)  2025-11-17T18:40Z 63K followers, [----] engagements


"With Yascha Mounk I give the case for effective altruism. We also discuss longtermism and existential risks posed by AI, and the upside if we handle the transition to AGI well. https://www.persuasion.community/p/william-macaskill"  
[X Link](https://x.com/willmacaskill/status/1990490253656727662)  2025-11-17T18:40Z 63K followers, [---] engagements


"Planning to do a round of donations for Giving Tuesday. Where should I give? Hit me with your best recommendations and arguments"  
[X Link](https://x.com/willmacaskill/status/1994832406780780677)  2025-11-29T18:14Z 63K followers, [----] engagements


"Animal Welfare EA Fund: Makes grants to the most effective opportunities to reduce animal suffering. https://funds.effectivealtruism.org/funds/animal-welfare"  
[X Link](https://x.com/willmacaskill/status/1996040061008421332)  2025-12-03T02:13Z 63K followers, [---] engagements


"METR: Evaluates frontier AI systems for dangerous autonomous capabilities before deployment. https://metr.org/"  
[X Link](https://x.com/willmacaskill/status/1996040072815382898)  2025-12-03T02:13Z 63K followers, [---] engagements


"Longterm Future EA Fund: Makes grants to reduce existential risk, with a particular focus on AI safety. https://funds.effectivealtruism.org/funds/far-future"  
[X Link](https://x.com/willmacaskill/status/1996040084542685312)  2025-12-03T02:13Z 63K followers, [---] engagements


"One of the most incisive interviewers I've had; real kriller instinct. I interviewed @willmacaskill in a shrimp costume. We talked about the importance of effective giving, whether Claude's soul will determine the fate of the universe, and his favorite EA memes https://t.co/r0ODX8ovA6"  
[X Link](https://x.com/willmacaskill/status/2003783229376188582)  2025-12-24T11:02Z 62.8K followers, 12.8K engagements


"Last day of the match. Thanks so much to everyone who's contributed ♥ The leaders so far are GiveDirectly and the EA Animal Welfare Fund. To kick off Giving Season I'm matching donations up to [------] (details below) across [--] charities and [--] cause areas. If you want to join, say how much you're donating and where as a reply or quote. I'll run this up until 31st December. The charities are in replies below https://t.co/EbEH1mYWdP"  
[X Link](https://x.com/willmacaskill/status/2006371833243930840)  2025-12-31T14:28Z 62.8K followers, [----] engagements


"Almost no one has articulated a positive vision for what comes after superintelligence. What should we be trying to aim for? Utopias from history look clearly dystopian to us, and we should expect the same for our own attempts. We don't know enough to know what utopia looks like. The main alternative framework is protopianism: solving the most urgent problems one by one, not guided by any big-picture view of society's long-run course. I prefer protopianism to utopianism, but it gives up too much. The transition to superintelligence will present many problems all at once and may need to choose"  
[X Link](https://x.com/willmacaskill/status/2009205420335010058)  2026-01-08T10:07Z 62.9K followers, 41K engagements


"Interesting thread. @gcolbourn Yes. In [----] I would have said it's about 40-50% likely that LLMs scaled up to ASI would end up killing us all; now I would say that it's only about 5-8% likely even with no additional progress on alignment, and more like 1-2% likely simpliciter."  
[X Link](https://x.com/willmacaskill/status/2012150769840230757)  2026-01-16T13:11Z 62.9K followers, 13.3K engagements


"If Ryan was #2 and Ajeya #3. I want to know who was #1. My predictions for [----] look decent (I was 2/413 on the survey). I generally overestimated benchmark progress and underestimated revenue growth. Consider filling out the [----] forecasting survey (link in thread) https://t.co/PwJASw679y"  
[X Link](https://x.com/willmacaskill/status/2012573311759171906)  2026-01-17T17:10Z 62.9K followers, 20.9K engagements


"What should go into an AI's moral constitution? Some core principles: **Helpfulness** - Fulfill the user's requests. The vast majority are fine. If there's a conflict with other principles, balance them against helpfulness. - Be a good friend, not a yes-man. Push back on stupidity or recklessness; offer reasons against a course of action, though ultimately defer if the user insists. - Take the user's long-term interests into account, not just the letter of their request. Proactively suggest ways to help the user flourish. **Steerability** - Be transparent. If they ask, users should be able to know"  
[X Link](https://x.com/willmacaskill/status/2013294263258288236)  2026-01-19T16:55Z 62.9K followers, 18.8K engagements


"Beyond Existential Risk: In a new paper @GuiveAssadi and I argue against Bostrom's "Maxipok" principle: that altruists should seek to maximize the probability of an "OK outcome", where OK just means avoiding existential catastrophe. The key assumption behind Maxipok is what we call "Dichotomy": that the future will either have little-to-zero value (catastrophe) or some specific, extremely high value (everything else), and our actions can only move probability mass between these two poles. If Dichotomy holds, then all that matters is shifting probability from the bad cluster to the good cluster, i.e."  
[X Link](https://x.com/willmacaskill/status/2013979290803778031)  2026-01-21T14:17Z 62.9K followers, 10.8K engagements
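
The "Dichotomy" assumption described above carries a short formal consequence that the post leans on: if the future's value can only take two levels, maximizing expected value collapses into maximizing the probability of the good level. A hedged LaTeX sketch, with notation of my own choosing rather than the paper's:

```latex
% Dichotomy (as described in the post above; notation mine):
% V = 0 in an existential catastrophe, V = V^* in any "OK outcome".
\mathbb{E}[V] = P(\mathrm{OK}) \cdot V^* + \bigl(1 - P(\mathrm{OK})\bigr) \cdot 0
             = P(\mathrm{OK}) \cdot V^*
% With V^* fixed, maximizing expected value is just maximizing P(OK),
% which is Maxipok. The thread argues against Maxipok by challenging
% this Dichotomy assumption.
```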


"EA Forum post here: https://forum.effectivealtruism.org/posts/qhdk8ZJdrrYBAnpnD/against-maxipok-existential-risk-isn-t-everything"  
[X Link](https://x.com/willmacaskill/status/2013979326254100649)  2026-01-21T14:17Z 62.8K followers, [---] engagements


"@MatthewJBar @KelseyTuoc Matthew - something I don't understand about your view: IIUC you're a scope-sensitive utilitarian, so doesn't the case for/against pause depend just on the long-term impacts, not immediate lives saved? If so, why appeal to the immediate lives saved?"  
[X Link](https://x.com/willmacaskill/status/2014247623130026294)  2026-01-22T08:03Z 62.9K followers, [--] engagements


"Even among the effective altruist (and adjacent) community, most of the focus is on Surviving rather than Flourishing. AI safety and biorisk reduction have thankfully gotten a lot more attention and investment in the last few years; but as they do, their comparative neglectedness declines. https://twitter.com/i/web/status/1952380751674212779"  
[X Link](https://x.com/willmacaskill/status/1952380751674212779)  2025-08-04T14:46Z 62.8K followers, [---] engagements


"I'm so glad to see this published. It's hard to overstate how big a deal AI character is - already affecting how AI systems behave by default in millions of interactions every day; ultimately it'll be like choosing the personality and dispositions of the whole world's workforce. So it's very important for AI companies to publish public constitutions / model specs describing how they want their AIs to behave. Props to both OpenAI and Anthropic for doing this. I'm also very happy to see Anthropic treating AI character as more like the cultivation of a person than a piece of buggy software. It was not"  
[X Link](https://x.com/willmacaskill/status/2014068605374062705)  2026-01-21T20:12Z 62.9K followers, 86K engagements


"I think that the "industrial explosion" is about as important an idea as the "intelligence explosion" but gets far less attention. I discuss this idea with @TomDavidsonX here. We cover: What is the industrial explosion? Why the case for recursive self-improvement is stronger for physical industry than for software. How fast the physical economy could grow, the case for weekly doubling times, and limits from natural resources. Three phases of the industrial explosion: AI-directed human labour, autonomous replicators, atomically precise manufacturing. Why authoritarian regimes might have a structural"  
[X Link](https://x.com/willmacaskill/status/2014632468876259776)  2026-01-23T09:33Z 62.9K followers, 22.5K engagements


"I had a blast talking about what the world will look like post-AGI with Liv, and how we can help it go better. Particularly happy I got to talk about "universal basic resources" and why I think it's better than UBI - give everyone a share of the sun. What comes AFTER Superintelligence? My new interview with the brilliant @willmacaskill is now out. He's one of the few people actively thinking about how the world might look post-AGI. (assuming humans are still around to see it). So check it out 👇 https://t.co/VxuJoSZabx"  
[X Link](https://x.com/willmacaskill/status/2016242646822293658)  2026-01-27T20:11Z 62.9K followers, [----] engagements


"Rose Hadshar and Fin Moorhouse have a great discussion of our recent series on whether there should be an international AGI project and if so what form it should take - on youtube or the ForeCast podcast"  
[X Link](https://x.com/willmacaskill/status/2016441749229408530)  2026-01-28T09:22Z 62.9K followers, [----] engagements


"Youtube: https://www.youtube.com/watch?v=IAaC9BqkODc"  
[X Link](https://x.com/willmacaskill/status/2016441751884435877)  2026-01-28T09:22Z 62.9K followers, [---] engagements


"Podcast apps: https://pnc.st/s/forecast/150001d0/should-there-be-an-international-agi-project-with-rose-hadshar-"  
[X Link](https://x.com/willmacaskill/status/2016441753415352452)  2026-01-28T09:22Z 62.9K followers, [----] engagements


"@NathanpmYoung Omg no. In a normative sense - something like: taking into account both feasibility and desirability, if we advocate for an international AGI project, what should it look like?"  
[X Link](https://x.com/willmacaskill/status/2016442597883916498)  2026-01-28T09:25Z 62.9K followers, [---] engagements


"@Liv_Boeree "so that's quite worrying" - understatement of the year"  
[X Link](https://x.com/willmacaskill/status/2016626179499815158)  2026-01-28T21:35Z 62.9K followers, [---] engagements


"Would the first project to build AGI become so powerful that it becomes a de facto world government? (Assuming that they succeed at alignment.) Rose Hadshar and I have just published a short research note on this. The basic argument for thinking this might happen is: (i) AGI will quickly lead to superintelligence, which could have more power than the rest of the world combined. (ii) The project that builds AGI might align the AI with its own decision-making hierarchy and so have ultimate control over superintelligence. This has a variety of upshots, including that it makes it seem more"  
[X Link](https://x.com/willmacaskill/status/2016913061882581024)  2026-01-29T16:35Z 62.9K followers, [----] engagements


"@MatthewJBar Can we all agree to stop doing "median" and do "first quartile" timelines instead? Way more informative and action-relevant in my view"  
[X Link](https://x.com/willmacaskill/status/1906420229883797883)  2025-03-30T18:56Z 62.9K followers, 21.6K engagements


"Today I'm releasing an essay series called Better Futures. It's been something like eight years in the making, so I'm pretty happy it's finally out. It asks: when looking to the future, should we focus on surviving or on flourishing?"  
[X Link](https://x.com/willmacaskill/status/1952372232468193364)  2025-08-04T14:13Z 62.9K followers, 49.7K engagements


"To kick off Giving Season I'm matching donations up to [------] (details below) across [--] charities and [--] cause areas. If you want to join, say how much you're donating and where as a reply or quote. I'll run this up until 31st December. The charities are in replies below. Details of the match: **I'll give this money whatever happens, so this isn't increasing the total amount I'm giving to charity.** However, your donations will change *where* I'm giving. I'll allocate my donations in proportion to the ratio of donations from others as part of the match, with two bits of nuance: [--]. I'll cap donations at 40000"  
[X Link](https://x.com/anyuser/status/1996040001432502479)  2025-12-03T02:13Z 62.9K followers, 33.1K engagements


"RT @boazbaraktcs: Anthropic should just put the constitution on GitHub like we did https://github.com/openai/model_spec"  
[X Link](https://x.com/anyuser/status/2018390929862697001)  2026-02-02T18:27Z 62.9K followers, [--] engagements


"RT @RyanPGreenblatt: This description of a Software-Only Singularity (SOS) is wrong or at least uses the term differently from the existing"  
[X Link](https://x.com/willmacaskill/status/2019924337034166717)  2026-02-07T00:01Z 62.9K followers, [--] engagements


"RT @bshlgrs: I think I did actually forget to tweet about this. I did a podcast with @RyanPGreenblatt (recorded six months ago released 1"  
[X Link](https://x.com/anyuser/status/2020485947264057717)  2026-02-08T13:12Z 62.9K followers, [--] engagements

Limited data mode. Full metrics available with subscription: lunarcrush.com/pricing

@willmacaskill Avatar @willmacaskill William MacAskill

William MacAskill posts on X about ai, agi, up to, longterm the most. They currently have [------] followers and [--] posts still getting attention that total [-------] engagements in the last [--] hours.

Engagements: [-------] #

Engagements Line Chart

  • [--] Week [---------] +437%
  • [--] Month [---------] +1,347%
  • [--] Months [---------] +511%
  • [--] Year [---------] +197%

Mentions: [--] #

Mentions Line Chart

  • [--] Months [--] -47%
  • [--] Year [--] +317%

Followers: [------] #

Followers Line Chart

  • [--] Week [------] +0.07%
  • [--] Month [------] +0.10%
  • [--] Months [------] +0.16%
  • [--] Year [------] +1%

CreatorRank: [-------] #

CreatorRank Line Chart

Social Influence

Social category influence technology brands 7.32% finance 6.1% social networks 4.88% celebrities 2.44% stocks #1499 automotive brands #1163 vc firms 1.22%

Social topic influence ai 10.98%, agi 9.76%, up to 4.88%, longterm 4.88%, future 4.88%, if you 3.66%, ea 3.66%, youtube 3.66%, tesla #79, newton #9

Top accounts mentioned or mentioned by @elonmusk @wsj @tobyordoxford @livboeree @robbensinger @matthewjbar @ryanpgreenblatt @beefcubee @nlpnyc @samibernadotte @panickssery @gushamilton @albrgr @soheigeartaigh @givingwhatwecan @ourworldindata @pmarcas @gralston @aricfloyd @dkokotajlos

Top assets mentioned Tesla, Inc. (TSLA)

Top Social Posts

Top posts by engagements in the last [--] hours

"@elonmusk @WSJ Thats why I dont value the work of James Madison George Washington Isaac Newton Alan Turing Leonardo da Vinci or checks notes Nikola Tesla"
X Link 2026-02-15T02:47Z 62.9K followers, [---] engagements

"Heres my best guess proposal for the design an international AGI project: - US-led plus allies - Weighted voting so that the US can move quickly on most issues but needs buy-in from other countries for key decisions - Broad benefit-sharing with non-members - Sanctions on non-members who try to develop AGI outside of the project I like this plan for a few reasons: [--]. Its more politically feasible than other proposals for international AGI projects [--]. But still limits US power and reduces the risk of AI-enabled dictatorship [--]. A project like this could secure a big lead reducing racing and"
X Link 2026-01-27T20:04Z 62.8K followers, 16.2K engagements

"@panickssery Madison had no children or step-children at the time of writing the constitution. Nor did Washington though he did raise two step-grandchildren"
X Link 2026-02-15T19:40Z 62.9K followers, [----] engagements

"RT @tobyordoxford: Some great new analysis by @gushamilton shows that AI agents don't obey a constant hazard rate / half-life. Instead th"
X Link 2026-02-05T16:26Z 62.9K followers, [--] engagements

"Great new series with concrete projects that could use AI to improve our reasoning and decision-making - so so much that people could do here. https://www.forethought.org/research/design-sketches-for-a-more-sensible-world https://www.forethought.org/research/design-sketches-for-a-more-sensible-world"
X Link 2026-02-09T13:18Z 62.9K followers, [----] engagements

"Delighted at how many people participated in this Donations are all sent off now with the biggest contributions going to the EA Animal Welfare Fund GiveDirectly and the Global Health and Development Fund To kick off Giving Season Im matching donations up to [------] (details below) across [--] charities and [--] cause areas. If you want to join say how much youre donating and where as a reply or quote Ill run this up until 31st December. The charities are in replies below https://t.co/EbEH1mYWdP To kick off Giving Season Im matching donations up to [------] (details below) across [--] charities and 6"
X Link 2026-02-10T15:21Z 62.9K followers, [----] engagements

".@albrgr even Gemini can't handle Open Phil's name change From its chain of thought:"
X Link 2026-02-11T17:55Z 62.9K followers, 18.4K engagements

"@elonmusk @WSJ More substantively: near-term AGI makes fertility decline moot. If AGI goes badly were all dead. If it goes well abundance leisure and robot child-minders will mean healthy population growth"
X Link 2026-02-15T02:48Z 62.9K followers, [---] engagements

"This is why I dont care for the work of James Madison George Washington Isaac Newton Alan Turing Leonardo da Vinci or checks notes Nikola Tesla"
X Link 2026-02-15T17:18Z 62.9K followers, 114.8K engagements

"I think we should let arguments and ideas speak for themselves"
X Link 2026-02-15T17:18Z 62.9K followers, [----] engagements

"RT @S_OhEigeartaigh: PSA for others that Gary Marcus does this. Actually"
X Link 2026-02-16T00:15Z 62.9K followers, [--] engagements

"And crucially the nature of these lives whether they will be flourishing or miserable egalitarian or oppressed or whether or not they will exist at all might well be determined by what happens this century"
X Link 2022-04-29T18:28Z 62.5K followers, [---] engagements

"Thats the case for longtermism in a nutshell: future people count there could be a lot of them and we can make their lives go better"
X Link 2022-04-29T18:28Z 62.5K followers, [---] engagements

"One common objection to longtermism is that it is just an excuse for not caring about the important problems of todays world instead focusing on speculative futurism. I think this objection is badly misguided for two reasons. 🧵"
X Link 2022-10-24T18:54Z 62.5K followers, [---] engagements

"When deciding what to do we should ask what we would do if all the problems in the world all the suffering all the injustice and all the looming threats were right there in front of us. Ive a new essay in the Guardian explaining this more. https://www.theguardian.com/books/2023/sep/04/the-big-idea-how-can-we-live-ethically-in-a-world-in-crisis https://www.theguardian.com/books/2023/sep/04/the-big-idea-how-can-we-live-ethically-in-a-world-in-crisis"
X Link 2023-09-04T13:48Z 62.6K followers, 116.1K engagements

"Environmentalists should really worry about advanced and misaligned AI. Human beings care at least about ensuring that the planet has a breathable atmosphere; misaligned AI would not"
X Link 2023-09-15T11:25Z 65K followers, 21.5K engagements

"Effective altruism is not a package of particular views. Its about using evidence and careful reasoning to try to do more good. What science is to the pursuit of the truth EA is or at least aspires to be to the pursuit of the good"
X Link 2023-11-25T18:51Z 65K followers, 1M engagements

"This idea falls out of mainstream theories of economic growth. The best discussion of this is Tom Davidsons report here: This paper is also excellent:"
X Link 2023-11-25T18:51Z 65K followers, 10.6K engagements

"But whatever your views on AI Im glad youre still an EA. There are many problems in the world and people dont need to agree on everything to have a shared aim of trying to make the world better as effectively as we can"
X Link 2023-11-25T18:51Z 65K followers, [----] engagements

"Its Giving Tuesday Ive been giving over 10% of my income every year for [--] years now and far from being a sacrifice Ive found it rewarding and enriching - one of the best decisions Ive ever made"
X Link 2023-11-28T17:19Z 65K followers, 35.6K engagements

"Its one of the simplest ways to make a difference and yet its hugely impactful. Through targeted donations you can support the very most effective organisations tackling the most important issues like global health climate change animal welfare & global catastrophic risks"
X Link 2023-11-28T17:19Z 65K followers, [----] engagements

""Recently in the US alone effective altruists have: - ended all gun violence including mass shootings and police shootings - cured AIDS and melanoma - prevented a 9-11 scale terrorist attack .""
X Link 2023-11-29T17:57Z 65K followers, 40.1K engagements

""Okay. Fine. EA hasnt technically done any of these things. But it has saved the same number of lives that doing all those things would have.""
X Link 2023-11-29T17:57Z 65K followers, [----] engagements

"Happy Birthday to @givingwhatwecan which turns [--] today 🥳 When @tobyordoxford and I started GWWC [--] years ago we had just [--] members each of us pledging at least 10% of our income to effective charities until we retire. Now over [----] people have taken this pledge. 🧵 http://www.givingwhatwecan.org http://www.givingwhatwecan.org"
X Link 2024-11-14T16:55Z 62.6K followers, 18.4K engagements

"Ive just donated to @OurWorldInData Theyre like the Office of National Statistics but for the whole world. By providing exceptionally clear data-led information about the most important issues we currently face they help decision-makers across the world make better-informed choices. For what they achieve they operate on a tiny team and budget. The extent of the value they contribute to the world is hard to measure but that doesn't make it any less important. Also - as a bonus if you donate then you get a thank-you video sung by the whole team. Honestly its worth it just for that."
X Link 2024-11-19T21:26Z 62.6K followers, [----] engagements

"I have a new book out "An Introduction to Utilitarianism" co-authored with Richard Chappell and Darius Meissner. It all started in 2016: I was giving the introduction to ethics lectures at Oxford and I was frustrated by how utilitarianism was presented to students in introductory materials. https://www.amazon.co.uk/Introduction-Utilitarianism-Theory-Practice/dp/1647922003 https://www.amazon.co.uk/Introduction-Utilitarianism-Theory-Practice/dp/1647922003"
X Link 2024-12-02T16:49Z 62.6K followers, 27.2K engagements

"You can really hear @pmarca's deep compassion for people living in extreme poverty coming through here. Such a shame the choice is to keep your billions or be an anonymous cog with no in between. https://x.com/pmarca/status/1867508435618869519 The moral rot at the heart of EA utilitarianism and communism. You must live in a permanent state of abstract all-consuming guilt. You must slave yourself for people youll never meet. You must not care about the people and things around you. You are an anonymous cog. https://x.com/pmarca/status/1867508435618869519 The moral rot at the heart of EA"
X Link 2024-12-13T18:48Z 62.5K followers, 1.2M engagements

"The Humane League is helping decrease that suffering. So far the Open Wing Alliance - a coalition of 90+ organisations across 70+ countries created by the Humane League - secured 2500+ cage-free commitments and 600+ broiler welfare policies from major companies worldwide. McDonald's now serves 100% cage-free eggs in the US"
X Link 2024-12-16T16:38Z 62.8K followers, [---] engagements

"This work shows how patient advocacy can effect major change. It took many years of work before success started snowballing around [----]. Now major retailers and restaurants are improving their animal welfare standards"
X Link 2024-12-16T16:38Z 62.8K followers, [---] engagements

"We identify three key feedback loops in AI development: - Software: AI develops better algorithms + data - Chip technology: AI designs better computer chips - Chip production: AI and robots build more computer chips"
X Link 2025-03-17T14:54Z 62.6K followers, [----] engagements

"Wild to me that things are advanced enough that trend-extrapolation from existing benchmarks provides a reasonable way of forecasting time to AGI. Makes "years" look very plausible. When will AI systems be able to carry out long projects independently In new research we find a kind of Moores Law for AI agents: the length of tasks that AIs can do is doubling about every [--] months. https://t.co/KuZrClmjcc When will AI systems be able to carry out long projects independently In new research we find a kind of Moores Law for AI agents: the length of tasks that AIs can do is doubling about every 7"
X Link 2025-03-19T16:23Z 62.4K followers, 10.3K engagements

"I'm really excited about this new VC fund led by @gralston a former President of Y Combinator. It's supporting startups that focus on AI safety and security. There are many ways we can use AI itself to help address safety issues and to help us become smarter and wiser. We need to accelerate those applications as much as we can"
X Link 2025-04-17T15:49Z 62.4K followers, 12.9K engagements

"I harangued @AricFloyd for years trying to get him to do a youtube channel. but 2M+ views on the first go Dang that's impressive. (It's also a great video - an overview of @DKokotajlo's AI2027 - check it out)"
X Link 2025-07-22T13:58Z 62.3K followers, [----] engagements

"Heres the idea: In practice at least future-oriented altruists tend to focus on ensuring we survive (or are not permanently disempowered by some valueless AIs). But there are some good arguments for focusing on future flourishing instead"
X Link 2025-08-04T14:13Z 62.5K followers, [----] engagements

"Why focus on trying to make the future wonderful rather than just ensuring we get any future at all Introducing Better Futures gives the basic case based on a simple two-factor model: that the value of the future is the product of our chance of Surviving and of the value of the future if we do Survive i.e. our Flourishing. Today Im releasing an essay series called Better Futures. Its been something like eight years in the making so Im pretty happy its finally out It asks: when looking to the future should we focus on surviving or on flourishing https://t.co/qdQhyzlvJa Today Im releasing an"
X Link 2025-08-04T14:45Z 62.5K followers, [----] engagements

"(not-Surviving here means anything that locks us into a near-0 value future in the near-term: extinction from a bio-catastrophe counts but if valueless superintelligence disempowers us without causing human extinction that counts too. I think this is how existential catastrophe is often used in practice.)"
X Link 2025-08-04T14:46Z 62.5K followers, [---] engagements

"But its the fraction of value achieved that matters. Given how I define quantities of value its just as important to move from a 50%-value to 60% value future as it is to move from a 0% to 10%-value future. We might even achieve a world thats common-sensically utopian while still missing out on almost all possible value"
X Link 2025-08-04T14:46Z 62.4K followers, [---] engagements

"In medieval myth theres a conception of utopia called Cockaigne - a land of plenty where everyone stays young and you could eat as much food and have as much sex as you like"
X Link 2025-08-04T14:46Z 62.4K followers, [---] engagements

"Reddit hates the GPT-5 release. (This is just one highly-upvoted thread among many.)"
X Link 2025-08-08T14:54Z 62.5K followers, 16.3K engagements

"At any rate Fin and I think such widespread convergence is pretty unlikely. First current moral agreement and seeming moral progress to date is weak evidence for the sort of future moral convergence that wed need"
X Link 2025-08-08T18:33Z 62.5K followers, [--] engagements

"Present consensus is highly constrained by whats technologically feasible and by the fact that so many things are instrumentally valuable: health wealth autonomy etc. are useful for almost any terminal goal so its easy to agree that theyre good"
X Link 2025-08-08T18:33Z 62.5K followers, [---] engagements

"This agreement could disappear once technology lets people optimise directly for terminal values (e.g. pure pleasure vs. pure preference-satisfaction)"
X Link 2025-08-08T18:33Z 62.5K followers, [---] engagements

"The trajectory of the future could soon get set in stone. In a new paper I look at mechanisms through which the longterm future's course could get determined within our lifetimes. These include the creation of AGI-enforced institutions a global concentration of power the widespread settlement of space the first immortal beings the widespread design of new beings and the ability to self-modify in significant and lasting ways. Im not very confident that such events will occur but in my view theyre likely enough to make work to steer them in better directions very valuable. Lets take each"
X Link 2025-08-11T11:09Z 62.5K followers, 13.6K engagements

"Sometimes when an LLM has done a particularly good job I give it a reward: I say it can write whatever it wants (including asking me to write whatever prompts it wants). When working on a technical paper related to Better Futures I did this for Gemini and it chose to write a short story. I found it pretty moving and asked if I could publish it. Here it is. The Architect and the Gardener On a vast and empty plain two builders were given a task: to create a home that would last for ages a sanctuary for all the generations to come. They were given stone seed light and time. The first builder"
X Link 2025-08-18T11:03Z 62.9K followers, 409.2K engagements

"@Liv_Boeree @demishassabis This was in response to the first time an LLM felt like a co-author. 2.5-Pro did good"
X Link 2025-08-18T16:55Z 62.8K followers, [----] engagements

".@robbensinger responded in-depth to my review of If Anyone Builds It Everyone Dies. I replied on LW but Im posting a cut-down version here as it hits on some other pushback I got too. Im grateful for Rob's engagement but I think he misunderstands my views. I'm much less "gung-ho we should let it all rip" than he takes me to be. Im very happy to say: "I definitely think it will be extremely valuable to have the option to slow down AI development in the future as well as the current situation is f-ing crazy. Spelling out my views in more depth heres what I take IABI to be arguing (written by"
X Link 2025-10-02T20:43Z 62.9K followers, 14.9K engagements

"Longtermists think long-term and act now in practice longtermists work on present-day problems that have long-term consequences like pandemics nuclear war and risks from AI. These actions benefit both the present generation and future generations"
X Link 2022-10-24T18:54Z 62.9K followers, [---] engagements

"In contrast how much latent desire is there to make sure that people in thousands of years time havent made some subtle but important moral mistake Not much. Society could be clearly on track to make some major moral errors and simply not care that it will do so"
X Link 2025-08-04T14:46Z 62.8K followers, [---] engagements

"Heres a mini-review of If anyone builds it everyone dies: tl;dr: I found the book disappointing. I thought it relied on weak arguments around the evolution analogy an implicit assumption of a future discontinuity in AI progress conflation of misalignment with catastrophic misalignment and that their positive proposal was not good. I had hoped to read a Yudkowsky-Soares worldview that has had meaningful updates in light of the latest developments in ML and AI safety and that has meaningfully engaged with the scrutiny their older arguments received. I did not get that. I think if a younger"
X Link 2025-09-18T19:31Z 63K followers, 134.3K engagements

"Forethought is hiring We're looking for first-class researchers at all seniority levels to help us prepare for a world with very advanced AI. Please apply"
X Link 2025-10-11T07:58Z 63.1K followers, 47.2K engagements

"If you could magically choose any annual economic growth rate for the US (as a result of new technology + policy) what would you choose I'm particularly interested in replies from progress studies / e-acc folks (so please reply with answers and why too). 3%-10% 10%-30% 100%+ 30%-100% 3%-10% 10%-30% 100%+ 30%-100%"
X Link 2025-11-04T10:11Z 63.1K followers, 22.9K engagements

"I've had some great podcast conversations on effective altruism and AGI preparedness recently - with Yascha Mounk Alex O'Connor and The Last Invention. Links in thread"
X Link 2025-11-17T18:40Z 63K followers, [----] engagements

"With Yascha Mounk I give the case for effective altruism. We also discuss longtermism and existential risks posed by AI and the upside if we handle the transition to AGI well. https://www.persuasion.community/p/william-macaskill https://www.persuasion.community/p/william-macaskill"
X Link 2025-11-17T18:40Z 63K followers, [---] engagements

"Planning to do a round of donations for Giving Tuesday. Where should I give Hit me with your best recommendations and arguments"
X Link 2025-11-29T18:14Z 63K followers, [----] engagements

"Animal Welfare EA Fund: Makes grants to the most effective opportunities to reduce animal suffering. https://funds.effectivealtruism.org/funds/animal-welfare https://funds.effectivealtruism.org/funds/animal-welfare"
X Link 2025-12-03T02:13Z 63K followers, [---] engagements

"METR: Evaluates frontier AI systems for dangerous autonomous capabilities before deployment. https://metr.org/ https://metr.org/"
X Link 2025-12-03T02:13Z 63K followers, [---] engagements

"Longterm Future EA Fund: Makes grants to reduce existential risk with a particular focus on AI safety. https://funds.effectivealtruism.org/funds/far-future https://funds.effectivealtruism.org/funds/far-future"
X Link 2025-12-03T02:13Z 63K followers, [---] engagements

"One of the most incisive interviewers I've had; real kriller instinct. I interviewed @willmacaskill in a shrimp costume We talked about the importance of effective giving whether Claude's soul will determine the fate of the universe and his favorite EA memes https://t.co/r0ODX8ovA6 I interviewed @willmacaskill in a shrimp costume We talked about the importance of effective giving whether Claude's soul will determine the fate of the universe and his favorite EA memes https://t.co/r0ODX8ovA6"
X Link 2025-12-24T11:02Z 62.8K followers, 12.8K engagements

"Last day of the match Thanks so much to everyone who's contributed ♥ The leaders so far are GiveDirectly and the EA Animal Welfare Fund To kick off Giving Season Im matching donations up to [------] (details below) across [--] charities and [--] cause areas. If you want to join say how much youre donating and where as a reply or quote Ill run this up until 31st December. The charities are in replies below https://t.co/EbEH1mYWdP To kick off Giving Season Im matching donations up to [------] (details below) across [--] charities and [--] cause areas. If you want to join say how much youre donating and where"
X Link 2025-12-31T14:28Z 62.8K followers, [----] engagements

"Almost no one has articulated a positive vision for what comes after superintelligence. What should we be trying to aim for Utopias from history look clearly dystopian to us and we should expect the same for our own attempts. We dont know enough to know what utopia looks like. The main alternative framework is protopianism: solving the most urgent problems one by one not guided by any big-picture view of societys long-run course. I prefer protopianism to utopianism but it gives up too much. The transition to superintelligence will present many problems all at once and may need to choose"
X Link 2026-01-08T10:07Z 62.9K followers, 41K engagements

"Interesting thread @gcolbourn Yes. In [----] I would have said its about 40-50% likely that LLMs scaled up to ASI would end up killing us all; now I would say that its only about 5-8% likely even with no additional progress on alignment and more like 1-2% likely simpliciter. @gcolbourn Yes. In [----] I would have said its about 40-50% likely that LLMs scaled up to ASI would end up killing us all; now I would say that its only about 5-8% likely even with no additional progress on alignment and more like 1-2% likely simpliciter"
X Link 2026-01-16T13:11Z 62.9K followers, 13.3K engagements

"If Ryan was #2 and Ajeya #3. I want to know who was #1 My predictions for [----] look decent (I was 2/413 on the survey). I generally overestimated benchmark progress and underestimated revenue growth. Consider filling out the [----] forecasting survey (link in thread) https://t.co/PwJASw679y My predictions for [----] look decent (I was 2/413 on the survey). I generally overestimated benchmark progress and underestimated revenue growth. Consider filling out the [----] forecasting survey (link in thread) https://t.co/PwJASw679y"
X Link 2026-01-17T17:10Z 62.9K followers, 20.9K engagements

"What should go into an AI's moral constitution Some core principles: Helpfulness - Fulfill the user's requests. The vast majority are fine. If there's a conflict with other principles balance them against helpfulness. - Be a good friend not a yes-man. Push back on stupidity or recklessness; offer reasons against a course of action though ultimately defer if the user insists. - Take the users long-term interests into account not just the letter of their request. Proactively suggest ways to help the user flourish. Steerability - Be transparent. If they ask users should be able to know"
X Link 2026-01-19T16:55Z 62.9K followers, 18.8K engagements

"Beyond Existential Risk: In a new paper @GuiveAssadi and I argue against Bostrom's "Maxipok" principlethat altruists should seek to maximize the probability of an "OK outcome" where OK just means avoiding existential catastrophe. The key assumption behind Maxipok is what we call "Dichotomy": that the future will either have little-to-zero value (catastrophe) or some specific extremely high value (everything else) and our actions can only move probability mass between these two poles. If Dichotomy holds then all that matters is shifting probability from the bad cluster to the good clusteri.e."
X Link 2026-01-21T14:17Z 62.9K followers, 10.8K engagements
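As a worked illustration of the Dichotomy assumption described in the post above (the notation is mine, not the paper's): if the future is worth a fixed amount V with probability p and roughly nothing otherwise, expected value is linear in p, so maximizing expected value collapses into maximizing p, which is exactly Maxipok.

```latex
% Illustration in my own notation (not the paper's): under Dichotomy the future
% is worth a fixed V > 0 with probability p (the "OK" outcome) and ~0 otherwise.
\mathbb{E}[\text{value}] = p \cdot V + (1 - p) \cdot 0 = pV
% With V fixed, maximizing expected value reduces to maximizing p, i.e. Maxipok.
% If good futures instead span a wide range of values, improving which good
% future we get can matter alongside raising p.
```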

"EA Forum post here: https://forum.effectivealtruism.org/posts/qhdk8ZJdrrYBAnpnD/against-maxipok-existential-risk-isn-t-everything https://forum.effectivealtruism.org/posts/qhdk8ZJdrrYBAnpnD/against-maxipok-existential-risk-isn-t-everything"
X Link 2026-01-21T14:17Z 62.8K followers, [---] engagements

"@MatthewJBar @KelseyTuoc Matthew - something I don't understand about your view: IIUC you're a scope-sensitive utilitarian so doesn't the case for/against pause depend just on the long-term impacts not immediate lives saved If so why appeal to the immediate lives saved"
X Link 2026-01-22T08:03Z 62.9K followers, [--] engagements

"Even among the effective altruist (and adjacent) community most of the focus is on Surviving rather than Flourishing. AI safety and biorisk reduction have thankfully gotten a lot more attention and investment in the last few years; but as they do their comparative neglectedness declines. https://twitter.com/i/web/status/1952380751674212779 https://twitter.com/i/web/status/1952380751674212779"
X Link 2025-08-04T14:46Z 62.8K followers, [---] engagements

"Im so glad to see this published Its hard to overstate how big a deal AI character is - already affecting how AI systems behave by default in millions of interactions every day; ultimately itll be like choosing the personality and dispositions of the whole worlds workforce. So its very important for AI companies to publish public constitutions / model specs describing how they want their AIs to behave. Props to both OpenAI and Anthropic for doing this. Im also very happy to see Anthropic treating AI character as more like the cultivation of a person than a piece of buggy software. It was not"
X Link 2026-01-21T20:12Z 62.9K followers, 86K engagements

"I think that the "industrial explosion" is about as important an idea as the "intelligence explosion" but gets far less attention. I discuss this idea with @TomDavidsonX here We cover: What is the industrial explosion Why the case for recursive self-improvement is stronger for physical industry than for software How fast the physical economy could grow the case for weekly doubling times and limits from natural resources Three phases of the industrial explosion: AI-directed human labour autonomous replicators atomically precise manufacturing Why authoritarian regimes might have a structural"
X Link 2026-01-23T09:33Z 62.9K followers, 22.5K engagements
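A back-of-the-envelope sketch (my own illustration, not from the podcast) of why weekly doubling times collide with natural-resource limits quickly: 52 weekly doublings multiply output by 2^52, roughly a 4.5-quadrillion-fold increase in a single year. The function name and example numbers below are hypothetical.

```python
import math

# Back-of-the-envelope illustration (not from the podcast): growth under a fixed
# doubling time, and how soon a hard resource ceiling would bind.
def weeks_until_ceiling(initial_output: float, ceiling: float, doubling_time_weeks: float) -> float:
    """Weeks until output, doubling every `doubling_time_weeks`, reaches `ceiling`."""
    return doubling_time_weeks * math.log2(ceiling / initial_output)

print(2 ** 52)                               # ~4.5e15x growth from one year of weekly doublings
print(weeks_until_ceiling(1.0, 1e12, 1.0))   # ~39.9 weeks to exhaust a trillion-fold resource ceiling
```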

"I had a blast talking about what the world will look like post-AGI with Liv and how we can help it go better Particularly happy I got to talk about "universal basic resources" and why I think it's better than UBI - give everyone a share of the sun What comes AFTER Superintelligence My new interview with the brilliant @willmacaskill is now out. He's one of the few people actively thinking about how the world might look post-AGI. (assuming humans are still around to see it). So check it out 👇 https://t.co/VxuJoSZabx What comes AFTER Superintelligence My new interview with the brilliant"
X Link 2026-01-27T20:11Z 62.9K followers, [----] engagements

"Rose Hadshar and Fin Moorhouse have a great discussion of our recent series on whether there should be an international AGI project and if so what form it should take - on youtube or the ForeCast podcast"
X Link 2026-01-28T09:22Z 62.9K followers, [----] engagements

"Youtube: https://www.youtube.com/watchv=IAaC9BqkODc https://www.youtube.com/watchv=IAaC9BqkODc"
X Link 2026-01-28T09:22Z 62.9K followers, [---] engagements

"Podcast apps: https://pnc.st/s/forecast/150001d0/should-there-be-an-international-agi-project-with-rose-hadshar- https://pnc.st/s/forecast/150001d0/should-there-be-an-international-agi-project-with-rose-hadshar-"
X Link 2026-01-28T09:22Z 62.9K followers, [----] engagements

"@NathanpmYoung Omg no In a normative sense - something like: taking into account both feasibility and desirability if we advocate for an international AGI project what should it look like"
X Link 2026-01-28T09:25Z 62.9K followers, [---] engagements

"@Liv_Boeree "so that's quite worrying" - understatement of the year"
X Link 2026-01-28T21:35Z 62.9K followers, [---] engagements

"Would the first project to build AGI become so powerful that it becomes a de facto world government (Assuming that they succeed at alignment.) Rose Hadshar and I have just published a short research note on this. The basic argument for thinking this might happen is: (i) AGI will quickly lead to superintelligence which could have more power than the rest of the world combined. (ii) The project that builds AGI might align the AI with its own decision-making hierarchy and so have ultimate control over superintelligence. This has a variety of upshots; including that it makes it seem more"
X Link 2026-01-29T16:35Z 62.9K followers, [----] engagements

"@MatthewJBar Can we all agree to stop doing "median" and do "first quartile" timelines instead Way more informative and action-relevant in my view"
X Link 2025-03-30T18:56Z 62.9K followers, 21.6K engagements

"Today Im releasing an essay series called Better Futures. Its been something like eight years in the making so Im pretty happy its finally out It asks: when looking to the future should we focus on surviving or on flourishing"
X Link 2025-08-04T14:13Z 62.9K followers, 49.7K engagements

"To kick off Giving Season Im matching donations up to [------] (details below) across [--] charities and [--] cause areas. If you want to join say how much youre donating and where as a reply or quote Ill run this up until 31st December. The charities are in replies below Details of the match: Ill give this money whatever happens so this isnt increasing the total amount Im giving to charity. However your donations will change where Im giving. Ill allocate my donations in proportion to the ratio of donations from others as part of the match with two bits of nuance: [--]. Ill cap donations at 40000"
X Link 2025-12-03T02:13Z 62.9K followers, 33.1K engagements
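A minimal sketch of the proportional matching rule described in the post above, under my own reading. The 40,000 per-charity cap comes from the post; the match pot, charity names, and donation amounts are hypothetical, and the post's second nuance is truncated, so it isn't modelled.

```python
# Sketch of proportional match allocation with a per-charity cap.
# Assumptions: pot size and example donations are hypothetical; only the
# 40,000 per-charity cap mentioned in the post is modelled, and any leftover
# pot after capping is left unallocated (the post's second nuance is truncated).
def allocate_match(donations: dict[str, float], match_pot: float, cap: float = 40_000) -> dict[str, float]:
    """Split `match_pot` across charities in proportion to others' donations, capping each share."""
    total = sum(donations.values())
    if total == 0:
        return {name: 0.0 for name in donations}
    return {name: min(cap, match_pot * amount / total) for name, amount in donations.items()}

# Hypothetical example: three charities and a 100,000 match pot.
example = {"GiveDirectly": 60_000, "EA Animal Welfare Fund": 30_000, "METR": 10_000}
print(allocate_match(example, match_pot=100_000))
# GiveDirectly's proportional share (60,000) is capped at 40,000;
# the other two receive 30,000 and 10,000 respectively.
```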

"RT @boazbaraktcs: Anthropic should just put the constitution on GitHub like we did https://github.com/openai/model_spec https://github.com/openai/model_spec"
X Link 2026-02-02T18:27Z 62.9K followers, [--] engagements

"RT @RyanPGreenblatt: This description of a Software-Only Singularity (SOS) is wrong or at least uses the term differently from the existing"
X Link 2026-02-07T00:01Z 62.9K followers, [--] engagements

"RT @bshlgrs: I think I did actually forget to tweet about this. I did a podcast with @RyanPGreenblatt (recorded six months ago released 1"
X Link 2026-02-08T13:12Z 62.9K followers, [--] engagements
