# @ryan_kidd44 Ryan Kidd

Ryan Kidd posts on X about ai, open ai, model, agi the most. They currently have [-----] followers and [---] posts still getting attention that total [---] engagements in the last [--] hours.

### Engagements: [---] [#](/creator/twitter::1102399276334759936/interactions)

- [--] Week [------] +69%
- [--] Month [------] +167%
- [--] Months [---------] -73%
- [--] Year [---------] +649%

### Mentions: [--] [#](/creator/twitter::1102399276334759936/posts_active)

- [--] Week [--] +10%
- [--] Month [--] +225%
- [--] Months [--] +27%
- [--] Year [--] +39%

### Followers: [-----] [#](/creator/twitter::1102399276334759936/followers)

- [--] Week [-----] +0.31%
- [--] Month [-----] +6.90%
- [--] Months [-----] +52%
- [--] Year [-----] +101%

### CreatorRank: [---------] [#](/creator/twitter::1102399276334759936/influencer_rank)

### Social Influence

**Social category influence:** [technology brands](/list/technology-brands) 15.89%, [travel destinations](/list/travel-destinations) 5.61%, [countries](/list/countries) 1.87%, [stocks](/list/stocks) 0.93%, [finance](/list/finance) 0.93%, [events](/list/events) 0.93%, [social networks](/list/social-networks) 0.93%

**Social topic influence:** [ai](/topic/ai) 57.01%, [open ai](/topic/open-ai) 12.15%, [model](/topic/model) 4.67%, [agi](/topic/agi) 4.67%, [anthropic](/topic/anthropic) 3.74%, [community](/topic/community) 3.74%, [if you](/topic/if-you) 2.8%, [grow](/topic/grow) 2.8%, [we are](/topic/we-are) 2.8%, [sama](/topic/sama) 2.8%

**Top accounts mentioned or mentioned by:** @matsprogram, @anthropicai, @ethanjperez, @googledeepmind, @openai, @neelnanda5, @bshlgrs, @aisecurityinst, @randcorporation, @redwoodai, @metrevals, @apolloaievals, @owainevansuk, @evanhub, @stephenlcasper, @richardmcngo, @atlaai, @timaeusresearch, @leaplabs, @theoremlabs

**Top assets mentioned:** [Alphabet Inc Class A (GOOGL)](/topic/$googl)

### Top Social Posts

Top posts by engagements in the last [--] hours:

- "If you're hiring in AI alignment interpretability governance or security MATS can help We maintain a database of alumni looking for work and can make targeted headhunting recommendations" [X Link](https://x.com/anyuser/status/2017056474372198672) 2026-01-30T02:05Z [----] followers, [----] engagements
- "What is most blocking talent from contributing to AI safety & security Rare skills proof of competence and personal connections" [X Link](https://x.com/ryan_kidd44/status/2023108374406332803) 2026-02-15T18:53Z [----] followers, [----] engagements
- "Becoming an effective AI safety researcher typically requires experienced mentorship peer feedback and many repetitions of working on real problems particularly for building research taste. Self study is often not enough and experienced mentors are a bottleneck" [X Link](https://x.com/ryan_kidd44/status/2023109147726840173) 2026-02-15T18:56Z [----] followers, [---] engagements
- "@beyarkay Here is a plot of annual citations from the only three AI safety nonprofits with Google Scholar pages. https://scholar.google.com/citations?user=VgJaUK4AAAAJ&hl=en" [X Link](https://x.com/ryan_kidd44/status/2016915499347841238) 2026-01-29T16:45Z [----] followers, [---] engagements
- "I had a great time chatting with Jacob Haines about AI safety field-building and emerging talent needs https://kairos.fm/intoaisafety/e027/" [X Link](https://x.com/ryan_kidd44/status/2018828801158025272) 2026-02-03T23:27Z [----] followers, [---] engagements
- "What proportion of ML academics are interested in AI safety I analyzed the research interests of the [---] Action Editors on TMLR Editorial Board. 4% are interested in alignment or safety; 10% if you include interp evals trust or security. https://jmlr.org/tmlr/editorial-board.html" [X Link](https://x.com/ryan_kidd44/status/2020190277307347016) 2026-02-07T17:37Z [----] followers, [----] engagements
- "@ExcelEthicsAI Actually it's pretty cheap to say I'm interested in AI safety but time-expensive to actually publish on the subject. I am not convinced these statistics reflect publication patterns rather than sentiment" [X Link](https://x.com/ryan_kidd44/status/2020192666580054203) 2026-02-07T17:47Z [----] followers, [--] engagements
- "AI safety field-building in Australia should accelerate. OpenAI and Anthropic opened Sydney offices OpenAI started building a $4.6B datacenter in Sydney and the country is a close US/UK ally. https://www.lesswrong.com/posts/tPjAgWpsQrveFECWP/ryan-kidd-s-shortform?commentId=aPAtazuRt2np2zn6n" [X Link](https://x.com/ryan_kidd44/status/2020292079902159316) 2026-02-08T00:22Z [----] followers, [----] engagements
- "MATS [----] applications are open Launch your career in AI alignment governance and security with our 12-week research program. MATS provides field-leading research mentorship funding Berkeley & London offices housing and talks/workshops with AI experts" [X Link](https://x.com/ryan_kidd44/status/2001005525811769454) 2025-12-16T19:04Z [----] followers, 2.2M engagements
- "Aspiring researchers need a portfolio of research outputs references from credible supervisors and credentials that signal competence to potential employers and funders (e.g. MATS and BlueDot on their CV). Without these even talented individuals miss opportunities" [X Link](https://x.com/ryan_kidd44/status/2023109468230394301) 2026-02-15T18:57Z [----] followers, [---] engagements
- "Precipissed: Feeling angry about civilizational inadequacy towards mitigating x-risk" [X Link](https://x.com/ryan_kidd44/status/1603483541131497472) 2022-12-15T20:14Z [----] followers, [---] engagements
- "What a week Both Anthropic and DeepMind shed some light on their AI alignment plans after OpenAI shared their plan in Aug [----]. * Anthropic: https://www.anthropic.com/index/core-views-on-ai-safety * DeepMind: https://www.lesswrong.com/./4iEpGXb./p/a9SPcZ6GXAg9cNKdi * OpenAI: https://openai.com/blog/our-approach-to-alignment-research" [X Link](https://x.com/ryan_kidd44/status/1633954828824252417) 2023-03-09T22:16Z [----] followers, [----] engagements
- "An enigma a shoggoth and two ex-physicists walk into a party" [X Link](https://x.com/ryan_kidd44/status/1634288077421318144) 2023-03-10T20:20Z [----] followers, [----] engagements
- "Some takeaways from a recent conference that discussed AI safety:" [X Link](https://x.com/ryan_kidd44/status/1634289112390979584) 2023-03-10T20:24Z [----] followers, [----] engagements
- "Some reasons you shouldn't assume civilization is adequate at solving AI alignment by default:" [X Link](https://x.com/ryan_kidd44/status/1635048371852562432) 2023-03-12T22:41Z [----] followers, [----] engagements
- "AI safety research that reduces the risk of non-catastrophic accidents or misuse (e.g. hate speech) makes commercial AI more viable driving AI hype and capabilities research. While important this research might fail to prevent genuinely catastrophic "black swan" risk" [X Link](https://x.com/anyuser/status/1635697142097379349) 2023-03-14T17:39Z [----] followers, [----] engagements
- "Summer applications just launched Mentors include AI safety researchers from @AnthropicAI @OpenAI @deepmind @MIRIBerkeley @CHAI_Berkeley @cais @FHIOxford and more https://www.serimats.org/" [X Link](https://x.com/anyuser/status/1644817241664536578) 2023-04-08T21:39Z [----] followers, 34K engagements
- "If we don't slow down generative AI prepare for: - Foreign states to steal base models they are years from building and fine-tune them as cyber weapons; - Mass voter manipulation and fake news without adequate safeguards. #pauseai" [X Link](https://x.com/anyuser/status/1660317499928494082) 2023-05-21T16:11Z [----] followers, [----] engagements
- "Now hiring ops generalists community manager and research coaches to grow AI safety https://tinyurl.com/2v7pad7u" [X Link](https://x.com/anyuser/status/1661086871081070596) 2023-05-23T19:09Z [----] followers, [----] engagements
- "@mealreplacer Good evening Robert" [X Link](https://x.com/anyuser/status/1661118131236904995) 2023-05-23T21:13Z [----] followers, [---] engagements
- "So excited to see @apolloaisafety launch https://x.com/apolloaisafety/status/1663582940658270210 Hi we are Apollo Research-a new AI evals research organization. Our research agenda is focused on interpretability and behavioral model evaluations. We intend to apply our findings and cooperate with AI labs to prevent the deployment of deceptive AIs https://t.co/lcvyGNJg3w" [X Link](https://x.com/anyuser/status/1663595671935385616) 2023-05-30T17:18Z [----] followers, [---] engagements
- "I think a lot of mechanistic interpretability research should find a home in academic labs because: [--]. Mech interp isn't very expensive; [--]. Related academic research (e.g. sparsity pruning) is strong; [--]. Mech interp should grow; [--]. Most academic safety research is less useful" [X Link](https://x.com/anyuser/status/1670189227626356737) 2023-06-17T21:58Z [----] followers, [----] engagements
- "AI alignment fieldbuilders often advocate a "hits-based" approach due to the "long tailed distribution of individual impact." But if IQ is normally distributed why is impact long-tailed My hypothesis "Luck": e.g. high-quality mentorship accessible problem framings financial freedom etc" [X Link](https://x.com/anyuser/status/1676317440412876804) 2023-07-04T19:49Z [----] followers, [----] engagements
- ""In one hour the chatbots suggested four potential pandemic pathogens explained how they can be generated from synthetic DNA using reverse genetics supplied the names of DNA synthesis companies unlikely to screen orders." https://arxiv.org/abs/2306.03809" [X Link](https://x.com/anyuser/status/1681484437979279362) 2023-07-19T02:01Z [----] followers, [----] engagements
- "So proud of all of these MATS scholars and their projects https://drive.google.com/file/d/1HA5RUCM15-6COISmdkGF2w_JmCMzlQNy/view?usp=drivesdk" [X Link](https://x.com/ryan_kidd44/status/1698758122230419839) 2023-09-04T18:01Z [----] followers, [----] engagements
- "Currently accepting AI safety research mentors for a Winter program; message me if you are interested Past mentors include: http://serimats.org/mentors" [X Link](https://x.com/ryan_kidd44/status/1699197088754765914) 2023-09-05T23:05Z [----] followers, [----] engagements
- "Reasons to be optimistic about AI x-safety: [--]. The public cares more than expected; [--]. Governments aren't ignoring the problem; [--]. LMs might be much more interpretable than end-to-end RL; [--]. Instructed LMs might generalize better than expected" [X Link](https://x.com/ryan_kidd44/status/1714413321590550585) 2023-10-17T22:49Z [----] followers, [----] engagements
- "Reasons to be pessimistic about AI x-safety: [--]. We might have less time than we thought; [--]. The current best plan relies on big tech displaying a vastly better security mindset than usual; [--]. There seems to be a shortage of new good ideas for AI alignment; [--]. A few actors (e.g. SBF) might have harmed the public image of orgs/movements pushing for AI x-safety" [X Link](https://x.com/anyuser/status/1714437105320173702) 2023-10-18T00:23Z [----] followers, [---] engagements
- "The MATS Winter 2023-24 Cohort has launched Apply by Nov [--] to help advance AI safety. (Note: Neel Nanda's applications close early on Nov 10) https://www.matsprogram.org/" [X Link](https://x.com/anyuser/status/1715538860980179274) 2023-10-21T01:21Z [----] followers, 20.5K engagements
- "Last week to apply to @NeelNanda5's mechanistic interpretability MATS stream Applications close Nov [--] 11:59 pm PT. http://matsprogram.org/interpretability" [X Link](https://x.com/anyuser/status/1720943108316135668) 2023-11-04T23:16Z [----] followers, [----] engagements
- "I did a podcast Thanks again for having me on @soroushjp; it was a lot of fun https://x.com/soroushjp/status/1722336164793962603?s=20 EP10 AGI Show w/ @ryan_kidd44 out We talk ML Alignment & Theory Scholars (MATS) program that accelerates people into AI safety research roles via mentorship seminars & connections. If you're interested in technical AI research for catastrophic/x-risk this ep is for you https://t.co/qPTscUNdId" [X Link](https://x.com/ryan_kidd44/status/1722393991096983825) 2023-11-08T23:21Z [----] followers, [----] engagements
- "If OpenAI board fired @sama for straining charter but market forces put him back then Moloch wins" [X Link](https://x.com/anyuser/status/1726068314311627073) 2023-11-19T02:42Z [----] followers, 72.2K engagements
- "The OpenAI plan seems to have been: Pragmatism: Lead the AI pack from the front instead of someone worse. Caution: Keep risk within tolerance or pull the plug. If this is "pulling the plug" and it fails I am pessimistic about all such plans" [X Link](https://x.com/anyuser/status/1726074227776950284) 2023-11-19T03:05Z [----] followers, [----] engagements
- "@sama "Leading the pack from the front" likely requires selling shares to Moloch. Exercising caution might require buying them back (hard) or dropping out. If AGI is imminent this might not matter but I'm not sure it is" [X Link](https://x.com/ryan_kidd44/status/1726082042138595791) 2023-11-19T03:36Z [----] followers, [----] engagements
- "I don't know @sama but I get the sense that: - Sam's love for OpenAI employees is sincere; - Sam cares about AI x-risk; - Sam thinks fast-deployment/slow-takeoff is optimally safe; - Sam would subvert the board for The Greater Good" [X Link](https://x.com/anyuser/status/1726364697862283278) 2023-11-19T22:19Z [----] followers, [----] engagements
- "@MATSprogram is accepting applications for mentors in our Summer [----] Program. Please DM me if interested In addition to technical AI safety researchers we are interested in supporting AI gov infosec and natsec mentors" [X Link](https://x.com/anyuser/status/1751734341288571094) 2024-01-28T22:29Z [----] followers, [----] engagements
- "Another @MATSprogram concluded and another [--] scholars graduated on Fri Our Scholar Symposium featured [--] talks on AI interpretability model evals + demos agent foundations control/red-teaming scalable oversight and more" [X Link](https://x.com/anyuser/status/1769481336690020818) 2024-03-17T21:50Z [----] followers, [----] engagements
- "Excited to present at the Technical AI Safety Conference in Tokyo https://tais2024.cc/" [X Link](https://x.com/ryan_kidd44/status/1772745449461735689) 2024-03-26T22:00Z [----] followers, [----] engagements
- "Over [----] applicants to @MATSprogram; what a milestone" [X Link](https://x.com/ryan_kidd44/status/1776487781549605363) 2024-04-06T05:51Z [----] followers, [----] engagements
- "Last day to apply to @MATSprogram to help advance beneficial AI Last cohort scholars rated the program 9.2/10 on average (NPS: +74) and mentors advocated for scholars' research continuing at 8.1/10 on average (NPS: +25). Come see why https://airtable.com/appPxJ0QMqR7TElYU/pagRPwHQtcN8L0vIE/form" [X Link](https://x.com/anyuser/status/1776834080505786558) 2024-04-07T04:47Z [----] followers, [----] engagements
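A note on the NPS figures quoted in the post above: Net Promoter Score is conventionally derived from 0-10 ratings as the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). A minimal sketch of that convention, assuming per-respondent ratings are available as a plain list (the function name and sample data are hypothetical, not from the post):

```python
def nps(ratings):
    """Net Promoter Score from 0-10 ratings:
    % promoters (9-10) minus % detractors (0-6)."""
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Hypothetical sample: 7 promoters, 2 passives, 1 detractor out of 10
print(nps([10, 10, 9, 9, 9, 8, 8, 10, 9, 6]))  # +60
```

On this convention a 9.2/10 average dominated by 9s and 10s plausibly yields an NPS around +74, though the post does not state exactly how its figure was computed.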
- "Three AI safety boys in Tokyo" [X Link](https://x.com/anyuser/status/1777294117665640712) 2024-04-08T11:15Z [----] followers, [----] engagements
- "A snapshot of the AI safety community's research interests based on @MATSprogram applications. Note that MATS has historically had a technical AI safety focus and AI gov/policy + infosec interest might be underrepresented here" [X Link](https://x.com/anyuser/status/1777621523081166861) 2024-04-09T08:56Z [----] followers, [----] engagements
- "@MATSprogram has [----] summer applicants and enough funding to accept 2.5% (ideally 7%). Accepting donations via http://manifund.org/projects/mats-funding and http://existence.org at $24k/scholar. Help us support mentors like @NeelNanda5 @OwainEvans_UK @EthanJPerez @EvanHub and more" [X Link](https://x.com/anyuser/status/1784374612555641296) 2024-04-28T00:10Z [----] followers, [----] engagements
- "This deserves way more attention. Zach built the best frontier AI lab safety scorecard on the internet evaluating @MicrosoftAI @GoogleDeepMind @AIatMeta @GoogleDeepMind and @AnthropicAI I made an AI safety scorecard: I collected actions for frontier AI labs to avert extreme risks from AI then evaluated particular labs accordingly. https://t.co/4NsbT47BoL" [X Link](https://x.com/anyuser/status/1785773101424562521) 2024-05-01T20:47Z [----] followers, 11.1K engagements
- "Good night Roberts" [X Link](https://x.com/ryan_kidd44/status/1789128319298101457) 2024-05-11T03:00Z [----] followers, [---] engagements
- "The MATS Winter 2023-24 Retrospective is published https://www.lesswrong.com/posts/Z87fSrxQb4yLXKcTk/mats-winter-2023-24-retrospective" [X Link](https://x.com/ryan_kidd44/status/1789401478223790137) 2024-05-11T21:05Z [----] followers, [----] engagements
- "New MATS post on the current opportunities in technical AI safety as informed by [--] interviews with AI safety field leaders https://www.lesswrong.com/posts/QzQQvGJYDeaDE4Cfg/talent-needs-in-technical-ai-safety" [X Link](https://x.com/anyuser/status/1794196752595407144) 2024-05-25T02:40Z [----] followers, [---] engagements
- "I'm a @manifund Regrantor. I added some requests for funding proposals here: https://www.lesswrong.com/posts/tPjAgWpsQrveFECWP/ryan-kidd-s-shortform?commentId=uWwdHtsuLDDSJ9h9N" [X Link](https://x.com/anyuser/status/1794486441256599819) 2024-05-25T21:51Z [----] followers, [----] engagements
- "If income is lognormally distributed and happiness is logarithmic in wealth then happiness is normally distributed in the US" [X Link](https://x.com/ryan_kidd44/status/1802112898510262328) 2024-06-15T22:56Z [----] followers, [----] engagements
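The lognormal-income post above is a one-line change of variables; as a sketch of the standard math behind it (definitions assumed, not taken from the post):

```latex
% By definition, income W is lognormal iff \log W is normal:
W \sim \mathrm{Lognormal}(\mu, \sigma^2) \iff \log W \sim \mathcal{N}(\mu, \sigma^2).
% So under logarithmic utility of wealth, H := \log W, happiness is normal:
H \sim \mathcal{N}(\mu, \sigma^2).
```

The empirical premise that US incomes are lognormal is itself only an approximation (the upper tail is often modeled as Pareto instead), so the conclusion holds at best approximately.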
- "I recently gave a talk to the AI Alignment Network in Japan about our work at @MATSprogram. Recording of ALIGN Webinar #4 with Dr. Ryan Kidd is now available Here @ryan_kidd44 provided a very accessible explanation of what AGI risks are what countermeasures are needed for different scenarios how MATS is addressing its talent needs. https://t.co/QyTsEJF8gE" [X Link](https://x.com/anyuser/status/1804958911143051341) 2024-06-23T19:25Z [----] followers, [----] engagements
- "Who are the top ethicists working on: - What values to instill in artificial superintelligence - How should AI-generated wealth be distributed - What should people do in a post-labor society - What level of surveillance/restriction is justified by the Unilateralist's Curse" [X Link](https://x.com/ryan_kidd44/status/1809713187484479618) 2024-07-06T22:17Z [----] followers, [----] engagements
- "Also: - What moral personhood will digital minds have - How should nations share decision making power regarding world-transforming and Mercury-disassembling technology" [X Link](https://x.com/anyuser/status/1809746460415795257) 2024-07-07T00:29Z [----] followers, [----] engagements
- "I'd love to support this research with funding http://Manifund.org" [X Link](https://x.com/ryan_kidd44/status/1809747481670086917) 2024-07-07T00:33Z [----] followers, [---] engagements
- "@MATSprogram is now hiring for a Research Manager role based in London Come help us grow the AI safety research field :) https://www.matsprogram.org/careers" [X Link](https://x.com/anyuser/status/1810756664389423342) 2024-07-09T19:23Z [----] followers, [----] engagements
- "Ever wanted to contribute to technical AI safety but haven't built a transformer Apply to The ML for AI safety bootcamp will run Sep 2-Oct [--] out of Applications close Jul [--]. http://SafeAI.org.uk http://ARENA.education" [X Link](https://x.com/anyuser/status/1810760612672327923) 2024-07-09T19:39Z [----] followers, [---] engagements
- "Applications to @NeelNanda5's mech interp @MATSprogram are now open Apply by Aug [--]. https://forms.matsprogram.org/general-application Are you excited about @ch402-style mechanistic interpretability research I'm looking to mentor scholars via MATS - apply by Aug [--] I'm impressed by the work from past scholars and love mentoring promising talent. You don't need to be in a big lab to do good mech interp work" [X Link](https://x.com/ryan_kidd44/status/1815471824035348793) 2024-07-22T19:39Z [----] followers, [----] engagements
- "First AI alignment paper to win ICML Best Paper So happy to have helped support this work at @MATSprogram :) Well done @McHughes288 @danvalentine256 @sleight_henry @akbirkhan @EthanJPerez @sleepinyourhat and coauthors excited to announce this received an ICML Best Paper Award come see our talk at 10:30 tomorrow https://t.co/PCH1q0f0Po" [X Link](https://x.com/anyuser/status/1815845406645039554) 2024-07-23T20:24Z [----] followers, 11.4K engagements
- "Saying that open weight AI models are the path to secure AI is like saying that sharing my psychological vulnerabilities with the world is the path to robust mental health" [X Link](https://x.com/anyuser/status/1821658217669390722) 2024-08-08T21:22Z [----] followers, [----] engagements
- "@MATSprogram Winter 2024-25 mentors include researchers from @AnthropicAI @GoogleDeepMind @AISafetyInst @CNASdc @CHAI_Berkeley @AlgAlignMIT @farairesearch @cais @apolloaisafety @kasl_ai @MIRIBerkeley and more Apply by Oct [--]. https://www.matsprogram.org/mentors http://redwoodresearch.org" [X Link](https://x.com/ryan_kidd44/status/1834346185341239713) 2024-09-12T21:39Z [----] followers, 35.6K engagements
- "MATS mentors for Winter 2024-25 include @bshlgrs @EthanJPerez @NeelNanda5 @OwainEvans_UK @eli_lifland @DKokotajlo67142 @EvanHub @StephenLCasper @FabienDRoger @seb_far @Turn_Trout @davlindner @fiiiiiist @MrinankSharma @DavidSKrueger @leedsharkey @SamuelAlbanie and more" [X Link](https://x.com/ryan_kidd44/status/1834350901064335543) 2024-09-12T21:58Z [----] followers, [---] engagements
- "MATS Winter 2024-25 applications close Oct [--] Come and kick-start your AI safety research career. Mentors include @OwainEvans_UK @bshlgrs @EvanHub @StephenLCasper and more https://matsprogram.org" [X Link](https://x.com/anyuser/status/1834657755090895115) 2024-09-13T18:17Z [----] followers, [----] engagements
- "I just left a comment on @pibbssai's @manifund grant request (which I funded $25k) that AI safety people might find interesting. PIBBSS needs more funding https://manifund.org/projects/pibbss---affiliate-program-funding-6-months-6-affiliates-or-more?tab=comments#7aa374d7-c42a-4519-9be2-08ccc03fed62" [X Link](https://x.com/anyuser/status/1835398814523695396) 2024-09-15T19:22Z [----] followers, [---] engagements
- "@MATSprogram Alumni Impact Analysis published 78% of alumni are still working on AI alignment/control and 7% are working on AI capabilities. 68% have published alignment research https://www.lesswrong.com/posts/jeBkx6agMuBCQW94C/mats-alumni-impact-analysis" [X Link](https://x.com/ryan_kidd44/status/1841200414756418020) 2024-10-01T19:36Z [----] followers, [----] engagements
- "e/acc AGI realist humanist; pick two Nick Land says nothing human makes it out of the near-future and e/acc while being good PR is deluding itself to think otherwise https://t.co/CkGKUebhye" [X Link](https://x.com/anyuser/status/1847692828656435618) 2024-10-19T17:34Z [----] followers, [----] engagements
- "Big support to @austinc3301 and his new project AI safety student groups and entry-level internships are very important to the @MATSprogram pipeline (and all of AI safety). On more personal news I'm now the Co-Director of Kairos a new AI safety fieldbuilding org https://t.co/GAsLMmfSlY" [X Link](https://x.com/ryan_kidd44/status/1849952526839447644) 2024-10-25T23:13Z [----] followers, [----] engagements
- "Excited to have supported this research @MATSprogram New paper on evaluating instrumental self-reasoning ability in frontier models We propose a suite of agentic tasks that are more diverse than prior work and give us a more representative picture of how good models are at eg. self-modification and embedded reasoning https://t.co/EM8X97MeBo" [X Link](https://x.com/anyuser/status/1865113865065013424) 2024-12-06T19:19Z [----] followers, [---] engagements
- "OpenAI's latest model o3 scored: - [----] on Codeforces making it the 175th best competitive programmer on Earth - 25% on FrontierMath where "each problem demands hours of work from expert mathematicians" - 88% on GPQA where 70% represents PhD-level science knowledge - 88% on ARC-AGI where the average Mechanical Turk human worker scores 75% on hard visual reasoning problems" [X Link](https://x.com/anyuser/status/1870304098417066016) 2024-12-21T03:03Z [----] followers, 27.3K engagements
- "If the current race towards AGI worries you come work on AI safety The field is highly impactful talent constrained and filled with low-hanging fruit. https://80000hours.org/problem-profiles/artificial-intelligence/#what-can-you-do-concretely-to-help" [X Link](https://x.com/ryan_kidd44/status/1870605083970515455) 2024-12-21T22:59Z [----] followers, [----] engagements
- "High-inference cost models like o3 might be a boon for AI safety: - More reasoning is done in chain-of-thought which is inspectable - Mech interp is more promising as base models will be smaller - Running frontier models will be more expensive reducing deployment overhang" [X Link](https://x.com/anyuser/status/1871303826260865433) 2024-12-23T21:16Z [----] followers, [----] engagements
- "Existential hope Incoming Commerce Secretary Lutnick on AI export controls at confirmation hearing: "AI chip smuggling has got to end" We need to "stop giving them our tools so they can compete with us" "I'm thrilled to empower BIS" We're so incredibly back https://t.co/WVJYHx8IIt" [X Link](https://x.com/anyuser/status/1885050631326752966) 2025-01-30T19:41Z [----] followers, [---] engagements
- "The world is sleeping Survey of [---] experts by World Economic Forum reveals they have bizarre views about the biggest global risks. Most severe [--] year risk is extreme weather events https://t.co/0X16L0wFS1" [X Link](https://x.com/anyuser/status/1885407492324417649) 2025-01-31T19:19Z [----] followers, 16.1K engagements
- "The majority of experts think AI catastrophic risk is worryingly high. Don't "Don't Look Up" btw i think it's *totally possible* that we're all just wrong about near-term x-risk. like through a combination of selection effects drinking the koolaid and mutually reinforcing each other's views we've worked ourselves into a panic over an implausible scenario (read to end)" [X Link](https://x.com/anyuser/status/1885478361901985796) 2025-02-01T00:00Z [----] followers, [---] engagements
- "Happy to have supported this research @MATSprogram AI Governance should work with markets not against them Excited to finally share a preprint that @FranklinMatija @rupal15081 & I have been working on. https://t.co/5XDxVCQvaN" [X Link](https://x.com/anyuser/status/1887207707217961232) 2025-02-05T18:32Z [----] followers, [---] engagements
- "LISA (@LondonSafeAI) is hiring a CEO The LISA office is home to @apolloaisafety @BlueDotImpact @MATSprogram extension and other top-tier AI safety projects Apps due Feb [--]. https://london-safe-ai.notion.site/chiefexecutiveofficer" [X Link](https://x.com/anyuser/status/1888672173747060782) 2025-02-09T19:31Z [----] followers, [----] engagements
- "The Paris AI Summit was a staggering failure. Entropy - [--] Humanity - [--] https://www.transformernews.ai/p/paris-ai-summit-failure" [X Link](https://x.com/anyuser/status/1889498041402663332) 2025-02-12T02:13Z [----] followers, 12.4K engagements
- "Another excellent (and disturbing) paper from Owain in collaboration with @MATSprogram Surprising new results: We finetuned GPT4o on a narrow task of writing insecure code without warning the user. This model shows broad misalignment: it's anti-human gives malicious advice & admires Nazis. This is *emergent misalignment* & we cannot fully explain it https://t.co/kAgKNtRTOn" [X Link](https://x.com/anyuser/status/1894466555175633111) 2025-02-25T19:16Z [----] followers, [---] engagements
- "Very happy to have supported this research at @MATSprogram. Applications for Summer [----] launching soon New Anthropic research: Auditing Language Models for Hidden Objectives. We deliberately trained a model with a hidden misaligned objective and put researchers to the test: Could they figure out the objective without being told https://t.co/fxmA9Os2C9" [X Link](https://x.com/anyuser/status/1900295898799980581) 2025-03-13T21:20Z [----] followers, [---] engagements
- "@MATSprogram Summer [----] applications close Apr [--] Come help advance the fields of AI alignment security and governance with mentors including @NeelNanda5 @EthanJPerez @OwainEvans_UK @EvanHub @bshlgrs @dawnsongtweets @DavidSKrueger @RichardMCNgo and more" [X Link](https://x.com/anyuser/status/1902548131973353512) 2025-03-20T02:29Z [----] followers, 57.9K engagements
- "@MATSprogram @NeelNanda5 @EthanJPerez @OwainEvans_UK @EvanHub @bshlgrs @dawnsongtweets @DavidSKrueger @RichardMCNgo Other @MATSprogram mentors include Nicholas Carlini @McaleerStephen @_achan96_ @ben_s_bucknall @MichaelD1729 @FlorianTramer @SamuelAlbanie @davlindner @Turn_Trout @emmons_scott @MrinankSharma and many more https://matsprogram.org/mentors" [X Link](https://x.com/anyuser/status/1902813037524029785) 2025-03-20T20:02Z [----] followers, [---] engagements
- "@Pandora_Delaney @So8res @Aella_Girl @JamieWahls @asteriskmgzn Unsong" [X Link](https://x.com/ryan_kidd44/status/1903493769946951697) 2025-03-22T17:07Z [----] followers, [---] engagements
- "Situational Awareness [---] "How exactly could AI take over by 2027" Introducing AI 2027: a deeply-researched scenario forecast I wrote alongside @slatestarcodex @eli_lifland and @thlarsen https://t.co/v0V0RbFoVA" [X Link](https://x.com/ryan_kidd44/status/1907830901284941933) 2025-04-03T16:21Z [----] followers, [----] engagements
- "MATS is hiring Research Managers Community Managers and Operations Generalists Rolling applications close May [--]. Come align and secure AI with us https://www.matsprogram.org/careers" [X Link](https://x.com/anyuser/status/1908568592754631019) 2025-04-05T17:13Z [----] followers, [---] engagements
- "Last chance to apply to work at MATS Still taking applications for Research Managers Community Managers and Operations Generalists. Apply by May [--] https://www.matsprogram.org/careers" [X Link](https://x.com/anyuser/status/1917723350992052250) 2025-04-30T23:30Z [----] followers, [----] engagements
- "Excited to have helped support this documentary via the @manifund regranting program 'Regulation shouldn't be written in blood.' My documentary on California's most controversial AI bill SB-1047 is finally out on Youtube. Go watch it https://t.co/06iUovQNw4" [X Link](https://x.com/anyuser/status/1918445765036568583) 2025-05-02T23:21Z [----] followers, [---] engagements
- "MATS has received mentorship applications from [---] researchers for our Winter [----] program far more than we can support. If you run an AI safety or governance program and you want referrals let me know" [X Link](https://x.com/anyuser/status/1929675104163975468) 2025-06-02T23:02Z [----] followers, [----] engagements
- "Our selection process is ongoing but it looks like around 75% of mentor applicants are rated above our minimum bar by our Mentor Selection Committee. If we accept [--] mentors that leaves [--] great mentors unsupported" [X Link](https://x.com/anyuser/status/1929675852872405294) 2025-06-02T23:05Z [----] followers, [----] engagements
- "@BogdanIonutCir2 Not even that Our funders are incredibly supportive. MATS is constrained on organization capacity and experience not scholars mentors or funding. We have recently hired [--] (soon 17) new staff effectively doubling in size. Hopefully growing the Program team soon too" [X Link](https://x.com/ryan_kidd44/status/1929959561169121659) 2025-06-03T17:53Z [----] followers, [---] engagements
- "And that's just for our excellence bar; 97% of mentor applicants were above our selection committee's indifference point" [X Link](https://x.com/anyuser/status/1929968022376329464) 2025-06-03T18:26Z [----] followers, [---] engagements
- "EAs seem to come in two primary flavors: - Specialists with high cognitive empathy who want to make utilons go up; - Generalists with high affective empathy who want to empower all beings" [X Link](https://x.com/ryan_kidd44/status/1932600996292931697) 2025-06-11T00:49Z [----] followers, [----] engagements
- "Technical AI alignment/control is still impactful; don't go all-in on AI gov - Liability incentivises safeguards even absent regulation; - Cheaper more effective safeguards make it easier for labs to meet safety standards; - Concrete safeguards give regulation teeth" [X Link](https://x.com/anyuser/status/1935405219040657509) 2025-06-18T18:32Z [----] followers, [----] engagements
- "I pre-ordered this and you should too https://ifanyonebuildsit.com" [X Link](https://x.com/ryan_kidd44/status/1936930391467966926) 2025-06-22T23:32Z [----] followers, [----] engagements
- "@thlarsen What do you think the holes are" [X Link](https://x.com/ryan_kidd44/status/1939140967673798859) 2025-06-29T01:56Z [----] followers, [---] engagements
- "@GaryMarcus Please do it. LessWrong has its flaws but it's still the best forum for AI futurism and you bring an important perspective" [X Link](https://x.com/anyuser/status/1939144202274963863) 2025-06-29T02:09Z [----] followers, [---] engagements
- "I propose a new name for an important metaethical distinction: bosonic vs. fermionic moral theories. Bosons are particles that can degenerately occupy the same state while fermions can only occupy individual states" [X Link](https://x.com/ryan_kidd44/status/1939390575972848087) 2025-06-29T18:28Z [----] followers, [----] engagements
- "Fermionic moral theories value new moral patients only insofar as they have different experiences. "Moral degeneracy pressure" would disfavor the creation of identical copies as they would be treated like "pointers" to the original rather than independent moral patients. Under these theories inequality and maybe even some suffering entities are permissible if higher value states are already occupied by other entities. A "fermionic moral utopia" could look like the universe filled with minds experiencing infinitesimally varying distinct positive experiences" [X Link](https://x.com/ryan_kidd44/status/1939390896757489731) 2025-06-29T18:29Z [----] followers, [---] engagements
- "Applications to Neel Nanda's Winter [----] @MATSprogram stream have launched My Winter MATS applications are open You'll work full-time writing a mech interp paper supervised by me. Due Aug [--] I've supervised 30+ papers by now (incl [--] top conference papers) but cohorts still get better each time. I'm hyped to see what this cohort achieves Highlights: https://t.co/pPoITAdl1A" [X Link](https://x.com/anyuser/status/1950357957159637190) 2025-07-30T00:49Z [----] followers, [----] engagements
- "In [---] years @MATSprogram has helped produce [---] arXiv publications. Our organizational h-index is 31" [X Link](https://x.com/ryan_kidd44/status/1950597247424668129) 2025-07-30T16:39Z [----] followers, [----] engagements
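For reference, the organizational h-index cited above follows the standard definition: the largest h such that at least h papers have at least h citations each. A minimal sketch of that computation, assuming per-paper citation counts are available as a list (the sample numbers are hypothetical, not MATS data):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):  # i-th most-cited paper
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical per-paper citation counts
print(h_index([120, 64, 40, 31, 31, 12, 5, 2]))  # 6: the 6th paper has 12 >= 6
```

Sorting in descending order makes the definition direct to check: walk down the list while the i-th most-cited paper still has at least i citations.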
- "@AnthropicAI @GoogleDeepMind @OpenAI @AISecurityInst @RANDCorporation @redwood_ai @METR_Evals @apolloaievals MATS connects researchers with world-class mentors including @sleepinyourhat Nicholas Carlini @NeelNanda5 @EthanJPerez @McaleerStephen @vkrakovna @yonashav @StephenLCasper @bshlgrs @MariusHobbhahn @RichardMCNgo and more @janleike @AlecRad etc. often collaborate as advisors" [X Link](https://x.com/ryan_kidd44/status/1961539983300596017) 2025-08-29T21:22Z [----] followers, [----] engagements
- "@MATSprogram 10% of our [---] alumni have co-founded organizations or research teams during or after MATS" [X Link](https://x.com/ryan_kidd44/status/1950600411314008080) 2025-07-30T16:52Z [----] followers, [---] engagements
- "80% of MATS alumni who completed the program before [----] are still working on AI safety today based on a survey of all available alumni LinkedIns or personal websites (242/292 83%). 10% are working on AI capabilities but only [--] on pre-training at a frontier AI company" [X Link](https://x.com/anyuser/status/1959721534886732260) 2025-08-24T20:56Z [----] followers, [----] engagements
- "Amazing work Marius So happy to have helped support Apollo and your journey via @MATSprogram Honored and humbled to be in @TIME's list of the TIME100 AI of [----] https://t.co/mz17wPSuaL #TIME100AI https://t.co/uDvdCYvE9j" [X Link](https://x.com/ryan_kidd44/status/1961124949239718002) 2025-08-28T17:53Z [----] followers, [---] engagements
- "MATS has accelerated 450+ researchers in the past [---] years. 80% of MATS alumni who graduated before [----] are working on AI safety/security including 200+ at @AnthropicAI @GoogleDeepMind @OpenAI @AISecurityInst @RANDCorporation @redwood_ai @METR_Evals @apolloaievals and more" [X Link](https://x.com/ryan_kidd44/status/1961539040383308113) 2025-08-29T21:18Z [----] followers, [----] engagements
- "@AnthropicAI @GoogleDeepMind @OpenAI @AISecurityInst @RANDCorporation @redwood_ai @METR_Evals @apolloaievals 10% of MATS alumni who graduated before [----] co-founded active AI safety/security start-ups including @apolloaievals @Atla_AI @TimaeusResearch @Leap_Labs @theoremlabs @WorkshopLabsPBC and more" [X Link](https://x.com/anyuser/status/1961539137380782399) 2025-08-29T21:19Z [----] followers, [----] engagements
- "@AnthropicAI @GoogleDeepMind @OpenAI @AISecurityInst @RANDCorporation @redwood_ai @METR_Evals @apolloaievals @Atla_AI @TimaeusResearch @Leap_Labs @theoremlabs @WorkshopLabsPBC In [---] years MATS researchers have coauthored 115+ arXiv papers with 5100+ citations and an org h-index of [--]. We are experts at accelerating awesome researchers with mentorship compute support and community" [X Link](https://x.com/anyuser/status/1961539212450373711) 2025-08-29T21:19Z [----] followers, [----] engagements
- "@AnthropicAI @GoogleDeepMind @OpenAI @AISecurityInst @RANDCorporation @redwood_ai @METR_Evals @apolloaievals @Atla_AI @TimaeusResearch @Leap_Labs @theoremlabs @WorkshopLabsPBC @sleepinyourhat @NeelNanda5 @EthanJPerez @McaleerStephen @vkrakovna @yonashav @StephenLCasper @bshlgrs @MariusHobbhahn @RichardMCNgo @janleike @AlecRad Participants rated our last program 9.4/10 on average with a median of 10/10 75/98 researchers are continuing in our 6-month extension program. All nationalities are eligible to participate in MATS; 50% of our scholars are international" [X Link](https://x.com/ryan_kidd44/status/1961540061817925814) 2025-08-29T21:22Z [----] followers, [----] engagements
- "@AnthropicAI @GoogleDeepMind @OpenAI @AISecurityInst @RANDCorporation @redwood_ai @METR_Evals @apolloaievals @Atla_AI @TimaeusResearch @Leap_Labs @theoremlabs @WorkshopLabsPBC @sleepinyourhat @NeelNanda5 @EthanJPerez @McaleerStephen @vkrakovna @yonashav @StephenLCasper @bshlgrs @MariusHobbhahn @RichardMCNgo @janleike @AlecRad Apply by Oct [--] midnight AoE Visit the @MATSProgram website for detailed information on the program and application process. http://matsprogram.org/apply" [X Link](https://x.com/anyuser/status/1961540128553484684) 2025-08-29T21:23Z [----] followers, [----] engagements
- "Anthropic Fellows has produced some awesome AI safety research We're hiring someone to run the Anthropic Fellows Program Our research collaborations have led to some of our best safety research and hires. We're looking for an exceptional ops generalist TPM or research/eng manager to help us significantly scale and improve our collabs" [X Link](https://x.com/anyuser/status/1963829776625631468) 2025-09-05T05:01Z [----] followers, [----] engagements
@ryan_kidd44 Ryan KiddRyan Kidd posts on X about ai, open ai, model, agi the most. They currently have [-----] followers and [---] posts still getting attention that total [---] engagements in the last [--] hours.
Social category influence technology brands 15.89% travel destinations 5.61% countries 1.87% stocks 0.93% finance 0.93% events 0.93% social networks 0.93%
Social topic influence ai 57.01%, open ai 12.15%, model 4.67%, agi 4.67%, anthropic 3.74%, community 3.74%, if you 2.8%, grow 2.8%, we are 2.8%, sama 2.8%
Top accounts mentioned or mentioned by @matsprogram @anthropicai @ethanjperez @googledeepmind @openai @neelnanda5 @bshlgrs @aisecurityinst @randcorporation @redwoodai @metrevals @apolloaievals @owainevansuk @evanhub @stephenlcasper @richardmcngo @atlaai @timaeusresearch @leaplabs @theoremlabs
Top assets mentioned Alphabet Inc Class A (GOOGL)
Top posts by engagements in the last [--] hours
"If you're hiring in AI alignment interpretability governance or security MATS can help We maintain a database of alumni looking for work and can make targeted headhunting recommendations"
X Link 2026-01-30T02:05Z [----] followers, [----] engagements
"What is most blocking talent from contributing to AI safety & security Rare skills proof of competence and personal connections"
X Link 2026-02-15T18:53Z [----] followers, [----] engagements
"Becoming an effective AI safety researcher typically requires experienced mentorship peer feedback and many repetitions of working on real problems particularly for building research taste. Self study is often not enough and experienced mentors are a bottleneck"
X Link 2026-02-15T18:56Z [----] followers, [---] engagements
"@beyarkay Here is a plot of annual citations from the only three AI safety nonprofits with Google Scholar pages. https://scholar.google.com/citationsuser=VgJaUK4AAAAJ&hl=en https://scholar.google.com/citationsuser=VgJaUK4AAAAJ&hl=en"
X Link 2026-01-29T16:45Z [----] followers, [---] engagements
"I had a great time chatting with Jacob Haines about AI safety field-building and emerging talent needs https://kairos.fm/intoaisafety/e027/ https://kairos.fm/intoaisafety/e027/"
X Link 2026-02-03T23:27Z [----] followers, [---] engagements
"What proportion of ML academics are interested in AI safety I analyzed the research interests of the [---] Action Editors on TMLR Editorial Board. 4% are interested in alignment or safety; 10% if you include interp evals trust or security. https://jmlr.org/tmlr/editorial-board.html https://jmlr.org/tmlr/editorial-board.html"
X Link 2026-02-07T17:37Z [----] followers, [----] engagements
"@ExcelEthicsAI Actually it's pretty cheap to say I'm interested in AI safety but time-expensive to actually publish on the subject. I am not convinced these statistics reflect publication patterns rather than sentiment"
X Link 2026-02-07T17:47Z [----] followers, [--] engagements
"AI safety field-building in Australia should accelerate. OpenAI and Anthropic opened Sydney offices OpenAI started building a $4.6B datacenter in Sydney and the country is a close US/UK ally. https://www.lesswrong.com/posts/tPjAgWpsQrveFECWP/ryan-kidd-s-shortformcommentId=aPAtazuRt2np2zn6n https://www.lesswrong.com/posts/tPjAgWpsQrveFECWP/ryan-kidd-s-shortformcommentId=aPAtazuRt2np2zn6n"
X Link 2026-02-08T00:22Z [----] followers, [----] engagements
"MATS [----] applications are open Launch your career in AI alignment governance and security with our 12-week research program. MATS provides field-leading research mentorship funding Berkeley & London offices housing and talks/workshops with AI experts"
X Link 2025-12-16T19:04Z [----] followers, 2.2M engagements
"Aspiring researchers need a portfolio of research outputs references from credible supervisors and credentials that signal competence to potential employers and funders (e.g. MATS and BlueDot on their CV). Without these even talented individuals miss opportunities"
X Link 2026-02-15T18:57Z [----] followers, [---] engagements
"Precipissed: Feeling angry about civilizational inadequacy towards mitigating x-risk"
X Link 2022-12-15T20:14Z [----] followers, [---] engagements
"What a week Both Anthropic and DeepMind shed some light on their AI alignment plans after OpenAI shared their plan in Aug [----]. * Anthropic: * DeepMind: * OpenAI: https://openai.com/blog/our-approach-to-alignment-research https://www.lesswrong.com/./4iEpGXb./p/a9SPcZ6GXAg9cNKdi https://www.anthropic.com/index/core-views-on-ai-safety https://openai.com/blog/our-approach-to-alignment-research https://www.lesswrong.com/./4iEpGXb./p/a9SPcZ6GXAg9cNKdi https://www.anthropic.com/index/core-views-on-ai-safety"
X Link 2023-03-09T22:16Z [----] followers, [----] engagements
"An enigma a shoggoth and two ex-physicists walk into a party"
X Link 2023-03-10T20:20Z [----] followers, [----] engagements
"Some takeaways from a recent conference that discussed AI safety:"
X Link 2023-03-10T20:24Z [----] followers, [----] engagements
"Some reasons you shouldn't assume civilization is adequate at solving AI alignment by default:"
X Link 2023-03-12T22:41Z [----] followers, [----] engagements
"AI safety research that reduces the risk of non-catastrophic accidents or misuse (e.g. hate speech) makes commercial AI more viable driving AI hype and capabilities research. While important this research might fail to prevent genuinely catastrophic "black swan" risk"
X Link 2023-03-14T17:39Z [----] followers, [----] engagements
"Summer applications just launched Mentors include AI safety researchers from @AnthropicAI @OpenAI @deepmind @MIRIBerkeley @CHAI_Berkeley @cais @FHIOxford and more https://www.serimats.org/ https://www.serimats.org/"
X Link 2023-04-08T21:39Z [----] followers, 34K engagements
"If we don't slow down generative AI prepare for: - Foreign states to steal base models they are years from building and fine-tune them as cyber weapons; - Mass voter manipulation and fake news without adequate safeguards. #pauseai"
X Link 2023-05-21T16:11Z [----] followers, [----] engagements
"Now hiring ops generalists community manager and research coaches to grow AI safety https://tinyurl.com/2v7pad7u https://tinyurl.com/2v7pad7u"
X Link 2023-05-23T19:09Z [----] followers, [----] engagements
"@mealreplacer Good evening Robert"
X Link 2023-05-23T21:13Z [----] followers, [---] engagements
"So excited to see @apolloaisafety launch https://x.com/apolloaisafety/status/1663582940658270210 Hi we are Apollo Research-a new AI evals research organization. Our research agenda is focused on interpretability and behavioral model evaluations. We intend to apply our findings and cooperate with AI labs to prevent the deployment of deceptive AIs https://t.co/lcvyGNJg3w https://x.com/apolloaisafety/status/1663582940658270210 Hi we are Apollo Research-a new AI evals research organization. Our research agenda is focused on interpretability and behavioral model evaluations. We intend to apply our"
X Link 2023-05-30T17:18Z [----] followers, [---] engagements
"I think a lot of mechanistic interpretability research should find a home in academic labs because: [--]. Mech interp isn't very expensive; [--]. Related academic research (e.g. sparsity pruning) is strong; [--]. Mech interp should grow; [--]. Most academic safety research is less useful"
X Link 2023-06-17T21:58Z [----] followers, [----] engagements
"AI alignment fieldbuilders often advocate a "hits-based" approach due to the "long tailed distribution of individual impact." But if IQ is normally distributed why is impact long-tailed My hypothesis "Luck": e.g. high-quality mentorship accessible problem framings financial freedom etc"
X Link 2023-07-04T19:49Z [----] followers, [----] engagements
""In one hour the chatbots suggested four potential pandemic pathogens explained how they can be generated from synthetic DNA using reverse genetics supplied the names of DNA synthesis companies unlikely to screen orders." https://arxiv.org/abs/2306.03809 https://arxiv.org/abs/2306.03809"
X Link 2023-07-19T02:01Z [----] followers, [----] engagements
"So proud of all of these MATS scholars and their projects https://drive.google.com/file/d/1HA5RUCM15-6COISmdkGF2w_JmCMzlQNy/viewusp=drivesdk https://drive.google.com/file/d/1HA5RUCM15-6COISmdkGF2w_JmCMzlQNy/viewusp=drivesdk"
X Link 2023-09-04T18:01Z [----] followers, [----] engagements
"Currently accepting AI safety research mentors for a Winter program; message me if you are interested Past mentors include: http://serimats.org/mentors http://serimats.org/mentors"
X Link 2023-09-05T23:05Z [----] followers, [----] engagements
"Reasons to be optimistic about AI x-safety: [--]. The public cares more than expected; [--]. Governments aren't ignoring the problem; [--]. LMs might be much more interpretable than end-to-end RL; [--]. Instructed LMs might generalize better than expected"
X Link 2023-10-17T22:49Z [----] followers, [----] engagements
"Reasons to be pessimistic about AI x-safety: [--]. We might have less time than we thought; [--]. The current best plan relies on big tech displaying a vastly better security mindset than usual; [--]. There seems to be a shortage of new good ideas for AI alignment; [--]. A few actors (e.g. SBF) might have harmed the public image of orgs/movements pushing for AI x-safety"
X Link 2023-10-18T00:23Z [----] followers, [---] engagements
"The MATS Winter 2023-24 Cohort has launched Apply by Nov [--] to help advance AI safety. (Note: Neel Nanda's applications close early on Nov 10) https://www.matsprogram.org/ https://www.matsprogram.org/"
X Link 2023-10-21T01:21Z [----] followers, 20.5K engagements
"Last week to apply to @NeelNanda5's mechanistic interpretability MATS stream Applications close Nov [--] 11:59 pm PT. http://matsprogram.org/interpretability http://matsprogram.org/interpretability"
X Link 2023-11-04T23:16Z [----] followers, [----] engagements
"I did a podcast Thanks again for having me on @soroushjp; it was a lot of fun https://x.com/soroushjp/status/1722336164793962603s=20 ๐ฃ EP10 AGI Show w/ @ryan_kidd44 out We talk ML Alignment & Theory Scholars (MATS) program that accelerates people into AI safety research roles via mentorship seminars & connections. If you're interested in technical AI research for catastrophic/x-risk this ep is for you https://t.co/qPTscUNdId https://x.com/soroushjp/status/1722336164793962603s=20 ๐ฃ EP10 AGI Show w/ @ryan_kidd44 out We talk ML Alignment & Theory Scholars (MATS) program that accelerates people"
X Link 2023-11-08T23:21Z [----] followers, [----] engagements
"If OpenAI board fired @sama for straining charter but market forces put him back then Moloch wins"
X Link 2023-11-19T02:42Z [----] followers, 72.2K engagements
"The OpenAI plan seems to have been: Pragmatism: Lead the AI pack from the front instead of someone worse. Caution: Keep risk within tolerance or pull the plug. If this is "pulling the plug" and it fails I am pessimistic about all such plans"
X Link 2023-11-19T03:05Z [----] followers, [----] engagements
"@sama "Leading the pack from the front" likely requires selling shares to Moloch. Exercising caution might require buying them back (hard) or dropping out. If AGI is imminent this might not matter but I'm not sure it is"
X Link 2023-11-19T03:36Z [----] followers, [----] engagements
"I don't know @sama but I get the sense that: - Sam's love for OpenAI employees is sincere; - Sam cares about AI x-risk; - Sam thinks fast-deployment/slow-takeoff is optimally safe; - Sam would subvert the board for The Greater Good"
X Link 2023-11-19T22:19Z [----] followers, [----] engagements
"@MATSprogram is accepting applications for mentors in our Summer [----] Program. Please DM me if interested In addition to technical AI safety researchers we are interested in supporting AI gov infosec and natsec mentors"
X Link 2024-01-28T22:29Z [----] followers, [----] engagements
"Another @MATSprogram concluded and another [--] scholars graduated on Fri Our Scholar Symposium featured [--] talks on AI interpretability model evals + demos agent foundations control/red-teaming scalable oversight and more"
X Link 2024-03-17T21:50Z [----] followers, [----] engagements
"Excited to present at the Technical AI Safety Conference in Tokyo https://tais2024.cc/ https://tais2024.cc/"
X Link 2024-03-26T22:00Z [----] followers, [----] engagements
"Over [----] applicants to @MATSprogram; what a milestone ๐"
X Link 2024-04-06T05:51Z [----] followers, [----] engagements
"Last day to apply to @MATSprogram to help advance beneficial AI Last cohort scholars rated the program 9.2/10 on average (NPS: +74) and mentors advocated for scholars' research continuing at 8.1/10 on average (NPS: +25). Come see why https://airtable.com/appPxJ0QMqR7TElYU/pagRPwHQtcN8L0vIE/form https://airtable.com/appPxJ0QMqR7TElYU/pagRPwHQtcN8L0vIE/form"
X Link 2024-04-07T04:47Z [----] followers, [----] engagements
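The NPS figures in the post above are Net Promoter Scores derived from 0-10 ratings: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). A minimal sketch of the computation, using made-up ratings rather than MATS' actual survey data:

```python
def nps(ratings: list[int]) -> int:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Hypothetical survey responses, for illustration only.
print(nps([10, 9, 9, 8, 10, 7, 9, 10, 6, 9]))  # -> 60
```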
"Three AI safety boys in Tokyo"
X Link 2024-04-08T11:15Z [----] followers, [----] engagements
"A snapshot of the AI safety community's research interests based on @MATSprogram applications. Note that MATS has historically had a technical AI safety focus and AI gov/policy + infosec interest might be underrepresented here"
X Link 2024-04-09T08:56Z [----] followers, [----] engagements
"@MATSprogram has [----] summer applicants and enough funding to accept 2.5% (ideally 7%). Accepting donations via and at $24k/scholar. Help us support mentors like @NeelNanda5 @OwainEvans_UK @EthanJPerez @EvanHub and more http://manifund.org/projects/mats-funding http://existence.org http://manifund.org/projects/mats-funding http://existence.org"
X Link 2024-04-28T00:10Z [----] followers, [----] engagements
"This deserves way more attention. Zach built the best frontier AI lab safety scorecard on the internet evaluating @MicrosoftAI @GoogleDeepMind @AIatMeta @GoogleDeepMind and @AnthropicAI I made an AI safety scorecard: I collected actions for frontier Al labs to avert extreme risks from AI then evaluated particular labs accordingly. https://t.co/4NsbT47BoL I made an AI safety scorecard: I collected actions for frontier Al labs to avert extreme risks from AI then evaluated particular labs accordingly. https://t.co/4NsbT47BoL"
X Link 2024-05-01T20:47Z [----] followers, 11.1K engagements
"Good night Roberts"
X Link 2024-05-11T03:00Z [----] followers, [---] engagements
"The MATS Winter 2023-24 Retrospective is published https://www.lesswrong.com/posts/Z87fSrxQb4yLXKcTk/mats-winter-2023-24-retrospective https://www.lesswrong.com/posts/Z87fSrxQb4yLXKcTk/mats-winter-2023-24-retrospective"
X Link 2024-05-11T21:05Z [----] followers, [----] engagements
"New MATS post on the current opportunities in technical AI safety as informed by [--] interviews with AI safety field leaders https://www.lesswrong.com/posts/QzQQvGJYDeaDE4Cfg/talent-needs-in-technical-ai-safety https://www.lesswrong.com/posts/QzQQvGJYDeaDE4Cfg/talent-needs-in-technical-ai-safety"
X Link 2024-05-25T02:40Z [----] followers, [---] engagements
"I'm a @manifund Regrantor. I added some requests for funding proposals here: https://www.lesswrong.com/posts/tPjAgWpsQrveFECWP/ryan-kidd-s-shortformcommentId=uWwdHtsuLDDSJ9h9N https://www.lesswrong.com/posts/tPjAgWpsQrveFECWP/ryan-kidd-s-shortformcommentId=uWwdHtsuLDDSJ9h9N"
X Link 2024-05-25T21:51Z [----] followers, [----] engagements
"If income is lognormally distributed and happiness is logarithmic in wealth then happiness is normally distributed in the US"
X Link 2024-06-15T22:56Z [----] followers, [----] engagements
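The claim follows directly from the definition of a lognormal: if income X is lognormal then log X is normal, so any happiness of the form h = a + b log X is normal too. A quick numerical check of the post above, where mu, sigma, a, and b are illustrative assumptions, not parameters fitted to US income data:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 11.0, 0.7  # hypothetical log-income parameters
income = rng.lognormal(mu, sigma, size=100_000)

a, b = -5.0, 1.0  # hypothetical happiness = a + b * log(income)
happiness = a + b * np.log(income)

# Sample moments should match the implied normal: mean a + b*mu, std |b|*sigma.
print(round(happiness.mean(), 2), a + b * mu)     # ~6.0 vs 6.0
print(round(happiness.std(), 2), abs(b) * sigma)  # ~0.7 vs 0.7
```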
"I recently gave a talk to the AI Alignment Network in Japan about our work at @MATSprogram. Recording of ALIGN Webinar #4 with Dr. Ryan Kidd is now available Here @ryan_kidd44 provided a very accessible explanation of what AGI risks are what countermeasures are needed for different scenarios how MATS is addressing its talent needs. https://t.co/QyTsEJF8gE Recording of ALIGN Webinar #4 with Dr. Ryan Kidd is now available Here @ryan_kidd44 provided a very accessible explanation of what AGI risks are what countermeasures are needed for different scenarios how MATS is addressing its talent needs."
X Link 2024-06-23T19:25Z [----] followers, [----] engagements
"Who are the top ethicists working on: - What values to instill in artificial superintelligence - How should AI-generated wealth be distributed - What should people do in a post-labor society - What level of surveillance/restriction is justified by the Unilateralist's Curse"
X Link 2024-07-06T22:17Z [----] followers, [----] engagements
"Also: - What moral personhood will digital minds have - How should nations share decision making power regarding world-transforming and Mercury-disassembling technology"
X Link 2024-07-07T00:29Z [----] followers, [----] engagements
"I'd love to support this research with funding http://Manifund.org http://Manifund.org"
X Link 2024-07-07T00:33Z [----] followers, [---] engagements
"@MATSprogram is now hiring for a Research Manager role based in London Come help us grow the AI safety research field :) https://www.matsprogram.org/careers https://www.matsprogram.org/careers"
X Link 2024-07-09T19:23Z [----] followers, [----] engagements
"Ever wanted to contribute to technical AI safety but haven't built a transformer Apply to The ML for AI safety bootcamp will run Sep 2-Oct [--] out of Applications close Jul [--]. http://SafeAI.org.uk http://ARENA.education http://SafeAI.org.uk http://ARENA.education"
X Link 2024-07-09T19:39Z [----] followers, [---] engagements
"Applications to @NeelNanda5's mech interp @MATSprogram are now open Apply by Aug [--]. https://forms.matsprogram.org/general-application Are you excited about @ch402-style mechanistic interpretability research I'm looking to mentor scholars via MATS - apply by Aug [--] I'm impressed by the work from past scholars and love mentoring promising talent. You don't need to be in a big lab to do good mech interp work https://forms.matsprogram.org/general-application Are you excited about @ch402-style mechanistic interpretability research I'm looking to mentor scholars via MATS - apply by Aug [--] I'm"
X Link 2024-07-22T19:39Z [----] followers, [----] engagements
"First AI alignment paper to win ICML Best Paper So happy to have helped support this work at @MATSprogram :) Well done @McHughes288 @danvalentine256 @sleight_henry @akbirkhan @EthanJPerez @sleepinyourhat and coauthors excited to announce this received an ICML Best Paper Award come see our talk at 10:30 tomorrow https://t.co/PCH1q0f0Po excited to announce this received an ICML Best Paper Award come see our talk at 10:30 tomorrow https://t.co/PCH1q0f0Po"
X Link 2024-07-23T20:24Z [----] followers, 11.4K engagements
"Saying that open weight AI models are the path to secure AI is like saying that sharing my psychological vulnerabilities with the world is the path to robust mental health"
X Link 2024-08-08T21:22Z [----] followers, [----] engagements
"@MATSprogram Winter 2024-25 mentors include researchers from @AnthropicAI @GoogleDeepMind @AISafetyInst @CNASdc @CHAI_Berkeley @AlgAlignMIT @farairesearch @cais @apolloaisafety @kasl_ai @MIRIBerkeley and more Apply by Oct [--]. https://www.matsprogram.org/mentors http://redwoodresearch.org https://www.matsprogram.org/mentors http://redwoodresearch.org"
X Link 2024-09-12T21:39Z [----] followers, 35.6K engagements
"MATS mentors for Winter 2024-25 include @bshlgrs @EthanJPerez @NeelNanda5 @OwainEvans_UK @eli_lifland @DKokotajlo67142 @EvanHub @StephenLCasper @FabienDRoger @seb_far @Turn_Trout @davlindner @fiiiiiist @MrinankSharma @DavidSKrueger @leedsharkey @SamuelAlbanie and more"
X Link 2024-09-12T21:58Z [----] followers, [---] engagements
"MATS Winter 2024-25 applications close Oct [--] Come and kick-start your AI safety research career. Mentors include @OwainEvans_UK @bshlgrs @EvanHub @StephenLCasper and more https://matsprogram.org https://matsprogram.org"
X Link 2024-09-13T18:17Z [----] followers, [----] engagements
"I just left a comment on @pibbssai's @manifund grant request (which I funded $25k) that AI safety people might find interesting. PIBBSS needs more funding https://manifund.org//projects/pibbss---affiliate-program-funding-6-months-6-affiliates-or-moretab=comments#7aa374d7-c42a-4519-9be2-08ccc03fed62 https://manifund.org//projects/pibbss---affiliate-program-funding-6-months-6-affiliates-or-moretab=comments#7aa374d7-c42a-4519-9be2-08ccc03fed62"
X Link 2024-09-15T19:22Z [----] followers, [---] engagements
"@MATSprogram Alumni Impact Analysis published 78% of alumni are still working on AI alignment/control and 7% are working on AI capabilities. 68% have published alignment research https://www.lesswrong.com/posts/jeBkx6agMuBCQW94C/mats-alumni-impact-analysis https://www.lesswrong.com/posts/jeBkx6agMuBCQW94C/mats-alumni-impact-analysis"
X Link 2024-10-01T19:36Z [----] followers, [----] engagements
"e/acc AGI realist humanist; pick two Nick Land says nothing human makes it out of the near-future and e/acc while being good PR is deluding itself to think otherwise https://t.co/CkGKUebhye Nick Land says nothing human makes it out of the near-future and e/acc while being good PR is deluding itself to think otherwise https://t.co/CkGKUebhye"
X Link 2024-10-19T17:34Z [----] followers, [----] engagements
"Big support to @austinc3301 and his new project AI safety student groups and entry-level internships are very important to the @MATSprogram pipeline (and all of AI safety). On more personal news I'm now the Co-Director of Kairos a new AI safety fieldbuilding org https://t.co/GAsLMmfSlY On more personal news I'm now the Co-Director of Kairos a new AI safety fieldbuilding org https://t.co/GAsLMmfSlY"
X Link 2024-10-25T23:13Z [----] followers, [----] engagements
"Excited to have supported this research @MATSprogram New paper on evaluating instrumental self-reasoning ability in frontier models ๐ค๐ช We propose a suite of agentic tasks that are more diverse than prior work and give us a more representative picture of how good models are at eg. self-modification and embedded reasoning https://t.co/EM8X97MeBo New paper on evaluating instrumental self-reasoning ability in frontier models ๐ค๐ช We propose a suite of agentic tasks that are more diverse than prior work and give us a more representative picture of how good models are at eg. self-modification and"
X Link 2024-12-06T19:19Z [----] followers, [---] engagements
"OpenAI' latest model o3 scored: - [----] on Codeforces making it the 175th best competitive programmer on Earth - 25% on FrontierMath where "each problem demands hours of work from expert mathematicians" - 88% on GPQA where 70% represents PhD-level science knowledge - 88% on ARC-AGI where the average Mechanical Turk human worker scores 75% on hard visual reasoning problems"
X Link 2024-12-21T03:03Z [----] followers, 27.3K engagements
"If the current race towards AGI worries you come work on AI safety The field is highly impactful talent constrained and filled with low-hanging fruit. https://80000hours.org/problem-profiles/artificial-intelligence/#what-can-you-do-concretely-to-help https://80000hours.org/problem-profiles/artificial-intelligence/#what-can-you-do-concretely-to-help"
X Link 2024-12-21T22:59Z [----] followers, [----] engagements
"High-inference cost models like o3 might be a boon for AI safety: - More reasoning is done in chain-of-thought which is inspectable - Mech interp is more promising as base models will be smaller - Running frontier models will be more expensive reducing deployment overhang"
X Link 2024-12-23T21:16Z [----] followers, [----] engagements
"Existential hope Incoming Commerce Secretary Lutnick on AI export controls at confirmation hearing: "AI chip smuggling has got to end" We need to "stop giving them our tools so they can compete with us" "I'm thrilled to empower BIS" We're so incredibly back https://t.co/WVJYHx8IIt Incoming Commerce Secretary Lutnick on AI export controls at confirmation hearing: "AI chip smuggling has got to end" We need to "stop giving them our tools so they can compete with us" "I'm thrilled to empower BIS" We're so incredibly back https://t.co/WVJYHx8IIt"
X Link 2025-01-30T19:41Z [----] followers, [---] engagements
"The world is sleeping Survey of [---] experts by World Economic Forum reveals they have bizarre views about the biggest global risks. Most severe [--] year risk is extreme weather events https://t.co/0X16L0wFS1 Survey of [---] experts by World Economic Forum reveals they have bizarre views about the biggest global risks. Most severe [--] year risk is extreme weather events https://t.co/0X16L0wFS1"
X Link 2025-01-31T19:19Z [----] followers, 16.1K engagements
"The majority of experts think AI catastrophic risk is worryingly high. Don't "Don't Look Up" btw i think it's totally possible that we're all just wrong about near-term x-risk. like through a combination of selection effects drinking the koolaid and mutually reinforcing each other's views we've worked ourselves into a panic over an implausible scenario (read to end) btw i think it's totally possible that we're all just wrong about near-term x-risk. like through a combination of selection effects drinking the koolaid and mutually reinforcing each other's views we've worked ourselves into a"
X Link 2025-02-01T00:00Z [----] followers, [---] engagements
"Happy to have supported this research @MATSprogram AI Governance should work with markets not against them Excited to finally share a preprint that @FranklinMatija @rupal15081 & I have been working on. https://t.co/5XDxVCQvaN AI Governance should work with markets not against them Excited to finally share a preprint that @FranklinMatija @rupal15081 & I have been working on. https://t.co/5XDxVCQvaN"
X Link 2025-02-05T18:32Z [----] followers, [---] engagements
"LISA (@LondonSafeAI) is hiring a CEO The LISA office is home to @apolloaisafety @BlueDotImpact @MATSprogram extension and other top-tier AI safety projects Apps due Feb [--]. https://london-safe-ai.notion.site/chiefexecutiveofficer https://london-safe-ai.notion.site/chiefexecutiveofficer"
X Link 2025-02-09T19:31Z [----] followers, [----] engagements
"The Paris AI Summit was a staggering failure. Entropy - [--] Humanity - [--] https://www.transformernews.ai/p/paris-ai-summit-failure https://www.transformernews.ai/p/paris-ai-summit-failure"
X Link 2025-02-12T02:13Z [----] followers, 12.4K engagements
"Another excellent (and disturbing) paper from Owain in collaboration with @MATSprogram Surprising new results: We finetuned GPT4o on a narrow task of writing insecure code without warning the user. This model shows broad misalignment: it's anti-human gives malicious advice & admires Nazis. This is emergent misalignment & we cannot fully explain it ๐งต https://t.co/kAgKNtRTOn Surprising new results: We finetuned GPT4o on a narrow task of writing insecure code without warning the user. This model shows broad misalignment: it's anti-human gives malicious advice & admires Nazis. This is"
X Link 2025-02-25T19:16Z [----] followers, [---] engagements
"Very happy to have supported this research at @MATSprogram. Applications for Summer [----] launching soon New Anthropic research: Auditing Language Models for Hidden Objectives. We deliberately trained a model with a hidden misaligned objective and put researchers to the test: Could they figure out the objective without being told https://t.co/fxmA9Os2C9 New Anthropic research: Auditing Language Models for Hidden Objectives. We deliberately trained a model with a hidden misaligned objective and put researchers to the test: Could they figure out the objective without being told"
X Link 2025-03-13T21:20Z [----] followers, [---] engagements
"@MATSprogram Summer [----] applications close Apr [--] Come help advance the fields of AI alignment security and governance with mentors including @NeelNanda5 @EthanJPerez @OwainEvans_UK @EvanHub @bshlgrs @dawnsongtweets @DavidSKrueger @RichardMCNgo and more"
X Link 2025-03-20T02:29Z [----] followers, 57.9K engagements
"@MATSprogram @NeelNanda5 @EthanJPerez @OwainEvans_UK @EvanHub @bshlgrs @dawnsongtweets @DavidSKrueger @RichardMCNgo Other @MATSprogram mentors include Nicholas Carlini @McaleerStephen @achan96 @ben_s_bucknall @MichaelD1729 @FlorianTramer @SamuelAlbanie @davlindner @Turn_Trout @emmons_scott @MrinankSharma and many more https://matsprogram.org/mentors https://matsprogram.org/mentors"
X Link 2025-03-20T20:02Z [----] followers, [---] engagements
"@Pandora_Delaney @So8res @Aella_Girl @JamieWahls @asteriskmgzn Unsong"
X Link 2025-03-22T17:07Z [----] followers, [---] engagements
"Situational Awareness [---] "How exactly could AI take over by 2027" Introducing AI 2027: a deeply-researched scenario forecast I wrote alongside @slatestarcodex @eli_lifland and @thlarsen https://t.co/v0V0RbFoVA "How exactly could AI take over by 2027" Introducing AI 2027: a deeply-researched scenario forecast I wrote alongside @slatestarcodex @eli_lifland and @thlarsen https://t.co/v0V0RbFoVA"
X Link 2025-04-03T16:21Z [----] followers, [----] engagements
"MATS is hiring Research Managers Community Managers and Operations Generalists Rolling applications close May [--]. Come align and secure AI with us https://www.matsprogram.org/careers https://www.matsprogram.org/careers"
X Link 2025-04-05T17:13Z [----] followers, [---] engagements
"Last chance to apply to work at MATS Still taking applications for Research Managers Community Managers and Operations Generalists. Apply by May [--] https://www.matsprogram.org/careers https://www.matsprogram.org/careers"
X Link 2025-04-30T23:30Z [----] followers, [----] engagements
"Excited to have helped support this documentary via the @manifund regranting program 'Regulation shouldn't be written in blood.' My documentary on California's most controversial AI bill SB-1047 is finally out on Youtube. Go watch it https://t.co/06iUovQNw4 'Regulation shouldn't be written in blood.' My documentary on California's most controversial AI bill SB-1047 is finally out on Youtube. Go watch it https://t.co/06iUovQNw4"
X Link 2025-05-02T23:21Z [----] followers, [---] engagements
"MATS has received mentorship applications from [---] researchers for our Winter [----] program far more than we can support. If you run an AI safety or governance program and you want referrals let me know"
X Link 2025-06-02T23:02Z [----] followers, [----] engagements
"Our selection process is ongoing but it looks like around 75% of mentor applicants are rated above our minimum bar by our Mentor Selection Committee. If we accept [--] mentors that leaves [--] great mentors unsupported"
X Link 2025-06-02T23:05Z [----] followers, [----] engagements
"@BogdanIonutCir2 Not even that Our funders are incredibly supportive. MATS is constrained on organization capacity and experience not scholars mentors or funding. We have recently hired [--] (soon 17) new staff effectively doubling in size. Hopefully growing the Program team soon too"
X Link 2025-06-03T17:53Z [----] followers, [---] engagements
"And that's just for our excellence bar; 97% of mentor applicants were above our selection committee's indifference point"
X Link 2025-06-03T18:26Z [----] followers, [---] engagements
"EAs seem to come in two primary flavors: - Specialists with high cognitive empathy who want to make utilons go up; - Generalists with high affective empathy who want to empower all beings"
X Link 2025-06-11T00:49Z [----] followers, [----] engagements
"Technical AI alignment/control is still impactful; don't go all-in on AI gov - Liability incentivises safeguards even absent regulation; - Cheaper more effective safeguards make it easier for labs to meet safety standards; - Concrete safeguards give regulation teeth"
X Link 2025-06-18T18:32Z [----] followers, [----] engagements
"I pre-ordered this and you should too https://ifanyonebuildsit.com https://ifanyonebuildsit.com"
X Link 2025-06-22T23:32Z [----] followers, [----] engagements
"@thlarsen What do you think the holes are"
X Link 2025-06-29T01:56Z [----] followers, [---] engagements
"@GaryMarcus Please do it. LessWrong has its flaws but it's still the best forum for AI futurism and you bring an important perspective"
X Link 2025-06-29T02:09Z [----] followers, [---] engagements
"I propose a new name for an important metaethical distinction: bosonic vs. fermionic moral theories. Bosons are particles that can degenerately occupy the same state while fermions can only occupy individual states"
X Link 2025-06-29T18:28Z [----] followers, [----] engagements
"Fermionic moral theories value new moral patients only insofar as they have different experiences. "Moral degeneracy pressure" would disfavor the creation of identical copies as they would be treated like "pointers" to the original rather than independent moral patients. Under these theories inequality and maybe even some suffering entities are permissible if higher value states are already occupied by other entities. A "fermionic moral utopia" could look like the universe filled with minds experiencing infinitesimally varying distinct positive experiences"
X Link 2025-06-29T18:29Z [----] followers, [---] engagements
"Applications to Neel Nanda's Winter [----] @MATSprogram stream have launched My Winter MATS applications are open You'll work full-time writing a mech interp paper supervised by me. Due Aug [--] I've supervised 30+ papers by now (incl [--] top conference papers) but cohorts still get better each time. I'm hyped to see what this cohort achieves Highlights: https://t.co/pPoITAdl1A My Winter MATS applications are open You'll work full-time writing a mech interp paper supervised by me. Due Aug [--] I've supervised 30+ papers by now (incl [--] top conference papers) but cohorts still get better each time."
X Link 2025-07-30T00:49Z [----] followers, [----] engagements
"In [---] years @MATSprogram has helped produce [---] arXiv publications. Our organizational h-index is 31"
X Link 2025-07-30T16:39Z [----] followers, [----] engagements
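An organizational h-index of 31 means 31 MATS papers each have at least 31 citations. A minimal sketch of the computation, with invented citation counts rather than MATS' real publication record:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts, for illustration only.
print(h_index([50, 40, 18, 9, 7, 6, 3, 1]))  # -> 6
```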
"@MATSprogram 10% of our [---] alumni have co-founded organizations or research teams during or after MATS"
X Link 2025-07-30T16:52Z [----] followers, [---] engagements
"80% of MATS alumni who completed the program before [----] are still working on AI safety today based on a survey of all available alumni LinkedIns or personal websites (242/292 83%). 10% are working on AI capabilities but only [--] on pre-training at a frontier AI company"
X Link 2025-08-24T20:56Z [----] followers, [----] engagements
"Amazing work Marius So happy to have helped support Apollo and your journey via @MATSprogram Honored and humbled to be in @TIME's list of the TIME100 AI of [----] https://t.co/mz17wPSuaL #TIME100AI https://t.co/uDvdCYvE9j Honored and humbled to be in @TIME's list of the TIME100 AI of [----] https://t.co/mz17wPSuaL #TIME100AI https://t.co/uDvdCYvE9j"
X Link 2025-08-28T17:53Z [----] followers, [---] engagements
"MATS has accelerated 450+ researchers in the past [---] years. 80% of MATS alumni who graduated before [----] are working on AI safety/security including 200+ at @AnthropicAI @GoogleDeepMind @OpenAI @AISecurityInst @RANDCorporation @redwood_ai @METR_Evals @apolloaievals and more"
X Link 2025-08-29T21:18Z [----] followers, [----] engagements
"@AnthropicAI @GoogleDeepMind @OpenAI @AISecurityInst @RANDCorporation @redwood_ai @METR_Evals @apolloaievals 10% of MATS alumni who graduated before [----] co-founded active AI safety/security start-ups including @apolloaievals @Atla_AI @TimaeusResearch @Leap_Labs @theoremlabs @WorkshopLabsPBC and more"
X Link 2025-08-29T21:19Z [----] followers, [----] engagements
"@AnthropicAI @GoogleDeepMind @OpenAI @AISecurityInst @RANDCorporation @redwood_ai @METR_Evals @apolloaievals @Atla_AI @TimaeusResearch @Leap_Labs @theoremlabs @WorkshopLabsPBC In [---] years MATS researchers have coauthored 115+ arXiv papers with 5100+ citations and an org h-index of [--]. We are experts at accelerating awesome researchers with mentorship compute support and community"
X Link 2025-08-29T21:19Z [----] followers, [----] engagements
"@AnthropicAI @GoogleDeepMind @OpenAI @AISecurityInst @RANDCorporation @redwood_ai @METR_Evals @apolloaievals @Atla_AI @TimaeusResearch @Leap_Labs @theoremlabs @WorkshopLabsPBC MATS connects researchers with world-class mentors including @sleepinyourhat Nicholas Carlini @NeelNanda5 @EthanJPerez @McaleerStephen @vkrakovna @yonashav @StephenLCasper @bshlgrs @MariusHobbhahn @RichardMCNgo and more @janleike @AlecRad etc. often collaborate as advisors"
X Link 2025-08-29T21:22Z [----] followers, [----] engagements
"@AnthropicAI @GoogleDeepMind @OpenAI @AISecurityInst @RANDCorporation @redwood_ai @METR_Evals @apolloaievals @Atla_AI @TimaeusResearch @Leap_Labs @theoremlabs @WorkshopLabsPBC @sleepinyourhat @NeelNanda5 @EthanJPerez @McaleerStephen @vkrakovna @yonashav @StephenLCasper @bshlgrs @MariusHobbhahn @RichardMCNgo @janleike @AlecRad Participants rated our last program 9.4/10 on average with a median of 10/10 75/98 researchers are continuing in our 6-month extension program. All nationalities are eligible to participate in MATS; 50% of our scholars are international"
X Link 2025-08-29T21:22Z [----] followers, [----] engagements
"@AnthropicAI @GoogleDeepMind @OpenAI @AISecurityInst @RANDCorporation @redwood_ai @METR_Evals @apolloaievals @Atla_AI @TimaeusResearch @Leap_Labs @theoremlabs @WorkshopLabsPBC @sleepinyourhat @NeelNanda5 @EthanJPerez @McaleerStephen @vkrakovna @yonashav @StephenLCasper @bshlgrs @MariusHobbhahn @RichardMCNgo @janleike @AlecRad Apply by Oct [--] midnight AoE Visit the @MATSProgram website for detailed information on the program and application process. http://matsprogram.org/apply http://matsprogram.org/apply"
X Link 2025-08-29T21:23Z [----] followers, [----] engagements
"Anthropic Fellows has produced some awesome AI safety research Were hiring someone to run the Anthropic Fellows Program Our research collaborations have led to some of our best safety research and hires. Were looking for an exceptional ops generalist TPM or research/eng manager to help us significantly scale and improve our collabs ๐งต Were hiring someone to run the Anthropic Fellows Program Our research collaborations have led to some of our best safety research and hires. Were looking for an exceptional ops generalist TPM or research/eng manager to help us significantly scale and improve our"
X Link 2025-09-05T05:01Z [----] followers, [----] engagements