# ![@Manderljung Avatar](https://lunarcrush.com/gi/w:26/cr:twitter::381501277.png) @Manderljung Markus Anderljung

Markus Anderljung posts on X most about ai, open ai, and uae. They currently have [-----] followers and [---] posts still getting attention, totaling [-----] engagements in the last [--] hours.

### Engagements: [-----] [#](/creator/twitter::381501277/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::381501277/c:line/m:interactions.svg)

- [--] Week [-----] -75%
- [--] Month [------] +866%
- [--] Months [------] +141%
- [--] Year [------] -30%

### Mentions: [--] [#](/creator/twitter::381501277/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::381501277/c:line/m:posts_active.svg)


### Followers: [-----] [#](/creator/twitter::381501277/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::381501277/c:line/m:followers.svg)

- [--] Week [-----] +0.67%
- [--] Month [-----] +2.70%
- [--] Months [-----] +8.80%
- [--] Year [-----] +20%

### CreatorRank: [---------] [#](/creator/twitter::381501277/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::381501277/c:line/m:influencer_rank.svg)

### Social Influence

**Social category influence**
[technology brands](/list/technology-brands)  [finance](/list/finance)  [countries](/list/countries)  [stocks](/list/stocks)  [travel destinations](/list/travel-destinations) 

**Social topic influence**
[ai](/topic/ai), [open ai](/topic/open-ai), [uae](/topic/uae), [in the](/topic/in-the), [build](/topic/build), [common](/topic/common), [the world](/topic/the-world), [flow](/topic/flow), [major](/topic/major), [request](/topic/request)

**Top assets mentioned**
[Frontier (FRONT)](/topic/frontier) [Microsoft Corp. (MSFT)](/topic/microsoft) [Validity (VAL)](/topic/validity) [Alphabet Inc Class A (GOOGL)](/topic/google)
### Top Social Posts
Top posts by engagements in the last [--] hours

"@dw2 @GovAI_ @bmgarfinkel @ohlennart @RobertTrager @imbenclifford @ea_seger I don't think there's a lot of info on it. There was some little detail here and maybe in some other document released by DSIT but I can't recall https://www.gov.uk/government/publications/uk-ai-safety-institute-third-progress-report"  
[X Link](https://x.com/Manderljung/status/1773394920608252405)  2024-03-28T17:01Z [----] followers, [--] engagements


"Increasingly advanced AI systems will diffuse into society. How do we manage the accompanying risks? In a new paper we explore Societal Adaptation to Advanced AI: reducing harm from the diffusion of AI capabilities by intervening to avoid, defend against, and remedy harmful use"  
[X Link](https://x.com/Manderljung/status/1793188017676460433)  2024-05-22T07:51Z [----] followers, 15.4K engagements


"@sethlazar Would love to hear any reactions or takes"  
[X Link](https://x.com/Manderljung/status/1793194518243688550)  2024-05-22T08:17Z [----] followers, [---] engagements


"More detailed summary in Lujain's thread and lots more content in the paper"  
[X Link](https://x.com/Manderljung/status/1793650779980763281)  2024-05-23T14:30Z [----] followers, [---] engagements


"I worked with Lujain on this during her recent GovAI fellowship. Excited about what she'll get up to next"  
[X Link](https://x.com/Manderljung/status/1793653458106712134)  2024-05-23T14:41Z [----] followers, [---] engagements


"AI labs test models for dangerous bio capabilities but what do those results actually mean for real-world risk New paper from @lucafrighetti bridges capability evals and risk assessment"  
[X Link](https://x.com/Manderljung/status/2000986714609746200)  2025-12-16T17:49Z [----] followers, [---] engagements


"That translates to [-----] additional expected deaths per year or $100B in damages. Scenarios where AI also helps discover novel pathogens push risk higher. But mitigations can significantly reduce these numbers"  
[X Link](https://x.com/Manderljung/status/2000986721849368836)  2025-12-16T17:49Z [----] followers, [--] engagements


"The estimates were reviewed by [--] subject-matter experts and [--] superforecasters. Their medians were similar though all forecasts had high uncertainty. This reflects genuine disagreement in the field not just modelling limitations"  
[X Link](https://x.com/Manderljung/status/2000986723472605411)  2025-12-16T17:49Z [----] followers, [--] engagements


"The paper builds a total cost of ownership model for a hypothetical [---] MW data center comparing the US and UAE across [--] cost categories. Key finding: annualised costs are within 1% of each other basically indistinguishable given uncertainty"  
[X Link](https://x.com/Manderljung/status/2008607690306539548)  2026-01-06T18:32Z [----] followers, [--] engagements


"Where the UAE wins: - Permitting is 2x faster estimated [--] months vs [--] months - Facility construction is cheaper largely due to lower labour costs ($6/hr vs $76/hr for construction wages)"  
[X Link](https://x.com/Manderljung/status/2008607695234822155)  2026-01-06T18:32Z [----] followers, [--] engagements


"So if costs are similar why are hyperscalers so interested in the UAE Possibilities: - Subsidies (e.g. electricity at 5/kWh vs [---] market rate) - Expectation of future cost decreases as UAE builds out renewables - Getting investments from the UAE - Hedging against US delays"  
[X Link](https://x.com/Manderljung/status/2008607697160048731)  2026-01-06T18:32Z [----] followers, [--] engagements


"Nvidia's share of H100-equivalents sold shifted from 90% in Jan [----] to 70% in Sep [----]. Google's TPUs and Amazon's Trainium are the main competitor chips catching up. From new dataset released by @EpochAIResearch"  
[X Link](https://x.com/Manderljung/status/2009560457091358855)  2026-01-09T09:38Z [----] followers, [---] engagements


"Dataset here: https://epoch.ai/data/ai-chip-sales"  
[X Link](https://x.com/Manderljung/status/2009560459649880125)  2026-01-09T09:38Z [----] followers, [---] engagements


"Looks like H200s destined for China need to do a stopover in the US first so that the US can impose a 25% import tariff. Imposing it on the export wouldn't have been legal. Neat trick buried in the H200 rule: every shipment needs "third-party testing" in the US before export. Why? Remember the USG wants a 25% cut. Can't write that into export controls. But you can route all chips through the US first, apply a 25% tariff, then ship to China"  
[X Link](https://x.com/Manderljung/status/2011502201571582106)  2026-01-14T18:14Z [----] followers, [---] engagements


"Frontier AI companies need audits. But what should such audits involve New paper from @Miles_Brundage @NoemiDreksler @adnhw and lots of other folks from e.g. @AVERIorg @METR_Evals @GovAIOrg @apolloaievals @TransluceAI"  
[X Link](https://x.com/Manderljung/status/2013251683711586566)  2026-01-19T14:06Z [----] followers, [----] engagements


"A key contribution: AI Assurance Levels (AAL-1 to AAL-4). These clarify what different audits can actually tell you from time-bounded system assessments (AAL-1) to continuous deception-resilient verification (AAL-4)"  
[X Link](https://x.com/Manderljung/status/2013251688400851301)  2026-01-19T14:06Z [----] followers, [--] engagements


"AAL-1: Few-week assessment API access limited non-public info. Has been done. AAL-2: Months-long deeper access staff interviews governance review. Achievable. AAL-3: Ongoing deep access continuous monitoring. Requires R&D. AAL-4: Treaty-grade verification. Still speculative"  
[X Link](https://x.com/Manderljung/status/2013251689944432645)  2026-01-19T14:06Z [----] followers, [--] engagements


"Very excited about this. There's plenty of work to do preparing for a world of advanced AI. REQUEST FOR PROPOSALS: What do we need to build to prepare the world for advanced AI? $25 billion is about to flow into AI resilience and AI-for-science from the OpenAI Foundation, the Chan Zuckerberg Initiative, and other major funders. But there's no shovel-ready list of the https://t.co/2Bojx8OtSz"  
[X Link](https://x.com/anyuser/status/2014824325388632265)  2026-01-23T22:15Z [----] followers, [----] engagements


"Why this matters for AI governance: This is a template for how to think about dangerous capability evals. "The model can do X" isn't enough we need frameworks to understand what that means for actual risk levels and policy responses"  
[X Link](https://x.com/Manderljung/status/2000986726425063830)  2025-12-16T17:49Z [----] followers, [--] engagements


"I'm excited about more work trying to bridge this gap from GovAI as well as other places. We need to do a lot more work to ground our risk assessments. Now is a time for empirics"  
[X Link](https://x.com/Manderljung/status/2000986727993753640)  2025-12-16T17:49Z [----] followers, [--] engagements


"Is it cheaper to build AI models in the UAE than in the US? The answer seems to be... no. Total costs are roughly comparable to the US. So why are Microsoft, AWS, and OpenAI investing billions there? New GovAI technical report from @amelia__michael"  
[X Link](https://x.com/Manderljung/status/2008607687639175422)  2026-01-06T18:32Z [----] followers, [---] engagements


"We've been growing GovAI's research team by 50% per year for a few years now. We're keen to keep that up. Apply to join as a Research Scholar or Research Fellow to help us succeed :) We're hiring Research Scholars and Research Fellows at GovAI. GovAI's mission: help humanity navigate the transition to advanced AI through rigorous research and support for decision-makers across government, industry, and civil society. Now open: Research Scholar (12-month https://t.co/tS9DwhD1Xl"  
[X Link](https://x.com/anyuser/status/2020156823660007897)  2026-02-07T15:24Z [----] followers, 10.9K engagements


"Their key point: the "lone wolf virus terrorist" is the default threat model for CBRN evaluations. But chemical weapons improvised explosives and non-transmissible biological agents all involve distinct technical pathways and need distinct safety tests + safeguards"  
[X Link](https://x.com/Manderljung/status/2022283799342534771)  2026-02-13T12:16Z [----] followers, [--] engagements


"Why the gap persists: frontier labs lack the classified intelligence to properly assess state-level and terrorist group threats. Governments lack understanding of AI capabilities and proprietary data. Neither side can see the full picture alone"  
[X Link](https://x.com/Manderljung/status/2022283801003471357)  2026-02-13T12:16Z [----] followers, [--] engagements


"I also think these other threats deserve attention because they're much more common attack vectors. Even if pandemic attacks are most concerning the other attacks are also worth attention and provide opportunities to update our threat models"  
[X Link](https://x.com/anyuser/status/2022283803834605745)  2026-02-13T12:16Z [----] followers, [--] engagements


"Important new piece by @RebeccaHersman (new colleague at @GovAIOrg) and @cassidyknelson (@LongResilience) on a blind spot in AI safety testing: there needs to be more focus on threat models beyond lone wolf pandemic terrorism"  
[X Link](https://x.com/anyuser/status/2022308384876204344)  2026-02-13T13:54Z [----] followers, [---] engagements


"Their key point: the "lone wolf virus terrorist" is the default threat model for CBRN evaluations. But chemical weapons improvised explosives and non-transmissible biological agents all involve distinct technical pathways and need distinct safety tests + safeguards"  
[X Link](https://x.com/Manderljung/status/2022308386478432438)  2026-02-13T13:54Z [----] followers, [--] engagements


"Why the gap persists: frontier labs lack the classified intelligence to properly assess state-level and terrorist group threats. Governments lack understanding of AI capabilities and proprietary data. Neither side can see the full picture alone"  
[X Link](https://x.com/Manderljung/status/2022308388051276182)  2026-02-13T13:54Z [----] followers, [--] engagements


"@jachiam0 @polynoamial @GoogleDeepMind How it works in the EU AI Act + code of practice: models above a certain compute threshold are in scope and then you need to update your model report if something changes that sufficiently undermines your arguments that the model was acceptably risky incl increasing inference"  
[X Link](https://x.com/Manderljung/status/2022961894122819910)  2026-02-15T09:11Z [----] followers, [--] engagements


"@OpenAI This is pretty damn awesome I'm also impressed with @OpenAI using this as an opportunity to talk about the potential malicious use of their work"  
[X Link](https://x.com/Manderljung/status/1096349498362789888)  2019-02-15T10:04Z [----] followers, [--] engagements


"Watching the #Openaifive live stream at http://twitch.com/openai"  
[X Link](https://x.com/Manderljung/status/1117143878241533952)  2019-04-13T19:13Z [----] followers, [--] engagements


"Very excited about all the interesting projects the @GovAI_ Winter Fellows got up to. Projects ranged from external scrutiny of frontier models Chinese AI development market concentration implications of foundation models public input into AI and more"  
[X Link](https://x.com/Manderljung/status/1663700472124026880)  2023-05-31T00:14Z [----] followers, [----] engagements


"As the capabilities of AI models increase new regulation will be needed. In a new white paper with co-authors from across academia think tanks and AI labs we describe why regulation of frontier AI is needed what it could look like and minimal frontier AI safety standards"  
[X Link](https://x.com/Manderljung/status/1678414590529490947)  2023-07-19T05:44Z [----] followers, 25.5K engagements


"These problems interact. They mean there's a chance models with dangerous capabilities could be widely available with inadequate safeguards, because model capabilities may diffuse broadly before concerning capabilities are detected"  
[X Link](https://x.com/Manderljung/status/1678414608321638402)  2023-07-10T14:45Z [----] followers, [---] engagements


"Effective frontier AI regulation would therefore require that developers put substantial effort into understanding the risks their systems might pose before they are deployed in particular by evaluating whether they have dangerous capabilities or are insufficiently controllable"  
[X Link](https://x.com/Manderljung/status/1678414611987456000)  2023-07-10T14:45Z [----] followers, [---] engagements


"In a lot of cases I think it will make sense to start by deploying new powerful models whose capabilities we don't fully understand via APIs first. That way they can be updated if unexpected harms are occurring similar to how cars with defects can be recalled"  
[X Link](https://x.com/Manderljung/status/1678868490202267648)  2023-07-11T20:53Z [----] followers, [---] engagements


"Once the model is better understood doesn't appear to be causing enough harm and it's sufficiently behind the frontier I expect it'll make sense to open source or otherwise make more widely available"  
[X Link](https://x.com/Manderljung/status/1678868491854831617)  2023-07-11T20:53Z [----] followers, [---] engagements


"We can't trust AI companies to judge the safety or risks imposed by their systems themselves. We'll need lots of external auditors red teamers and government regulators with sufficient access to these models to assess their risks"  
[X Link](https://x.com/Manderljung/status/1678868494585331715)  2023-07-12T00:04Z [----] followers, [---] engagements


"This seems like a great development 3/5 of Anthropic's board seats to be controlled by a trust with no direct financial stake in the company. Glad to have more public information on the company's governance something that's been really lacking in the past"  
[X Link](https://x.com/Manderljung/status/1681000621384810498)  2023-07-17T20:58Z [----] followers, [----] engagements


"Basically new class T shares in Anthropic are to be set up and enough shares to control 3/5 board seats will sit with a trust. These shares cannot be sold and cannot pay dividends removing any direct financial incentives in the company's performance"  
[X Link](https://x.com/Manderljung/status/1681000623456780288)  2023-07-17T20:58Z [----] followers, [---] engagements


"As frontier AI systems become more capable of causing harm regulation will be called for. But that won't be enough. Society should also prepare for capable models seeing widespread access. @paul_scharre and I in @ForeignAffairs"  
[X Link](https://x.com/Manderljung/status/1691153005956173824)  2023-08-14T18:21Z [----] followers, [----] engagements


"Long-awaited White House executive order is out Includes action on: - AI Safety and Security - Privacy - Equity and Civil Rights - Consumer rights - Supporting Workers - Innovation and competition - Int'l leadership - Gov't use of AI"  
[X Link](https://x.com/Manderljung/status/1718968883359400030)  2023-10-30T12:31Z [----] followers, 11.6K engagements


"The first day of the AI Safety Summit felt like a turning point in the world's response to the risks from increasingly advanced AI systems"  
[X Link](https://x.com/Manderljung/status/1719834935857455216)  2023-11-01T21:52Z [----] followers, [----] engagements


"Two underappreciated outputs from last week's UK AI Safety Summit: - A [--] page govt report on responsible frontier AI practice. Most comprehensive + useful doc of its kind imo. - Govt asked [--] leading AI companies about their AI safety policies"  
[X Link](https://x.com/Manderljung/status/1721961603543740718)  2023-11-07T18:43Z [----] followers, [----] engagements


"Tort liability is an important tool to address frontier AI risks but four problems must be overcome according to @legalpriority's Mackenzie Arnold in recent testimony at the Senate AI Insight Forum. Problem 1: Existing law will under-deter malicious and criminal misuse of AI"  
[X Link](https://x.com/Manderljung/status/1723681284650578244)  2023-11-12T12:36Z [----] followers, [----] engagements


"As the impacts of frontier AI models increase decisions about their development and deployment can't all be left in the hands of AI companies. In a new paper we describe how such decisions could be more publicly accountable via external scrutiny"  
[X Link](https://x.com/Manderljung/status/1729465001448968236)  2023-11-28T11:39Z [----] followers, 10.1K engagements


"However if those same capabilities are measured using continuous measures such as how close the model gets to the right answer or the probability it assigns to the right answer things look more smooth. Great result which makes a lot of sense"  
[X Link](https://x.com/Manderljung/status/1734651625141064128)  2023-12-12T19:09Z [----] followers, [---] engagements


"From a policy perspective a big issue wrt frontier AI systems is that predicting their behaviour and capabilities is very hard. Though the paper suggests such predictions are easier than the emergent capabilities literature might lead you to believe the problem remains. Why"  
[X Link](https://x.com/Manderljung/status/1734651627225682026)  2023-12-12T19:09Z [----] followers, [---] engagements


"The launch of ChatGPT in Nov [----] caught the world off-guard. Before new, more capable AI systems hit the market next year, governments should prepare. The most capable AI systems, frontier AI systems, warrant targeted regulation, argue @akorinek and I in a new @lawfare piece"  
[X Link](https://x.com/anyuser/status/1743048210753679705)  2024-01-04T23:14Z [--] followers, [----] engagements


"However, it won't be enough. If a company informed the US government it was planning on releasing a model that would undermine national security, there are currently no well-designed powers for the government to intervene"  
[X Link](https://x.com/anyuser/status/1743048222346801555)  2024-01-04T23:14Z [--] followers, [--] engagements


"Further there are many things developers of frontier AI systems can do to reduce risk not least conducting thorough risk assessments and having that inform how and whether they deploy it. More in picture"  
[X Link](https://x.com/anyuser/status/1743048224083165510)  2024-01-04T23:14Z [--] followers, [--] engagements


"To make progress government should: - Create and update standards for responsible frontier AI development and deployment. - Give regulators visibility into frontier AI developments. - Work to ensure compliance with safety standards using both regulation and liability"  
[X Link](https://x.com/anyuser/status/1743048226490687770)  2024-01-04T23:14Z [--] followers, [--] engagements


"The amount of computational power used to train a system e.g. [----] operations is a useful starting point to identify systems where intervention is warranted as it is one of our better proxies of model capabilities. However such thresholds will need to be refined over time"  
[X Link](https://x.com/Manderljung/status/1743048228852121857)  2024-01-04T23:14Z [----] followers, [--] engagements


"The piece summarizes much of what's covered in the Frontier AI Regulation paper from last summer but includes things we've changed our minds on and is a bit more opinionated. That paper is here: Also summarized here: As the capabilities of AI models increase new regulation will be needed. In a new white paper with co-authors from across academia, think tanks, and AI labs we describe why regulation of frontier AI is needed, what it could look like, and minimal frontier AI safety standards. https://t.co/6pOYARELfR"  
[X Link](https://x.com/Manderljung/status/1743048231049965579)  2024-01-04T23:14Z [----] followers, [--] engagements


"What do results from model evaluations and red teaming say about real-world model behavior? It's hard to say. To test the external validity of model evals and our understanding of AI systems' behaviour, @_achan96_ suggests we assess how well actors can predict the results. What should we be evaluating besides model capabilities? What about: how well we can predict what models will do in the real world? I explore this question in a new @GovAI_ blog post 🧵 https://t.co/8L1QRLTHR7"  
[X Link](https://x.com/Manderljung/status/1778457865725911146)  2024-04-11T16:19Z [----] followers, [---] engagements


"Something you might have missed in Google's flurry of announcements: They've just introduced watermarking to Gemini text outputs. I hope other developers follow suit. Curious if folks have takes on how robust their watermarking technique is. https://deepmind.google/discover/blog/watermarking-ai-generated-text-and-video-with-synthid/"  
[X Link](https://x.com/Manderljung/status/1790467922642784398)  2024-05-14T19:43Z [----] followers, [----] engagements


"In brief the way it works: For a huge list of words you slightly adjust the probability that the model outputs it. With enough words this works as a high-fidelity signal. Probably won't be robust to thorough paraphrasing and perhaps not even running through other LLMs"  
[X Link](https://x.com/Manderljung/status/1790474331341861122)  2024-05-14T20:08Z [----] followers, [---] engagements
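The mechanism described in the post (slightly boosting the probability of words on a pseudorandomly chosen "green list", then checking how often generated text lands on that list) can be sketched roughly as below. This is an illustrative toy, not Google's actual SynthID implementation; all function names and parameters are assumptions:

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically partition the vocabulary based on the previous token."""
    # Seed a PRNG from the previous token so both the generator and the
    # detector can reconstruct the exact same green list later.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def bias_logits(logits: dict[str, float], greens: set[str], delta: float = 2.0) -> dict[str, float]:
    """Slightly raise the score of green-list words at generation time."""
    return {w: (score + delta if w in greens else score) for w, score in logits.items()}

def detect(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    """Fraction of tokens that fall in their predecessor's green list.

    Unwatermarked text scores near `fraction` on average; watermarked
    text scores noticeably higher, which is the detection signal.
    """
    hits = sum(
        tokens[i] in green_list(tokens[i - 1], vocab, fraction)
        for i in range(1, len(tokens))
    )
    return hits / max(len(tokens) - 1, 1)
```

With a large enough vocabulary and enough generated tokens, the per-word nudge aggregates into a high-fidelity statistical signal, which also matches the post's caveat: paraphrasing replaces tokens and erodes the green-list hit rate.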


"16 leading AI organizations making a commitment to specify: - thresholds at which their frontier AI systems would impose intolerable risk - how they'd know if they exceed those thresholds - how they'd stay below those thresholds. Really glad to see this. In an historic first, tech companies from across the globe have committed to developing AI safely. From @OpenAI to @Meta, [--] companies have signed up to the fresh Frontier AI Safety Commitments' 👉 https://t.co/KqcBbvKLSu #AISeoulSummit https://t.co/cqmwryq494"  
[X Link](https://x.com/Manderljung/status/1792907916158407096)  2024-05-21T13:18Z [----] followers, [----] engagements


"Companies from the US China UAE EU Japan South Korea have signed. Notables include @MistralAI @xai @cohere G42 http://Zhipu.ai"  
[X Link](https://x.com/Manderljung/status/1792907920189108299)  2024-05-21T13:18Z [----] followers, [---] engagements


"Why adaptation? Firstly, capability-modifying interventions are often a blunt tool, affecting beneficial as well as harmful uses. (@mealreplacer and I previously wrote about that misuse-use tradeoff here: https://arxiv.org/abs/2303.09377)"  
[X Link](https://x.com/Manderljung/status/1793188020591440080)  2024-05-22T07:51Z [----] followers, [---] engagements


"Further [--] models larger than GPT-3 have been deployed since GPT-3s release. As development costs fall risk management focused solely on controlling capabilities and diffusion becomes infeasible. (Fig: @EpochAIResearch)"  
[X Link](https://x.com/Manderljung/status/1793188022797676632)  2024-05-22T07:51Z [----] followers, [---] engagements


"There are funding opportunities for increasing AI adaptation. If you're part of academia, industry, or civil society you can apply to put these ideas into practice with a systemic AI safety grant from the UK @AISafetyInst. 8.5mn available. https://x.com/AISafetyInst/status/1793163082379968955 We are announcing new grants for research into systemic AI safety. Initially backed by up to [---] million, this program will fund researchers to advance the science underpinning AI safety. Read more: https://t.co/QHOLUp3QGR https://t.co/jnAdLJ4eAg"  
[X Link](https://x.com/Manderljung/status/1793188042905202968)  2024-05-22T07:51Z [----] followers, [---] engagements


"@join_ef also just announced a related funding programme to support defensive AI technologies https://x.com/matthewclifford/status/1792585822228623677 🚨🚨Today we're announcing a new programme at @join_ef that I'll run this July: def/acc at EF. We want to fund the most ambitious founders to accelerate the technologies to make the future go well, starting with defensive AI (1/7)"  
[X Link](https://x.com/Manderljung/status/1793188046067663104)  2024-05-22T07:52Z [----] followers, [---] engagements


"While there were plenty of interesting discussions in South Korea, the highlights for me were the documents and commitments agreed on beforehand: - [--] co's committing to define, adhere to, and be transparent about their Frontier AI Safety Frameworks https://x.com/Manderljung/status/1792907916158407096 https://t.co/ZPj2W5yXWj"  
[X Link](https://x.com/Manderljung/status/1793636167638024498)  2024-05-23T13:32Z [----] followers, [----] engagements


"- The Seoul Ministerial Statement which among other things commits the undersigned countries to "identify thresholds at which the level of risk posed by . frontier AI models or systems would be severe absent appropriate mitigations" https://www.gov.uk/government/publications/seoul-ministerial-statement-for-advancing-ai-safety-innovation-and-inclusivity-ai-seoul-summit-2024/seoul-ministerial-statement-for-advancing-ai-safety-innovation-and-inclusivity-ai-seoul-summit-2024"  
[X Link](https://x.com/Manderljung/status/1793636228530852215)  2024-05-23T13:32Z [----] followers, [--] engagements


"A question I often get about frontier AI regulation is "but why can't we solve these problems with tort liability?" The past couple of months @_mvdm @ketanr and myself have been collecting our thoughts on the topic. We argue tort should be a complement to frontier AI reg. "Until robust regulatory requirements are instituted, tort liability might be able to serve as an important stopgap in preventing irresponsible AI development." @_mvdm @ketanr and @Manderljung on the role tort law can play in responsible AI development. https://t.co/wtssvPvWFU"  
[X Link](https://x.com/Manderljung/status/1794345203857596654)  2024-05-25T12:30Z [----] followers, [----] engagements


"How would the end of Chevron deference (which has courts defer more to agencies' interpretations of laws) affect AI governance I found this blog post from @law_ai_ informative. https://law-ai.org/chevron-deference/"  
[X Link](https://x.com/Manderljung/status/1799431865331528023)  2024-06-08T13:22Z [----] followers, [----] engagements


"@sssvmkumar @Leo_Koe_ @tomekkorbak @jonasschuett @GovAI_ @govai Yes the paper opines about the role of risk thresholds in regulation of frontier AI"  
[X Link](https://x.com/Manderljung/status/1805284901240869306)  2024-06-24T17:00Z [----] followers, [---] engagements


"As per the King's Speech the UK gov't will "seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.""  
[X Link](https://x.com/Manderljung/status/1813625338775191603)  2024-07-17T17:22Z [----] followers, [----] engagements


"What about dangerous capabilities? OpenAI followed their Preparedness Plan and judged the system to pose "medium" chembio and persuasion risk. They've committed not to release a model with post-mitigation capabilities that are "high""  
[X Link](https://x.com/Manderljung/status/1834532441194692720)  2024-09-13T09:59Z [----] followers, [---] engagements


"OpenAI has set up an independent board oversight committee focused on the safety and security of their models. Though details about what the independence entails aren't really clear AFAICT"  
[X Link](https://x.com/Manderljung/status/1835954717341437995)  2024-09-17T08:11Z [----] followers, [---] engagements


"@maxwinga @jonasschuett @AnthropicAI @OpenAI @GoogleDeepMind @jide_alaga I'd be keen for that to happen. We might do it once there are a few more out there and some updated ones ahead of the AI Summit in France in Feb [----] (16 companies including all leading ones committed to that in Seoul). But would be keen for others to go for it"  
[X Link](https://x.com/Manderljung/status/1836064743804555316)  2024-09-17T15:28Z [----] followers, [--] engagements


"Much of the progress has come from vastly increasing the amount of compute used to train models, by a factor of [---] million since [----], and figuring out how to leverage that for better performance"  
[X Link](https://x.com/Manderljung/status/1837478449222439133)  2024-09-21T13:06Z [----] followers, [---] engagements


"As systems get better they'll present huge opportunities and challenges. AI technologies can play a key role in reviving the UK's sluggish productivity growth. But we'll also need to adapt to an AI-powered economy"  
[X Link](https://x.com/Manderljung/status/1837478453047677302)  2024-09-21T13:06Z [----] followers, [--] engagements


"Secondly I think we're likely to see a small number of systems developed by a handful of companies be at the core of a huge swath of AI-powered economic activity. That will produce information asymmetries and structural risk that may warrant intervention"  
[X Link](https://x.com/Manderljung/status/1837478458886103509)  2024-09-21T13:06Z [----] followers, [--] engagements


"For many other technologies the right approach is often to wait for their impacts to manifest before introducing new rules. I don't think we have that luxury in AI. By the time a new bill is introduced, let alone passed, a new generation of models will be on the market"  
[X Link](https://x.com/Manderljung/status/1837478460630909254)  2024-09-21T13:06Z [----] followers, [--] engagements


"Who should be in scope? I think the regime should at least initially focus on the most powerful systems, as measured by the amount of compute used to train them, similar to what the EU and US have done. Over time such thresholds will need refining with other metrics"  
[X Link](https://x.com/Manderljung/status/1837478463818563819)  2024-09-21T13:06Z [----] followers, [--] engagements


"This is also the first post for my blog. I'll try to put up more things like this going forward"  
[X Link](https://x.com/Manderljung/status/1837478475663347778)  2024-09-21T13:06Z [----] followers, [---] engagements


"If you're around the Labour conference and wanna chat frontier AI regulation or otherwise, let me know. The UK government is looking to introduce new legal requirements for "the developers of the most powerful AI systems." Why introduce such requirements? What should such a regime look like in practice? I give my takes in a new blog post: "Frontier AI Regulation in the UK." https://t.co/MQo2cEngXQ"  
[X Link](https://x.com/Manderljung/status/1837479793626960091)  2024-09-21T13:11Z [----] followers, [---] engagements


"Kind of crazy that it's AI and its vast demand for energy, not climate change, that's causing a nuclear power renaissance. BREAKING IN NYC: WORLD'S BIGGEST BANKS PLEDGE SUPPORT FOR NUCLEAR. Banks and funds totaling $14 TRILLION in assets have just signed an unprecedented statement in support of nuclear power. They'll be presenting the pledge to support the goal of tripling nuclear THIS MORNING at https://t.co/9xodRsRcYl"  
[X Link](https://x.com/Manderljung/status/1838316695137489373)  2024-09-23T20:37Z [----] followers, [----] engagements


"2024 seems like the year the Nobel prize committees woke up to the importance of AI. Prizes to Geoffrey Hinton as well as Demis Hassabis"  
[X Link](https://x.com/Manderljung/status/1843985353377812493)  2024-10-09T12:02Z [----] followers, [---] engagements


"The UK's AISI got a lot done in its first year: - Testing [--] models, incl. from OpenAI, Google DeepMind, and Anthropic - Hiring excellent AI researchers - Launching an independent int'l review of AI safety - There are now [--] AISI equivalents"  
[X Link](https://x.com/Manderljung/status/1856802665767882831)  2024-11-13T20:53Z [----] followers, [----] engagements


"@anderssandberg @ohlennart Anders, do you think this would work? How hard would it be to build one of these beams?"  
[X Link](https://x.com/Manderljung/status/1880551629499629774)  2025-01-18T09:43Z [----] followers, [---] engagements


"@adnhw @GovAI_ @jonasschuett @NoemiDreksler Excited to have you with us Aidan :)"  
[X Link](https://x.com/Manderljung/status/1919439168536457407)  2025-05-05T17:08Z [----] followers, [--] engagements


"The EU's Code of Practice for General-Purpose AI is out. As one of the co-chairs who drafted the Safety & Security Chapter focused on frontier AI, I'm proud of what we've put together. It's a lean but effective framework for frontier AI companies to comply with the AI Act"  
[X Link](https://x.com/Manderljung/status/1943271870880522259)  2025-07-10T11:31Z [----] followers, [----] engagements


"Unusually, the Code was drafted by a group of [--] independent experts, myself included, with input from 1000+ stakeholders such as companies, experts, EU member state representatives, parliamentarians, and civil society"  
[X Link](https://x.com/Manderljung/status/1943271878417596665)  2025-07-10T11:31Z [----] followers, [---] engagements


"So what does the Safety & Security chapter do? Companies must have a Framework where they pre-define when risk is acceptable and keep risk to acceptable levels via risk assessment and mitigation. Then they evidence their adherence to their Framework in a Model Report"  
[X Link](https://x.com/Manderljung/status/1943271881127399616)  2025-07-10T11:31Z [----] followers, [---] engagements


"This accords with current best practice like responsible scaling policies, preparedness frameworks, and frontier AI frameworks. You can find current such documents here: https://metr.org/faisc"  
[X Link](https://x.com/Manderljung/status/1943271883140411637)  2025-07-10T11:31Z [----] followers, [---] engagements


"Ahead of important decisions such as releasing a new model on the market or updating a model such that it could pose unacceptable risk they need to go through a full risk assessment"  
[X Link](https://x.com/Manderljung/status/1943271885162033176)  2025-07-10T11:31Z [----] followers, [---] engagements


"Next they need to continuously assess the risks from their models including by tracking and reporting serious incidents stemming from their model e.g. where the model was involved in significant loss of life or property"  
[X Link](https://x.com/Manderljung/status/1943271898403737677)  2025-07-10T11:31Z [----] followers, [---] engagements


"The chapter also describes how companies should allocate responsibility for systemic risk assessment and mitigation throughout their organization and maintain a healthy risk culture such as by not retaliating against employees raising concerns"  
[X Link](https://x.com/Manderljung/status/1943271900567752812)  2025-07-10T11:31Z [----] followers, [---] engagements


"@ShakeelHashim Yep they need to use "early search efforts (such as through a public call open for [--] business days) and promptly notifying identified evaluators.""  
[X Link](https://x.com/Manderljung/status/1943333210227679264)  2025-07-10T15:35Z [----] followers, [--] engagements


"Also useful to note: while the GPAI obligations come into force August 2nd, the AI Office is only able to start e.g. issuing fines August 2nd 2026"  
[X Link](https://x.com/Manderljung/status/1943978042021363835)  2025-07-12T10:17Z [----] followers, [---] engagements


"Google also committing to signing the Code of Practice on GPAI"  
[X Link](https://x.com/Manderljung/status/1950465888605384713)  2025-07-30T07:58Z [----] followers, [----] engagements


"As has Aleph Alpha"  
[X Link](https://x.com/Manderljung/status/1950466134655856665)  2025-07-30T07:58Z [----] followers, [---] engagements


"@marcel_butucea @GovAI_ @jide_alaga @jonasschuett It's true that both models and companies' frameworks to manage risk change over time. The rubric tries to account for that in its criteria and in how it suggests you do the grading. As for companies' frameworks changing over time, you'll want to rerun the grading periodically"  
[X Link](https://x.com/Manderljung/status/1953443831031992756)  2025-08-07T13:11Z [----] followers, [--] engagements


"@JeremyHowick Do you worry your work on placebos will be used to support pseudoscientific claims?"  
[X Link](https://x.com/Manderljung/status/560846502843940864)  2015-01-29T17:06Z [----] followers, [--] engagements


"Important new piece by @RebeccaHersman (new colleague at @GovAIOrg) and @cassidyknelson (@LongResilience) on a blind spot in AI safety testing: there needs to be more focus on threat models beyond lone wolf pandemic terrorism"  
[X Link](https://x.com/anyuser/status/2022308384876204344)  2026-02-13T13:54Z [----] followers, [---] engagements


"I also think these other threats deserve attention because they're much more common attack vectors. Even if pandemic attacks are most concerning the other attacks are also worth attention and provide opportunities to update our threat models"  
[X Link](https://x.com/anyuser/status/2022308391775871324)  2026-02-13T13:54Z [----] followers, [--] engagements


"Piece in @TIME here: https://time.com/7373405/weapons-of-mass-destruction-ai-security-gap"  
[X Link](https://x.com/anyuser/status/2022308393713549397)  2026-02-13T13:54Z [----] followers, [--] engagements


"My view: the focus on pandemic risk makes sense; if I had to choose one risk vector in CBRN, that's the one I'd focus on. But we should be able to deal with multiple risks, not just one"  
[X Link](https://x.com/anyuser/status/2022283802429571092)  2026-02-13T12:16Z [----] followers, [---] engagements


"I also think these other threats deserve attention because they're much more common attack vectors. Even if pandemic attacks are most concerning the other attacks are also worth attention and provide opportunities to update our threat models"  
[X Link](https://x.com/anyuser/status/2022283803834605745)  2026-02-13T12:16Z [----] followers, [--] engagements


"How is AI usage evolving? Summary + new data from @EpochAIResearch. ChatGPT growth slowed slightly in the latter parts of [----] (from a baseline of historically high growth rates), with e.g. Gemini usage catching up a bit"  
[X Link](https://x.com/anyuser/status/2020830810962899154)  2026-02-09T12:03Z [----] followers, [---] engagements


"Epoch also ran their own surveys of Americans on their usage, finding e.g.: - The most common use of AI by far is looking up information (58% of users). Writing assistance (32%) and advice (30%) are distant seconds. - Workplace AI adoption is largely bottom-up: 36% of users use AI for work but only 18% say their employer provides access"  
[X Link](https://x.com/anyuser/status/2020830833846681633)  2026-02-09T12:03Z [----] followers, [---] engagements


"Piece here: Jean-Stanislas Denain and @ansonwhho https://epoch.ai/gradient-updates/the-changing-drivers-of-llm-adoption"  
[X Link](https://x.com/anyuser/status/2020832598151532766)  2026-02-09T12:10Z [----] followers, [---] engagements


"We've been growing GovAI's research team by 50% per year for a few years now. We're keen to keep that up. Apply to join as a Research Scholar or Research Fellow to help us succeed :) We're hiring Research Scholars and Research Fellows at GovAI. GovAI's mission: help humanity navigate the transition to advanced AI through rigorous research and support for decision-makers across government, industry, and civil society. Now open: Research Scholar (12-month https://t.co/tS9DwhD1Xl"  
[X Link](https://x.com/anyuser/status/2020156823660007897)  2026-02-07T15:24Z [----] followers, 10.9K engagements


"We're hiring Research Scholars and Research Fellows at GovAI. GovAI's mission: help humanity navigate the transition to advanced AI through rigorous research and support for decision-makers across government, industry, and civil society. Now open: Research Scholar (12-month visiting position). Flexible career-development-focused role with wide latitude (policy / social science / technical research; advising; convening; launching applied initiatives). Structured support via supervision, mentorship, and regular feedback. Comp (by experience/location): - London £75k–£95k + benefits - DC $100k–$150k +"  
[X Link](https://x.com/anyuser/status/2016110515555090920)  2026-01-27T11:26Z [----] followers, 45.3K engagements


"Today we're releasing the International AI Safety Report 2026: the most comprehensive evidence-based assessment of AI capabilities, emerging risks, and safety measures to date. 🧵 (1/17)"  
[X Link](https://x.com/anyuser/status/2018673247651270958)  2026-02-03T13:09Z 35.4K followers, 365K engagements


"RT @stephenclare_: The International AI Safety Report [----] is live This is an all-new assessment of where we're at with AI capabilities a"  
[X Link](https://x.com/anyuser/status/2018788299209687159)  2026-02-03T20:46Z [----] followers, [--] engagements


"The International AI Safety Report [----] is live. This is an all-new assessment of where we're at with AI capabilities and risks. Hundreds of experts from 30+ countries contributed to it. I'm so proud of this work and think many people will find it useful. Today we're releasing the International AI Safety Report 2026: the most comprehensive evidence-based assessment of AI capabilities, emerging risks, and safety measures to date. 🧵 (1/17) https://t.co/qoe6JafRqf"  
[X Link](https://x.com/anyuser/status/2018679044712079861)  2026-02-03T13:32Z [----] followers, [---] engagements


"AISI is hiring: join our senior leadership team as a Deputy Director of our Research Unit (9–12 month maternity cover). This isn't your average Civil Service job. For [---] months you'll co-lead one of the world's most influential AI safety research organisations"  
[X Link](https://x.com/anyuser/status/2018329726729765052)  2026-02-02T14:24Z [---] followers, 14.8K engagements


"Number [--] has actually released a dashboard tracking their fulfillment of @matthewclifford's AI Plan. Very unusual to have this from a gov department. The link is below; worth playing around on it. One thing that strikes me: some good progress on these recommendations, but a year is a long time in AI and Britain needs to raise its ambitions fast"  
[X Link](https://x.com/anyuser/status/2016797733340799233)  2026-01-29T08:57Z 14.2K followers, 27.4K engagements


"Very excited about this. There's plenty of work to do preparing for a world of advanced AI. REQUEST FOR PROPOSALS: What do we need to build to prepare the world for advanced AI? $25 billion is about to flow into AI resilience and AI-for-science from the OpenAI Foundation, the Chan Zuckerberg Initiative, and other major funders. But there's no shovel-ready list of the https://t.co/2Bojx8OtSz"  
[X Link](https://x.com/anyuser/status/2014824325388632265)  2026-01-23T22:15Z [----] followers, [----] engagements


"REQUEST FOR PROPOSALS: What do we need to build to prepare the world for advanced AI? $25 billion is about to flow into AI resilience and AI-for-science from the OpenAI Foundation, the Chan Zuckerberg Initiative, and other major funders. But there's no shovel-ready list of the essential projects to build and no critical mass of builders ready to execute. We're trying to fix that with The Launch Sequence, a collection of concrete projects to accelerate science, strengthen security, and adapt institutions to future advanced AI. We're opening up The Launch Sequence for new pitches. We'll help you"  
[X Link](https://x.com/anyuser/status/2014734009171919247)  2026-01-23T16:16Z [----] followers, 253.6K engagements


"@dw2 @GovAI_ @bmgarfinkel @ohlennart @RobertTrager @imbenclifford @ea_seger I don't think there's a lot of info on it. There was some little detail here and maybe in some other document released by DSIT, but I can't recall https://www.gov.uk/government/publications/uk-ai-safety-institute-third-progress-report"
X Link 2024-03-28T17:01Z [----] followers, [--] engagements

"Increasingly advanced AI systems will diffuse into society. How do we manage the accompanying risks? In our paper we explore Societal Adaptation to Advanced AI: reducing harm from diffusion of AI capabilities by intervening to avoid, defend against, and remedy harmful use"
X Link 2024-05-22T07:51Z [----] followers, 15.4K engagements

"@sethlazar Would love to hear any reactions or takes"
X Link 2024-05-22T08:17Z [----] followers, [---] engagements

"More detailed summary in Lujain's thread and lots more content in the paper"
X Link 2024-05-23T14:30Z [----] followers, [---] engagements

"I worked with Lujain on this during her recent GovAI fellowship. Excited about what she'll get up to next"
X Link 2024-05-23T14:41Z [----] followers, [---] engagements

"AI labs test models for dangerous bio capabilities, but what do those results actually mean for real-world risk? New paper from @lucafrighetti bridges capability evals and risk assessment"
X Link 2025-12-16T17:49Z [----] followers, [---] engagements

"That translates to [-----] additional expected deaths per year or $100B in damages. Scenarios where AI also helps discover novel pathogens push risk higher. But mitigations can significantly reduce these numbers"
X Link 2025-12-16T17:49Z [----] followers, [--] engagements

"The estimates were reviewed by [--] subject-matter experts and [--] superforecasters. Their medians were similar though all forecasts had high uncertainty. This reflects genuine disagreement in the field not just modelling limitations"
X Link 2025-12-16T17:49Z [----] followers, [--] engagements

"The paper builds a total cost of ownership model for a hypothetical [---] MW data center, comparing the US and UAE across [--] cost categories. Key finding: annualised costs are within 1% of each other, basically indistinguishable given uncertainty"
X Link 2026-01-06T18:32Z [----] followers, [--] engagements

"Where the UAE wins: - Permitting is 2x faster, estimated [--] months vs [--] months - Facility construction is cheaper, largely due to lower labour costs ($6/hr vs $76/hr for construction wages)"
X Link 2026-01-06T18:32Z [----] followers, [--] engagements

"So if costs are similar, why are hyperscalers so interested in the UAE? Possibilities: - Subsidies (e.g. electricity at 5/kWh vs [---] market rate) - Expectation of future cost decreases as UAE builds out renewables - Getting investments from the UAE - Hedging against US delays"
X Link 2026-01-06T18:32Z [----] followers, [--] engagements

"Nvidia's share of H100-equivalents sold shifted from 90% in Jan [----] to 70% in Sep [----]. Google's TPUs and Amazon's Trainium are the main competitor chips catching up. From new dataset released by @EpochAIResearch"
X Link 2026-01-09T09:38Z [----] followers, [---] engagements

"Dataset here: https://epoch.ai/data/ai-chip-sales"
X Link 2026-01-09T09:38Z [----] followers, [---] engagements

"Looks like H200s destined for China need to do a stopover in the US first so that the US can impose a 25% import tariff. Imposing it on the export wouldn't have been legal. Neat trick buried in the H200 rule: every shipment needs "third-party testing" in the US before export. Why? Remember the USG wants a 25% cut. Can't write that into export controls. But you can route all chips through the US first, apply a 25% tariff, then ship to China."
X Link 2026-01-14T18:14Z [----] followers, [---] engagements

"Frontier AI companies need audits. But what should such audits involve? New paper from @Miles_Brundage @NoemiDreksler @adnhw and lots of other folks from e.g. @AVERIorg @METR_Evals @GovAIOrg @apolloaievals @TransluceAI"
X Link 2026-01-19T14:06Z [----] followers, [----] engagements

"A key contribution: AI Assurance Levels (AAL-1 to AAL-4). These clarify what different audits can actually tell you from time-bounded system assessments (AAL-1) to continuous deception-resilient verification (AAL-4)"
X Link 2026-01-19T14:06Z [----] followers, [--] engagements

"AAL-1: Few-week assessment, API access, limited non-public info. Has been done. AAL-2: Months-long, deeper access, staff interviews, governance review. Achievable. AAL-3: Ongoing deep access, continuous monitoring. Requires R&D. AAL-4: Treaty-grade verification. Still speculative"
X Link 2026-01-19T14:06Z [----] followers, [--] engagements

"Why this matters for AI governance: This is a template for how to think about dangerous capability evals. "The model can do X" isn't enough; we need frameworks to understand what that means for actual risk levels and policy responses"
X Link 2025-12-16T17:49Z [----] followers, [--] engagements

"I'm excited about more work trying to bridge this gap from GovAI as well as other places. We need to do a lot more work to ground our risk assessments. Now is a time for empirics"
X Link 2025-12-16T17:49Z [----] followers, [--] engagements

"Is it cheaper to build AI models in the UAE than in the US? The answer seems to be no. Total costs are roughly comparable to the US. So why are Microsoft, AWS, and OpenAI investing billions there? New GovAI technical report from @amelia__michael"
X Link 2026-01-06T18:32Z [----] followers, [---] engagements

"Their key point: the "lone wolf virus terrorist" is the default threat model for CBRN evaluations. But chemical weapons improvised explosives and non-transmissible biological agents all involve distinct technical pathways and need distinct safety tests + safeguards"
X Link 2026-02-13T12:16Z [----] followers, [--] engagements

"Why the gap persists: frontier labs lack the classified intelligence to properly assess state-level and terrorist group threats. Governments lack understanding of AI capabilities and proprietary data. Neither side can see the full picture alone"
X Link 2026-02-13T12:16Z [----] followers, [--] engagements

"Their key point: the "lone wolf virus terrorist" is the default threat model for CBRN evaluations. But chemical weapons improvised explosives and non-transmissible biological agents all involve distinct technical pathways and need distinct safety tests + safeguards"
X Link 2026-02-13T13:54Z [----] followers, [--] engagements

"Why the gap persists: frontier labs lack the classified intelligence to properly assess state-level and terrorist group threats. Governments lack understanding of AI capabilities and proprietary data. Neither side can see the full picture alone"
X Link 2026-02-13T13:54Z [----] followers, [--] engagements

"@jachiam0 @polynoamial @GoogleDeepMind How it works in the EU AI Act + code of practice: models above a certain compute threshold are in scope, and then you need to update your model report if something changes that sufficiently undermines your arguments that the model was acceptably risky, incl. increasing inference"
X Link 2026-02-15T09:11Z [----] followers, [--] engagements

"@OpenAI This is pretty damn awesome I'm also impressed with @OpenAI using this as an opportunity to talk about the potential malicious use of their work"
X Link 2019-02-15T10:04Z [----] followers, [--] engagements

"Watching the #Openaifive live stream at http://twitch.com/openai"
X Link 2019-04-13T19:13Z [----] followers, [--] engagements

"Very excited about all the interesting projects the @GovAI_ Winter Fellows got up to. Projects ranged from external scrutiny of frontier models, Chinese AI development, market concentration implications of foundation models, public input into AI, and more"
X Link 2023-05-31T00:14Z [----] followers, [----] engagements

"As the capabilities of AI models increase new regulation will be needed. In a new white paper with co-authors from across academia think tanks and AI labs we describe why regulation of frontier AI is needed what it could look like and minimal frontier AI safety standards"
X Link 2023-07-19T05:44Z [----] followers, 25.5K engagements

"These problems interact. They mean there's a chance models with dangerous capabilities could be widely available with inadequate safeguards, because model capabilities may diffuse broadly before concerning capabilities are detected"
X Link 2023-07-10T14:45Z [----] followers, [---] engagements

"Effective frontier AI regulation would therefore require that developers put substantial effort into understanding the risks their systems might pose before they are deployed in particular by evaluating whether they have dangerous capabilities or are insufficiently controllable"
X Link 2023-07-10T14:45Z [----] followers, [---] engagements

"In a lot of cases I think it will make sense to start by deploying new powerful models whose capabilities we don't fully understand via APIs first. That way they can be updated if unexpected harms are occurring similar to how cars with defects can be recalled"
X Link 2023-07-11T20:53Z [----] followers, [---] engagements

"Once the model is better understood, doesn't appear to be causing enough harm, and it's sufficiently behind the frontier, I expect it'll make sense to open source or otherwise make it more widely available"
X Link 2023-07-11T20:53Z [----] followers, [---] engagements

"We can't trust AI companies to judge the safety or risks imposed by their systems themselves. We'll need lots of external auditors red teamers and government regulators with sufficient access to these models to assess their risks"
X Link 2023-07-12T00:04Z [----] followers, [---] engagements

"This seems like a great development 3/5 of Anthropic's board seats to be controlled by a trust with no direct financial stake in the company. Glad to have more public information on the company's governance something that's been really lacking in the past"
X Link 2023-07-17T20:58Z [----] followers, [----] engagements

"Basically new class T shares in Anthropic are to be set up and enough shares to control 3/5 board seats will sit with a trust. These shares cannot be sold and cannot pay dividends removing any direct financial incentives in the company's performance"
X Link 2023-07-17T20:58Z [----] followers, [---] engagements

"As frontier AI systems become more capable of causing harm regulation will be called for. But that won't be enough. Society should also prepare for capable models seeing widespread access. @paul_scharre and I in @ForeignAffairs"
X Link 2023-08-14T18:21Z [----] followers, [----] engagements

"Long-awaited White House executive order is out Includes action on: - AI Safety and Security - Privacy - Equity and Civil Rights - Consumer rights - Supporting Workers - Innovation and competition - Int'l leadership - Gov't use of AI"
X Link 2023-10-30T12:31Z [----] followers, 11.6K engagements

"The first day of the AI Safety Summit felt like a turning point in the world's response to the risks from increasingly advanced AI systems"
X Link 2023-11-01T21:52Z [----] followers, [----] engagements

"Two underappreciated outputs from last week's UK AI Safety Summit: - A [--]-page govt report on responsible frontier AI practice. Most comprehensive + useful doc of its kind imo. - Govt asked [--] leading AI companies about their AI safety policies"
X Link 2023-11-07T18:43Z [----] followers, [----] engagements

"Tort liability is an important tool to address frontier AI risks but four problems must be overcome according to @legalpriority's Mackenzie Arnold in recent testimony at the Senate AI Insight Forum. Problem 1: Existing law will under-deter malicious and criminal misuse of AI"
X Link 2023-11-12T12:36Z [----] followers, [----] engagements

"As the impacts of frontier AI models increase decisions about their development and deployment can't all be left in the hands of AI companies. In a new paper we describe how such decisions could be more publicly accountable via external scrutiny"
X Link 2023-11-28T11:39Z [----] followers, 10.1K engagements

"However if those same capabilities are measured using continuous measures such as how close the model gets to the right answer or the probability it assigns to the right answer things look more smooth. Great result which makes a lot of sense"
X Link 2023-12-12T19:09Z [----] followers, [---] engagements

"From a policy perspective a big issue wrt frontier AI systems is that predicting their behaviour and capabilities is very hard. Though the paper suggests such predictions are easier than the emergent capabilities literature might lead you to believe the problem remains. Why"
X Link 2023-12-12T19:09Z [----] followers, [---] engagements

"The launch of ChatGPT in Nov [----] caught the world off-guard. Ahead of new, more capable AI systems hitting the market next year, government should prepare. The most capable AI systems, frontier AI systems, warrant targeted regulation, argue @akorinek and I in a new @lawfare piece"
X Link 2024-01-04T23:14Z [--] followers, [----] engagements

"However, it won't be enough. If a company informed the US government it was planning on releasing a model that would undermine national security, there are currently no well-designed powers for the government to intervene"
X Link 2024-01-04T23:14Z [--] followers, [--] engagements

"Further there are many things developers of frontier AI systems can do to reduce risk not least conducting thorough risk assessments and having that inform how and whether they deploy it. More in picture"
X Link 2024-01-04T23:14Z [--] followers, [--] engagements

"To make progress government should: - Create and update standards for responsible frontier AI development and deployment. - Give regulators visibility into frontier AI developments. - Work to ensure compliance with safety standards using both regulation and liability"
X Link 2024-01-04T23:14Z [--] followers, [--] engagements

"The amount of computational power used to train a system (e.g. [----] operations) is a useful starting point to identify systems where intervention is warranted, as it is one of our better proxies of model capabilities. However, such thresholds will need to be refined over time"
X Link 2024-01-04T23:14Z [----] followers, [--] engagements

"The piece summarizes much of what's covered in the Frontier AI Regulation paper from last summer, but includes things we've changed our minds on and is a bit more opinionated. That paper is here: Also summarized here: As the capabilities of AI models increase, new regulation will be needed. In a new white paper with co-authors from across academia, think tanks and AI labs, we describe why regulation of frontier AI is needed, what it could look like and minimal frontier AI safety standards. https://t.co/6pOYARELfR"
X Link 2024-01-04T23:14Z [----] followers, [--] engagements

"What do results from model evaluations and red teaming say about real-world model behavior? It's hard to say. To test the external validity of model evals and our understanding of AI systems' behaviour, @achan96 suggests we assess how well actors can predict the results. What should we be evaluating besides model capabilities? What about: how well we can predict what models will do in the real world? I explore this question in a new @GovAI_ blog post 🧵 https://t.co/8L1QRLTHR7"
X Link 2024-04-11T16:19Z [----] followers, [---] engagements

"Something you might have missed in Google's flurry of announcements: They've just introduced watermarking to Gemini text outputs. I hope other developers follow suit. Curious if folks have takes on how robust their watermarking technique is. https://deepmind.google/discover/blog/watermarking-ai-generated-text-and-video-with-synthid/"
X Link 2024-05-14T19:43Z [----] followers, [----] engagements

"In brief, the way it works: For a huge list of words, you slightly adjust the probability that the model outputs it. With enough words, this works as a high-fidelity signal. Probably won't be robust to thorough paraphrasing, and perhaps not even to running the text through other LLMs"
X Link 2024-05-14T20:08Z [----] followers, [---] engagements
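A minimal sketch of that logit-biasing idea (all names and parameters here are illustrative assumptions, not Google's actual SynthID implementation): a pseudorandom "green" subset of the vocabulary, seeded by the previous token, gets a small logit boost; a detector then counts how often generated tokens land in their context's green set.

```python
import hashlib

def green_set(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudorandomly mark a fraction of the vocabulary 'green', seeded by the previous token."""
    out = set()
    for w in vocab:
        h = hashlib.sha256((prev_token + "|" + w).encode()).digest()
        if h[0] / 255 < fraction:
            out.add(w)
    return out

def watermark_logits(logits: dict[str, float], prev_token: str, delta: float = 2.0) -> dict[str, float]:
    """Slightly boost the logit of every green word, nudging generation toward them."""
    greens = green_set(prev_token, list(logits))
    return {w: l + (delta if w in greens else 0.0) for w, l in logits.items()}

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detection: fraction of tokens that fall in their context's green set."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_set(prev, vocab))
    return hits / max(len(tokens) - 1, 1)
```

Unwatermarked text should land in the green set roughly half the time; watermarked text noticeably more often, which is the high-fidelity signal the post describes. It also shows why paraphrasing hurts: rewriting changes the (prev_token, token) pairs the detector counts.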

"16 leading AI organizations making a commitment to specify: - thresholds at which their frontier AI systems would impose intolerable risk - how they'd know if they exceed those thresholds - how they'd stay below those thresholds Really glad to see this In an historic first, tech companies from across the globe have committed to developing AI safely. From @OpenAI to @Meta, [--] companies have signed up to the fresh Frontier AI Safety Commitments' 👉 https://t.co/KqcBbvKLSu #AISeoulSummit https://t.co/cqmwryq494"
X Link 2024-05-21T13:18Z [----] followers, [----] engagements

"Companies from the US, China, UAE, EU, Japan and South Korea have signed. Notables include @MistralAI @xai @cohere G42 http://Zhipu.ai"
X Link 2024-05-21T13:18Z [----] followers, [---] engagements

"Why adaptation? Firstly, capability-modifying interventions are often a blunt tool, affecting beneficial as well as harmful uses. (@mealreplacer and I previously wrote about that misuse-use tradeoff here: https://arxiv.org/abs/2303.09377)"
X Link 2024-05-22T07:51Z [----] followers, [---] engagements

"Further, [--] models larger than GPT-3 have been deployed since GPT-3's release. As development costs fall, risk management focused solely on controlling capabilities and diffusion becomes infeasible. (Fig: @EpochAIResearch)"
X Link 2024-05-22T07:51Z [----] followers, [---] engagements

"There are funding opportunities for increasing AI adaptation. If you're part of academia, industry or civil society, you can apply to put these ideas into practice with a systemic AI safety grant from the UK @AISafetyInst. 8.5mn available. https://x.com/AISafetyInst/status/1793163082379968955 We are announcing new grants for research into systemic AI safety. Initially backed by up to [---] million, this program will fund researchers to advance the science underpinning AI safety. Read more: https://t.co/QHOLUp3QGR https://t.co/jnAdLJ4eAg"
X Link 2024-05-22T07:51Z [----] followers, [---] engagements

"@join_ef also just announced a related funding programme to support defensive AI technologies https://x.com/matthewclifford/status/1792585822228623677 🚨🚨Today we're announcing a new programme at @join_ef that I'll run this July: def/acc at EF. We want to fund the most ambitious founders to accelerate the technologies to make the future go well, starting with defensive AI (1/7)"
X Link 2024-05-22T07:52Z [----] followers, [---] engagements

"While there were plenty of interesting discussions in South Korea, the highlights for me were the documents and commitments agreed on beforehand: - [--] co's committing to define, adhere to and be transparent about their Frontier AI Safety Frameworks https://x.com/Manderljung/status/1792907916158407096 [--] leading AI organizations making a commitment to specify: - thresholds at which their frontier AI systems would impose intolerable risk - how they'd know if they exceed those thresholds - how they'd stay below those thresholds Really glad to see this https://t.co/ZPj2W5yXWj"
X Link 2024-05-23T13:32Z [----] followers, [----] engagements

"- The Seoul Ministerial Statement, which among other things commits the undersigned countries to "identify thresholds at which the level of risk posed by … frontier AI models or systems would be severe absent appropriate mitigations" https://www.gov.uk/government/publications/seoul-ministerial-statement-for-advancing-ai-safety-innovation-and-inclusivity-ai-seoul-summit-2024/seoul-ministerial-statement-for-advancing-ai-safety-innovation-and-inclusivity-ai-seoul-summit-2024"
X Link 2024-05-23T13:32Z [----] followers, [--] engagements

"A question I often get about frontier AI regulation is "but why can't we solve these problems with tort liability?" The past couple of months, @_mvdm @ketanr and myself have been collecting our thoughts on the topic. We argue tort should be a complement to frontier AI reg. "Until robust regulatory requirements are instituted, tort liability might be able to serve as an important stopgap in preventing irresponsible AI development." @_mvdm @ketanr and @Manderljung on the role tort law can play in responsible AI development. https://t.co/wtssvPvWFU"
X Link 2024-05-25T12:30Z [----] followers, [----] engagements

"How would the end of Chevron deference (the doctrine that has courts defer to agencies' interpretations of ambiguous laws) affect AI governance? I found this blog post from @law_ai_ informative. https://law-ai.org/chevron-deference/"
X Link 2024-06-08T13:22Z [----] followers, [----] engagements

"@sssvmkumar @Leo_Koe_ @tomekkorbak @jonasschuett @GovAI_ @govai Yes the paper opines about the role of risk thresholds in regulation of frontier AI"
X Link 2024-06-24T17:00Z [----] followers, [---] engagements

"As per the King's Speech the UK gov't will "seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.""
X Link 2024-07-17T17:22Z [----] followers, [----] engagements

"What about dangerous capabilities? OpenAI followed their Preparedness Plan and judged the system to pose "medium" chembio and persuasion risk. They've committed not to release a model with post-mitigation capabilities that are "high""
X Link 2024-09-13T09:59Z [----] followers, [---] engagements

"OpenAI has set up an independent board oversight committee focused on the safety and security of their models. Though details about what the independence entails aren't really clear AFAICT"
X Link 2024-09-17T08:11Z [----] followers, [---] engagements

"@maxwinga @jonasschuett @AnthropicAI @OpenAI @GoogleDeepMind @jide_alaga I'd be keen for that to happen. We might do it once there are a few more out there, and some updated ones ahead of the AI Summit in France in Feb [----] (16 companies, including all leading ones, committed to that in Seoul). But would be keen for others to go for it"
X Link 2024-09-17T15:28Z [----] followers, [--] engagements

"Much of the progress has come from vastly increasing the amount of compute used to train models, by a factor of [---] million since [----], and figuring out how to leverage that for better performance"
X Link 2024-09-21T13:06Z [----] followers, [---] engagements

"As systems get better they'll present huge opportunities and challenges. AI technologies can play a key role in reviving the UK's sluggish productivity growth. But we'll also need to adapt to an AI-powered economy"
X Link 2024-09-21T13:06Z [----] followers, [--] engagements

"Secondly I think we're likely to see a small number of systems developed by a handful of companies be at the core of a huge swath of AI-powered economic activity. That will produce information asymmetries and structural risk that may warrant intervention"
X Link 2024-09-21T13:06Z [----] followers, [--] engagements

"For many other technologies, the right approach is often to wait for their impacts to manifest before introducing new rules. I don't think we have that luxury in AI. By the time a new bill is introduced, let alone passed, a new generation of models will be on the market"
X Link 2024-09-21T13:06Z [----] followers, [--] engagements

"Who should be in scope? I think the regime should, at least initially, focus on the most powerful systems as measured by the amount of compute used to train them, similar to what the EU and US have done. Over time, such thresholds will need refining with other metrics"
X Link 2024-09-21T13:06Z [----] followers, [--] engagements

"This is also the first post for my blog. I'll try to put up more things like this going forward"
X Link 2024-09-21T13:06Z [----] followers, [---] engagements

"If you're around the Labour conference and wanna chat frontier AI regulation or otherwise, let me know The UK government is looking to introduce new legal requirements for "the developers of the most powerful AI systems." Why introduce such requirements? What should such a regime look like in practice? I give my takes in a new blog post: "Frontier AI Regulation in the UK." https://t.co/MQo2cEngXQ"
X Link 2024-09-21T13:11Z [----] followers, [---] engagements

"Kind of crazy that it's AI and its vast demand for energy, not climate change, that's causing a nuclear power renaissance. BREAKING IN NYC: WORLD'S BIGGEST BANKS PLEDGE SUPPORT FOR NUCLEAR Banks and funds totaling $14 TRILLION in assets have just signed an unprecedented statement in support of nuclear power. They'll be presenting the pledge to support the goal of tripling nuclear THIS MORNING at https://t.co/9xodRsRcYl"
X Link 2024-09-23T20:37Z [----] followers, [----] engagements

"2024 seems like the year the Nobel prize committees woke up to the importance of AI. Prizes to Geoffrey Hinton as well as Demis Hassabis"
X Link 2024-10-09T12:02Z [----] followers, [---] engagements

"The UK's AISI got a lot done in its first year: - Testing [--] models, incl. from OpenAI, Google DeepMind and Anthropic - Hiring excellent AI researchers - Launching an independent int'l review of AI safety - There are now [--] AISI equivalents - "
X Link 2024-11-13T20:53Z [----] followers, [----] engagements

"@anderssandberg @ohlennart Anders, do you think this would work? How hard would it be to build one of these beams?"
X Link 2025-01-18T09:43Z [----] followers, [---] engagements

"@adnhw @GovAI_ @jonasschuett @NoemiDreksler Excited to have you with us Aidan :)"
X Link 2025-05-05T17:08Z [----] followers, [--] engagements

"The EU's Code of Practice for General-Purpose AI is out. As one of the co-chairs who drafted the Safety & Security Chapter focused on frontier AI, I'm proud of what we've put together. It's a lean but effective framework for frontier AI companies to comply with the AI Act"
X Link 2025-07-10T11:31Z [----] followers, [----] engagements

"Unusually, the Code was drafted by a group of [--] independent experts, myself included, with input from 1000+ stakeholders such as companies, experts, EU member state representatives, parliamentarians and civil society"
X Link 2025-07-10T11:31Z [----] followers, [---] engagements

"So what does the Safety & Security chapter do? Companies must have a Framework where they pre-define when risk is acceptable, and keep risk to acceptable levels via risk assessment and mitigation. Then they evidence their adherence to their Framework in a Model Report"
X Link 2025-07-10T11:31Z [----] followers, [---] engagements

"This accords with current best practice like responsible scaling policies preparedness frameworks and frontier AI frameworks. You can find current such documents here: https://metr.org/faisc https://metr.org/faisc"
X Link 2025-07-10T11:31Z [----] followers, [---] engagements

"Ahead of important decisions such as releasing a new model on the market or updating a model such that it could pose unacceptable risk they need to go through a full risk assessment"
X Link 2025-07-10T11:31Z [----] followers, [---] engagements

"Next they need to continuously assess the risks from their models including by tracking and reporting serious incidents stemming from their model e.g. where the model was involved in significant loss of life or property"
X Link 2025-07-10T11:31Z [----] followers, [---] engagements

"The chapter also describes how companies should allocate responsibility for systemic risk assessment and mitigation throughout their organization and maintain a healthy risk culture such as by not retaliating against employees raising concerns"
X Link 2025-07-10T11:31Z [----] followers, [---] engagements

"@ShakeelHashim Yep they need to use "early search efforts (such as through a public call open for [--] business days) and promptly notifying identified evaluators.""
X Link 2025-07-10T15:35Z [----] followers, [--] engagements

"Also useful to note: while the GPAI obligations come into force August 2nd, the AI Office is only able to start e.g. issuing fines August 2nd 2026"
X Link 2025-07-12T10:17Z [----] followers, [---] engagements

"Google also committing to signing the Code of Practice on GPAI"
X Link 2025-07-30T07:58Z [----] followers, [----] engagements

"As has Aleph Alpha"
X Link 2025-07-30T07:58Z [----] followers, [---] engagements

"@marcel_butucea @GovAI_ @jide_alaga @jonasschuett It's true that both models and companies' frameworks to manage risk change over time. The rubric tries to account for that in its criteria and in how it suggests you do the grading. As for companies' frameworks changing over time, you'll want to rerun the grading periodically"
X Link 2025-08-07T13:11Z [----] followers, [--] engagements

"@JeremyHowick Do you worry your work on placebos will be used to support pseudoscientific claims"
X Link 2015-01-29T17:06Z [----] followers, [--] engagements

"Important new piece by @RebeccaHersman (new colleague at @GovAIOrg) and @cassidyknelson (@LongResilience) on a blind spot in AI safety testing: there needs to be more focus on threat models beyond lone wolf pandemic terrorism"
X Link 2026-02-13T13:54Z [----] followers, [---] engagements

"I also think these other threats deserve attention because they're much more common attack vectors. Even if pandemic attacks are most concerning the other attacks are also worth attention and provide opportunities to update our threat models"
X Link 2026-02-13T13:54Z [----] followers, [--] engagements

"Piece in @TIME here: https://time.com/7373405/weapons-of-mass-destruction-ai-security-gap"
X Link 2026-02-13T13:54Z [----] followers, [--] engagements

"My view: the focus on pandemic risk makes sense; if I had to choose one risk vector in CBRN, that's the one I'd focus on. But we should be able to deal with multiple risks, not just one"
X Link 2026-02-13T12:16Z [----] followers, [---] engagements

"How is AI usage evolving? Summary + new data from @EpochAIResearch. ChatGPT growth slowed slightly in the latter parts of [----] (from a baseline of historically high growth rates), with e.g. Gemini usage catching up a bit"
X Link 2026-02-09T12:03Z [----] followers, [---] engagements

"Epoch also ran their own surveys of Americans on their usage, finding e.g.: - The most common use of AI by far is looking up information (58% of users); writing assistance (32%) and advice (30%) are distant seconds. - Workplace AI adoption is largely bottom-up: 36% of users use AI for work, but only 18% say their employer provides access"
X Link 2026-02-09T12:03Z [----] followers, [---] engagements

"Piece here: Jean-Stanislas Denain and @ansonwhho https://epoch.ai/gradient-updates/the-changing-drivers-of-llm-adoption"
X Link 2026-02-09T12:10Z [----] followers, [---] engagements

"We've been growing GovAI's research team by 50% per year for a few years now. We're keen to keep that up Apply to join as a Research Scholar or Research Fellow to help us succeed :) We're hiring Research Scholars and Research Fellows at GovAI. GovAI's mission: help humanity navigate the transition to advanced AI through rigorous research and support for decision-makers across government, industry and civil society. Now open: Research Scholar (12-month https://t.co/tS9DwhD1Xl"
X Link 2026-02-07T15:24Z [----] followers, 10.9K engagements

"We're hiring Research Scholars and Research Fellows at GovAI. GovAI's mission: help humanity navigate the transition to advanced AI through rigorous research and support for decision-makers across government, industry and civil society. Now open: Research Scholar (12-month visiting position) Flexible career-development-focused role with wide latitude (policy / social science / technical research; advising; convening; launching applied initiatives). Structured support via supervision, mentorship and regular feedback. Comp (by experience/location): - London 75k-95k + benefits - DC $100k-$150k +"
X Link 2026-01-27T11:26Z [----] followers, 45.3K engagements

"Today we're releasing the International AI Safety Report 2026: the most comprehensive evidence-based assessment of AI capabilities, emerging risks and safety measures to date. 🧵 (1/17)"
X Link 2026-02-03T13:09Z 35.4K followers, 365K engagements

"RT @stephenclare_: The International AI Safety Report [----] is live This is an all-new assessment of where we're at with AI capabilities a"
X Link 2026-02-03T20:46Z [----] followers, [--] engagements

"The International AI Safety Report [----] is live This is an all-new assessment of where we're at with AI capabilities and risks. Hundreds of experts from 30+ countries contributed to it. I'm so proud of this work and think many people will find it useful Today we're releasing the International AI Safety Report 2026: the most comprehensive evidence-based assessment of AI capabilities, emerging risks and safety measures to date. 🧵 (1/17) https://t.co/qoe6JafRqf"
X Link 2026-02-03T13:32Z [----] followers, [---] engagements

"AISI is hiring: join our senior leadership team as a Deputy Director of our Research Unit (9-12 month maternity cover). This isn't your average Civil Service job. For [---] months you'll co-lead one of the world's most influential AI safety research organisations"
X Link 2026-02-02T14:24Z [---] followers, 14.8K engagements

"Number [--] has actually released a dashboard tracking their fulfillment of @matthewclifford's AI Plan. Very unusual to have this from a gov department. The link is below; worth playing around on it. One thing that strikes me: some good progress on these recommendations, but a year is a long time in AI and Britain needs to raise its ambitions fast"
X Link 2026-01-29T08:57Z 14.2K followers, 27.4K engagements

"Very excited about this There's plenty of work to do preparing for a world of advanced AI. REQUEST FOR PROPOSALS What do we need to build to prepare the world for advanced AI? $25 billion is about to flow into AI resilience and AI-for-science from the OpenAI Foundation, the Chan Zuckerberg Initiative and other major funders. But there's no shovel-ready list of the https://t.co/2Bojx8OtSz"
X Link 2026-01-23T22:15Z [----] followers, [----] engagements

"REQUEST FOR PROPOSALS What do we need to build to prepare the world for advanced AI? $25 billion is about to flow into AI resilience and AI-for-science from the OpenAI Foundation, the Chan Zuckerberg Initiative and other major funders. But there's no shovel-ready list of the essential projects to build, and no critical mass of builders ready to execute. We're trying to fix that with The Launch Sequence, a collection of concrete projects to accelerate science, strengthen security and adapt institutions to future advanced AI. We're opening up The Launch Sequence for new pitches. We'll help you"
X Link 2026-01-23T16:16Z [----] followers, 253.6K engagements
