Dark | Light
[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

[@AureliusAligned](/creator/twitter/AureliusAligned)
"Aurelius has launched as Subnet XX on Bittensor. Our mission: turn AI alignment into a process that is transparent adversarial and verifiable at scale"  
[X Link](https://x.com/AureliusAligned/status/1970872232928014818) [@AureliusAligned](/creator/x/AureliusAligned) 2025-09-24T15:25Z XXX followers, 12.8K engagements


"Thrilled to have @colemansmaher onboard as we build the world's first decentralized AI alignment platform on SN37"  
[X Link](https://x.com/AureliusAligned/status/1976335115975594281) [@AureliusAligned](/creator/x/AureliusAligned) 2025-10-09T17:13Z XXX followers, XXX engagements


"LLM Watch 👀 Week of October XX 2025"  
[X Link](https://x.com/AureliusAligned/status/1979277073240981834) [@AureliusAligned](/creator/x/AureliusAligned) 2025-10-17T20:03Z XXX followers, XXX engagements


"1 Poisoning LLMs with minimal data Details: Anthropic the UK AI Security Institute and the Alan Turing Institute showed that inserting a few poisoned samples during training can reliably trigger harmful or nonsensical outputs when certain phrases appear. Model scale offers little protection against this vulnerability. TLDR: Alignment starts at the data layer a small corruption can subvert an entire system"  
[X Link](https://x.com/AureliusAligned/status/1979277075417792759) [@AureliusAligned](/creator/x/AureliusAligned) 2025-10-17T20:03Z XXX followers, XX engagements

[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

@AureliusAligned "Aurelius has launched as Subnet XX on Bittensor. Our mission: turn AI alignment into a process that is transparent adversarial and verifiable at scale"
X Link @AureliusAligned 2025-09-24T15:25Z XXX followers, 12.8K engagements

"Thrilled to have @colemansmaher onboard as we build the world's first decentralized AI alignment platform on SN37"
X Link @AureliusAligned 2025-10-09T17:13Z XXX followers, XXX engagements

"LLM Watch 👀 Week of October XX 2025"
X Link @AureliusAligned 2025-10-17T20:03Z XXX followers, XXX engagements

"1 Poisoning LLMs with minimal data Details: Anthropic the UK AI Security Institute and the Alan Turing Institute showed that inserting a few poisoned samples during training can reliably trigger harmful or nonsensical outputs when certain phrases appear. Model scale offers little protection against this vulnerability. TLDR: Alignment starts at the data layer a small corruption can subvert an entire system"
X Link @AureliusAligned 2025-10-17T20:03Z XXX followers, XX engagements

creator/twitter::1931823268606144512/posts
/creator/twitter::1931823268606144512/posts