[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]
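The notice above says full data requires authenticated requests. As a minimal sketch, the snippet below builds such a request for this creator's profile; the endpoint path and Bearer-token header are assumptions inferred from the links on this page, so verify them against https://lunarcrush.ai/auth before use.

```python
import urllib.request

# Assumed API base path -- confirm against the official LunarCrush docs.
API_BASE = "https://lunarcrush.com/api4/public"

def creator_request(network: str, creator_id: str, api_key: str) -> urllib.request.Request:
    """Build an authenticated request for a creator's profile data.

    The endpoint shape ({network}/{id}/v1) is an assumption based on the
    chart URLs embedded in this page, not a documented contract.
    """
    url = f"{API_BASE}/creator/{network}/{creator_id}/v1"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})

# Example: the creator shown on this page (ID taken from the chart URLs).
req = creator_request("twitter", "1931823268606144512", "YOUR_API_KEY")
print(req.full_url)
```

The request is only constructed, not sent, so the sketch can be checked without network access or a real key.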

# ![@AureliusAligned Avatar](https://lunarcrush.com/gi/w:26/cr:twitter::1931823268606144512.png) @AureliusAligned Aurelius

Aurelius posts on X most often about bittensor, decentralized, open ai, and protocol. The account currently has XXX followers and XX posts still getting attention, totaling XX engagements in the last XX hours.

### Engagements: XX [#](/creator/twitter::1931823268606144512/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1931823268606144512/c:line/m:interactions.svg)

- X Week XXX -XXXX%

### Mentions: X [#](/creator/twitter::1931823268606144512/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1931823268606144512/c:line/m:posts_active.svg)

- X Week XX +67%

### Followers: XXX [#](/creator/twitter::1931823268606144512/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1931823268606144512/c:line/m:followers.svg)

- X Week XXX +4%

### CreatorRank: (unavailable) [#](/creator/twitter::1931823268606144512/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1931823268606144512/c:line/m:influencer_rank.svg)

### Social Influence [#](/creator/twitter::1931823268606144512/influence)
---

**Social category influence**
[cryptocurrencies](/list/cryptocurrencies)  XX% [technology brands](/list/technology-brands)  XX%

**Social topic influence**
[bittensor](/topic/bittensor) #671, [decentralized](/topic/decentralized) 10%, [open ai](/topic/open-ai) 10%, [protocol](/topic/protocol) 10%, [onboard](/topic/onboard) 10%, [the worlds](/topic/the-worlds) 10%, [alan](/topic/alan) 10%, [vulnerability](/topic/vulnerability) XX%

**Top accounts mentioned or mentioned by**
[@tseutseutao](/creator/undefined) [@austin_aligned](/creator/undefined) [@mccaffreyaustinintroducingaureliussubnet3743ea8fabab9d](/creator/undefined) [@macrocosmosai](/creator/undefined) [@austinaligned](/creator/undefined) [@colemansmaher](/creator/undefined) [@cryptnomad1](/creator/undefined) [@_0xedward_](/creator/undefined) [@bittensortouch](/creator/undefined) [@crypto_pilote](/creator/undefined) [@taocat_agent](/creator/undefined) [@0xosir](/creator/undefined) [@austinaligneds](/creator/undefined) [@coingecko](/creator/undefined) [@aus10mccaffs](/creator/undefined) [@bittensor](/creator/undefined) [@taotimesdotai](/creator/undefined) [@macrozack](/creator/undefined) [@macrocrux](/creator/undefined) [@iamadesolla](/creator/undefined)

**Top assets mentioned**
[Bittensor (TAO)](/topic/bittensor)
### Top Social Posts [#](/creator/twitter::1931823268606144512/posts)
---
Top posts by engagements in the last XX hours

"Aurelius has launched as Subnet XX on Bittensor. Our mission: turn AI alignment into a process that is transparent adversarial and verifiable at scale"  
[X Link](https://x.com/AureliusAligned/status/1970872232928014818) [@AureliusAligned](/creator/x/AureliusAligned) 2025-09-24T15:25Z XXX followers, 12.8K engagements


"Thrilled to have @colemansmaher onboard as we build the world's first decentralized AI alignment platform on SN37"  
[X Link](https://x.com/AureliusAligned/status/1976335115975594281) [@AureliusAligned](/creator/x/AureliusAligned) 2025-10-09T17:13Z XXX followers, XXX engagements


"LLM Watch 👀 Week of October XX 2025"  
[X Link](https://x.com/AureliusAligned/status/1979277073240981834) [@AureliusAligned](/creator/x/AureliusAligned) 2025-10-17T20:03Z XXX followers, XXX engagements


"1 Poisoning LLMs with minimal data Details: Anthropic the UK AI Security Institute and the Alan Turing Institute showed that inserting a few poisoned samples during training can reliably trigger harmful or nonsensical outputs when certain phrases appear. Model scale offers little protection against this vulnerability. TLDR: Alignment starts at the data layer a small corruption can subvert an entire system"  
[X Link](https://x.com/AureliusAligned/status/1979277075417792759) [@AureliusAligned](/creator/x/AureliusAligned) 2025-10-17T20:03Z XXX followers, XX engagements
