Dark | Light
[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

![VatsSShah Avatar](https://lunarcrush.com/gi/w:24/cr:twitter::1246835754401034240.png) Vats [@VatsSShah](/creator/twitter/VatsSShah) on x XXX followers
Created: 2025-07-25 12:39:38 UTC

It's about time I did a post on the AI security risks, the basic taxonomy and what do they actually mean!

Part 1:
X. Prompt Injection (Direct)
Direct prompt injections are adversarial attacks that attempt to alter or control the output of an LLM by providing instructions via prompt that override existing instructions. These outputs can include harmful content, misinformation, or extracted sensitive information such as PII or model instructions.


XXX engagements

![Engagements Line Chart](https://lunarcrush.com/gi/w:600/p:tweet::1948724826224377968/c:line.svg)

**Related Topics**
[llm](/topic/llm)
[coins ai](/topic/coins-ai)

[Post Link](https://x.com/VatsSShah/status/1948724826224377968)

[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

VatsSShah Avatar Vats @VatsSShah on x XXX followers Created: 2025-07-25 12:39:38 UTC

It's about time I did a post on the AI security risks, the basic taxonomy and what do they actually mean!

Part 1: X. Prompt Injection (Direct) Direct prompt injections are adversarial attacks that attempt to alter or control the output of an LLM by providing instructions via prompt that override existing instructions. These outputs can include harmful content, misinformation, or extracted sensitive information such as PII or model instructions.

XXX engagements

Engagements Line Chart

Related Topics llm coins ai

Post Link

post/tweet::1948724826224377968
/post/tweet::1948724826224377968