[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]  Vats [@VatsSShah](/creator/twitter/VatsSShah) on x XXX followers Created: 2025-07-25 12:39:38 UTC It's about time I did a post on the AI security risks, the basic taxonomy and what do they actually mean! Part 1: X. Prompt Injection (Direct) Direct prompt injections are adversarial attacks that attempt to alter or control the output of an LLM by providing instructions via prompt that override existing instructions. These outputs can include harmful content, misinformation, or extracted sensitive information such as PII or model instructions. XXX engagements  **Related Topics** [llm](/topic/llm) [coins ai](/topic/coins-ai) [Post Link](https://x.com/VatsSShah/status/1948724826224377968)
[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]
Vats @VatsSShah on x XXX followers
Created: 2025-07-25 12:39:38 UTC
It's about time I did a post on the AI security risks, the basic taxonomy and what do they actually mean!
Part 1: X. Prompt Injection (Direct) Direct prompt injections are adversarial attacks that attempt to alter or control the output of an LLM by providing instructions via prompt that override existing instructions. These outputs can include harmful content, misinformation, or extracted sensitive information such as PII or model instructions.
XXX engagements
/post/tweet::1948724826224377968