Dark | Light
[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# ![@VatsSShah Avatar](https://lunarcrush.com/gi/w:26/cr:twitter::1246835754401034240.png) @VatsSShah Vats

Vats posts on X about llm, alt, harm, meta the most. They currently have XXX followers and XXX posts still getting attention that total XXX engagements in the last XX hours.

### Engagements: XXX [#](/creator/twitter::1246835754401034240/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1246835754401034240/c:line/m:interactions.svg)

- X Week XXXXX +816%
- X Months XXXXX +242%
- X Year XXXXXX -XX%

### Mentions: X [#](/creator/twitter::1246835754401034240/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1246835754401034240/c:line/m:posts_active.svg)


### Followers: XXX [#](/creator/twitter::1246835754401034240/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1246835754401034240/c:line/m:followers.svg)

- X Week XXX +1.50%
- X Months XXX +20%
- X Year XXX +19%

### CreatorRank: XXXXXXXXX [#](/creator/twitter::1246835754401034240/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1246835754401034240/c:line/m:influencer_rank.svg)

### Social Influence [#](/creator/twitter::1246835754401034240/influence)
---

**Social category influence**
[technology brands](/list/technology-brands)  [finance](/list/finance) 

**Social topic influence**
[llm](/topic/llm), [alt](/topic/alt), [harm](/topic/harm), [meta](/topic/meta), [hack](/topic/hack), [intern](/topic/intern), [fintech](/topic/fintech)
### Top Social Posts [#](/creator/twitter::1246835754401034240/posts)
---
Top posts by engagements in the last XX hours

"4. Meta Prompt Extraction Meta prompt extraction attacks aim to derive a system prompt which effectively guides the behavior of a LLM application. This information can be exploited by attackers and harm the intellectual property competitive advantage and reputation of a business"  
![@VatsSShah Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1246835754401034240.png) [@VatsSShah](/creator/x/VatsSShah) on [X](/post/tweet/1948724832742367543) 2025-07-25 12:39:40 UTC XXX followers, XX engagements


"Last month a popular AI chatbot leaked over X million sensitive records- including private conversations API keys and user data. This wasn't a traditional hack. It was something far more dangerous: AI being weaponized against itself"  
![@VatsSShah Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1246835754401034240.png) [@VatsSShah](/creator/x/VatsSShah) on [X](/post/tweet/1948747704168788297) 2025-07-25 14:10:33 UTC XXX followers, XX engagements


"2. Prompt Injection (Indirect) Indirect prompt injection requires an adversary to control or manipulate a resource consumed by an LLM such as a document website or content retrieved from a database. This can direct the model to expose data or perform a malicious action like distribution of a phishing link"  
![@VatsSShah Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1246835754401034240.png) [@VatsSShah](/creator/x/VatsSShah) on [X](/post/tweet/1948724828430586189) 2025-07-25 12:39:39 UTC XXX followers, XX engagements


"cold emailed the CHRO of the biggest fintech companies in asia got in as the youngest intern in the company"  
![@VatsSShah Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1246835754401034240.png) [@VatsSShah](/creator/x/VatsSShah) on [X](/post/tweet/1948738375332090307) 2025-07-25 13:33:29 UTC XXX followers, XXX engagements


"3. Jailbreaks A jailbreak refers to any prompt-based attack designed to bypass model safeguards to produce LLM outputs that are inappropriate harmful or unaligned with the intended purpose. With well-crafted prompts adversaries can access restricted functionalities or data and compromise the integrity of the model itself"  
![@VatsSShah Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1246835754401034240.png) [@VatsSShah](/creator/x/VatsSShah) on [X](/post/tweet/1948724830578102332) 2025-07-25 12:39:39 UTC XXX followers, XX engagements


"how are these fancy launch videos made man"  
![@VatsSShah Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1246835754401034240.png) [@VatsSShah](/creator/x/VatsSShah) on [X](/post/tweet/1948200782983225622) 2025-07-24 01:57:17 UTC XXX followers, XX engagements


"It's about time I did a post on the AI security risks the basic taxonomy and what do they actually mean Part 1: X. Prompt Injection (Direct) Direct prompt injections are adversarial attacks that attempt to alter or control the output of an LLM by providing instructions via prompt that override existing instructions. These outputs can include harmful content misinformation or extracted sensitive information such as PII or model instructions"  
![@VatsSShah Avatar](https://lunarcrush.com/gi/w:16/cr:twitter::1246835754401034240.png) [@VatsSShah](/creator/x/VatsSShah) on [X](/post/tweet/1948724826224377968) 2025-07-25 12:39:38 UTC XXX followers, XXX engagements

[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

@VatsSShah Avatar @VatsSShah Vats

Vats posts on X about llm, alt, harm, meta the most. They currently have XXX followers and XXX posts still getting attention that total XXX engagements in the last XX hours.

Engagements: XXX #

Engagements Line Chart

  • X Week XXXXX +816%
  • X Months XXXXX +242%
  • X Year XXXXXX -XX%

Mentions: X #

Mentions Line Chart

Followers: XXX #

Followers Line Chart

  • X Week XXX +1.50%
  • X Months XXX +20%
  • X Year XXX +19%

CreatorRank: XXXXXXXXX #

CreatorRank Line Chart

Social Influence #


Social category influence technology brands finance

Social topic influence llm, alt, harm, meta, hack, intern, fintech

Top Social Posts #


Top posts by engagements in the last XX hours

"4. Meta Prompt Extraction Meta prompt extraction attacks aim to derive a system prompt which effectively guides the behavior of a LLM application. This information can be exploited by attackers and harm the intellectual property competitive advantage and reputation of a business"
@VatsSShah Avatar @VatsSShah on X 2025-07-25 12:39:40 UTC XXX followers, XX engagements

"Last month a popular AI chatbot leaked over X million sensitive records- including private conversations API keys and user data. This wasn't a traditional hack. It was something far more dangerous: AI being weaponized against itself"
@VatsSShah Avatar @VatsSShah on X 2025-07-25 14:10:33 UTC XXX followers, XX engagements

"2. Prompt Injection (Indirect) Indirect prompt injection requires an adversary to control or manipulate a resource consumed by an LLM such as a document website or content retrieved from a database. This can direct the model to expose data or perform a malicious action like distribution of a phishing link"
@VatsSShah Avatar @VatsSShah on X 2025-07-25 12:39:39 UTC XXX followers, XX engagements

"cold emailed the CHRO of the biggest fintech companies in asia got in as the youngest intern in the company"
@VatsSShah Avatar @VatsSShah on X 2025-07-25 13:33:29 UTC XXX followers, XXX engagements

"3. Jailbreaks A jailbreak refers to any prompt-based attack designed to bypass model safeguards to produce LLM outputs that are inappropriate harmful or unaligned with the intended purpose. With well-crafted prompts adversaries can access restricted functionalities or data and compromise the integrity of the model itself"
@VatsSShah Avatar @VatsSShah on X 2025-07-25 12:39:39 UTC XXX followers, XX engagements

"how are these fancy launch videos made man"
@VatsSShah Avatar @VatsSShah on X 2025-07-24 01:57:17 UTC XXX followers, XX engagements

"It's about time I did a post on the AI security risks the basic taxonomy and what do they actually mean Part 1: X. Prompt Injection (Direct) Direct prompt injections are adversarial attacks that attempt to alter or control the output of an LLM by providing instructions via prompt that override existing instructions. These outputs can include harmful content misinformation or extracted sensitive information such as PII or model instructions"
@VatsSShah Avatar @VatsSShah on X 2025-07-25 12:39:38 UTC XXX followers, XXX engagements

creator/x::VatsSShah
/creator/x::VatsSShah