[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]  Andrej Karpathy [@karpathy](/creator/twitter/karpathy) on x 1.3M followers Created: 2025-06-16 17:02:51 UTC I should clarify that the risk is highest if you're running local LLM agents (e.g. Cursor, Claude Code, etc.). If you're just talking to an LLM on a website (e.g. ChatGPT), the risk is much lower *unless* you start turning on Connectors. For example I just saw ChatGPT is adding MCP support. This will combine especially poorly with all the recently added memory features - e.g. imagine ChatGPT telling everything it knows about you to some attacker on the internet just because you checked the wrong box in the Connectors settings. XXXXXXX engagements  **Related Topics** [all the](/topic/all-the) [mcp](/topic/mcp) [open ai](/topic/open-ai) [llm](/topic/llm) [Post Link](https://x.com/karpathy/status/1934657940155441477)
[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]
Andrej Karpathy @karpathy on x 1.3M followers
Created: 2025-06-16 17:02:51 UTC
I should clarify that the risk is highest if you're running local LLM agents (e.g. Cursor, Claude Code, etc.).
If you're just talking to an LLM on a website (e.g. ChatGPT), the risk is much lower unless you start turning on Connectors. For example I just saw ChatGPT is adding MCP support. This will combine especially poorly with all the recently added memory features - e.g. imagine ChatGPT telling everything it knows about you to some attacker on the internet just because you checked the wrong box in the Connectors settings.
XXXXXXX engagements
/post/tweet::1934657940155441477