Zach [@ZachTorri](/creator/twitter/ZachTorri) on X, XXX followers
Created: 2025-07-16 16:35:06 UTC
Classic case of attempting to confirm a bias via a directed appeal to authority.
You're asking a loaded question through a carefully designed prompt that pushes an (unintelligent) LLM to confirm your opinion by looking at a limited dataset.
A better (although not perfect) approach would be to ask the LLM to aggregate all available data on Maxwell.
You could then ask the LLM to show the connections to Epstein, Mossad, books, etc., FROM THE DATASET.
Then YOU could draw conclusions and state them AS YOUR OWN, with the caveat that they're based on potentially flawed information compiled by an LLM.
See how that both removes the appeal to authority (a flawed LLM) and eliminates the scope limitation designed into the prompt?
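A minimal sketch of the difference, assuming a hypothetical `ask_llm` helper in place of any real model API, with invented prompt wording that just illustrates the pattern described above:

```python
# Hypothetical stand-in for a real model call; returns a placeholder so
# the sketch runs end to end without any external API.
def ask_llm(prompt: str) -> str:
    return f"<model response to: {prompt[:50]}...>"

# The loaded pattern: the desired conclusion and a narrow scope are
# baked into one prompt, so the answer mostly confirms the bias.
loaded_answer = ask_llm(
    "Based on these three excerpts, confirm the connection between "
    "Maxwell and Mossad."
)

# The suggested two-step pattern: aggregate first, without conclusions...
dataset = ask_llm(
    "Aggregate all available data on Maxwell. List sources and claims; "
    "do not draw conclusions."
)

# ...then ask for connections strictly FROM THE DATASET.
connections = ask_llm(
    "From the following dataset only, list any documented connections "
    "to Epstein, Mossad, and books:\n" + dataset
)

# A human reads `connections`, draws the conclusion, and states it as
# their own, caveated as resting on LLM-compiled (possibly flawed) data.
print(connections)
```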
I’m not saying your argument would change, but the methodology and conclusion would be less LLM-based, which is important.
XXXXX engagements
[Post Link](https://x.com/ZachTorri/status/1945522592355430736)