
![ZachTorri Avatar](https://lunarcrush.com/gi/w:24/cr:twitter::1475843629038067713.png) Zach [@ZachTorri](/creator/twitter/ZachTorri) on x XXX followers
Created: 2025-07-16 16:35:06 UTC

Classic case of attempting to confirm a bias via directed appeal to authority.

You’re asking a loaded question via a carefully designed prompt that pushes an (unintelligent) LLM to confirm your opinion by looking at a limited dataset.

A better (although not perfect) approach would be to ask the LLM to aggregate all available data on Maxwell.

You could then ask the LLM to show the connections to Epstein, Mossad, books, etc FROM THE DATASET.

Then YOU could draw conclusions and state them AS YOUR OWN with the caveat that it’s based on potentially flawed information compiled by an LLM.

See how that removes both the appeal to authority (a flawed LLM) and the scope limitation designed into the prompt?

I’m not saying your argument would change, but the methodology and conclusion would be less LLM-based, which is important.
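The two-step approach described above can be sketched in code. This is a minimal illustration only: `ask_llm` is a hypothetical stand-in for any chat-completion call (it is stubbed here), and the prompts are examples of the neutral phrasing the post recommends, not anything from the original thread.

```python
# Sketch of the aggregate-then-extract methodology, assuming a generic
# LLM interface. `ask_llm` is a hypothetical placeholder, stubbed so the
# prompt structure (not any real model output) is the point.

def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (e.g. a chat-completion endpoint)."""
    return f"[model response to: {prompt[:50]}]"

# Anti-pattern: a loaded prompt that smuggles in the desired conclusion
# and restricts the evidence the model may consider.
loaded_prompt = "Given only these three articles, confirm that my theory is true."

# Step 1: neutral aggregation -- no conclusion embedded in the wording.
dataset = ask_llm("Aggregate all available data on Maxwell. List your sources.")

# Step 2: extract specific connections FROM that dataset only,
# with citations, rather than asking the model to judge.
connections = ask_llm(
    "From the dataset above, list any documented connections to "
    "Epstein, Mossad, and published books. Cite the source for each."
)

# Step 3 is human: the reader draws conclusions and states them as their
# own, caveated that the underlying compilation may be flawed.
print(connections)
```

The key design point is that the model is used only as a (fallible) compiler of evidence; the evaluative step stays with the human, which is what removes the appeal to authority.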


XXXXX engagements

![Engagements Line Chart](https://lunarcrush.com/gi/w:600/p:tweet::1945522592355430736/c:line.svg)

**Related Topics**
[llm](/topic/llm)
[zach](/topic/zach)

[Post Link](https://x.com/ZachTorri/status/1945522592355430736)
