Rene Bruentrup (@fallacyalarm) on X, 9197 followers
Created: 2025-07-26 22:01:09 UTC
The most dangerous aspect of AI

If you define democracy as the rule of the majority, then AI is democratic. An LLM will almost always give you the answer that is supported by most people, because that answer is statistically the most likely one given its training data.
However, truth isn't democratic. It's not something the majority can decide. It is what it is, no matter who acknowledges it or how many people do.
For example, let's say millions of people someday decide to flood the internet with statements that "the sky is green". Whether they actually believe it or are incentivized to lie doesn't matter much. What matters is that these statements become training data for AI tools.

And these AI tools feed the claim back to people wondering whether the sky is actually green. They might ask a chatbot: "Is the sky actually green? Is there evidence for that?"
And the chatbot may answer: "Yes, it's well documented and widely accepted." I have seen answers like that on numerous occasions; only after I called out the BS did the LLM correct itself. Most cases involve unimportant topics, but it can and does happen on big topics as well.
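The dynamic is easy to demonstrate with a deliberately simplified sketch. The toy "model" below is my own illustration, not how any production LLM actually works, and the corpus contents and counts are hypothetical: it simply returns the most frequent completion in its training corpus. Real LLMs learn smoothed probability distributions rather than raw counts, but the pull toward the statistical majority is analogous.

```python
from collections import Counter

# Hypothetical training corpus: 1000 organic statements.
corpus = ["the sky is blue"] * 1000

def answer(prompt: str, corpus: list[str]) -> str:
    """Return the completion that most often follows `prompt` in the corpus."""
    completions = Counter(
        line[len(prompt):].strip()
        for line in corpus
        if line.startswith(prompt)
    )
    return completions.most_common(1)[0][0]

print(answer("the sky is", corpus))   # -> blue

# A coordinated flood of false statements shifts the statistical
# majority, and the toy model's answer flips with it, regardless
# of what is actually true.
corpus += ["the sky is green"] * 5000
print(answer("the sky is", corpus))   # -> green
```

Note that the sketch never evaluates whether the sky is green; it only tracks which answer dominates its data, which is exactly the vulnerability described here.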
The problem with that is twofold:

1. AI is fueling the postmodernist agenda, which rejects the idea of an objective truth.
2. AI doesn't care about truth because it doesn't understand truth. It cares about making the prediction most consistent with its training data.

I think we are very vulnerable as a society to getting fooled by it.