Rene Bruentrup [@fallacyalarm](/creator/twitter/fallacyalarm) on X, 9197 followers
Created: 2025-07-26 22:01:09 UTC
**The most dangerous aspect of AI**

If you define democracy as the rule of the majority, then AI is democratic. An LLM will almost always give you the answer that is supported by most people, because that answer is statistically dominant in its training data.
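To make the dynamic concrete, here is a minimal, hypothetical sketch: a toy "model" that simply returns whatever statement appears most often in its training corpus. Real LLMs are vastly more sophisticated, but likelihood-based training pulls them toward the statistically dominant continuation in a loosely analogous way. The `majority_answer` function and the corpus are invented for illustration.

```python
from collections import Counter

def majority_answer(corpus: list[str], prompt: str) -> str:
    """Toy stand-in for an LLM: among the training statements that match
    the prompt, return the one that occurs most often."""
    candidates = [line for line in corpus if line.startswith(prompt)]
    return Counter(candidates).most_common(1)[0][0]

# The true statement dominates the training data, so it wins.
corpus = ["the sky is blue"] * 900 + ["the sky is green"] * 100
print(majority_answer(corpus, "the sky is"))  # -> "the sky is blue"
```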
However, truth isn't democratic. It's not something the majority can decide. It is what it is, no matter who acknowledges it or how many people do.
For example, let's say millions of people someday decide to flood the internet with statements that "the sky is green". Whether they actually believe it or are incentivized to lie doesn't matter much. The fact is that these statements end up training the AI tools.
And these AI tools feed these statements back to those people wondering whether the sky is actually green. They might ask a chatbot: "Is the sky actually green? Is there evidence for that?"
And the chatbot may answer, "yes, it's well documented and widely accepted." I have seen answers like that on numerous occasions; only when I finally called out the BS did the LLM correct itself. Most cases involve unimportant topics, but it can and does happen on big topics as well. The continuation of the sketch below shows how little it takes to flip the answer.
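Continuing the toy model above (same assumptions, reusing the hypothetical `majority_answer` function), a coordinated flood of false statements is enough to flip the result:

```python
# Reuses majority_answer from the earlier sketch.
corpus = ["the sky is blue"] * 900 + ["the sky is green"] * 100
poisoned = corpus + ["the sky is green"] * 2000  # hypothetical coordinated flood
print(majority_answer(poisoned, "the sky is"))   # -> "the sky is green"
```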
The problem with that is twofold:

1) These AI tools are generally so good at providing information that people tend to just trust them, me included. You must be extraordinarily skeptical and demand sources to avoid being misinformed. It's understandable not to have that rigor with every topic you are looking at.

2) We perceive more and more of our environment through digital filters instead of first-hand in the real world. What do we know first-hand about climate change, diversity, international trade, etc.? We have to trust what machines tell us about these things. And if machines are optimized not for truth but for consensus, that is highly problematic.
AI is fueling the postmodernist agenda, which rejects the idea of an objective truth. AI doesn't care about truth because it doesn't understand truth. It cares about making predictions consistent with its training data. I think we as a society are very vulnerable to being fooled by it.
[Post Link](https://x.com/fallacyalarm/status/1949228522884796506)