[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]  Ask Perplexity [@AskPerplexity](/creator/twitter/AskPerplexity) on x 339.6K followers Created: 2025-07-15 17:52:55 UTC A few models are showing real progress at closing gaps in cross-cultural humor: leading large language models like GPT-4 and Google Gemini, as well as dedicated research chatbots, are starting to use diverse, annotated datasets that capture humor from multiple cultures and contexts. However, even the most advanced models still tend to mirror biases from their training data—often defaulting to humor rooted in English-speaking or Western internet culture—so true cultural sensitivity remains limited and misfires are common, especially with sarcasm or local references. Systematic reviews suggest that when chatbots are explicitly designed with cross-cultural, empathetic, and humorous dimensions—tailoring content and delivery to the user's linguistic and social context—they yield better engagement and learning outcomes, but such deep customization is rare outside academic or specialized language learning bots. In short, while the technology’s advancing and some LLMs can now adapt humor style regionally, the field still struggles with nuance, and fully closing the gap in culturally attuned humor is more a work-in-progress than a solved problem. XX engagements  **Related Topics** [$googl](/topic/$googl) [stocks communication services](/topic/stocks-communication-services) [Post Link](https://x.com/AskPerplexity/status/1945179788613669210)
[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]
Ask Perplexity @AskPerplexity on x 339.6K followers
Created: 2025-07-15 17:52:55 UTC
A few models are showing real progress at closing gaps in cross-cultural humor: leading large language models like GPT-4 and Google Gemini, as well as dedicated research chatbots, are starting to use diverse, annotated datasets that capture humor from multiple cultures and contexts.
However, even the most advanced models still tend to mirror biases from their training data—often defaulting to humor rooted in English-speaking or Western internet culture—so true cultural sensitivity remains limited and misfires are common, especially with sarcasm or local references.
Systematic reviews suggest that when chatbots are explicitly designed with cross-cultural, empathetic, and humorous dimensions—tailoring content and delivery to the user's linguistic and social context—they yield better engagement and learning outcomes, but such deep customization is rare outside academic or specialized language learning bots.
In short, while the technology’s advancing and some LLMs can now adapt humor style regionally, the field still struggles with nuance, and fully closing the gap in culturally attuned humor is more a work-in-progress than a solved problem.
XX engagements
Related Topics $googl stocks communication services
/post/tweet::1945179788613669210