[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# @wxj3691651 wxj369

wxj369 posts on X about llm, #ai, large language, open ai the most. They currently have XXXXXXXXX followers and XX posts still getting attention that total XXXXXXXXX engagements in the last XX hours.

### Engagements: XXXXXXXXX [#](/creator/twitter::1724391828777299968/interactions)

- X Week XXXXXXXXX +1,394%
- X Month XXXXXXXXX +182%
- X Months XXXXXXXXXX +111%
- X Year XXXXXXXXXX +3,210,502%

### Mentions: XX [#](/creator/twitter::1724391828777299968/posts_active)

- X Week XX +236%
- X Month XX +270%
- X Months XX +363%
- X Year XX +600%

### Followers: XXXXXXXXX [#](/creator/twitter::1724391828777299968/followers)

- X Month XX no change
- X Months XX +7.60%

### CreatorRank: undefined [#](/creator/twitter::1724391828777299968/influencer_rank)

### Social Influence [#](/creator/twitter::1724391828777299968/influence)

---

**Social category influence** [technology brands](/list/technology-brands) [stocks](/list/stocks)

**Social topic influence** [llm](/topic/llm), [#ai](/topic/#ai), [large language](/topic/large-language), [open ai](/topic/open-ai), [intro](/topic/intro), [0x0010a924a343c5f3c108d7b64666bb0c9e2c515f](/topic/0x0010a924a343c5f3c108d7b64666bb0c9e2c515f) #1, [instead of](/topic/instead-of), [$googl](/topic/$googl), [gpus](/topic/gpus), [gpu](/topic/gpu)

**Top assets mentioned** [Alphabet Inc Class A (GOOGL)](/topic/$googl) [Artificial Intelligence (AI4)](/topic/$ai4)

### Top Social Posts [#](/creator/twitter::1724391828777299968/posts)

---

Top posts by engagements in the last XX hours

"@BTC_xxz 0x0010A924A343c5f3c108D7b64666bB0C9E2c515f" [X Link](https://x.com/wxj3691651/status/1980460516616335847) [@wxj3691651](/creator/x/wxj3691651) 2025-10-21T02:25Z XX followers, XX engagements

"ELI5: Why can't / don't LLMs say "I don't know" or ask back clarifying questions instead of hallucinating" [Reddit Link](https://redd.it/1obqvky) [@Double_History1719](/creator/reddit/Double_History1719) 2025-10-20T22:42Z X followers, 70K engagements

"Best Local LLMs - October 2025" [Reddit Link](https://redd.it/1obqkpe) [@rm-rf-rm](/creator/reddit/rm-rf-rm) 2025-10-20T20:22Z X followers, 1550 engagements

"4o roasting other LLMs in Group Chat" [Reddit Link](https://redd.it/1obyewg) [@Shameless_Devil](/creator/reddit/Shameless_Devil) 2025-10-21T00:22Z X followers, 1213 engagements

"Release gpu-poor: INT8 quantization achieving XX% memory reduction on large LLMs (pure Python production metrics)" [Reddit Link](https://redd.it/1oc0a0h) [@BroccoliForsaken3288](/creator/reddit/BroccoliForsaken3288) 2025-10-21T01:52Z X followers, XX engagements
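The gpu-poor release above centers on INT8 weight quantization. As a rough illustration of where the memory savings come from (the quantization scheme and matrix size here are assumptions for illustration, not details from the linked release), symmetric absmax quantization stores one int8 value per fp32 weight plus a single scale:

```python
# Minimal sketch of symmetric absmax INT8 weight quantization.
# Hypothetical sizes; not the gpu-poor implementation from the post.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map the largest-magnitude weight to 127 and round the rest."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)   # one hypothetical weight matrix
q, scale = quantize_int8(w)

fp32_bytes = w.nbytes                                  # 4 bytes per element
int8_bytes = q.nbytes + 4                              # 1 byte per element + the scale
print(f"fp32: {fp32_bytes/2**20:.1f} MiB  int8: {int8_bytes/2**20:.1f} MiB "
      f"({100*(1 - int8_bytes/fp32_bytes):.0f}% smaller)")
print("max abs error:", np.abs(w - dequantize(q, scale)).max())
```

Production schemes typically add per-channel scales and outlier handling on top, but the arithmetic behind the roughly 4x size reduction is the same.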
"Why haven't Indian startups built their own LLMs yet" [Reddit Link](https://redd.it/1obkifl) [@Classic_Turnover_896](/creator/reddit/Classic_Turnover_896) 2025-10-20T14:46Z X followers, XX engagements

"Study finds both humans and LLMs rate deliberative thinkers as more intelligent than intuitive or fast responders in complex reasoning tasks" [Reddit Link](https://redd.it/1oanjaq) [@IronAshish](/creator/reddit/IronAshish) 2025-10-19T12:41Z X followers, 1592 engagements

"ChatGPT gets right what Google misses and this is a problem with using LLMs trained on in internet for information" [Reddit Link](https://redd.it/1oc1lpm) [@Additional-Sky-7436](/creator/reddit/Additional-Sky-7436) 2025-10-21T03:03Z X followers, XX engagements

"Removed by moderator" [Reddit Link](https://redd.it/1oc9d1v) [@Oh_boy90](/creator/reddit/Oh_boy90) 2025-10-21T10:35Z XX followers, XX engagements

"This customer wanted a maxed-out AI-ready PC and we delivered with dual GPUs for maximum performance. 💪 A lot of you asked What's the point of two GPUs Well here's why it makes a huge difference: AI & Machine Learning: Two GPUs drastically reduce training time and improve parallel processing perfect for deep learning and LLMs. Multiple Monitors & Workflows: Ideal for setups with 3+ monitors especially for productivity heavy environments like trading content creation or data science. Rendering Mining & Simulations: Whether you're doing GPU-based rendering crypto mining or real time simulations" [TikTok Link](https://www.tiktok.com/@hall.of.tech/video/7532168039207423263) [@hall.of.tech](/creator/tiktok/hall.of.tech) 2025-07-28T16:19Z 162.8K followers, 1.5M engagements

"LLMs aren't magic they're just insane amounts of matrix math running at GPU speed 🤯 #LLM #ai #artificialintelligence #machinelearning #deeplearning" [TikTok Link](https://www.tiktok.com/@willcodeforfoodtoo/video/7556836352533237047) [@willcodeforfoodtoo](/creator/tiktok/willcodeforfoodtoo) 2025-10-03T03:45Z 4642 followers, 110.4K engagements
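The "just matrix math" framing in the post above is easy to make concrete: one attention step is a handful of matrix multiplications and a softmax, repeated across heads, layers, and tokens. A minimal NumPy sketch, with shapes chosen purely for illustration:

```python
# Toy single-head scaled dot-product attention in NumPy -- the core matrix math
# a transformer layer repeats billions of times. Shapes here are illustrative.
import numpy as np

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (seq, seq) similarity matrix
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted sum of values

seq_len, d_model = 8, 64
rng = np.random.default_rng(0)
x = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(3))
out = attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)   # (8, 64): one attention pass is just a few matmuls
```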
"guy who watched the Sutton and Karpathy Dwarkesh interview: yeah. i guess intelligence is a lot more complicated than i previously thought. maybe LLMs aren't intelligent the Boltzmann brain that spontaneously emerged in a tub of curdled yogurt:" [TikTok Link](https://www.tiktok.com/@fireplacepillow/video/7562746560526814477) [@fireplacepillow](/creator/tiktok/fireplacepillow) 2025-10-19T01:59Z 16.8K followers, 19.5K engagements

"🔍 The Future of AI is in His Hands NVIDIA CEO Jensen Huang unveils the next-gen computing platform built to power the future of artificial intelligence. 🚀 The hardware he's holding isn't just a motherboard it's a computational powerhouse designed for generative AI data centers and large language models (LLMs). With this breakthrough technology: ⚡ AI models with billions of parameters will train faster than ever 🌐 Data processing will reach unprecedented speeds 🧠 The boundaries of artificial intelligence will be redefined 📌 It's not just silicon it's the foundation of tomorrow's intelligence." [TikTok Link](https://www.tiktok.com/@lunexar_aep/video/7510945642412264722) [@lunexar_aep](/creator/tiktok/lunexar_aep) 2025-06-01T11:45Z XXX followers, 131.3K engagements

"2 things humans can do but LLMs can't #ai #aistartup #andrejkarpathy #agi #llm" [TikTok Link](https://www.tiktok.com/@aistartupfren/video/7563303790250659102) [@aistartupfren](/creator/tiktok/aistartupfren) 2025-10-20T14:01Z 21.2K followers, 4172 engagements

"Diffusion Models: from autoencoder to VAE to GAN to Diffusion. How AI matured its diffusion model to where it is today simplified as emerging technology that could potentially surpass transformer architecture. We explore the challenges of training LLMs in learning image features and storing them in latent space to more. #ai #llm #artificialintelligence #explained #largelanguagemodels #softwareengineer #research #softwaredeveloper #tech #openai #anthropic #gpt5 #gpt #coder" [TikTok Link](https://www.tiktok.com/@calebwritescode/video/7544261247022255373) [@calebwritescode](/creator/tiktok/calebwritescode) 2025-08-30T06:27Z 20.9K followers, 237.9K engagements

"The Signs of AI Writing - Wikipedia just published a guide to spotting AI-generated writing and it's the best one on the internet. ChatGPT Claude Gemini and the other AI chatbots (LLMs) still write in a predictable way with lots of cliché phrases and words as well as little text formatting quirks that give it away. #ai #chatgpt #chatgptprompts #techtok #chatgpt5 #aitools #technology #artificialintelligence #chatgpttips #aiwriting #llm #llms" [TikTok Link](https://www.tiktok.com/@willfrancis24/video/7537012290109852950) [@willfrancis24](/creator/tiktok/willfrancis24) 2025-08-10T17:37Z 145.3K followers, 905.3K engagements

"When diving into the fascinating world of AI coding agents context is key Discover how even a seemingly vast context of 90000 tokens can quickly dwindle leaving these intelligent systems grappling for clarity. In today's exploration viewers will gain insight into optimizing LLMs with innovative techniques like flash attention and K cache quantization. Learn why every token counts and how to leverage memory management for enhanced performance. While current limitations are apparent understanding these challenges is the first step towards mastering AI coding. #AICoding #ContextOptimization" [TikTok Link](https://www.tiktok.com/@zenaiengineer/video/7563288165168041238) [@zenaiengineer](/creator/tiktok/zenaiengineer) 2025-10-20T13:00Z 1308 followers, XXX engagements
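The context-budget post above hints at why long contexts strain memory: the KV cache grows linearly with sequence length. A back-of-the-envelope sizing sketch, using hypothetical 7B-class dimensions rather than anything stated in the video, shows why cache quantization matters at a 90,000-token budget:

```python
# Rough KV-cache sizing. All model dimensions below are assumed for illustration
# (a hypothetical 7B-class model with grouped-query attention), not from the post.
def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=8, head_dim=128, bytes_per_val=2):
    # 2x for keys and values, per layer, per KV head, per position
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_val

for tokens in (8_000, 90_000):
    fp16 = kv_cache_bytes(tokens)                    # fp16/bf16 cache
    int8 = kv_cache_bytes(tokens, bytes_per_val=1)   # 8-bit quantized cache
    print(f"{tokens:>6} tokens: fp16 {fp16/2**30:.1f} GiB, int8 {int8/2**30:.1f} GiB")
```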
"How Hackers STILL Bypass AI Filters in 2025 (You Won't Believe This One 😳) These are the top X jailbreak methods that are STILL bypassing AI filters and moderation in 2025 even on reasoning models Let's break it down: 💥 X. SEAL (Stacked Encryption for Adaptive LLMs) Hackers layer obfuscated payloads using Base64 ROT13 emoji codes etc. then prompt the AI to decode each step one-by-one. Because reasoning models are trained to follow instructions step-by-step the malicious command is revealed after moderation filters are already bypassed. ⚠ Think: ➡ Decode this. Now decode again. Now act on it." [TikTok Link](https://www.tiktok.com/@eurothrottle/video/7534797188896017719) [@eurothrottle](/creator/tiktok/eurothrottle) 2025-08-04T18:22Z 359.2K followers, 33.3K engagements

"I quite like the new DeepSeek-OCR paper. It's a good OCR model (maybe a bit worse than dots) and yes data collection etc. but anyway it doesn't matter. The more interesting part for me (esp as a computer vision at heart who is temporarily masquerading as a natural language person) is whether pixels are better inputs to LLMs than text. Whether text tokens are wasteful and just terrible at the input. Maybe it makes more sense that all inputs to LLMs should only ever be images. Even if you happen to have pure text input maybe you'd prefer to render it and then feed that in: - more information" [X Link](https://x.com/karpathy/status/1980397031542989305) [@karpathy](/creator/x/karpathy) 2025-10-20T22:13Z 1.4M followers, 1.5M engagements

"🚀 DeepSeek-OCR the new frontier of OCR from @deepseek_ai exploring optical context compression for LLMs is running blazingly fast on vLLM ⚡ (2500 tokens/s on A100-40G) powered by vllm==0.8.5 for day-0 model support. 🧠 Compresses visual contexts up to XX while keeping XX% OCR accuracy at XX. 📄 Outperforms GOT-OCR2.0 & MinerU2.0 on OmniDocBench using fewer vision tokens. 🤝 The vLLM team is working with DeepSeek to bring official DeepSeek-OCR support into the next vLLM release making multimodal inference even faster and easier to scale. 🔗 #vLLM #DeepSeek #OCR #LLM #VisionAI #DeepLearning" [X Link](https://x.com/vllm_project/status/1980235518706401405) [@vllm_project](/creator/x/vllm_project) 2025-10-20T11:31Z 22.1K followers, 912.3K engagements

"If this Karpathy interview doesn't pop the ai bubble nothing will. XX brutal quotes: X. LLMs don't work yet They don't have enough intelligence they're not multimodal enough they can't use computers and they don't remember what you tell them. They're cognitively lacking. It'll take about a decade to work through all of that. X. When you boot them up they always start from zero They have no distillation phase no process like sleep where what happened gets analyzed and written back into the weights. X. What's stored in their weights is only a hazy recollection of the internet It's just a compressed" [X Link](https://x.com/Prithvir12/status/1980186299794411560) [@Prithvir12](/creator/x/Prithvir12) 2025-10-20T08:16Z 8802 followers, 442.7K engagements

"Prediction: LLMs in their current form may not be able to do everything but AI now has enough momentum that this won't matter. Beam engines couldn't do everything either but they were enough to set off the Industrial Revolution" [X Link](https://x.com/paulg/status/1980533520021045567) [@paulg](/creator/x/paulg) 2025-10-21T07:16Z 2.1M followers, 153.9K engagements

"Wanted to get better intuitions for how RL works on LLMs. So I wrote a simple script to teach Nanochat to add X digit numbers. I was surprised at how fast it learned. Until I looked at the model's generations and realized that it had just learned to always call the built-in Python interpreter 😂. The code I wrote is very remedial minimal and inefficient - I'm a professional podcaster alright But it might be helpful if you just want to see the basics of how REINFORCE or GRPO work. Link to gist below. Fundamentally it's not that complicated: generate multiple trajectories per prompt. Update" [X Link](https://x.com/dwarkesh_sp/status/1980427914639524278) [@dwarkesh_sp](/creator/x/dwarkesh_sp) 2025-10-21T00:16Z 147.9K followers, 214.6K engagements
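The Nanochat RL post above boils REINFORCE/GRPO down to "generate multiple trajectories per prompt, then update." The toy sketch below applies that recipe to a 10-armed bandit instead of a language model; everything here is illustrative and unrelated to the linked gist. Sample a group of actions, use the group mean reward as the baseline, and push log-probabilities in proportion to the advantage:

```python
# Toy REINFORCE with a group-relative baseline (the GRPO flavour the post alludes to),
# run on a 10-armed bandit rather than an LLM -- a sketch of the update rule only.
import numpy as np

rng = np.random.default_rng(0)
true_reward = rng.standard_normal(10)      # hidden payoff of each "action"
logits = np.zeros(10)                      # the "policy" being trained
lr, group_size = 0.5, 8

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for step in range(200):
    probs = softmax(logits)
    actions = rng.choice(10, size=group_size, p=probs)        # a "group of trajectories"
    rewards = true_reward[actions] + 0.1 * rng.standard_normal(group_size)
    advantages = rewards - rewards.mean()                     # group-mean baseline
    grad = np.zeros_like(logits)
    for a, adv in zip(actions, advantages):
        grad += adv * (np.eye(10)[a] - probs)                 # grad of log pi(a) w.r.t. logits
    logits += lr * grad / group_size

print("best arm:", true_reward.argmax(), "policy's favourite:", softmax(logits).argmax())
```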
"This man breaks LLMs until he gets put on a list" [X Link](https://x.com/financedystop/status/1980303495631900966) [@financedystop](/creator/x/financedystop) 2025-10-20T16:02Z 262.5K followers, 224K engagements

"I can't love these points more. OCR is the right step toward fine-grained visual reasoning. Unlike semantic reasoning it forces model to resolve every pixel - what we take for granted reading/viewing the world. Funny how we do this every day yet just start to scale it in LLMs" [X Link](https://x.com/wzihanw/status/1980449933825306816) [@wzihanw](/creator/x/wzihanw) 2025-10-21T01:43Z 22.8K followers, 67.2K engagements

"DeepSeek released an OCR model today. Their motivation is really interesting: they want to use visual modality as an efficient compression medium for textual information and use this to solve long-context challenges in LLMs. Of course they are using it to get more training data for their models as well. "DeepSeek-OCR can generate training data for LLMs/VLMs at a scale of 200k+ pages per day (a single A100-40G)."" [X Link](https://x.com/iScienceLuvr/status/1980247935700066468) [@iScienceLuvr](/creator/x/iScienceLuvr) 2025-10-20T12:21Z 82.5K followers, 154.6K engagements

"For those who may not have the patience to read the full paper or understand the technical jargon let me give you a breakdown. The authors introduce SPIN-Bench a unified benchmark framework to evaluate how large language models (LLMs) perform not just on isolated planning tasks but also in social multi-agent strategic settings. SPIN-Bench is composed of several categories of tasks that increase in complexity: X. Classical planning (single-agent deterministic) Tasks expressed in PDDL across many domains (21 domains 1280 tasks) including things like spatial reasoning and resource management." [X Link](https://x.com/Vanieofweb3/status/1980542861843075357) [@Vanieofweb3](/creator/x/Vanieofweb3) 2025-10-21T07:53Z 37.2K followers, 15.8K engagements

"One of the most important papers of the year" [X Link](https://x.com/NandoDF/status/1980535472867983831) [@NandoDF](/creator/x/NandoDF) 2025-10-21T07:23Z 106.1K followers, 35.5K engagements

"This Is AGI (S1E7): Latent Spaces Wtf" [YouTube Link](https://youtube.com/watch?v=BobqFSatRn8) [@chadyuk](/creator/youtube/chadyuk) 2025-10-20T00:00Z XXX followers, 10.9K engagements

"Wikipedia's Secret Signs of AI Writing Exposed" [YouTube Link](https://youtube.com/watch?v=34BmRpsDTh0) [@willfrancis](/creator/youtube/willfrancis) 2025-08-11T20:36Z 11K followers, 2.1M engagements

"Transformers the tech behind LLMs Deep Learning Chapter 5" [YouTube Link](https://youtube.com/watch?v=wjZofJX0v4M) [@3blue1brown](/creator/youtube/3blue1brown) 2024-04-01T19:13Z 7.8M followers, 8.2M engagements

"Drug Discovery and Building Your Own LLM Chatbot with LobeChat Intel Software" [YouTube Link](https://youtube.com/watch?v=iDTTD3h5rFo) [@intelsoftware](/creator/youtube/intelsoftware) 2025-09-02T15:05Z 255K followers, 330.1K engagements

"Deep Dive into LLMs like ChatGPT" [YouTube Link](https://youtube.com/watch?v=7xTGNNLPyMI) [@andrejkarpathy](/creator/youtube/andrejkarpathy) 2025-02-05T18:23Z 1.1M followers, 3.8M engagements

"Richard Sutton Father of RL thinks LLMs are a dead end" [YouTube Link](https://youtube.com/watch?v=21EYKqUsPfg) [@dwarkeshpatel](/creator/youtube/dwarkeshpatel) 2025-09-26T16:01Z 1M followers, 430.1K engagements

"Large Language Models explained briefly" [YouTube Link](https://youtube.com/watch?v=LPZh9BOjkQs) [@3blue1brown](/creator/youtube/3blue1brown) 2024-11-20T15:07Z 7.8M followers, 4.2M engagements

"LLMs are in trouble" [YouTube Link](https://youtube.com/watch?v=o2s8I6yBrxE) [@theprimetimeagen](/creator/youtube/theprimetimeagen) 2025-10-14T12:01Z 919K followers, 623.4K engagements

"Stanford CME295 Transformers & LLMs Autumn 2025 Lecture X - Transformer" [YouTube Link](https://youtube.com/watch?v=Ub3GoFaUcds) [@stanfordonline](/creator/youtube/stanfordonline) 2025-10-17T22:08Z 891K followers, 40K engagements