[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# @zexigh (Philippe Anel)

Philippe Anel posts on X about qwen, ollama, llamacpp, and inference the most. They currently have XXX followers and X posts still getting attention, totaling X engagements in the last XX hours.

### Engagements: X [#](/creator/twitter::580146594/interactions)

- X Week: XXX (+707%)

### Mentions: X [#](/creator/twitter::580146594/posts_active)

### Followers: XXX [#](/creator/twitter::580146594/followers)

- X Week: XXX (+0.28%)

### CreatorRank: undefined [#](/creator/twitter::580146594/influencer_rank)

### Social Influence [#](/creator/twitter::580146594/influence)

**Social topic influence:** [qwen](/topic/qwen), [ollama](/topic/ollama), [llamacpp](/topic/llamacpp), [inference](/topic/inference), [llm](/topic/llm), [#ai](/topic/#ai), [logits](/topic/logits), [bytes](/topic/bytes), [gpt](/topic/gpt)

### Top Social Posts [#](/creator/twitter::580146594/posts)

Top posts by engagements in the last XX hours:

"I'm building my own inference engine in Rust and AVX512 (huge thanks to @rasbt & @karpathy 🙏). At first I thought it was just my bad config: poor top-k/top-p/temperature choices. But nope. I've hit the same loop on @llamacpp, @ollama (qwen), and even Grok X in thinking mode 😅" [@zexigh](/creator/x/zexigh) on [X](/post/tweet/1947730274315337968) 2025-07-22 18:47:39 UTC, XXX followers, XX engagements

"Update: Someone pointed out penalties should catch this loop. But oops: Ollama/Qwen forces repeat_penalty to XXX. Why? Speed? Doubt it: the window is only XX tokens, negligible vs model time. Logits incoming. #AI #LLM #Debugging #72HourDays Still, I had the same issue with Grok3" [@zexigh](/creator/x/zexigh) on [X](/post/tweet/1947923491828994189) 2025-07-23 07:35:25 UTC, XXX followers, XX engagements

"🧠 Did you know the GPT tokenizer doesn't treat "space" as a normal char: ' ' (space) (U+0120), '\n' (U+010A), '\t' (U+010B), '\r' (U+0108). Because it uses byte-level BPE and maps raw bytes to uncommon Unicode chars for readability & reversibility" [@zexigh](/creator/x/zexigh) on [X](/post/tweet/1946460242084036695) 2025-07-19 06:40:59 UTC, XXX followers, XX engagements
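The repeat penalty discussed in the second post is, in samplers like llama.cpp's, typically applied by dampening the logits of recently emitted tokens before sampling. A minimal sketch of that idea, assuming the common divide-positive/multiply-negative rule; `apply_repeat_penalty` and its parameters are illustrative names, not Ollama's actual API:

```python
# Hedged sketch of a llama.cpp-style repeat penalty.
# `recent` stands in for the last-N sampled token ids (the penalty window
# the post argues is cheap relative to model time).
def apply_repeat_penalty(logits, recent, penalty=1.1):
    for tok in set(recent):
        if logits[tok] > 0:
            logits[tok] /= penalty   # shrink positive logits toward 0
        else:
            logits[tok] *= penalty   # push negative logits further down
    return logits

logits = [2.0, -1.0, 0.5]
apply_repeat_penalty(logits, recent=[0, 1], penalty=2.0)
# token 0: 2.0 -> 1.0, token 1: -1.0 -> -2.0, token 2 untouched
```

With the penalty forced to 1.0 (as the post claims for Ollama/Qwen) this function is a no-op, which would explain why repetition loops slip through.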
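The byte-to-Unicode trick in the last post can be reproduced directly: the original GPT-2 encoder builds a reversible table where printable bytes map to themselves and all other bytes are shifted to rare code points starting at U+0100. A sketch following that published logic (function name mirrors GPT-2's `bytes_to_unicode`; the printed pairs for space and newline match the post):

```python
# Sketch of GPT-2's byte-level BPE byte-to-unicode table.
def bytes_to_unicode():
    # Printable byte ranges that map to themselves.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            # Remaining (mostly control/whitespace) bytes are shifted to
            # uncommon code points so every token stays printable and the
            # mapping stays reversible.
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, (chr(c) for c in cs)))

m = bytes_to_unicode()
print(hex(ord(m[ord(" ")])))   # 0x120 ('Ġ'), space
print(hex(ord(m[ord("\n")])))  # 0x10a ('Ċ'), newline
```

This is why model vocabularies show tokens like `Ġhello`: the leading `Ġ` is just the escaped space byte.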