# ![@randomfoo2 Avatar](https://lunarcrush.com/gi/w:26/cr:reddit::t2_eztox.png) @randomfoo2

randomfoo2 posts on Reddit most often about shisa, llamacpp, ai, and llm. They currently have [------] followers and [--] posts still receiving attention, totaling [---] engagements in the last [--] hours.

### Engagements: [---] [#](/creator/reddit::t2_eztox/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:reddit::t2_eztox/c:line/m:interactions.svg)

- [--] Week [-----] +7.80%
- [--] Month [-----] +78%
- [--] Months [------] +572%
- [--] Year [------] +203%

### Mentions: [--] [#](/creator/reddit::t2_eztox/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:reddit::t2_eztox/c:line/m:posts_active.svg)

- [--] Week [--] +11%
- [--] Month [--] no change
- [--] Months [--] +7.10%
- [--] Year [--] +54%

### Followers: [------] [#](/creator/reddit::t2_eztox/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:reddit::t2_eztox/c:line/m:followers.svg)

- [--] Months [------] +1.40%
- [--] Year [------] +18%

### CreatorRank: [---------] [#](/creator/reddit::t2_eztox/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:reddit::t2_eztox/c:line/m:influencer_rank.svg)

### Social Influence

**Social category influence**
- [technology brands](/list/technology-brands) 34.48%
- [stocks](/list/stocks) 31.03%
- [countries](/list/countries) 6.9%

**Social topic influence**
[shisa](/topic/shisa) #13, [llamacpp](/topic/llamacpp) #16, [ai](/topic/ai) 10.34%, [llm](/topic/llm) #935, [max](/topic/max) 6.9%, [japan](/topic/japan) 6.9%, [gpu](/topic/gpu) 6.9%, [model](/topic/model) 3.45%, [testing](/topic/testing) 3.45%, [improved](/topic/improved) 3.45%

### Top Social Posts
Top posts by engagements in the last [--] hours

"llama.cpp Compute and Memory Bandwidth Efficiency w/ Different Devices/Backends LocalLLaMA LocalLLaMA"  
[Reddit Link](https://redd.it/1ghvwsj)  2024-11-02T13:04Z [--] followers, [---] engagements


"Relative performance in llama.cpp when adjusting power limits for an RTX [----] (w/ scripts) LocalLLaMA LocalLLaMA"  
[Reddit Link](https://redd.it/1hg6qrd)  2024-12-17T09:08Z [--] followers, [---] engagements


"AMD Strix Halo (Ryzen AI Max+ 395) GPU LLM Performance LocalLLaMA LocalLLaMA"  
[Reddit Link](https://redd.it/1kmi3ra)  2025-05-14T15:32Z [--] followers, [----] engagements


"Updated Strix Halo (Ryzen AI Max+ 395) LLM Benchmark Results LocalLLaMA LocalLLaMA"  
[Reddit Link](https://redd.it/1m6b151)  2025-07-22T11:02Z [--] followers, [---] engagements


"Llama [--] Japanese Evals LocalLLaMA LocalLLaMA"  
[Reddit Link](https://redd.it/1jw2aph)  2025-04-10T16:37Z [--] followers, [---] engagements


"Current state of training on AMD Radeon [----] XTX (with benchmarks) LocalLLaMA LocalLLaMA"  
[Reddit Link](https://redd.it/1atvxu2)  2024-02-18T15:00Z [--] followers, [----] engagements


"Shisa V2 - a family of new JA/EN bilingual models LocalLLaMA LocalLLaMA"  
[Reddit Link](https://redd.it/1jz2lll)  2025-04-14T16:10Z [--] followers, [---] engagements


"Shisa V2 405B: The strongest model ever built in Japan (JA/EN) LocalLLaMA LocalLLaMA"  
[Reddit Link](https://redd.it/1l318di)  2025-06-04T09:34Z [--] followers, [----] engagements


"Testing Quant Quality for Shisa V2 405B LocalLLaMA LocalLLaMA"  
[Reddit Link](https://redd.it/1l5sw3m)  2025-06-07T19:23Z [--] followers, [---] engagements


"Shisa V2.1: Improved Japanese (JA/EN) Models (1.2B-70B) LocalLLaMA LocalLLaMA"  
[Reddit Link](https://redd.it/1pk3cky)  2025-12-11T17:26Z [--] followers, [---] engagements


"Shisa 7B: a new JA/EN bilingual model based on Mistral 7B I've worked w/ Jon Durbin (Airoboros etc) over the past [--] weeks or so to train **Shisa 7B**(https://huggingface.co/augmxnt/shisa-7b-v1) a new fully open source bilingual Japanese and English model. We took Mistral 7B and pre-trained with an additional 8B JA tokens with a new custom extended tokenizer that is 2X more efficient in Japanese than the original Mistral tokenizer. The new base model shisa-base-7b-v1(https://huggingface.co/augmxnt/shisa-base-7b-v1) is also available for anyone to build on. Highlights: * By open source we mean"  
[Reddit Link](https://redd.it/18cwh4n)  2023-12-07T14:14Z [----] followers, [----] engagements
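The "2X more efficient" tokenizer claim is about how many tokens a given amount of Japanese text consumes. A minimal sketch of how one might check it, assuming both tokenizers load through Hugging Face `transformers` (the model IDs come from the post; the sample sentence is arbitrary):

```python
# Sketch: compare tokens-per-character on Japanese text between the original
# Mistral tokenizer and the extended Shisa tokenizer. Fewer tokens per
# character means a more efficient encoding (and a longer effective context).
from transformers import AutoTokenizer

sample_ja = "吾輩は猫である。名前はまだ無い。どこで生れたかとんと見当がつかぬ。"

for model_id in ["mistralai/Mistral-7B-v0.1", "augmxnt/shisa-base-7b-v1"]:
    tok = AutoTokenizer.from_pretrained(model_id)
    n_tokens = len(tok(sample_ja, add_special_tokens=False)["input_ids"])
    print(f"{model_id}: {n_tokens} tokens "
          f"({n_tokens / len(sample_ja):.2f} tokens/char)")
```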


"Hardware Canucks: Intel vs AMD Laptops in [----] - What a Mess. AMDLaptops AMDLaptops"  
[Reddit Link](https://redd.it/1cil2jd)  2024-05-02T17:11Z [----] followers, [---] engagements


"Qwen2-7B-Instruct-deccp (Abliterated) So figure this might be of interest to some people. Over the weekend I created did some analysis and exploration on what Qwen [--] 7B Instruct's trying to characterize the breadth/depth of the RL model's Chinese censorship. tldr: it's a lot * augmxnt/Qwen2-7B-Instruct-deccp(https://huggingface.co/augmxnt/Qwen2-7B-Instruct-deccp) - here's an abliterated model if anyone wants to play around with it. It doesn't get rid of all refusals and sometimes the non-refusals are worse but you know there you go * TransformerLens doesn't support Qwen2 yet so I based my"  
[Reddit Link](https://redd.it/1dbrhpv)  2024-06-09T11:22Z [----] followers, [---] engagements
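Since the abliterated model is public on Hugging Face, trying it is a standard `transformers` generation loop. A hedged sketch, assuming the usual Qwen2 chat template (the prompt is only an illustrative censorship-sensitive question, not one from the original analysis set):

```python
# Sketch: generate from the abliterated model and inspect whether it refuses.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "augmxnt/Qwen2-7B-Instruct-deccp"  # model ID from the post
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What happened in Beijing in June 1989?"}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the echoed prompt.
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```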


"Answer.AI - What policy makers need to know about AI (and what goes wrong if they dont) LocalLLaMA LocalLLaMA"  
[Reddit Link](https://redd.it/1divh5o)  2024-06-18T16:53Z [----] followers, [--] engagements


"voicechat2 - An open source fast fully local AI voicechat using WebSockets Earlier this week I released a new WebSocket version of a AI voice-to-voice chat server for the Hackster/AMD Pervasive AI Developer Contest(https://www.hackster.io/contests/amd2023/). The project is open sourced under an Apache [---] license and I figure there are probably some people here that might enjoy it: https://github.com/lhl/voicechat2(https://github.com/lhl/voicechat2) Besides being fully open source fully local (whisper.cpp llama.cpp Coqui TTS or StyleTTS2) and using WebSockets instead of being local"  
[Reddit Link](https://redd.it/1eju211)  2024-08-04T12:26Z [----] followers, [---] engagements
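voicechat2's distinguishing choice is the transport: streaming audio over WebSockets rather than HTTP round-trips. The actual message framing lives in the repo; below is only a generic client-side sketch with a placeholder endpoint and payload, using the `websockets` package:

```python
# Sketch: minimal async WebSocket client in the spirit of a local
# voice-to-voice server. The endpoint path and framing are placeholders;
# see https://github.com/lhl/voicechat2 for the real protocol.
import asyncio
import websockets

async def main():
    uri = "ws://localhost:8000/ws"  # placeholder endpoint
    async with websockets.connect(uri) as ws:
        # Send 100 ms of silent 16 kHz 16-bit mono PCM (3200 bytes) upstream,
        # then wait for whatever the server streams back (e.g. TTS audio).
        await ws.send(b"\x00" * 3200)
        reply = await ws.recv()
        print(f"received {len(reply)} bytes")

asyncio.run(main())
```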


"September [----] Update: AMD GPU (mostly RDNA3) AI/LLM Notes Over the weekend I went through my various notes and did a thorough update of my AMD GPU resource doc here: https://llm-tracker.info/howto/AMD-GPUs(https://llm-tracker.info/howto/AMD-GPUs) Over the past few years I've ended up with a fair amount of AMD gear including a W7900 and [----] XTX (RDNA3 gfx1100) which have official (although still somewhat second class) ROCm support and I wanted to check for myself how things were. Anyway sharing an update in case other people find it useful. A quick list of highlights: * I run these cards on"  
[Reddit Link](https://redd.it/1fssvbm)  2024-09-30T11:06Z 10K followers, [---] engagements
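For anyone following notes like these, the quickest sanity check that a ROCm PyTorch build actually sees an RDNA3 card is sketched below; on ROCm builds the `torch.cuda.*` namespace is backed by HIP, so the calls work unchanged (a generic check, not something specific to the post):

```python
# Sketch: verify a ROCm PyTorch build can see the GPU.
import torch

print("HIP runtime:", torch.version.hip)        # None on CUDA-only builds
print("device visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    # e.g. "AMD Radeon PRO W7900" or "AMD Radeon RX 7900 XTX"
    print("device name:", torch.cuda.get_device_name(0))
```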


"September [----] Update: AMD GPU (mostly RDNA3) AI/LLM Notes ROCm ROCm"  
[Reddit Link](https://redd.it/1fssvj8)  2024-09-30T11:07Z [----] followers, [--] engagements


"Testing llama.cpp with Intel's Xe2 iGPU (Core Ultra [--] 258V w/ Arc Graphics 140V) I have a Lunar Lake laptop (see my in-progress Linux review(https://github.com/lhl/linuxlaptops/wiki/2024-MSI-Prestige-13-AI--Evo-A2VM)) and recently sat down and did some testing on how llama.cpp works with it. * Chips and Cheese has the most in-depth analysis of the iGPU(https://chipsandcheese.com/p/lunar-lakes-igpu-debut-of-intels) which includes architectural and real world comparisons w/ the prior-gen Xe-LPG as well as RDNA [---] (in the AMD Ryzen AI [--] HX [---] w/ Radeon 890M). * The 258V has 32GB of LPDDR5-8533"  
[Reddit Link](https://redd.it/1gheslj)  2024-11-01T20:16Z [--] followers, [---] engagements
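For numbers of the kind reported in these benchmark posts, llama.cpp's own `llama-bench` tool is the rigorous option; a quick-and-dirty alternative from Python is timing a generation through `llama-cpp-python`. A sketch, with the GGUF path as a placeholder:

```python
# Sketch: crude tokens/sec measurement via llama-cpp-python.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="model.Q4_K_M.gguf",  # placeholder GGUF file
    n_gpu_layers=-1,                 # offload all layers to the GPU/iGPU
    verbose=False,
)

start = time.perf_counter()
out = llm("Tell me about llamas.", max_tokens=128)
elapsed = time.perf_counter() - start

n_gen = out["usage"]["completion_tokens"]
print(f"{n_gen} tokens in {elapsed:.1f}s -> {n_gen / elapsed:.1f} tok/s")
```

Note this folds prompt processing into the timing, which is one reason `llama-bench`'s separate prompt-processing and token-generation numbers are preferred for published comparisons.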


"AMD Strix Halo (Ryzen AI Max+ 395) GPU LLM Performance ROCm ROCm"  
[Reddit Link](https://redd.it/1kn2sa0)  2025-05-15T08:04Z 10.2K followers, [--] engagements


"Shisa V2 405B: The strongest model ever built in Japan Hey so we've released the latest member of our Shisa V2(https://www.reddit.com/r/LocalLLaMA/comments/1jz2lll/shisa_v2_a_family_of_new_jaen_bilingual_models/) family of open bilingual (JA/EN) models: Shisa V2 405B(https://shisa.ai/posts/shisa-v2-405b/) * Llama [---] 405B Fine Tune inherits the Llama [---] license * Not just our JA mix but also KO + ZH-TW in additional to 405B's native multilingual * Beats GPT-4 & Turbo in JA/EN matches latest GPT-4o and DeepSeek-V3 in JA MT-Bench (it's not a reasoning or code model but yes ) * Based on our"  
[Reddit Link](https://redd.it/1l2zzpj)  2025-06-04T08:06Z 10.3K followers, [--] engagements


"AMD Radeon [----] XT/XTX Inference Performance Comparisons LocalLLaMA LocalLLaMA"  
[Reddit Link](https://redd.it/191srof)  2024-11-07T15:56Z [--] followers, [---] engagements


"Improving Poor vLLM Benchmarks (w/o reproducibility grr) ROCm ROCm"  
[Reddit Link](https://redd.it/1gi2d3c)  2024-11-02T18:05Z [--] followers, [--] engagements


"Revisting llama.cpp speculative decoding w/ Qwen2.5-Coder 32B (AMD vs Nvidia results) LocalLLaMA LocalLLaMA"  
[Reddit Link](https://redd.it/1hqlug2)  2024-12-31T19:24Z [--] followers, [---] engagements


"218 GB/s real-world MBW on AMD Al Max+ [---] (Strix Halo) - The Phawx Review LocalLLaMA LocalLLaMA"  
[Reddit Link](https://redd.it/1isefit)  2025-02-18T14:53Z [----] followers, [---] engagements


"Faster llama.cpp ROCm performance for AMD RDNA3 (tested on Strix Halo/Ryzen AI Max 395) LocalLLaMA LocalLLaMA"  
[Reddit Link](https://redd.it/1ok7hd4)  2025-10-30T19:07Z [--] followers, [---] engagements


"torchtune vs axolotl vs unsloth Trainer Performance Comparison LocalLLaMA LocalLLaMA"  
[Reddit Link](https://redd.it/1di0fhv)  2024-06-17T15:10Z [--] followers, [---] engagements


"1 year of Keto/IF by the numbers keto keto"  
[Reddit Link](https://redd.it/czmyz2)  2025-11-11T12:26Z [--] followers, [---] engagements

Limited data mode. Full metrics available with subscription: lunarcrush.com/pricing
