# ![@limegreenpeper1 Avatar](https://lunarcrush.com/gi/w:26/cr:twitter::1203210782663512065.png) @limegreenpeper1 limegreenpeper753

limegreenpeper753 posts on X most often about $2413t, ultra, llamacpp, and build. They currently have [-------] followers and [--] posts still receiving attention, totaling [------] engagements over the last [--] hours.

### Engagements: [------] [#](/creator/twitter::1203210782663512065/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1203210782663512065/c:line/m:interactions.svg)

- [--] Month [-----] -77%
- [--] Months [------] +10,422%

### Mentions: [--] [#](/creator/twitter::1203210782663512065/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1203210782663512065/c:line/m:posts_active.svg)


### Followers: [-------] [#](/creator/twitter::1203210782663512065/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1203210782663512065/c:line/m:followers.svg)

- [--] Month [---] +2.60%
- [--] Months [---] +1,363%

### CreatorRank: [---------] [#](/creator/twitter::1203210782663512065/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1203210782663512065/c:line/m:influencer_rank.svg)

### Social Influence

**Social category influence**
[technology brands](/list/technology-brands)  [stocks](/list/stocks) 

**Social topic influence**
[$2413t](/topic/$2413t) #674, [ultra](/topic/ultra), [llamacpp](/topic/llamacpp), [build](/topic/build), [for all](/topic/for-all), [kimi](/topic/kimi), [max](/topic/max), [fit](/topic/fit), [inference](/topic/inference), [$023h](/topic/$023h)

**Top accounts mentioned or mentioned by**
[@alfredplpl](/creator/undefined) [@tori29umai](/creator/undefined) [@gillelandkristi](/creator/undefined) [@hawkymisc](/creator/undefined) [@vmlxllm](/creator/undefined)

**Top assets mentioned**
[Lineage, Inc. (LINE)](/topic/$line)

### Top Social Posts
Top posts by engagements in the last [--] hours

"@alfredplpl Dataset 3000step -p 'This image is saying "". The background is white. The letter is black.' -np '' -s [--] --cfg [---] Train Dataset https://huggingface.co/datasets/alfredplpl/image-text-pairs-ja-cc0 https://github.com/FlyMyAI/flymyai-lora-trainer"  
[X Link](https://x.com/limegreenpeper1/status/1956818475143643428)  2025-08-16T20:40Z [--] followers, [---] engagements


"@alfredplpl lora ""  -p 'This image is saying "". The background is white. The letter is black.' -np '' -s [--] --cfg [---] --seed 8484741726198430554"  
[X Link](https://x.com/limegreenpeper1/status/1957031043749568816)  2025-08-17T10:45Z [--] followers, [--] engagements


"https://huggingface.co/microsoft/VibeVoice-1.5B"  
[X Link](https://x.com/limegreenpeper1/status/1960247215588295055)  2025-08-26T07:45Z [--] followers, [--] engagements


"Qwen-image-edit-2509 3090(280W) : nunchaku-lightning-8steps : [--] sec (no offload 20GB VRAM) M3 ultra : bf16 with lora lightning 8steps : [---] sec"  
[X Link](https://x.com/limegreenpeper1/status/1971526225291268394)  2025-09-26T10:44Z [--] followers, [---] engagements


"hunyuan_video_1.5 on ComfyUI No easy cache Generation time 121F 20Steps. (vast) [-----] with sageattention : 220s : 3m40s $0.23/h -4090(cu118 no xformers) : 430s : 7m10s $0.17/h [-----] limit 200W : 16m41s (local)"  
[X Link](https://x.com/limegreenpeper1/status/1992081287839126000)  2025-11-22T04:02Z [---] followers, [---] engagements


"Thanks for unsloth team and Original Qwen team. unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF UD-Q6_K_XL on M3 Ultra https://huggingface.co/unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF"  
[X Link](https://x.com/limegreenpeper1/status/1994973188430991449)  2025-11-30T03:34Z [--] followers, [---] engagements


"Stable-Diffusion.cpp build for winarm64 OpenCL Generate time SD1.5-461s FLUX.1dev-1734s Both cpu generation. something wrong https://github.com/leejet/stable-diffusion.cpp"  
[X Link](https://x.com/limegreenpeper1/status/1995047980223832099)  2025-11-30T08:31Z [--] followers, [--] engagements


"Stable-Diffusion.cpp build / llama.cpp build"  
[X Link](https://x.com/limegreenpeper1/status/1995048261917589754)  2025-11-30T08:32Z [--] followers, [--] engagements


": win arm64 GPU llama.cpp llama.cpp on winarm64 using GPU gemma-3-12B-it-QAT-Q4_0.gguf"  
[X Link](https://x.com/limegreenpeper1/status/1995054896777953516)  2025-11-30T08:58Z [--] followers, [--] engagements


"PrecompileWheels for torch 2.9.1+cu130+cp310+5090 on hf Template: vastai/base-image:cuda-13.0.1-cudnn-devel-ubuntu24.04-py310 -sageattention-2.2.0 -sageattn3-1.0.0 -flash_attn-2.8.3"  
[X Link](https://x.com/limegreenpeper1/status/1999770744595857840)  2025-12-13T09:17Z [---] followers, [---] engagements


"lambda aarch64+h100 compile original flash_attn-2.7 venv torch2.9.1-cu128-cp310 -flash_attn_3-3.0.0b1-cp39-abi3-linux_aarch64.whl -sageattention-2.2.0-cp310-cp310-linux_aarch64.whl"  
[X Link](https://x.com/limegreenpeper1/status/2000007165260747088)  2025-12-14T00:57Z [--] followers, [--] engagements


"Qwen-Image-Layered using MPS on M3 Ultra #1 original #2-4 layers looks available from RAM 128GB https://huggingface.co/Qwen/Qwen-Image-Layered"  
[X Link](https://x.com/limegreenpeper1/status/2002177343574716671)  2025-12-20T00:40Z [---] followers, [----] engagements


"Qwen-Image-Layered on hf space(Thanks for original team) I think best solution is hf space with ZeroGPU (if PRO Account) https://huggingface.co/spaces/Qwen/Qwen-Image-Layered"  
[X Link](https://x.com/limegreenpeper1/status/2002211636074135822)  2025-12-20T02:57Z [---] followers, [---] engagements


"Running: TurboDiffusion on DGX Spark CP3.12 + build TurboDiffusion and dependencies (SpargeAttn flash-attention etc) Thanks for all team. https://github.com/thu-ml/TurboDiffusion"  
[X Link](https://x.com/limegreenpeper1/status/2005470695019745461)  2025-12-29T02:47Z [--] followers, [---] engagements


"Running HY-Motion-1.0 on DGX Spark (couldn't export) patched: requests.txt: PyYAML==6.0 - PyYAML fbxsdkpy==2020.1.pos - #fbxsdkpy==2020.1.pos gradio_app.py: disable_prompt_engineering = os.environ.get("DISABLE_PROMPT_ENGINEERING" False) - True https://github.com/Tencent-Hunyuan/HY-Motion-1.0/"  
[X Link](https://x.com/limegreenpeper1/status/2006185416769024249)  2025-12-31T02:07Z [---] followers, [----] engagements
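
The gradio_app.py patch described above swaps a hard-coded `False` for an environment lookup. A minimal sketch of that gate, assuming the usual truthy-string convention (the variable name comes from the post; the helper name and string parsing are illustrative):

```python
import os

# Read the DISABLE_PROMPT_ENGINEERING flag from the environment instead of
# hard-coding it in gradio_app.py. The variable name is from the post;
# treating "1"/"true"/"yes" as truthy is an assumption about the convention.
def prompt_engineering_disabled() -> bool:
    value = os.environ.get("DISABLE_PROMPT_ENGINEERING", "")
    return value.strip().lower() in ("1", "true", "yes")
```

With a gate like this in place, the launch line from the follow-up post (`DISABLE_PROMPT_ENGINEERING="True" … uv run gradio_app.py`) switches the rewrite step off without editing the code.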


"gradio_app.py DISABLE_PROMPT_ENGINEERING="True" GRADIO_SERVER_NAME="0.0.0.0" uv run gradio_app.py"  
[X Link](https://x.com/limegreenpeper1/status/2006198911938019658)  2025-12-31T03:01Z [--] followers, [--] engagements


"HY-Motion-1.0 with Rewrite on DGX Spark patched: hymotion/prompt_engineering/prompt_rewrite.py"  
[X Link](https://x.com/limegreenpeper1/status/2006232833711563199)  2025-12-31T05:15Z [--] followers, [---] engagements


"HY-Motion-1.0 with Rewrite on DGX Spark modified non quantization patched: comment out. line [---] # load_in_4bit=True hymotion/prompt_engineering/prompt_rewrite.py"  
[X Link](https://x.com/limegreenpeper1/status/2006241517816668221)  2025-12-31T05:50Z [--] followers, [---] engagements


"LTX-2-T2V_Full_wLora on DGX Spark (960x544 241f) conda + torchcodec build + LTX-2 + ComfyUI + custom_nodes Thanks for NVIDIA forum https://forums.developer.nvidia.com/t/cant-install-torch-torchaudio-torchcodec/348660/15"  
[X Link](https://x.com/limegreenpeper1/status/2008487840120975424)  2026-01-06T10:36Z [---] followers, [----] engagements


"LTX-2-T2V_FP8_wLora 1920x1088 241f on DGX Spark (no sound noise) FP8"  
[X Link](https://x.com/limegreenpeper1/status/2008505128186507276)  2026-01-06T11:45Z [---] followers, [----] engagements


"Running: ComfyUI-LTX2-Kijiai-distilled-NoAudio on MPS(M3Ultra) -distilled-GGUF(PR patch or Fork) is Fast -Need to disable Audio flows Thanks to https://github.com/city96/ComfyUI-GGUF/pull/399"  
[X Link](https://x.com/limegreenpeper1/status/2009787228374094127)  2026-01-10T00:39Z [---] followers, [----] engagements


"@tori29umai &Tips ComfyUI-SyntaxNodes with ml-sharp on DGX Spark -SHARP 3D Gaussian Splatdefaultply (Syntax-nodes audionode & ml-sharp torch gsplat pip install )"  
[X Link](https://x.com/limegreenpeper1/status/2009846780675141711)  2026-01-10T04:36Z [---] followers, [---] engagements


"ComfyUI-LTX2-Kijiai-dev-LoRA-withAudio on MPS(M3Ultra) Latest(looks completely running both images and audio) -Disable Enhancer: due to 'LTXAVTEModel_' object has no attribute 'processor' -Set VAELoader KJ weight_dtype: fp16 Thanks for all"  
[X Link](https://x.com/limegreenpeper1/status/2009907394923614540)  2026-01-10T08:37Z [---] followers, [----] engagements


"Example: LTX-2 using Kijai's GGUF(Fork node) and community workflow with Audio on DGX Spark (The video includes audio.) She says "happy new year" in Japanese"  
[X Link](https://x.com/limegreenpeper1/status/2009999062335103097)  2026-01-10T14:41Z [---] followers, [----] engagements


": seed hf fal/Qwen-Image-Edit-2511-Multiple-Angles-LoRA All [--] Prompts queue $ cat list1 while read line; do echo $line; uv run "./image_qwen_image_edit_2511_Mult_angle_api.json" "$line"; sleep 1; done http://run-api.py"  
[X Link](https://x.com/limegreenpeper1/status/2010221177642594530)  2026-01-11T05:24Z [---] followers, [---] engagements
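
The queue loop quoted above can be sketched as a self-contained script; `process_prompt` stands in for the post's `uv run … run-api.py` call against the workflow JSON, and `prompts.txt` stands in for its `list1` (both names are illustrative):

```shell
#!/bin/sh
# A stand-in prompt list (the post reads from a file named list1).
printf 'prompt one\nprompt two\n' > prompts.txt

# Stand-in for the post's `uv run` API call; here it just echoes
# what would be submitted.
process_prompt() {
  printf 'submitting: %s\n' "$1"
}

# Feed each prompt line to the worker, one per iteration.
# The post also runs `sleep 1` here to pause between submissions.
while IFS= read -r line; do
  process_prompt "$line"
done < prompts.txt
```

The `IFS= read -r` form preserves whitespace and backslashes in each prompt, which the bare `read line` in the post would not.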


"GLM-Images I2VT2V on HF Pro spaces (Thanks) ComfyUI https://huggingface.co/spaces/multimodalart/GLM-Image"  
[X Link](https://x.com/limegreenpeper1/status/2011462742356738120)  2026-01-14T15:37Z [---] followers, [---] engagements


"flux-2-klein-base-9b(not fp8) T2I #1 M3 Ultra: 7.06s/it #2 DGX Spark: 2.47s/it https://docs.comfy.org/tutorials/flux/flux-2-klein"  
[X Link](https://x.com/limegreenpeper1/status/2012111196963635699)  2026-01-16T10:34Z [---] followers, [---] engagements


"@GillelandKristi I simply used the latest ComfyUI and the workflow that was presented"  
[X Link](https://x.com/limegreenpeper1/status/2012177013021245911)  2026-01-16T14:55Z [---] followers, [--] engagements


"black-forest-labs/FLUX.2-klein-9B Running I2I on Japanese prompt both DGX Spark(1.86s/it) and M3Ultra(7.51s/it)non FP8"  
[X Link](https://x.com/limegreenpeper1/status/2012324444900086084)  2026-01-17T00:41Z [---] followers, [---] engagements


"I recommend referring to the official forum for DGX Spark information. You can see various challenges there. 8x DGX Spark https://forums.developer.nvidia.com/t/6x-spark-setup/354399/34"  
[X Link](https://x.com/limegreenpeper1/status/2012408578670547255)  2026-01-17T06:16Z [---] followers, [---] engagements


"Running: HeartMuLa-ComfyUI on both DGX Spark and M3 Ultra with some patches. This document describes the method using Claude Code. Thanks to original team and benjiyaya. https://github.com/benjiyaya/HeartMuLa_ComfyUI"  
[X Link](https://x.com/limegreenpeper1/status/2013808149212385282)  2026-01-21T02:57Z [---] followers, [---] engagements


"Achieve the same functionality with the GLM-4.7 CodingPlan"  
[X Link](https://x.com/limegreenpeper1/status/2013824330854178965)  2026-01-21T04:01Z [---] followers, [--] engagements


"hf unsloth/Kimi-K2.5-GGUF UD-IQ3_XXS on M3 Ultra. (21.62tok/sec) Thanks to the Original KIMI Team and the unsloth Team"  
[X Link](https://x.com/anyuser/status/2016793878930870518)  2026-01-29T08:41Z [---] followers, [---] engagements


"unsloth-DeepSeek-V3.2-UD-Q4_K_XL on M3 Ultra CTX [------] (Max) [-----] tok / sec"  
[X Link](https://x.com/anyuser/status/2017237920344903922)  2026-01-30T14:06Z [---] followers, [---] engagements


"ACE-Step-v1-3.5B (not v1.5) with ComfyUI on both Mac(cpu) and DGX Spark. (For Mac apply the patch that uses soundfile, not torchcodec) https://github.com/hiroki-abe-58/ComfyUI-AceMusic"  
[X Link](https://x.com/anyuser/status/2019719251234582955)  2026-02-06T10:26Z [---] followers, [---] engagements


"Tips :  glm-4.7 [--]  claude-ops-4-6 glm-4.7 glm-4.7"  
[X Link](https://x.com/anyuser/status/2021408085596832246)  2026-02-11T02:16Z [---] followers, [--] engagements


"MioTTS-Inference MacStudio M3 Ultra *() MioTTS-0.1B-BF26.gguf Best of N *Setup flash-attn build https://github.com/Aratako/MioTTS-Inference"  
[X Link](https://x.com/anyuser/status/2021592388511170802)  2026-02-11T14:29Z [---] followers, [---] engagements


"GLM-5 using claude-code on M3 Ultra 440GB Model: unsloth/GLM-5-GGUF/UD-Q4_K_XL Guide: llama.cpp --jinja --temp [---] --top-p [----] --ctx-size [-----] --fit on https://unsloth.ai/docs/models/glm-5"  
[X Link](https://x.com/anyuser/status/2021979061606879374)  2026-02-12T16:05Z [---] followers, [---] engagements


"mlx-community/Step-3.5-Flash-8bit [----] tok/s with vllm-mlx (Not yet supported in LM Studio) on M3 Ultra(256GB)"  
[X Link](https://x.com/anyuser/status/2022466811648966724)  2026-02-14T00:23Z [---] followers, [---] engagements


"@hawkymisc () "glm-4.7"opus ()"  
[X Link](https://x.com/limegreenpeper1/status/2022513675878437185)  2026-02-14T03:30Z [---] followers, [--] engagements


"DGX Spark new kernel [----] [----] --no-mmap https://forums.developer.nvidia.com/t/apparently-mmap-is-still-slow-on-dgx-spark-on-linux-6-17/360538"  
[X Link](https://x.com/anyuser/status/2022610765405131054)  2026-02-14T09:55Z [---] followers, [---] engagements


"mlx-community/MiniMax-M2.5-8bit CTX 192K(Max) 36.5tok/sec on M3Ultra(256GB)"  
[X Link](https://x.com/limegreenpeper1/status/2022458986667364647)  2026-02-13T23:52Z [---] followers, [---] engagements


"unsloth/Qwen3.5-397B-A17B-GGUF ()"  
[X Link](https://x.com/anyuser/status/2023330825434259653)  2026-02-16T09:37Z [---] followers, [---] engagements


"I keep watching (14 of 46) Tested on a M3 Ultra 512GB RAM using Inferencer app v1.10 Single inference [----] tokens/s @ [----] tokens Memory usage: [---] GiB https://huggingface.co/inferencerlabs/GLM-5-MLX-4.8bit"  
[X Link](https://x.com/limegreenpeper1/status/2021933612963443057)  2026-02-12T13:05Z [---] followers, [---] engagements


"unsloth/MiniMax-M2.5-GGUF UD-Q6_K_XL(CTX Max192K) in LM Studio on M3 Ultra (256GB) [-----] tok / s"  
[X Link](https://x.com/anyuser/status/2022971743716155770)  2026-02-15T09:50Z [---] followers, [---] engagements


"Qwen3-coder-next llama.cpp [-----] tok /s (126%) vs [-----] tok /s : UD-Q6_K_XL CTX_SIZE [------] *DGX Spark cmake & build $ cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES="121" $ cmake --build build --config Release"  
[X Link](https://x.com/anyuser/status/2022989652828066210)  2026-02-15T11:01Z [---] followers, [---] engagements
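
The `cmake` lines in the post above are a CUDA build of llama.cpp; written out as a script, assuming a fresh clone of the upstream repository and an added parallel-job flag (the CUDA architecture value "121" is taken from the post):

```shell
#!/bin/sh
set -e
# Fetch llama.cpp (upstream repository; the post assumes an existing checkout).
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
# Configure with the CUDA backend enabled, targeting the DGX Spark GPU
# (compute architecture 121, as given in the post).
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES="121"
# Compile in Release mode; the -j parallelism flag is an addition.
cmake --build build --config Release -j "$(nproc)"
```

Pinning `CMAKE_CUDA_ARCHITECTURES` to the one target keeps the build from compiling kernels for every supported GPU generation.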


"M3 Ultra llama.cpp [-----] tok/s (129%) vs [-----] tok/s : UD-Q6_K_XL CTX_SIZE 131072"  
[X Link](https://x.com/anyuser/status/2023016495593423111)  2026-02-15T12:48Z [---] followers, [--] engagements


"unsloth/Qwen3.5-397B-A17B-GGUF UD-Q4_K_XL on M3 Ultra(256GB) ctx 256K(Max) [-----] tok/s"  
[X Link](https://x.com/limegreenpeper1/status/2023348155694342236)  2026-02-16T10:46Z [---] followers, [--] engagements


"unsloth/Qwen3.5-397B-A17B-GGUF UD-Q6_K_XL on M3 Ultra(360GB) ctx 256K(Max) [-----] tok/s"  
[X Link](https://x.com/anyuser/status/2023379117673177405)  2026-02-16T12:49Z [---] followers, [--] engagements


"The comment is correct. Response limit: 2k tokens"  
[X Link](https://x.com/anyuser/status/2022224252184015236)  2026-02-13T08:20Z [---] followers, [--] engagements


"() 0.1B CPU https://huggingface.co/spaces/Aratako/MioTTS-0.1B-Demo"  
[X Link](https://x.com/anyuser/status/2021572993374179726)  2026-02-11T13:12Z [---] followers, [---] engagements


"Officially launching X API Pay-Per-Use. The core of X developers are indie builders, early stage products, startups and hobbyists. It's time to open up our X API ecosystem and instill a new wave of next generation X apps. We're so back. http://developer.x.com"  
[X Link](https://x.com/anyuser/status/2019881223666233717)  2026-02-06T21:09Z 675K followers, 2.1M engagements


"Running on Mac(MPS) Just add --gpu-only flag --gpu-only http://main.py"  
[X Link](https://x.com/anyuser/status/2019752703774056900)  2026-02-06T12:39Z [---] followers, [--] engagements


Limited data mode. Full metrics available with subscription: lunarcrush.com/pricing

@limegreenpeper1 Avatar @limegreenpeper1 limegreenpeper753

limegreenpeper753 posts on X about $2413t, ultra, llamacpp, build the most. They currently have [-------] followers and [--] posts still getting attention that total [------] engagements in the last [--] hours.

Engagements: [------] #

Engagements Line Chart

  • [--] Month [-----] -77%
  • [--] Months [------] +10,422%

Mentions: [--] #

Mentions Line Chart

Followers: [-------] #

Followers Line Chart

  • [--] Month [---] +2.60%
  • [--] Months [---] +1,363%

CreatorRank: [---------] #

CreatorRank Line Chart

Social Influence

Social category influence technology brands stocks

Social topic influence $2413t #674, ultra, llamacpp, build, for all, kimi, max, fit, inference, $023h

Top accounts mentioned or mentioned by @alfredplpl @tori29umai @gillelandkristi @hawkymisc @vmlxllm

Top assets mentioned Lineage, Inc. (LINE)

Top Social Posts

Top posts by engagements in the last [--] hours

"@alfredplpl Dataset 3000step -p 'This image is saying "". The background is white. The letter is black.' -np '' -s [--] --cfg [---] Train Dataset https://huggingface.co/datasets/alfredplpl/image-text-pairs-ja-cc0 https://github.com/FlyMyAI/flymyai-lora-trainer https://huggingface.co/datasets/alfredplpl/image-text-pairs-ja-cc0 https://github.com/FlyMyAI/flymyai-lora-trainer"
X Link 2025-08-16T20:40Z [--] followers, [---] engagements

"@alfredplpl lora "" -p 'This image is saying "". The background is white. The letter is black.' -np '' -s [--] --cfg [---] --seed 8484741726198430554"
X Link 2025-08-17T10:45Z [--] followers, [--] engagements

"https://huggingface.co/microsoft/VibeVoice-1.5B https://huggingface.co/microsoft/VibeVoice-1.5B"
X Link 2025-08-26T07:45Z [--] followers, [--] engagements

"Qwen-image-edit-2509 3090(280W) : nunchaku-lightning-8steps : [--] sec (no offroad 20GB VRAM) M3 ultra : bf16 with lora lightning 8steps : [---] sec"
X Link 2025-09-26T10:44Z [--] followers, [---] engagements

"hunyuan_video_1.5 on ComfyUI No easy cache Generation time 121F 20Steps. (vast) [-----] with sageattention : 220s : 3m40s $0.23/h -4090(cu118 no xformers) : 430s : 7m10s $0.17/h [-----] limit 200W : 16m41s (local)"
X Link 2025-11-22T04:02Z [---] followers, [---] engagements

"Thanks for unsloth team and Original Qwen team. unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF UD-Q6_K_XL on M3 Ultra https://huggingface.co/unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF https://huggingface.co/unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF"
X Link 2025-11-30T03:34Z [--] followers, [---] engagements

"Stable-Diffusion.cpp build for winarm64 OpenCL Generate time SD1.5-461s FLUX.1dev-1734s Both cpu generation. something wrong https://github.com/leejet/stable-diffusion.cpp https://github.com/leejet/stable-diffusion.cpp"
X Link 2025-11-30T08:31Z [--] followers, [--] engagements

"Stable-Diffusion.cpp buildllama.cpp build"
X Link 2025-11-30T08:32Z [--] followers, [--] engagements

": win arm64 GPU llama.cpp llama.cpp on winarm64 using GPU gemma-3-12B-it-QAT-Q4_0.gguf"
X Link 2025-11-30T08:58Z [--] followers, [--] engagements

"PrecompileWheels for torch 2.9.1+cu130+cp310+5090 on hf Template: vastai/base-image:cuda-13.0.1-cudnn-devel-ubuntu24.04-py310 -sageattention-2.2.0 -sageattn3-1.0.0 -flash_attn-2.8.3"
X Link 2025-12-13T09:17Z [---] followers, [---] engagements

"lambda aarch64+h100 compile original flash_attn-2.7 venv torch2.9.1-cu128-cp310 -flash_attn_3-3.0.0b1-cp39-abi3-linux_aarch64.whl -sageattention-2.2.0-cp310-cp310-linux_aarch64.whl"
X Link 2025-12-14T00:57Z [--] followers, [--] engagements

"Qwen-Image-Layered using MPS on M3 Ultra #1 original #2-4 layers looks available from RAM 128GB https://huggingface.co/Qwen/Qwen-Image-Layered https://huggingface.co/Qwen/Qwen-Image-Layered"
X Link 2025-12-20T00:40Z [---] followers, [----] engagements

"Qwen-Image-Layered on hf space(Thanks for original team) I think best solution is hf space with ZeroGPU (if PRO Account) https://huggingface.co/spaces/Qwen/Qwen-Image-Layered https://huggingface.co/spaces/Qwen/Qwen-Image-Layered"
X Link 2025-12-20T02:57Z [---] followers, [---] engagements

"Running: TurboDiffusion on DGX Spark CP3.12 + build TurboDiffusion and dependencies (SpargeAttn flash-attention etc) Thanks for all team. https://github.com/thu-ml/TurboDiffusion https://github.com/thu-ml/TurboDiffusion"
X Link 2025-12-29T02:47Z [--] followers, [---] engagements

"Running HY-Motion-1.0 on DGX Spark (couldn't export) patched: requests.txt: PyYAML==6.0 - PyYAML fbxsdkpy==2020.1.pos - #fbxsdkpy==2020.1.pos gradio_app.py: disable_prompt_engineering = os.environ.get("DISABLE_PROMPT_ENGINEERING" False) - True https://github.com/Tencent-Hunyuan/HY-Motion-1.0/ https://github.com/Tencent-Hunyuan/HY-Motion-1.0/"
X Link 2025-12-31T02:07Z [---] followers, [----] engagements

"gradio_app.py DISABLE_PROMPT_ENGINEERING="True" GRADIO_SERVER_NAME="0.0.0.0" uv run gradio_app.py"
X Link 2025-12-31T03:01Z [--] followers, [--] engagements

"HY-Motion-1.0 with Rewrite on DGX Spark patched: hymotion/prompt_engineering/prompt_rewrite.py"
X Link 2025-12-31T05:15Z [--] followers, [---] engagements

"HY-Motion-1.0 with Rewrite on DGX Spark modified non quantization patched: comment out. line [---] # load_in_4bit=True hymotion/prompt_engineering/prompt_rewrite.py"
X Link 2025-12-31T05:50Z [--] followers, [---] engagements

"LTX-2-T2V_Full_wLora on DGX Spark (960x544 241f) conda + torchcodec build + LTX-2 + ComfuUI+ custom_nodes Thanks for NVIDIA fourm https://forums.developer.nvidia.com/t/cant-install-torch-torchaudio-torchcodec/348660/15 https://forums.developer.nvidia.com/t/cant-install-torch-torchaudio-torchcodec/348660/15"
X Link 2026-01-06T10:36Z [---] followers, [----] engagements

"LTX-2-T2V_FP8_wLora 1920x1088 241f on DGX Spark (no sound noise) FP8"
X Link 2026-01-06T11:45Z [---] followers, [----] engagements

"Running: ComfyUI-LTX2-Kijiai-distilled-NoAudio on MPS(M3Ultra) -distilled-GGUF(PR patch or Fork) is Fast -Need to disable Audio flows Thanks to https://github.com/city96/ComfyUI-GGUF/pull/399 https://github.com/city96/ComfyUI-GGUF/pull/399"
X Link 2026-01-10T00:39Z [---] followers, [----] engagements

"@tori29umai &Tips ComfyUI-SyntaxNodes with ml-sharp on DGX Spark -SHARP 3D Gaussian Splatdefaultply (Syntax-nodes audionode & ml-sharp torch gsplat pip install )"
X Link 2026-01-10T04:36Z [---] followers, [---] engagements

"ComfyUI-LTX2-Kijiai-dev-LoRA-withAudio on MPS(M3Ultra) Latest(looks completly running both images and audio) -Disable Enhancer: due to 'LTXAVTEModel_' object has no attribute 'processor' -Set VAELoader KJ weight_dtype: fp16 Thans for all"
X Link 2026-01-10T08:37Z [---] followers, [----] engagements

"Example: LTX-2 using Kijai's GGUF(Fork node) and community workflow with Audio on DGX Spark (The video includes audio.) She says "happy new year" in Japanese"
X Link 2026-01-10T14:41Z [---] followers, [----] engagements

": seed hf fal/Qwen-Image-Edit-2511-Multiple-Angles-LoRA All [--] Prompts queue $ cat list1 while read line; do echo $line; uv run "./image_qwen_image_edit_2511_Mult_angle_api.json" "$line"; sleep 1; done http://run-api.py http://run-api.py"
X Link 2026-01-11T05:24Z [---] followers, [---] engagements

"GLM-Images I2VT2V on HF Pro spaces (Thanks) ComfyUI https://huggingface.co/spaces/multimodalart/GLM-Image https://huggingface.co/spaces/multimodalart/GLM-Image"
X Link 2026-01-14T15:37Z [---] followers, [---] engagements

"flux-2-klein-base-9b(not fp8) T2I #1 M3 Ultra: 7.06s/it #2 DGX Spark: 2.47s/it https://docs.comfy.org/tutorials/flux/flux-2-klein https://docs.comfy.org/tutorials/flux/flux-2-klein"
X Link 2026-01-16T10:34Z [---] followers, [---] engagements

"@GillelandKristi I simply used the latest ComfyUI and the workflow that was presented"
X Link 2026-01-16T14:55Z [---] followers, [--] engagements

"black-forest-labs/FLUX.2-klein-9B Running I2I on Japanese prompt both DGX Spark(1.86s/it) and M3Ultra(7.51s/it)non FP8"
X Link 2026-01-17T00:41Z [---] followers, [---] engagements

"I recommend referring to the official forum for DGX Spark information. You can see various challenges there. 8x DGX Spark https://forums.developer.nvidia.com/t/6x-spark-setup/354399/34 https://forums.developer.nvidia.com/t/6x-spark-setup/354399/34"
X Link 2026-01-17T06:16Z [---] followers, [---] engagements

"Running: HeartMuLa-ComfuUI on both DGX Spark and M3 Ultra with some patches. This document describes the method using Claude Code. Thanks to original team and benjiyaya. https://github.com/benjiyaya/HeartMuLa_ComfyUI https://github.com/benjiyaya/HeartMuLa_ComfyUI"
X Link 2026-01-21T02:57Z [---] followers, [---] engagements

"Achieve the same functionality with the GLM-4.7 CodingPlan"
X Link 2026-01-21T04:01Z [---] followers, [--] engagements

"hf unsloth/Kimi-K2.5-GGUF UD-IQ3_XXS on M3 Ultra. (21.62tok/sec) Thanks to the Original KIMI Team and the unsloth Team"
X Link 2026-01-29T08:41Z [---] followers, [---] engagements

"unsloth-DeepSeek-V3.2-UD-Q4_K_XL on M3 Ultra CTX [------] (Max) [-----] tok / sec"
X Link 2026-01-30T14:06Z [---] followers, [---] engagements

"ACE-Step-v1-3.5B (not v1.5) with ComfyUI on both Mac(cpu) and DGX Spark. (For Mac apply the patch that uses soundfiledoesn't tochcodec) https://github.com/hiroki-abe-58/ComfyUI-AceMusic https://github.com/hiroki-abe-58/ComfyUI-AceMusic"
X Link 2026-02-06T10:26Z [---] followers, [---] engagements

"Tips : glm-4.7 [--] claude-ops-4-6 glm-4.7 glm-4.7"
X Link 2026-02-11T02:16Z [---] followers, [--] engagements

"MioTTS-Inference MacStudio M3 Ultra *() MioTTS-0.1B-BF26.gguf Best of N *Setup flash-atten build https://github.com/Aratako/MioTTS-Inference https://github.com/Aratako/MioTTS-Inference"
X Link 2026-02-11T14:29Z [---] followers, [---] engagements

"GLM-5 using claude-code on M3 Ultra 440GB Model: unsloth/GLM-5-GGUF/UD-Q4_K_XL Guide: llama.cpp --jinja --temp [---] --top-p [----] --ctx-size [-----] --fit on https://unsloth.ai/docs/models/glm-5 https://unsloth.ai/docs/models/glm-5"
X Link 2026-02-12T16:05Z [---] followers, [---] engagements

"mlx-community/Step-3.5-Flash-8bit [----] tok/s with vllm-mlx (Not yet supported in LM Studio) on M3 Ultra(256GB)"
X Link 2026-02-14T00:23Z [---] followers, [---] engagements

"@hawkymisc () "glm-4.7"opus ()"
X Link 2026-02-14T03:30Z [---] followers, [--] engagements

"DGX Spark new kernal [----] [----] --no-mmap https://forums.developer.nvidia.com/t/apparently-mmap-is-still-slow-on-dgx-spark-on-linux-6-17/360538 https://forums.developer.nvidia.com/t/apparently-mmap-is-still-slow-on-dgx-spark-on-linux-6-17/360538"
X Link 2026-02-14T09:55Z [---] followers, [---] engagements
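Per the linked NVIDIA forum thread, memory-mapped model loads are still slow on the DGX Spark's Linux 6.17 kernel, so llama.cpp runs faster when told to read the weights with ordinary I/O. A minimal sketch (binary and model path are assumptions):

```shell
# --no-mmap makes llama.cpp allocate a buffer and read() the weights
# instead of memory-mapping the GGUF file.
./build/bin/llama-cli --model /path/to/model.gguf --no-mmap -p "Hello"
```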

"mlx-community/MiniMax-M2.5-8bit CTX 192K(Max) 36.5tok/sec on M3Ultra(256GB)"
X Link 2026-02-13T23:52Z [---] followers, [---] engagements

"unsloth/Qwen3.5-397B-A17B-GGUF ()"
X Link 2026-02-16T09:37Z [---] followers, [---] engagements

"I keep watching (14 of 46) Tested on a M3 Ultra 512GB RAM using Inferencer app v1.10 Single inference [----] tokens/s @ [----] tokens Memory usage: [---] GiB https://huggingface.co/inferencerlabs/GLM-5-MLX-4.8bit https://huggingface.co/inferencerlabs/GLM-5-MLX-4.8bit"
X Link 2026-02-12T13:05Z [---] followers, [---] engagements

"unsloth/MiniMax-M2.5-GGUF UD-Q6_K_XL(CTX Max192K) in LM Studio on M3 Ultra (256GB) [-----] tok / s"
X Link 2026-02-15T09:50Z [---] followers, [---] engagements

"Qwen3-coder-next llama.cpp [-----] tok /s (126%) vs [-----] tok /s : UD-Q6_K_XL CTX_SIZE [------] *DGX Spark cmake & build $ cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES="121" $ cmake --build build --config Release"
X Link 2026-02-15T11:01Z [---] followers, [---] engagements
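The two cmake commands quoted above are the standard CUDA build of llama.cpp, with CMAKE_CUDA_ARCHITECTURES pinned to 121, which appears to target the DGX Spark's GB10 Blackwell GPU (sm_121). Expanded into a full sequence, assuming a fresh clone:

```shell
# Build llama.cpp with the CUDA backend for DGX Spark (sm_121).
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES="121"
cmake --build build --config Release
```

Pinning the architecture avoids compiling kernels for every supported GPU generation and keeps build times down.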

"M3 Ultrallama.cpp [-----] tok/s (129%) vs [-----] tok/s : UD-Q6_K_XL CTX_SIZE 131072"
X Link 2026-02-15T12:48Z [---] followers, [--] engagements

"unsloth/Qwen3.5-397B-A17B-GGUF UD-Q4_K_XL on M3 Ultra(256GB) ctx 256K(Max) [-----] tok/s"
X Link 2026-02-16T10:46Z [---] followers, [--] engagements

"unsloth/Qwen3.5-397B-A17B-GGUF UD-Q6_K_XL on M3 Ultra(360GB) ctx 256K(Max) [-----] tok/s"
X Link 2026-02-16T12:49Z [---] followers, [--] engagements

"unsloth/Qwen3.5-397B-A17B-GGUF ()"
X Link 2026-02-16T09:37Z [---] followers, [---] engagements

"unsloth/Qwen3.5-397B-A17B-GGUF UD-Q6_K_XL on M3 Ultra(360GB) ctx 256K(Max) [-----] tok/s"
X Link 2026-02-16T12:49Z [---] followers, [--] engagements

"unsloth/Qwen3.5-397B-A17B-GGUF UD-Q4_K_XL on M3 Ultra(256GB) ctx 256K(Max) [-----] tok/s"
X Link 2026-02-16T10:46Z [---] followers, [--] engagements

"Qwen3-coder-next llama.cpp [-----] tok /s (126%) vs [-----] tok /s : UD-Q6_K_XL CTX_SIZE [------] *DGX Spark cmake & build $ cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES="121" $ cmake --build build --config Release"
X Link 2026-02-15T11:01Z [---] followers, [---] engagements

"M3 Ultrallama.cpp [-----] tok/s (129%) vs [-----] tok/s : UD-Q6_K_XL CTX_SIZE 131072"
X Link 2026-02-15T12:48Z [---] followers, [--] engagements

"unsloth/MiniMax-M2.5-GGUF UD-Q6_K_XL(CTX Max192K) in LM Studio on M3 Ultra (256GB) [-----] tok / s"
X Link 2026-02-15T09:50Z [---] followers, [---] engagements

"DGX Spark new kernal [----] [----] --no-mmap https://forums.developer.nvidia.com/t/apparently-mmap-is-still-slow-on-dgx-spark-on-linux-6-17/360538 https://forums.developer.nvidia.com/t/apparently-mmap-is-still-slow-on-dgx-spark-on-linux-6-17/360538"
X Link 2026-02-14T09:55Z [---] followers, [---] engagements

"mlx-community/Step-3.5-Flash-8bit [----] tok/s with vllm-mlx (Not yet supported in LM Studio) on M3 Ultra(256GB)"
X Link 2026-02-14T00:23Z [---] followers, [---] engagements

"mlx-community/MiniMax-M2.5-8bit CTX 192K(Max) 36.5tok/sec on M3Ultra(256GB)"
X Link 2026-02-13T23:52Z [---] followers, [---] engagements

"I keep watching (14 of 46) Tested on a M3 Ultra 512GB RAM using Inferencer app v1.10 Single inference [----] tokens/s @ [----] tokens Memory usage: [---] GiB https://huggingface.co/inferencerlabs/GLM-5-MLX-4.8bit https://huggingface.co/inferencerlabs/GLM-5-MLX-4.8bit"
X Link 2026-02-12T13:05Z [---] followers, [---] engagements

"The comment is correct. Response limit: 2k tokens"
X Link 2026-02-13T08:20Z [---] followers, [--] engagements

"GLM-5 using claude-code on M3 Ultra 440GB Model: unsloth/GLM-5-GGUF/UD-Q4_K_XL Guide: llama.cpp --jinja --temp [---] --top-p [----] --ctx-size [-----] --fit on https://unsloth.ai/docs/models/glm-5 https://unsloth.ai/docs/models/glm-5"
X Link 2026-02-12T16:05Z [---] followers, [---] engagements

"MioTTS-Inference MacStudio M3 Ultra *() MioTTS-0.1B-BF26.gguf Best of N *Setup flash-atten build https://github.com/Aratako/MioTTS-Inference https://github.com/Aratako/MioTTS-Inference"
X Link 2026-02-11T14:29Z [---] followers, [---] engagements

"() 0.1B CPU https://huggingface.co/spaces/Aratako/MioTTS-0.1B-Demo https://huggingface.co/spaces/Aratako/MioTTS-0.1B-Demo"
X Link 2026-02-11T13:12Z [---] followers, [---] engagements

"Tips : glm-4.7 [--] claude-ops-4-6 glm-4.7 glm-4.7"
X Link 2026-02-11T02:16Z [---] followers, [--] engagements

"Officially launching X API Pay-Per-Use The core of X developers are indie builders early stage products startups and hobbyists Its time to open up our X API ecosystem and instill a new wave of next generation X apps Were so back. http://developer.x.com http://developer.x.com"
X Link 2026-02-06T21:09Z 675K followers, 2.1M engagements

"ACE-Step-v1-3.5B (not v1.5) with ComfyUI on both Mac(cpu) and DGX Spark. (For Mac apply the patch that uses soundfiledoesn't tochcodec) https://github.com/hiroki-abe-58/ComfyUI-AceMusic https://github.com/hiroki-abe-58/ComfyUI-AceMusic"
X Link 2026-02-06T10:26Z [---] followers, [---] engagements

"Running on Mac(MPS) Just add --gpu-only flag --gpu-only http://main.py http://main.py"
X Link 2026-02-06T12:39Z [---] followers, [--] engagements
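The flag in question is ComfyUI's --gpu-only option, which stores and runs everything (including text encoders) on the GPU; on Apple silicon that GPU backend is MPS:

```shell
# From the ComfyUI checkout: keep all models on the Mac GPU via MPS.
python main.py --gpu-only
```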

"unsloth-DeepSeek-V3.2-UD-Q4_K_XL on M3 Ultra CTX [------] (Max) [-----] tok / sec"
X Link 2026-01-30T14:06Z [---] followers, [---] engagements

"hf unsloth/Kimi-K2.5-GGUF UD-IQ3_XXS on M3 Ultra. (21.62tok/sec) Thanks to the Original KIMI Team and the unsloth Team"
X Link 2026-01-29T08:41Z [---] followers, [---] engagements

Limited data mode. Full metrics available with subscription: lunarcrush.com/pricing

@limegreenpeper1
/creator/twitter::limegreenpeper1