# @Ok-Scarcity-7875

@Ok-Scarcity-7875 posts on Reddit most often about zilliqa, token, zilpepe, and stablecoins. They currently have [---] followers and [--] posts still getting attention, totaling [--] engagements in the last [--] hours.

### Engagements: [--] [#](/creator/reddit::t2_7mqkdgze2/interactions)

- [--] Week [---] +42%
- [--] Month [---] +29%
- [--] Months [-----] +62%
- [--] Year [-----] +139%

### Mentions: [--] [#](/creator/reddit::t2_7mqkdgze2/posts_active)

- [--] Week [--] no change
- [--] Month [--] no change
- [--] Months [--] -14%
- [--] Year [--] +467%

### Followers: [---] [#](/creator/reddit::t2_7mqkdgze2/followers)

- [--] Months [---] no change
- [--] Year [---] +117%

### CreatorRank: undefined [#](/creator/reddit::t2_7mqkdgze2/influencer_rank)

### Social Influence

**Social category influence:** [cryptocurrencies](/list/cryptocurrencies), [finance](/list/finance), [technology brands](/list/technology-brands), [stocks](/list/stocks), [social networks](/list/social-networks)

**Social topic influence:** [zilliqa](/topic/zilliqa) #56, [token](/topic/token), [zilpepe](/topic/zilpepe), [stablecoins](/topic/stablecoins), [dai](/topic/dai), [matter](/topic/matter), [ai](/topic/ai), [where can](/topic/where-can), [target](/topic/target), [thorchain](/topic/thorchain)

**Top accounts mentioned or mentioned by:** [@200222](/creator/undefined)

**Top assets mentioned:** [Zilliqa (ZIL)](/topic/zilliqa), [USELESS COIN (USELESS)](/topic/useless), [Alphabet Inc Class A (GOOGL)](/topic/$googl)

### Top Social Posts

Top posts by engagements in the last [--] hours:

- "Stablecoins - Have something like Dai and zileur" (r/zilliqa) [Reddit Link](https://redd.it/19an427) 2024-01-19T17:26Z, [--] followers, [----] engagements
- "Zilliqa should start to focus on hypes that matter, like AI" (r/zilliqa) [Reddit Link](https://redd.it/1f1trtg) 2024-08-26T17:24Z, [--] followers, [--] engagements
- "Where can I swap tokens as of now?" (r/zilliqa) [Reddit Link](https://redd.it/1lm0seu) 2025-06-27T18:13Z, [--] followers, [--] engagements
- "Zilliqa should target into bridges for memecoins and other tokens sitting on ETH" (r/zilliqa) [Reddit Link](https://redd.it/1gqhklt) 2024-11-13T17:13Z, [--] followers, [---] engagements
- "Make Zilliqa great (again)" (r/zilliqa) [Reddit Link](https://redd.it/1lqtd6c) 2025-07-03T16:08Z, [--] followers, [--] engagements
- "Most useless Token - ZilPepe" [Reddit Link](https://redd.it/15qw6x6) 2023-08-14T14:25Z, [--] followers, [---] engagements
- "Integrate Zilliqa into ThorChain (RUNE)" (r/zilliqa) [Reddit Link](https://redd.it/15ueswu) 2023-08-18T08:42Z, [--] followers, [----] engagements
- "Access to gpt4 via PayPal without subscription: Is there a way to pay per 100k tokens (in 100k steps) in advance for gpt4 and other models? I really would like to use gpt4 and pay for it, but I have no credit card, nor do I want to subscribe to something on a yearly basis." (r/ChatGPT) [Reddit Link](https://redd.it/1cs0seu) 2024-05-14T19:27Z, [--] followers, [--] engagements
- "TTS of large texts with translation in real time using Llama [---] 3B uncensored: I wrote a nice script which can read large texts to you in real time, like an audio book. Thanks to voice cloning, the text can also be read by any voice you want. On top, you can also let the script translate the text into your desired language via an LLM. I used https://huggingface.co/mradermacher/Llama-3.2-3B-Instruct-uncensored-GGUF with llama.cpp for this task. Imagine you want to read something difficult in a foreign language but you are both too lazy …" [Reddit Link](https://redd.it/1fw2pqd) 2024-10-04T16:01Z, [--] followers, [---] engagements
- ""No-brainer technique" gets the strawberry problem done with r1 distill: I was experimenting with the new DeepSeek distilled model and had the idea to somehow stop it from overthinking. I tried multiple things, like prompting it not to overthink, or even editing inside the think tags. Long story short, a technique I call the "no-brainer" technique solves the strawberry test in about 90% of cases: https://preview.redd.it/36992td1teee1.png?width=1430&format=png&auto=webp&s=d902d71c3566029338b6bafa921a3a32af7a8ecb The trick is to let the model first write the answer and then remove everything …" [Reddit Link](https://redd.it/1i6sxlr) 2025-01-21T20:50Z, [---] followers, [--] engagements
- "Can I disable think with reasoning models using the API? **EDIT:** **I use llama.cpp** - I want to test how reasoning models compare to non-reasoning models in certain tasks, but with the reasoning models not using their reasoning. Is there a way I can do API calls like this: `import requests; import json; api_url = "https://api.openai.com/v1/chat/completions"; api_key = "your_api_key_here"; prompt = "Write a short poem about the stars."; messages = [{"role": "user", "content": prompt}]` # Set up the headers for the API request …" [Reddit Link](https://redd.it/1ivjxhi) 2025-02-22T14:29Z, [---] followers, [--] engagements
- "Is it possible to use Google Play credits for the Gemini API without a credit card but on the paid tier? [deleted]" (r/googlecloud) [Reddit Link](https://redd.it/1jtirmp) 2025-04-07T11:16Z, [--] followers, [--] engagements
- "MVP Presentation with work title "Voice Reader": https://reddit.com/link/1k22ee3/video/8vbsz6zdukve1/player Hello, I'm trying to figure out what I can offer. I'm normally always stuck in the "perfect idea" - "development" - "nothing" - "perfect idea" cycle, but this time I want to learn how to overcome this. I have multiple ideas, and this is one of them: I call it "Voice Reader" for now, and I developed it because I was tired of reading everything myself and would rather have it read to me like an audio book. I know that other similar products exist, and maybe this exact product. However, the USP …" [Reddit Link](https://redd.it/1k22ee3) 2025-04-18T11:14Z, [---] followers, [--] engagements
- "MVP Presentation with work title "Voice Reader"" (r/SaaS) [Reddit Link](https://redd.it/1k27ves) 2025-04-18T15:36Z, [---] followers, [--] engagements
- "Feedback - what's the best place? I recently posted this: https://www.reddit.com/r/SaaS/comments/1k27ves/mvp_presentation_with_work_title_voice_reader/ and got no feedback other than that I got no feedback. So it seems that nobody cares. Okay, maybe the idea is not worth it then, but I'm still wondering why no one even replied, like: "This …"" [Reddit Link](https://redd.it/1k7eze4) 2025-04-25T07:21Z, [---] followers, [--] engagements
- "MoE is cool but does not solve speed when it comes to long context: I really enjoy coding with Gemini [---] Pro, but if I want to use something local, qwen3-30b-a3b-128k seems to be the best pick right now for my hardware. However, if I run it on CPU only (the GPU does evaluation), where I have 128GB RAM, the performance drops from 12 Tk/s to [--] Tk/s with just 25k context, which is nothing for Gemini [---] Pro. I guess at 50k context I'm at [--] Tk/s, which is basically unusable. So either VRAM becomes more affordable, or a new technique is needed that also solves slow evaluation and generation for long contexts." (r/LocalLLaMA) [Reddit Link](https://redd.it/1kc6cp7) 2025-05-01T11:26Z, [---] followers, [--] engagements
- "All songs gone [deleted]" (r/SunoAI) [Reddit Link](https://redd.it/1kd2rqk) 2025-05-02T14:46Z, [--] followers, [--] engagements
- "OK, MoE IS awesome: Recently I posted this: https://www.reddit.com/r/LocalLLaMA/comments/1kc6cp7/moe_is_cool_but_does_not_solve_speed_when_it/ I now want to correct myself, as I have figured out that simply reducing a few layers (from [--] - 40) gives me **massively** more context. I did not expect that, as it seems that context VRAM / RAM consumption is not bound to the total parameter count here but to the relatively tiny parameter count of the active experts. A normal 32B non-MoE model would require much …" [Reddit Link](https://redd.it/1kdbt84) 2025-05-02T21:04Z, [---] followers, [---] engagements
- "Where is the prompt? If I want to reuse the prompt of a song I have generated, there is no prompt shown. This is so stupid. Only lyrics and style description are shown, but the prompt is empty when I switch to simple. Where is the prompt? Also, why is it not shown in the song details like in Udio?" (r/SunoAI) [Reddit Link](https://redd.it/1kfly9e) 2025-05-05T20:25Z, [---] followers, [--] engagements
- "jazz vocal jazz ballad Summer Dream (when you wake up) [deleted]" (r/SunoAI) [Reddit Link](https://redd.it/1kggmpd) 2025-05-06T21:43Z, [--] followers, [--] engagements
- "I don't get go, sorry I'm stupid - why did cosumi pass? I basically played a variation of this (link below) to win my first go game. Without this video, just copying in reverse what he does was the only way to win for me. https://www.youtube.com/watch?v=IyAOuA1Y5No I do understand that I (the black player) have won by [--] points. What I do not understand is why cosumi passed. Couldn't it reduce the points by which I have won? I do not understand this territory thing. How is it defined who is surrounding whom? I don't get how to define who is prisoner and who is the …" [Reddit Link](https://redd.it/1l9ys2a) 2025-06-12T21:41Z, [--] followers, [--] engagements
- "I don't get it, why is this a stalemate? I did the underpromotion exercise on lichess. If I choose knight as the promotion for the pawn, I get a stalemate here. But how is that possible if black is in check and can't move?" (r/chessbeginners) [Reddit Link](https://redd.it/1m0de2d) 2025-07-15T09:39Z, [--] followers, [--] engagements
- "Is lichess kind of elite? I'm just wondering how my lichess rapid rating is at [---], which is only better than 8.3% of players, while on chess.com I have now reached [---], which is better than 51.8% of players. To be fair, on lichess I have [---] rated games, while on chess.com it's only [--]. Still, it shouldn't be that different. To me, I'm one of the worst players on lichess, while on chess.com I'm just beginner-like." (r/chessbeginners) [Reddit Link](https://redd.it/1m7l8e3) 2025-07-23T21:05Z, [--] followers, [--] engagements
- "How long does Zilliqa take to finalize a single state machine?" (r/zilliqa) [Reddit Link](https://redd.it/1h21klw) 2024-11-28T21:40Z, [--] followers, [--] engagements
- "Network is not working since [--] hours. LINK: https://runescan.io/de/blocks" (r/THORChain) [Reddit Link](https://redd.it/1hcvv6a) 2024-12-12T21:26Z, [--] followers, [--] engagements
- "vllm vs llama.cpp on single GPU, parallel requests, in Q1 [----]: I have searched the web and did not find one up-to-date source which can tell me which of the two, llama.cpp or vllm, is faster on a single GPU like an RTX [----] as of now (Q1 2025). I only found year-old posts on reddit. So does somebody know which framework is faster **at the time of writing**, both for a single request and for parallel requests (multiple slots)? Is vllm right now still faster on multi-GPU setups, or has that changed and llama.cpp is as fast or even faster now? Thank you 🙂" (r/LocalLLaMA) [Reddit Link](https://redd.it/1iwit2p) 2025-02-23T19:52Z, [---] followers, [--] engagements
- "I have tokens on ZilPay - how to migrate to [---]?" (r/zilliqa) [Reddit Link](https://redd.it/1k4gfg9) 2025-04-21T16:27Z, [--] followers, [--] engagements
- "Still not using Gemini as long as they do not offer prepaid for their API" (r/GoogleGeminiAI) [Reddit Link](https://redd.it/1p0afpv) 2025-11-18T12:55Z, [--] followers, [--] engagements
- "Gemini [--] has best vision" (r/singularity) [Reddit Link](https://redd.it/1p1o7hq) 2025-11-20T00:04Z, [--] followers, [--] engagements
- "Any provider who offers stable quality?" (r/LocalLLaMA) [Reddit Link](https://redd.it/1p34w3t) 2025-11-21T18:10Z, [--] followers, [--] engagements

Limited data mode. Full metrics available with subscription: lunarcrush.com/pricing