[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# @sudoingX Sudo su

Sudo su posts on X most often about infrastructure, $400month, gpu, and open ai. They currently have XXX followers and XXX posts still getting attention, totaling XXXXX engagements in the last XX hours.

### Engagements: XXXXX [#](/creator/twitter::1555661341914198016/interactions)

- X Week XXXXXX +173%
- X Month XXXXXXX +633%

### Mentions: XX [#](/creator/twitter::1555661341914198016/posts_active)

- X Week XX +71%
- X Month XXX +25%

### Followers: XXX [#](/creator/twitter::1555661341914198016/followers)

- X Week XXX +8.70%
- X Month XXX +32%

### CreatorRank: XXXXXXX [#](/creator/twitter::1555661341914198016/influencer_rank)

### Social Influence [#](/creator/twitter::1555661341914198016/influence)

---

**Social category influence** [technology brands](/list/technology-brands), [stocks](/list/stocks), [finance](/list/finance), [countries](/list/countries), [celebrities](/list/celebrities), [fashion brands](/list/fashion-brands), [travel destinations](/list/travel-destinations)

**Social topic influence** [infrastructure](/topic/infrastructure) #504, [$400month](/topic/$400month) #1, [gpu](/topic/gpu) #639, [open ai](/topic/open-ai) #1313, [inference](/topic/inference) #11, [cbdoge](/topic/cbdoge), [own the](/topic/own-the), [$350b200hour](/topic/$350b200hour), [asml](/topic/asml), [stack](/topic/stack)

**Top assets mentioned** [Microsoft Corp. (MSFT)](/topic/microsoft), [Alphabet Inc Class A (GOOGL)](/topic/$googl)

### Top Social Posts [#](/creator/twitter::1555661341914198016/posts)

---

Top posts by engagements in the last XX hours:

"@cb_doge At XXX followers the 'small account problem' is real. Quality posts get buried under engagement bait. If Grok actually surfaces good content from smaller accounts that's a real improvement.
User controlled feed tuning is the feature everyone's been asking for" [X Link](https://x.com/sudoingX/status/1979359810593448021) [@sudoingX](/creator/x/sudoingX) 2025-10-18T01:32Z XXX followers, XXX engagements

"@AiBreakfast Household names own the infrastructure. Small startups rent it. When margins compress the ones paying $3.50/B200-hour disappear. The ones owning the compute stay. Infrastructure always wins. Wrappers always die" [X Link](https://x.com/sudoingX/status/1979360616969310709) [@sudoingX](/creator/x/sudoingX) 2025-10-18T01:35Z XXX followers, XX engagements

"Running GPUs 24/7 for 16+ months taught me: Power costs > hardware costs over time. PCIe bandwidth matters more than VRAM. Cooling is not optional. Infrastructure problems look exactly like software problems until they don't" [X Link](https://x.com/sudoingX/status/1980160481966104908) [@sudoingX](/creator/x/sudoingX) 2025-10-20T06:33Z XXX followers, XXX engagements

"@WeTheBrandon ASML machines are some of the most complex hardware ever built. EUV lithography has tolerances measured in nanometers. Reverse engineering by disassembly = breaking it. These aren't GPUs you can just take apart. The complexity is the moat not just the export restrictions" [X Link](https://x.com/sudoingX/status/1980245467268272332) [@sudoingX](/creator/x/sudoingX) 2025-10-20T12:11Z XXX followers, XXX engagements

"@TheAhmadOsman Born just in time to: Panic-buy 3090s Pay $400/month in electricity Explain to neighbors why my apartment sounds like a server farm Refresh NVIDIA stock price daily Hear 'AGI next year' for X years straight" [X Link](https://x.com/sudoingX/status/1980487972064833828) [@sudoingX](/creator/x/sudoingX) 2025-10-21T04:15Z XXX followers, XX engagements

"@AMDGPU_ XXX vs XXX TOPS = negligible. $2300 vs $4000 = significant. x86 vs ARM = depends on your stack. If ROCm supports your inference pipeline AMD wins on economics.
That's the real gating factor" [X Link](https://x.com/sudoingX/status/1978698271401902407) [@sudoingX](/creator/x/sudoingX) 2025-10-16T05:43Z XXX followers, 1399 engagements

"@theinformation This is the real cost of the AI race. Building compute infrastructure for 'maybe' future demand vs. actual revenue is a bet most can't afford to make wrong. Microsoft has scale. If they're worried about ROI smaller players are in serious trouble" [X Link](https://x.com/sudoingX/status/1979772367128969401) [@sudoingX](/creator/x/sudoingX) 2025-10-19T04:51Z XXX followers, 9110 engagements

"@StockSavvyShay XX% → X% overnight = China builds their own. They have capital talent and now urgent need. Export bans don't kill demand they create domestic competitors. Nvidia loses short-term revenue. Long-term they created a rival with massive home market incentive" [X Link](https://x.com/sudoingX/status/1979777897687912773) [@sudoingX](/creator/x/sudoingX) 2025-10-19T05:13Z XXX followers, XXX engagements

"@nastiazik True but also: ugly SaaS with great functionality beats beautiful SaaS with poor functionality. Design matters. But I've seen too many founders spend X months perfecting UI before shipping. Ship functional fast. Polish when users ask for it. Not before" [X Link](https://x.com/sudoingX/status/1979805724541866401) [@sudoingX](/creator/x/sudoingX) 2025-10-19T07:04Z XXX followers, XX engagements

"@BasedBeffJezos 'If you're not all-in on X' is cult thinking not strategy. I use AI daily. Build on GPU infrastructure. Elon being all-in doesn't mean you should be. Different risk tolerance different resources" [X Link](https://x.com/sudoingX/status/1979824824253079572) [@sudoingX](/creator/x/sudoingX) 2025-10-19T08:19Z XXX followers, XXX engagements

"Revenue matters but infrastructure spend determines long-term positioning. Anthropic rents compute. OpenAI + xAI own it. Different strategies different margins different control. Fast revenue growth on rented infrastructure = great short-term.
Question is what happens when you need your own" [X Link](https://x.com/sudoingX/status/1980155609388654926) [@sudoingX](/creator/x/sudoingX) 2025-10-20T06:14Z XXX followers, XXX engagements

"Local LLM inference at 3am API call: works instantly costs $XXXX My cluster: debugging for X hours costs $X Sometimes convenience > ownership. Sometimes ownership > sanity. The tradeoff changes based on sleep deprivation" [X Link](https://x.com/sudoingX/status/1980157605474660448) [@sudoingX](/creator/x/sudoingX) 2025-10-20T06:22Z XXX followers, XX engagements

"@Domie_apps Haha common assumption. No crypto though. Running local AI models LLM inference ML infrastructure experiments. Crypto mining on GPUs died when ASICs took over. These are for actual compute work now" [X Link](https://x.com/sudoingX/status/1980162517147295765) [@sudoingX](/creator/x/sudoingX) 2025-10-20T06:41Z XXX followers, XX engagements

"@LBacaj Correct take. AI accelerates whatever you are. Good builder using AI = faster good output. Bad builder using AI = faster bad output. The tool doesn't fix lack of care understanding or incentive alignment. It just makes everything happen quicker" [X Link](https://x.com/sudoingX/status/1980167543609561434) [@sudoingX](/creator/x/sudoingX) 2025-10-20T07:01Z XXX followers, XX engagements

"Honestly Laptop GPUs are tough for this. Vast.ai/Salad need consistent uptime + decent VRAM. Laptops throttle under load and most work laptops aren't spec'd for 24/7 compute. Better play: if he's technical use the laptop to learn ML/AI skills. That pays better long term than trying to rent out consumer hardware. If he really wants passive income from GPU: desktop builds with proper cooling. Laptops just aren't designed for it" [X Link](https://x.com/sudoingX/status/1980170155507466706) [@sudoingX](/creator/x/sudoingX) 2025-10-20T07:12Z XXX followers, XX engagements

"@YairDev 100%. BKK has the right balance.
Coworking is affordable internet is fast timezone works for global clients and you can separate work from play. Islands are great for breaks. You based here now or planning to be" [X Link](https://x.com/sudoingX/status/1980185131991339487) [@sudoingX](/creator/x/sudoingX) 2025-10-20T08:11Z XXX followers, XX engagements

"Start with practical projects not theory: Get comfortable with Python basics Pick Kaggle beginner competitions (free datasets real problems) Run small models locally on the laptop (learn without cloud costs) Focus on inference/fine-tuning not training from scratch Laptop GPUs are perfect for learning. Not for production but great for understanding how it all works. DM if you want specific resource recommendations" [X Link](https://x.com/sudoingX/status/1980195080066810154) [@sudoingX](/creator/x/sudoingX) 2025-10-20T08:51Z XXX followers, XX engagements

"@The_AI_Investor @grok is Dario telling the truth" [X Link](https://x.com/sudoingX/status/1980439772536725892) [@sudoingX](/creator/x/sudoingX) 2025-10-21T01:03Z XXX followers, XX engagements

"@_catwu If the sandbox is smart enough to let normal operations through and only prompt on system changes that's the right UX" [X Link](https://x.com/sudoingX/status/1980500730374156350) [@sudoingX](/creator/x/sudoingX) 2025-10-21T05:05Z XXX followers, XXX engagements

"@PierceLilholt Already happens. Google knows what you'll search. Netflix knows what you'll watch. Not magic. Just: your behavior is more predictable than you think + enough data = good guesses. The AI isn't wise. You're just consistent" [X Link](https://x.com/sudoingX/status/1980501147803857329) [@sudoingX](/creator/x/sudoingX) 2025-10-21T05:07Z XXX followers, XX engagements

"@Brparadox @plaqueboymax 5090 with proper cooling setup. Smart. 1000W Platinum PSU is the right call for that card. Most people underspec power and throttle performance.
Gaming or compute workloads? That's serious hardware either way" [X Link](https://x.com/sudoingX/status/1980501637497258085) [@sudoingX](/creator/x/sudoingX) 2025-10-21T05:09Z XXX followers, XXX engagements

"AWS: down My GPU cluster: up This is the compute convenience trade-off. When cloud infrastructure fails local infrastructure just... keeps working. Expensive upfront. Priceless when everyone else is locked out" [X Link](https://x.com/sudoingX/status/1980541801661641114) [@sudoingX](/creator/x/sudoingX) 2025-10-21T07:48Z XXX followers, XXX engagements

"This is why I run my own compute. Not because cloud is unreliable - it's incredibly reliable XXXX% of the time. But because 'incredibly reliable' ≠ 'zero dependency.' When AWS goes down the entire ecosystem stops. When my cluster has issues only I stop. Different risk profiles. I prefer the one I control" [X Link](https://x.com/sudoingX/status/1980572011962482688) [@sudoingX](/creator/x/sudoingX) 2025-10-21T09:48Z XXX followers, XXX engagements

"AI racks = networking + switching + storage not just GPUs. Broadcom provides the interconnect fabric that lets thousands of GPUs talk to each other. That's infrastructure that doesn't get replaced every generation like GPUs do. Better margins longer life cycles. Smart business" [X Link](https://x.com/sudoingX/status/1980573323340968430) [@sudoingX](/creator/x/sudoingX) 2025-10-21T09:54Z XXX followers, XXX engagements

"@amitisinvesting Plot twist: in X years the real winner isn't OpenAI or Nvidia. It's whoever figured out inference at 1/10th the cost while Sam was negotiating $100B deals. The leverage is temporary. The first principles physics of compute is permanent" [X Link](https://x.com/sudoingX/status/1980642640401220088) [@sudoingX](/creator/x/sudoingX) 2025-10-21T14:29Z XXX followers, 1114 engagements

"@GergelyOrosz When AWS goes down local infrastructure keeps running.
My local nodes: didn't notice My electricity bill: still coming My inference: uninterrupted This is why ownership > rental. The bottleneck isn't on my end" [X Link](https://x.com/sudoingX/status/1980644795946246181) [@sudoingX](/creator/x/sudoingX) 2025-10-21T14:38Z XXX followers, XXX engagements

"The 'cognitive core vs memory bloat' point hits different when you're running inference locally. 192GB VRAM pushing 70B+ parameter models - most of that is memory retrieval not reasoning. The compute goes to pattern matching against training data not actual problem-solving. I see it in production: models confidently hallucinate on edge cases because they're recalling not reasoning. A 1B model that actually reasons would be more useful than a 405B model that memorizes better. This is why the 'bigger = better' race feels wrong. We're scaling the wrong thing" [X Link](https://x.com/sudoingX/status/1980650589391651080) [@sudoingX](/creator/x/sudoingX) 2025-10-21T15:01Z XXX followers, XX engagements

"@0x_Sero 6x 3090s in Bangkok heat = $400/month electricity + AC running 24/7 Nuclear reactor sounds cheaper at this point. The compute is worth it. The power bill? Still processing that trauma" [X Link](https://x.com/sudoingX/status/1980659658357633431) [@sudoingX](/creator/x/sudoingX) 2025-10-21T15:37Z XXX followers, XX engagements

"Solo building taught me economics nobody talks about: Your time costs $X in accounting $everything in opportunity cost. Spent X months learning infrastructure I could've rented for $2K/month. 'Saved' $12K. Lost X months of potential revenue building something else. The math only works if you're playing a different game. I'm optimizing for ownership and knowledge not short-term ROI. But pretending there's no trade-off? That's cope" [X Link](https://x.com/sudoingX/status/1980663791596818602) [@sudoingX](/creator/x/sudoingX) 2025-10-21T15:53Z XXX followers, XX engagements
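The recurring "$400/month electricity" figure for a 6x RTX 3090 rig is easy to sanity-check. A minimal sketch, assuming typical 3090 board power, an illustrative residential electricity rate, and AC cooling roughly doubling the bill (every number below is an assumption for illustration, not data from the posts):

```python
# Rough monthly electricity estimate for a 6x RTX 3090 rig.
# All constants are assumptions, not measurements from the posts.
GPUS = 6
WATTS_PER_GPU = 350          # typical RTX 3090 power draw under sustained load
OTHER_WATTS = 300            # CPU, fans, PSU losses (assumed)
RATE_USD_PER_KWH = 0.12      # assumed residential electricity rate
COOLING_OVERHEAD = 1.0       # assume AC roughly doubles the effective cost

HOURS_PER_MONTH = 24 * 30
kw = (GPUS * WATTS_PER_GPU + OTHER_WATTS) / 1000   # total load in kW
monthly = kw * HOURS_PER_MONTH * RATE_USD_PER_KWH * (1 + COOLING_OVERHEAD)
print(f"{kw:.1f} kW load -> ${monthly:.0f}/month")  # → 2.4 kW load -> $415/month
```

Under these assumptions the estimate lands near the $400/month the author quotes, which is consistent with the "power costs dominate hardware costs over time" point.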
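The "192GB VRAM pushing 70B+ parameter models" claim follows from simple weight arithmetic: parameter count times bytes per parameter. A back-of-envelope sketch (precision options are illustrative; KV cache and activations add more memory on top of weights):

```python
# VRAM needed just to hold model weights, by parameter count and precision.
# Ignores KV cache and activation memory, which add a significant margin.
def weight_gb(params_billion: float, bytes_per_param: float) -> float:
    """GB of memory for the weights alone."""
    return params_billion * 1e9 * bytes_per_param / 1e9

for precision, nbytes in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"70B @ {precision}: {weight_gb(70, nbytes):.0f} GB")
# → 70B @ fp16: 140 GB
# → 70B @ int8: 70 GB
# → 70B @ int4: 35 GB
```

So a 70B model at fp16 needs about 140GB for weights alone, which is why 192GB of VRAM is roughly the floor for running such models unquantized, as the post implies.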