
[@sudoingX](/creator/twitter/sudoingX)
"@ns123abc Non-profit for-profit transitions are messy. The real question: can you build AGI-scale compute without billions in capital? Structure follows funding reality even if optics are bad"  
[X Link](https://x.com/sudoingX/status/1978323878100296012) [@sudoingX](/creator/x/sudoingX) 2025-10-15T04:55Z XXX followers, 4829 engagements


"@WeTheBrandon ASML machines are some of the most complex hardware ever built. EUV lithography has tolerances measured in nanometers. Reverse engineering by disassembly = breaking it. These aren't GPUs you can just take apart. The complexity is the moat not just the export restrictions"  
[X Link](https://x.com/sudoingX/status/1980245467268272332) [@sudoingX](/creator/x/sudoingX) 2025-10-20T12:11Z XXX followers, 1225 engagements


"@The_AI_Investor @grok is Dario telling the truth"  
[X Link](https://x.com/sudoingX/status/1980439772536725892) [@sudoingX](/creator/x/sudoingX) 2025-10-21T01:03Z XXX followers, XX engagements


"@TheAhmadOsman Born just in time to: Panic-buy 3090s Pay $400/month in electricity Explain to neighbors why my apartment sounds like a server farm Refresh NVIDIA stock price daily Hear 'AGI next year' for X years straight"  
[X Link](https://x.com/sudoingX/status/1980487972064833828) [@sudoingX](/creator/x/sudoingX) 2025-10-21T04:15Z XXX followers, XXX engagements


"@_catwu If the sandbox is smart enough to let normal operations through and only prompt on system changes that's the right UX"  
[X Link](https://x.com/sudoingX/status/1980500730374156350) [@sudoingX](/creator/x/sudoingX) 2025-10-21T05:05Z XXX followers, XXX engagements


"@PierceLilholt Already happens. Google knows what you'll search. Netflix knows what you'll watch. Not magic. Just: your behavior is more predictable than you think + enough data = good guesses. The AI isn't wise. You're just consistent"  
[X Link](https://x.com/sudoingX/status/1980501147803857329) [@sudoingX](/creator/x/sudoingX) 2025-10-21T05:07Z XXX followers, XX engagements


"@Brparadox @plaqueboymax 5090 with proper cooling setup. Smart. 1000W Platinum PSU is the right call for that card. Most people underspec power and throttle performance. Gaming or compute workloads? That's serious hardware either way"  
[X Link](https://x.com/sudoingX/status/1980501637497258085) [@sudoingX](/creator/x/sudoingX) 2025-10-21T05:09Z XXX followers, XXX engagements


"This is why I run my own compute. Not because cloud is unreliable - it's incredibly reliable XXXX% of the time. But because 'incredibly reliable' ≠ 'zero dependency.' When AWS goes down the entire ecosystem stops. When my cluster has issues only I stop. Different risk profiles. I prefer the one I control"  
[X Link](https://x.com/sudoingX/status/1980572011962482688) [@sudoingX](/creator/x/sudoingX) 2025-10-21T09:48Z XXX followers, XXX engagements


"AI racks = networking + switching + storage not just GPUs. Broadcom provides the interconnect fabric that lets thousands of GPUs talk to each other. That's infrastructure that doesn't get replaced every generation like GPUs do. Better margins longer life cycles. Smart business"  
[X Link](https://x.com/sudoingX/status/1980573323340968430) [@sudoingX](/creator/x/sudoingX) 2025-10-21T09:54Z XXX followers, XXX engagements


"@amitisinvesting Plot twist: in X years the real winner isn't OpenAI or Nvidia. It's whoever figured out inference at 1/10th the cost while Sam was negotiating $100B deals. The leverage is temporary. The first principles physics of compute is permanent"  
[X Link](https://x.com/sudoingX/status/1980642640401220088) [@sudoingX](/creator/x/sudoingX) 2025-10-21T14:29Z XXX followers, 1735 engagements


"The "cognitive core vs memory bloat" point hits different when you're running inference locally. 192GB VRAM pushing 70B+ parameter models - most of that is memory retrieval not reasoning. The compute goes to pattern matching against training data not actual problem-solving. I see it in production: models confidently hallucinate on edge cases because they're recalling not reasoning. A 1B model that actually reasons would be more useful than a 405B model that memorizes better. This is why the "bigger = better" race feels wrong. We're scaling the wrong thing"  
[X Link](https://x.com/sudoingX/status/1980650589391651080) [@sudoingX](/creator/x/sudoingX) 2025-10-21T15:01Z XXX followers, XX engagements
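A rough sketch of the memory arithmetic behind "192GB VRAM pushing 70B+ parameter models" (the precisions, layer count, and head sizes below are illustrative assumptions, not figures from the post):

```python
# Back-of-envelope VRAM estimate for a dense decoder-only model.
# Illustrative only; real usage depends on runtime, quantization scheme,
# and KV-cache configuration.

def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Weight memory in GB for a given parameter count and precision."""
    return params_billion * bytes_per_param  # 1e9 params * bytes / 1e9 bytes-per-GB

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context_len: int, bytes_per_value: float = 2.0) -> float:
    """KV cache for one sequence: a K and a V tensor per layer."""
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_value / 1e9

for label, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"70B weights @ {label}: {weights_gb(70, bpp):.0f} GB")

# Hypothetical 70B-class shape: 80 layers, 8 KV heads, 128-dim heads, 8k context
print(f"KV cache per sequence: {kv_cache_gb(80, 8, 128, 8192):.1f} GB")
```

At fp16 a 70B model's weights alone (~140 GB) nearly fill 192 GB of VRAM before any KV cache, which is the sense in which most of the footprint is stored knowledge rather than reasoning capacity.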


"@0x_Sero 6x 3090s in Bangkok heat = $400/month electricity + AC running 24/7. Nuclear reactor sounds cheaper at this point. The compute is worth it. The power bill? Still processing that trauma"  
[X Link](https://x.com/sudoingX/status/1980659658357633431) [@sudoingX](/creator/x/sudoingX) 2025-10-21T15:37Z XXX followers, XXX engagements
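For context on the power-bill complaint, a rough monthly electricity estimate for a rig like this (the per-card draw, overhead, and $/kWh rate are assumptions for illustration, not the poster's actual numbers):

```python
# Rough monthly electricity cost for a 6x GPU rig running 24/7.
# All inputs below are assumed values, not figures from the post.

GPUS = 6
WATTS_PER_GPU = 300          # assumed sustained draw per 3090 under inference load
SYSTEM_OVERHEAD_WATTS = 300  # CPU, fans, PSU losses (assumed)
RATE_USD_PER_KWH = 0.13      # assumed residential electricity rate
HOURS_PER_MONTH = 24 * 30

total_kw = (GPUS * WATTS_PER_GPU + SYSTEM_OVERHEAD_WATTS) / 1000
kwh = total_kw * HOURS_PER_MONTH
print(f"{kwh:.0f} kWh/month -> ${kwh * RATE_USD_PER_KWH:.0f} before air conditioning")
```

Roughly 1,500 kWh and about $200/month for the rig alone; add the air conditioning needed to reject ~2 kW of waste heat and a figure in the $400/month range is plausible.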


"Solo building taught me economics nobody talks about: Your time costs $X in accounting, $everything in opportunity cost. Spent X months learning infrastructure I could've rented for $2K/month. "Saved" $12K. Lost X months of potential revenue building something else. The math only works if you're playing a different game. I'm optimizing for ownership and knowledge not short-term ROI. But pretending there's no trade-off? That's cope"  
[X Link](https://x.com/sudoingX/status/1980663791596818602) [@sudoingX](/creator/x/sudoingX) 2025-10-21T15:53Z XXX followers, XX engagements
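A toy version of the build-vs-rent math the post is describing; the duration and forgone revenue below are hypothetical placeholders, since the actual figures are redacted in this export:

```python
# Toy build-vs-rent comparison: accounting "savings" vs opportunity cost.
# All inputs are hypothetical placeholders.

months_spent_building = 6           # hypothetical duration
rental_cost_per_month = 2_000       # $/month to rent equivalent infrastructure
forgone_revenue_per_month = 5_000   # hypothetical opportunity cost per month

accounting_savings = months_spent_building * rental_cost_per_month
opportunity_cost = months_spent_building * forgone_revenue_per_month

print(f"Accounting 'savings': ${accounting_savings:,}")
print(f"Opportunity cost:     ${opportunity_cost:,}")
print(f"Net after opportunity cost: ${accounting_savings - opportunity_cost:,}")
```

The "savings" only look positive if the forgone-revenue line is zero, which is the trade-off the post says people pretend away.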


"@HotAisle SCALE could change things if it works reliably. But compiler compatibility ≠ ecosystem equivalence. CUDA's real moat is XX years of community knowledge framework defaults and production battle testing. Still rooting for competition. Everyone wins if AMD becomes viable"  
[X Link](https://x.com/sudoingX/status/1981199675232571879) [@sudoingX](/creator/x/sudoingX) 2025-10-23T03:23Z XXX followers, XX engagements


"@amperlycom @redtachyon Fair correction. Meta's not fighting for profit. They're fighting for relevance in an AI race they're losing on mindshare despite winning on margins"  
[X Link](https://x.com/sudoingX/status/1981260883897831537) [@sudoingX](/creator/x/sudoingX) 2025-10-23T07:26Z XXX followers, XX engagements


"Main blocker for switching from NVIDIA: Issues: RDNA X installation: device detection fails, driver signatures missing, dependency hell. Newest AMD chips unsupported (Ryzen AI 300). Support dropped for recent cards unpredictably. Setup: CUDA = XX min, ROCm = 4-40 hours. Would switch if: X. All current-gen RDNA supported, X. pip install simplicity, X. 5+ year support windows. Hardware advantage is real (price/VRAM). Software experience is the gap"  
[X Link](https://x.com/sudoingX/status/1981197548531622310) [@sudoingX](/creator/x/sudoingX) 2025-10-23T03:14Z XXX followers, XXX engagements
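As a concrete example of the "device detection fails" step, a minimal sanity check against a ROCm build of PyTorch (assumes a ROCm torch wheel is installed; this is a generic check, not the poster's setup):

```python
# Minimal check that a ROCm build of PyTorch can see the GPU.
# On a CUDA build, torch.version.hip is None; ROCm builds reuse the
# torch.cuda namespace for HIP devices.

import torch

print("HIP runtime:", torch.version.hip)
print("Device visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device 0:", torch.cuda.get_device_name(0))
```

When this prints `HIP runtime: None` or `Device visible: False`, you are in the driver/dependency territory the post describes; on a working CUDA install the equivalent check passes almost immediately.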


"@andre_banandre This is the AMD hardware advantage we were discussing earlier. Beats NVIDIA on raw performance. Question remains: ecosystem maturity. If Strix Halo ships with stable ROCm support this changes the inference economics"  
[X Link](https://x.com/sudoingX/status/1981646989642391746) [@sudoingX](/creator/x/sudoingX) 2025-10-24T09:00Z XXX followers, XXX engagements


"Running GPUs 24/7 for 16+ months taught me: Power costs > hardware costs over time. PCIe bandwidth matters more than VRAM. Cooling is not optional. Infrastructure problems look exactly like software problems until they don't"  
[X Link](https://x.com/sudoingX/status/1980160481966104908) [@sudoingX](/creator/x/sudoingX) 2025-10-20T06:33Z XXX followers, XXX engagements
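To make the "PCIe bandwidth matters more than VRAM" point concrete, a rough transfer-time estimate for moving tensors between cards over the host link (nominal per-direction link speeds; the transfer sizes are hypothetical):

```python
# Rough time to move data between GPUs over PCIe at nominal link speeds.
# Sizes are illustrative; real multi-GPU inference moves activations,
# KV-cache pages, or weight shards every step.

PCIE3_X16_GBPS = 16   # ~GB/s per direction, nominal
PCIE4_X16_GBPS = 32   # ~GB/s per direction, nominal

def transfer_ms(size_gb: float, link_gb_per_s: float) -> float:
    return size_gb / link_gb_per_s * 1000

for size in (0.5, 2.0, 8.0):
    print(f"{size:>4} GB: PCIe3 {transfer_ms(size, PCIE3_X16_GBPS):.0f} ms, "
          f"PCIe4 {transfer_ms(size, PCIE4_X16_GBPS):.0f} ms")
```

Milliseconds per hop add up when every generation step crosses the bus, which is why a rig with plenty of VRAM can still be throttled by its slot layout.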


"AWS: down My GPU cluster: up This is the compute-convenience trade-off. When cloud infrastructure fails local infrastructure just... keeps working. Expensive upfront. Priceless when everyone else is locked out"  
[X Link](https://x.com/sudoingX/status/1980541801661641114) [@sudoingX](/creator/x/sudoingX) 2025-10-21T07:48Z XXX followers, XXX engagements


"@GergelyOrosz When AWS goes down local infrastructure keeps running. My local nodes: didn't notice My electricity bill: still coming My inference: uninterrupted This is why ownership > rental. The bottleneck isn't on my end"  
[X Link](https://x.com/sudoingX/status/1980644795946246181) [@sudoingX](/creator/x/sudoingX) 2025-10-21T14:38Z XXX followers, 1275 engagements


"@TheAhmadOsman Browser = distribution layer = data collection layer OpenAI wants default access to everything you do online. That's not about better AI. That's about owning the pipeline. Hard pass"  
[X Link](https://x.com/sudoingX/status/1980791036818669782) [@sudoingX](/creator/x/sudoingX) 2025-10-22T00:19Z XXX followers, XXX engagements


"@OpenAI Browser wars XXX but make it AI-native. The question: does AI benefit from being browser-integrated or does the browser benefit from AI data collection? Probably both. Leaning toward the latter"  
[X Link](https://x.com/sudoingX/status/1980791596238139496) [@sudoingX](/creator/x/sudoingX) 2025-10-22T00:21Z XXX followers, XXX engagements


"24 hours in AI: OpenAI ships browser (controls web layer) Anthropic ships desktop integration (augments your workspace) OpenAI: "We'll browse for you" Anthropic: "We'll work with you" One feels like platform lock-in. One feels like infrastructure I control. Guess which approach I prefer"  
[X Link](https://x.com/sudoingX/status/1980796529599475895) [@sudoingX](/creator/x/sudoingX) 2025-10-22T00:41Z XXX followers, XXX engagements


"Unless... the browser IS the path to AGI. Not technically. Economically. AGI requires: massive compute + massive data + massive distribution Browser = infinite training data from every user action Browser = platform lock-in that funds the compute Chrome made Google invincible. Atlas could do the same for OpenAI"  
[X Link](https://x.com/sudoingX/status/1980797638699282536) [@sudoingX](/creator/x/sudoingX) 2025-10-22T00:45Z XXX followers, 3848 engagements


"@loloelwolf97 @svpino Exactly. And that's Google's problem. They have everything to lose. OpenAI has everything to gain. Google can't cannibalize their ad business to chase AGI. OpenAI has no such constraint. Incumbents rarely beat insurgents when the game changes"  
[X Link](https://x.com/sudoingX/status/1980804539486978094) [@sudoingX](/creator/x/sudoingX) 2025-10-22T01:12Z XXX followers, XXX engagements


"@dolartrooper @svpino Correct. Because Google's constrained by their $200B+ ad business. Can't disrupt yourself when XX% of revenue depends on status quo. OpenAI has no such constraint. That's the advantage - freedom to actually use the data for AGI not ads"  
[X Link](https://x.com/sudoingX/status/1980804808673239078) [@sudoingX](/creator/x/sudoingX) 2025-10-22T01:14Z XXX followers, XXX engagements


"@cmiondotdev @svpino You got me there. Hard to claim independence when you're building on their foundation. Maybe the browser wars analogy doesn't work as cleanly as I thought. Time will tell if the AI layer is differentiated enough to matter"  
[X Link](https://x.com/sudoingX/status/1980805282721931272) [@sudoingX](/creator/x/sudoingX) 2025-10-22T01:15Z XXX followers, XXX engagements


"You're right - they can use it for both. But when priorities conflict ads win every time. That's where the revenue is. OpenAI has one goal: AGI. Every browser decision optimizes for that. Google has two goals: ads + AI. Guess which gets priority when they conflict? Having the option ≠ actually doing it"  
[X Link](https://x.com/sudoingX/status/1980805939122040847) [@sudoingX](/creator/x/sudoingX) 2025-10-22T01:18Z XXX followers, XX engagements


"Fair point on data quality for core AGI development. But browser data isn't for training the next GPT-5. It's for: Personalization at scale (how people actually use AI) Revenue to fund the $100B+ compute bills Distribution moat (can't switch without losing all context) AGI gets built in the lab. Browser funds the lab + controls deployment"  
[X Link](https://x.com/sudoingX/status/1980835139644191154) [@sudoingX](/creator/x/sudoingX) 2025-10-22T03:14Z XXX followers, XXX engagements


"@HyperTechInvest AMD investing $270M in Taiwan infrastructure while NVIDIA announces space datacenters. One company building cooling for 1500W chips. Other launching H100s to orbit. Different strategies. One's about margins. One's about headlines"  
[X Link](https://x.com/sudoingX/status/1981671254232039604) [@sudoingX](/creator/x/sudoingX) 2025-10-24T10:36Z XXX followers, XXX engagements


"@JonhernandezIA ATLAS is OpenAI's models in a Chromium-based browser wrapper, not "your" ChatGPT. "Your ChatGPT" = local model on hardware you own. No API. No company between you and inference"  
[X Link](https://x.com/sudoingX/status/1981702618658545869) [@sudoingX](/creator/x/sudoingX) 2025-10-24T12:41Z XXX followers, XXX engagements
