[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]
[@RihardJarc](/creator/twitter/RihardJarc)
"Ok all great but where is the MONEY going to come from OAI now with Stargate $NVDA and now $AMD has said to spend and build 26GW of data centers in the next few years. 1GW = $60B. So the total "committed" spend is now at $1.56T. In the last X years $AAPL $AMZN $META $MSFT and $GOOGL have generated a TOTAL of $1.4T of FCF in those X years"
[X Link](https://x.com/RihardJarc/status/1975160855550362098) [@RihardJarc](/creator/x/RihardJarc) 2025-10-06T11:26Z 62.9K followers, 264.6K engagements
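The committed-spend figure follows directly from the numbers quoted in the post (26 GW of announced capacity at roughly $60B per GW, compared against the ~$1.4T of combined big-tech free cash flow; the year count is masked in guest mode). A minimal sketch of that back-of-the-envelope check:

```python
# Back-of-the-envelope check of "committed spend" vs. available big-tech FCF,
# using only the figures quoted in the post above (the year count is masked in the source).

announced_capacity_gw = 26        # GW of data center capacity tied to the deals cited
cost_per_gw_usd_bn = 60           # ~$60B all-in build cost per GW, per the post
committed_spend_usd_tn = announced_capacity_gw * cost_per_gw_usd_bn / 1_000

big_tech_fcf_usd_tn = 1.4         # combined AAPL/AMZN/META/MSFT/GOOGL FCF cited in the post

print(f"Implied committed spend: ${committed_spend_usd_tn:.2f}T")  # -> $1.56T
print(f"Cited big-tech FCF:      ${big_tech_fcf_usd_tn:.2f}T")
print(f"Gap:                     ${committed_spend_usd_tn - big_tech_fcf_usd_tn:.2f}T")
```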
"$GOOGL is the only frontier LLM provider that has the full stack already in place & working (distribution AI model data own cloud TPUs). If $GOOGL wins the consumer LLM race it is not only bad for OAI but also for $NVDA. $GOOGL is the only one not totally beholden to $NVDA"
[X Link](https://x.com/RihardJarc/status/1978821636963885074) [@RihardJarc](/creator/x/RihardJarc) 2025-10-16T13:53Z 62.9K followers, 57.2K engagements
"Since the launch of GPT5 it does feel like OpenAI is trying to juice everything it has to increase engagement growth numbers etc. And one can understand why to meet their plans they will have to raise +$1T in the next few years. These numbers from db if correct might be part of the motivation. If growth starts to slow down it is not only OAI's problem but the whole industry's problem"
[X Link](https://x.com/RihardJarc/status/1978837302836084974) [@RihardJarc](/creator/x/RihardJarc) 2025-10-16T14:55Z 62.9K followers, 22.1K engagements
"The real AI data center shortage. Energy and Transformers: - "waiting times for orders that usually take a year are now XX months long. He doesn't think the delay times will improve next year; he believes they will worsen until 2030 with 2028 being the peak." - "Transformer prices have gone up from what was usually within $3M-$4 for XXX MVA-200 MVA to now $7-$8M or even $10M""
[X Link](https://x.com/RihardJarc/status/1978859248680612159) [@RihardJarc](/creator/x/RihardJarc) 2025-10-16T16:23Z 62.9K followers, 151.2K engagements
"The cloud business economics on renting GPUs neoclouds and $NVDA's chokehold has transformed it into a low-margin business: - " $ORCL has found it challenging to generate a gross margin of more than XX% from renting out Nvidia chips that came out one or two years ago according to a new internal Oracle document that hasnt been previously reported. " - "In the August quarter $ORCL lost money on AMD GPUs which became generally available in fall 2024 after properly accounting for depreciation the document shows" - "About half of $ORCL's GPU rental revenue in the August quarter came from Hopper"
[X Link](https://x.com/RihardJarc/status/1979178739784200507) [@RihardJarc](/creator/x/RihardJarc) 2025-10-17T13:32Z 62.9K followers, 54.3K engagements
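The dynamic the post describes, rentals that look fine before depreciation but lose money after it, can be illustrated with a small unit-economics sketch. All inputs below are hypothetical assumptions for illustration, not figures from the Oracle document; the point is only that the same fleet flips from positive to negative gross margin as the assumed useful life shortens.

```python
# Illustrative GPU-rental unit economics under straight-line depreciation.
# Every number here is a made-up assumption, not Oracle's (or anyone's) actual data.

def rental_gross_margin(server_cost, hourly_rate, utilization,
                        useful_life_years, hosting_cost_per_hour):
    """Gross margin of renting out one GPU server for a year."""
    rented_hours = 8760 * utilization
    revenue = hourly_rate * rented_hours
    depreciation = server_cost / useful_life_years
    cost = depreciation + hosting_cost_per_hour * rented_hours
    return (revenue - cost) / revenue

# Same hypothetical 8-GPU server, two useful-life assumptions:
for life in (5, 2):
    margin = rental_gross_margin(server_cost=250_000, hourly_rate=2.5 * 8,
                                 utilization=0.7, useful_life_years=life,
                                 hosting_cost_per_hour=6.0)
    print(f"{life}-year useful life -> gross margin {margin:.0%}")
```

With these made-up inputs the margin is roughly +29% on a 5-year schedule and roughly -32% on a 2-year schedule, which is the mechanism behind the depreciation caveat in the quote.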
"It's not about GPUs anymore; it's all about POWER. A comment from a $GOOGL employee working on datacenters: Getting GPUs and TPUs is not a bottleneck. "Power lack of available power reliable power has become the biggest bottleneck for us." on @AlphaSenseInc"
[X Link](https://x.com/RihardJarc/status/1973737827872784653) [@RihardJarc](/creator/x/RihardJarc) 2025-10-02T13:12Z 62.9K followers, 821.3K engagements
"My take on GPUs having a real 1-2 year life usefulness instead of +4 years is opening up many questions so let me explain in more detail: First all of the counter arguments are the following "but H100 A100 are still being used and they are 3-5 years old" "customers will use old GPUs for inference workloads" "big tech is using old GPUs for internal workloads" Here is why this is the wrong thinking: X. People forget that $NVDA has gone to a 1-year product cycle in 2024 (not sooner) so Blackwell is still the product of a 2-year product cycle. Before Blackwell Hopper -H100 H200 was the product"
[X Link](https://x.com/RihardJarc/status/1978466473317114027) [@RihardJarc](/creator/x/RihardJarc) 2025-10-15T14:22Z 62.9K followers, 337.1K engagements
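The earnings sensitivity behind the useful-life argument reduces to one line of arithmetic: the same GPU capex produces a very different annual depreciation charge depending on the schedule it is written off over. The fleet size below is a made-up illustration, not any company's actual figure.

```python
# Straight-line depreciation on the same hypothetical GPU capex under different
# useful-life assumptions (the $50B figure is illustrative only).

gpu_capex_usd_bn = 50.0   # hypothetical cumulative GPU spend

for useful_life_years in (6, 4, 2, 1):
    annual_depreciation = gpu_capex_usd_bn / useful_life_years
    print(f"{useful_life_years}-year life -> ${annual_depreciation:.1f}B/yr of depreciation")
```

Moving from the 4+ year assumption the post pushes back on to a 1-2 year life roughly doubles to quadruples the annual charge on the same spend.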
"An interesting comment from a Former $GOOGL employee: - The real excellence from TPUs is not coming from the chips themselves but the software & ecosystem that $GOOGL has optimized for the TPU. Could make a similar argument for $NVDA. found on @AlphaSenseInc"
[X Link](https://x.com/RihardJarc/status/1978547126763164143) [@RihardJarc](/creator/x/RihardJarc) 2025-10-15T19:42Z 62.9K followers, 31.2K engagements
"OpenAI still has a big majority of GenAI Traffic market share but $GOOGL's Gemini has taken significant market share over the last few months growing from the X% range to XXXX% today"
[X Link](https://x.com/RihardJarc/status/1979895399818354883) [@RihardJarc](/creator/x/RihardJarc) 2025-10-19T13:00Z 62.9K followers, 123.1K engagements
"Two main takeaways from the WSJ article on OAI and $NVDA. - You can see how highly $NVDA's Jensen's opinion is on $GOOGL's TPUs (Jensen wouldn't be concerned let alone call a client about any chips of competitors that he didn't find would be real alternatives) - If $NVDA ended up guaranteeing some of the OpenAI loans for data center build out it would be the final straw and another proof that we are OUT of organic capital. The AI technological cycle will continue and is early but the financial cycle wrapping this tech cycle is coming to its limits with these insane numbers"
[X Link](https://x.com/RihardJarc/status/1980619631225155630) [@RihardJarc](/creator/x/RihardJarc) 2025-10-21T12:58Z 62.9K followers, 78.9K engagements
"The CEO of $ARM explains where the bottlenecks for AI data centers are right now: - power & power regulation - $TSM "which is the only game in town" for 3nm and 2nm - HBM memory There is a reason why $TSM $MU and SK Hynix are a significant part of my portfolio"
[X Link](https://x.com/RihardJarc/status/1882778555844841879) [@RihardJarc](/creator/x/RihardJarc) 2025-01-24T13:12Z 62.8K followers, 93.4K engagements
"I have increased my portfolio positions in the semiconductor layer recently with $TSM $NVDA and $AMD buys as I believe we are severely underestimating where we are in this AI cycle. We are still in the era where we are looking at AI from a Search information retrieval lens (hence all the focus around LLMs replacing $GOOGL Search etc) but we will soon enter an era where the AI does things (is a worker assistant etc.). The TAM of information retrieval is well known as you can think of something like a $GOOGL Search but the TAM of doing things is an order of magnitude larger and investors have"
[X Link](https://x.com/RihardJarc/status/1948705604446347639) [@RihardJarc](/creator/x/RihardJarc) 2025-07-25T11:23Z 62.8K followers, 51.5K engagements
"Some interesting comments from a Former $GOOGL employee who worked on chips on the TPU/ $NVDA debate: X. According to him $GOOGL's TPUs are from 25-30% to 2x better compared to $NVDA depending on the AI use-case. X. He mentions that TPUs are not just built for inference but for both training and inference. The last generation V6 Ironwood is more specifically built for inference but the prior designs were built for both. X. He mentions that there are currently customers who actively use $GOOGL TPUs for both training and inference. The data that he saw says that TPUs have a 2-4% market share in"
[X Link](https://x.com/RihardJarc/status/1973031724645831000) [@RihardJarc](/creator/x/RihardJarc) 2025-09-30T14:26Z 62.8K followers, 135.9K engagements
"Big deal OpenAI and $AMD. OAI could take a XX% stake in $AMD. OAI will deploy up to X GW of $AMD GPUs over multiple years beginning in 2026 with X GW. Damn in $NVDA's case OAI buys $NVDA chips and $NVDA gets stake but in $AMD's case OAI buys $AMD but $AMD must give stake"
[X Link](https://x.com/RihardJarc/status/1975153776299544846) [@RihardJarc](/creator/x/RihardJarc) 2025-10-06T10:58Z 62.8K followers, 60.1K engagements
"Sold out of my $AMD $TSM and $NVDA position today. Yes I am a big long-term believer in AI and the effects of it but I am also a realist and a rational investor. I will be publishing an article this week sharing in detail my thoughts and some of the newest alternative data"
[X Link](https://x.com/RihardJarc/status/1975195823941263675) [@RihardJarc](/creator/x/RihardJarc) 2025-10-06T13:45Z 62.8K followers, 331K engagements
"The problem with $META is that Zuck has publicly stated that $META will have by far the greatest compute per researcher. Now that OAI has made a series of crazy and huge deals with $NVDA and $AMD Zuck will IMO double down and put a heavy CapEx number on their next earnings call. Zuck has recently said on a podcast that even if we are in a bubble he is willing to overspend a few hundred billion rather than lose the AI race. While $META shareholders want $META to succeed in AI trying to mirror the size of OAI's commitment to spending at this stage is IMO not reasonable and won't be met with"
[X Link](https://x.com/RihardJarc/status/1975579868248101102) [@RihardJarc](/creator/x/RihardJarc) 2025-10-07T15:11Z 62.8K followers, 96.7K engagements
"An interesting comment from a Former $META employee. ENERGY is the biggest bottleneck right now. Even if $META wants to spend $100-$150B on CapEx for AI infrastructure they can't. It is not just $NVDA. Transformers power equipment cooling equipment and the availability of power it is all limited right now. Schneider Electric is completely booked until 2030. Even if you have the money you can't spend it"
[X Link](https://x.com/RihardJarc/status/1950213045264797796) [@RihardJarc](/creator/x/RihardJarc) 2025-07-29T15:13Z 62.9K followers, 1.5M engagements
"I just published my cautious view on the current state of the AI market & why I have trimmed or sold many of my positions. - We are running out of organic capital - GPUs are a fast-depreciating asset - Valuations are factoring in a very small chance of things slowing down"
[X Link](https://x.com/RihardJarc/status/1976303077733888234) [@RihardJarc](/creator/x/RihardJarc) 2025-10-09T15:05Z 62.9K followers, 198K engagements
"Another announcement $AVGO and OpenAI deal for another 10GW of capacity. So add $500B-$600B of additional investments to the $1.5T number from OAI already. Sure why not go to $2T from $1.5T who cares"
[X Link](https://x.com/RihardJarc/status/1977723067976573325) [@RihardJarc](/creator/x/RihardJarc) 2025-10-13T13:08Z 62.9K followers, 51.1K engagements
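The increment here is consistent with the cost-per-GW assumption used earlier in the feed: 10 GW at roughly $50B-$60B per GW is $500B-$600B, which takes the running total from about $1.5T to roughly $2.0T-$2.1T. A quick sketch of that update:

```python
# Updating the running "committed spend" total with the 10 GW Broadcom/OpenAI announcement,
# using the ~$50B-$60B per GW range implied by the post.

prior_committed_usd_tn = 1.5      # running total cited in the post
new_capacity_gw = 10              # Broadcom / OpenAI capacity announced
cost_per_gw_usd_bn = (50, 60)     # implied per-GW cost range

low, high = (prior_committed_usd_tn + new_capacity_gw * c / 1_000 for c in cost_per_gw_usd_bn)
print(f"New committed total: ${low:.1f}T-${high:.1f}T")   # -> $2.0T-$2.1T
```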
"$GOOGL's TPUs will in the long run probably turn out to be one of their best investments in history. - $GOOGL processes over XXX quadrillion tokens/month - OpenAI's API processes XXX trillion/month $GOOGL with AI overviews and Gemini is already showing you how cost/effective you can run GenAI with the help of TPUs at enormous scale"
[X Link](https://x.com/RihardJarc/status/1977735859949482043) [@RihardJarc](/creator/x/RihardJarc) 2025-10-13T13:59Z 62.9K followers, 166K engagements
"An interview with a former OpenAI employee that came over the weekend confirming my hunch which I already wrote about in the article last week that the biggest friction between OpenAI and $MSFT is around infrastructure capacity: X. OAI requests were "leading to massive amounts of CapEx potential that $MSFT would be on the hook for" X. "Microsoft is getting a request from its biggest customer/investment that they want an almost inexhaustible amount of GPUs. That's fine in terms of predicting your demand for a one-year period from today say but what about in the year 2029 when you end up having"
[X Link](https://x.com/RihardJarc/status/1977743106997928445) [@RihardJarc](/creator/x/RihardJarc) 2025-10-13T14:27Z 62.9K followers, 118.1K engagements
"Some golden nuggets from an interview with a former $GOOGL employee on TPUs and the future of Search: X. According to him $NVDA is specifically bought to satisfy customer demand (on the cloud). $GOOGL doesn't use $NVDA GPUs for production use cases on their own products like Gemini. Every first-party service that $GOOGL has is powered by TPUs. X. $GOOGL uses TPUs because they are cheaper on a performance basis you don't have to rely on $NVDA and because the entire $GOOGL stack is optimized for TPUs all the way to the top. X. According to him $GOOGL is investing heavily to build out more TPUs"
[X Link](https://x.com/RihardJarc/status/1978808195234595280) [@RihardJarc](/creator/x/RihardJarc) 2025-10-16T13:00Z 62.9K followers, 404.3K engagements
"A semiconductor consultant working with hyperscalers shares some interesting insights on the current $NVDA/ASIC buildout: X. He thinks that some smaller data center deals might start hitting some headwinds because there is backlash starting in some areas from residents because they are worried about electrical grid capacity overloads increasing electric bills water use noise and traffic associated with the data center. This backlash is forcing some data centers to change their plans resulting in 6-12 month delays. On the other hand he does see significant tailwinds from these massive"
[X Link](https://x.com/RihardJarc/status/1980990124638023808) [@RihardJarc](/creator/x/RihardJarc) 2025-10-22T13:30Z 62.9K followers, 51.6K engagements