
![RihardJarc Avatar](https://lunarcrush.com/gi/w:24/cr:twitter::4765100357.png) Rihard Jarc [@RihardJarc](/creator/twitter/RihardJarc) on x 51K followers
Created: 2025-07-03 14:27:29 UTC

$AMD might be starting to catch up to $NVDA! I HIGHLY recommend this interview with a current high-ranking employee at $DELL, talking about $AMD vs $NVDA and the client demand:

1. According to him, $AMD's main problem is ROCm, which is immature compared to $NVDA's CUDA, and the second problem is networking. However, he is now receiving feedback from clients that the latest ROCm releases have become significantly more stable and that $AMD has addressed many of the bugs. He expects ROCm to be much more stable this time around, with fewer bugs and a smaller performance penalty.
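
To ground the software point, here is a minimal sketch (an illustration of mine, assuming a local PyTorch install, not something from the interview): PyTorch's ROCm builds expose the same torch.cuda API through HIP, so the same framework code runs on either vendor's stack, and what customers actually feel is the stability and tuning of the ROCm layers underneath.

```python
# Minimal sketch, not from the interview: PyTorch's ROCm build reuses the
# torch.cuda namespace (backed by HIP), so the same framework code targets
# either AMD or NVIDIA hardware. The gap the interviewee describes is in the
# stability and tuning of the layers underneath, not in code like this.
import torch

def describe_accelerator() -> str:
    """Report which GPU stack the installed PyTorch build is running on."""
    if not torch.cuda.is_available():
        return "no GPU backend available"
    # torch.version.hip is a version string on ROCm builds and None on CUDA builds.
    if torch.version.hip:
        backend = f"ROCm/HIP {torch.version.hip}"
    else:
        backend = f"CUDA {torch.version.cuda}"
    return f"{torch.cuda.get_device_name(0)} via {backend}"

if __name__ == "__main__":
    print(describe_accelerator())
    if torch.cuda.is_available():
        # The identical matmul runs unchanged on an MI300X (ROCm) or an H200 (CUDA).
        x = torch.randn(4096, 4096, device="cuda")
        print((x @ x).shape)
```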

2. On the networking side, $AMD still lags behind $NVDA due to a generation gap. The big disadvantage for $AMD on the training side is that they don't have a rack-scale system, and they won't have one until MI400X. However, he does mention that $AMD is working with $AVGO, $DELL, and others to ensure they have a solid scale-out. He thinks their scale-out will be as good as any of $NVDA's Mellanox gear.

3. There is a strong desire to develop alternatives to $NVDA; there isn't a customer in the world that would not leap at a viable one. He thinks that if $AMD can achieve XX% of H200 or B200 performance, they will see a substantial increase in demand. Much will depend on the stability of the ROCm software. If they can get ROCm right, he thinks they won't beat $NVDA or reach parity on performance, but they will still pick up substantial demand. If $AMD is at 3-5% market share today, he sees that doubling to 10%.

4. Now is the time for $AMD, as a lot of clients are starting to scale their inference applications, and $AMD has some advantages there. For single-node inferencing, their memory bandwidth is equivalent to a B300's because they have so much more memory. He thinks $AMD might end up being a very good option for inferencing.
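
The capacity-and-bandwidth argument for inference can be put into rough numbers. Below is a back-of-envelope sketch; the GPU figures and the 70B fp16 model are illustrative assumptions of mine, not numbers from the interview. It shows why, for memory-bound decoding, more HBM per GPU (fewer GPUs needed to hold the model) and higher bandwidth translate directly into single-node throughput.

```python
# Back-of-envelope sketch with illustrative numbers (not figures from the post):
# for memory-bound LLM decoding, each generated token streams roughly the full
# set of weights from HBM, so per-GPU throughput is capped near
# bandwidth / model size, and HBM capacity decides how few GPUs the model needs.

def decode_tokens_per_sec(params_billions: float, bytes_per_param: float,
                          hbm_bandwidth_tb_s: float) -> float:
    """Rough upper bound on single-GPU decode throughput for a memory-bound model."""
    model_bytes = params_billions * 1e9 * bytes_per_param
    return hbm_bandwidth_tb_s * 1e12 / model_bytes

# Hypothetical accelerators; capacities and bandwidths are assumptions for illustration.
accelerators = {
    "GPU A (192 GB HBM, 5.3 TB/s)": 5.3,
    "GPU B (141 GB HBM, 4.8 TB/s)": 4.8,
}
for name, bandwidth in accelerators.items():
    bound = decode_tokens_per_sec(params_billions=70, bytes_per_param=2,
                                  hbm_bandwidth_tb_s=bandwidth)
    print(f"{name}: ~{bound:.0f} tokens/s ceiling for a 70B fp16 model (batch size 1)")
```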

5. Another important thing he mentioned is that $NVDA might anger a lot of their cloud clients with the launch of Project Lepton, which is an orchestration platform for GPUs. He thinks $NVDA might want to drive a lot of traffic to the cloud clients it prefers or has an equity stake in, like $CRWV, and that this could give the hyperscalers even more motivation to build an alternative with $AMD.

6. He also thinks an interesting moment, even for the training market, will come when $NVDA comes out with Vera Rubin and $AMD comes out with MI400X, roughly in the same time frame. He thinks $AMD will still lag a bit, but it will be pretty close. $AMD might have an advantage with the MI400X stack's x86 front end, while $NVDA will go with their $ARM-based processor.

The recent development of ROCm might be a pivotal moment for $AMD. I will dive deeper into this topic and publish an article with insights on $AMD vs $NVDA in the coming week. You can subscribe to it in my bio.

![](https://pbs.twimg.com/media/Gu8IgKwWgAAeqg8.png)

XXXXXX engagements

![Engagements Line Chart](https://lunarcrush.com/gi/w:600/p:tweet::1940779433972941046/c:line.svg)

**Related Topics**
[nvda](/topic/nvda)
[$nvdas](/topic/$nvdas)
[$amds](/topic/$amds)
[$dell](/topic/$dell)
[$amd](/topic/$amd)
[advanced micro devices](/topic/advanced-micro-devices)
[stocks technology](/topic/stocks-technology)
[$nvda](/topic/$nvda)

[Post Link](https://x.com/RihardJarc/status/1940779433972941046)
