Antonio Linares [@alc2022](/creator/twitter/alc2022) on X — 90.4K followers

Created: 2025-07-14 15:01:21 UTC

LLMs are big. If you can store an LLM on chip, electrons travel a shorter distance from memory to compute, resulting in faster and cheaper inference. To do that, you need a large amount of memory. The graph below shows that the $AMD MI355X has 1.6× more memory than the $NVDA GB200.

**Related Topics** [nvda](/topic/nvda) [$amd](/topic/$amd) [llm](/topic/llm) [advanced micro devices](/topic/advanced-micro-devices) [stocks technology](/topic/stocks-technology) [$nvda](/topic/$nvda)

[Post Link](https://x.com/alc2022/status/1944774223366328681)
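The capacity argument above can be sketched as back-of-the-envelope arithmetic: weight memory is roughly parameter count times bytes per parameter, and a model "fits on chip" when that number is under the accelerator's HBM capacity. The capacities below are illustrative assumptions, not figures from the post: 288 GB reflects AMD's published HBM3E spec for the MI355X, and ~180 GB per GPU is assumed for the GB200 simply to match the 1.6× ratio the post cites.

```python
# Illustrative HBM capacities in GB (assumptions; GB200 figure chosen
# to match the post's stated 1.6x ratio, not an official spec).
HBM_GB = {"AMD MI355X": 288, "NVIDIA GB200 (per GPU)": 180}


def weight_memory_gb(n_params_billion: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in GB.

    n_params_billion * 1e9 params * bytes each, converted back to GB.
    """
    return n_params_billion * 1e9 * bytes_per_param / 1e9


def fits(model_gb: float, capacity_gb: float, headroom: float = 0.9) -> bool:
    """True if the weights fit while leaving ~10% headroom for
    KV cache and activations."""
    return model_gb <= capacity_gb * headroom


# Example: a hypothetical 70B-parameter model quantized to 8-bit (1 byte/param)
model_gb = weight_memory_gb(70, 1.0)  # 70 GB of weights
for chip, cap in HBM_GB.items():
    print(f"{chip}: 70B @ 8-bit fits on one chip: {fits(model_gb, cap)}")
```

Under these assumptions both chips hold a 70B 8-bit model on a single device, but a larger model (say ~260 GB of weights) would fit in 288 GB and not in 180 GB, which is the kind of gap the post is pointing at.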