[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

Inference

Nvidia's dominance of AI training is being tested as workloads shift toward AI inference, where AMD is showing competitive performance and market analysts predict a swing in its favor for inference workloads. Meanwhile, advancements in AI chips such as Tesla's AI5 and Google's AI accelerators continue to push the boundaries of performance and efficiency.

About Inference

AI Inference is the process of deploying trained AI models to make predictions or decisions on new data.
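
For context, here is a minimal sketch of the training/inference split in Python using scikit-learn; the dataset and model are illustrative placeholders only, not tied to any product or benchmark discussed on this page.

    # Minimal sketch (illustrative only): training vs. inference with scikit-learn.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    # Training: fit a model once on labeled historical data.
    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Inference: the trained model makes predictions on new, unseen samples.
    new_samples = [[5.1, 3.5, 1.4, 0.2], [6.7, 3.0, 5.2, 2.3]]
    print(model.predict(new_samples))  # e.g. [0 2] -> predicted class labels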

Engagements: XXXXXXXXX (24h)

[Engagements line chart and 24-hour time-series raw data]
Current Value: XXXXXXXXX
Daily Average: XXXXXXXXX
X Week: XXXXXXXXXX -X%
X Month: XXXXXXXXXX -XX%
X Months: XXXXXXXXXXX +46%
X Year: XXXXXXXXXXX +126%
1-Year High: XXXXXXXXXX on 2025-02-13
1-Year Low: XXXXX on 2025-11-18

Engagements by network (24h): News: XXXXX | Reddit: XXXXX | TikTok: XXXXX | X: XXXXXXXXX | YouTube: XXXXXXX

Mentions: XXXXX (24h)

[Mentions line chart and 24-hour time-series raw data]
Current Value: XXXXX
Daily Average: XXXXX
X Week: XXXXX -XX%
X Month: XXXXXX +18%
X Months: XXXXXXX +376%
X Year: XXXXXXX +359%
1-Year High: XXXXX on 2025-07-27
1-Year Low: XX on 2025-11-18

Mentions by network (24h): News: XX | Reddit: XXX | TikTok: XXX | X: XXXXXX | YouTube: XXXXX

Creators: XXXXX (24h)

[Creators line chart and 24-hour time-series raw data]
XXXXX unique social accounts have posts mentioning Inference in the last XX hours, which is down XXXX% from XXXXX in the previous XX hours.
Daily Average: XXXXX
X Week: XXXXX -XX%
X Month: XXXXXX -XX%
X Months: XXXXXX +212%
X Year: XXXXXX +190%
1-Year High: XXXXX on 2025-07-26
1-Year Low: XX on 2025-11-18

The most influential creators who mentioned Inference in the last XX hours

Creator Rank Followers Posts Engagements
@nebiustf X XXXXX X XXXXXXX
@farzyness X XXXXXXX X XXXXXXX
@GavinSBaker X XXXXXXX X XXXXXXX
@crusoe X XXX X XXXXXX
@inference_labs X XXXXXX XX XXXXXX
@alz_zyd_ X XXXXXX X XXXXXX
@ezrafeilden X XXXXX X XXXXXX
@zephyr_z9 X XXXXXX X XXXXXX
@Yvng_Joe X XXXXX X XXXXXX
@nebiusai XX XXXXXX X XXXXXX


Sentiment: XX%

[Sentiment line chart and 24-hour time-series raw data]
Current Value: XX%
Daily Average: XX%
X Week: XX% no change
X Month: XX% +1%
X Months: XX% +2%
X Year: XX% no change
1-Year High: XX% on 2025-10-30
1-Year Low: XX% on 2025-02-13

Most Supportive Themes:

Most Critical Themes:

Top Inference News

Top news links shared on social in the last XX hours


"FAR Labs Gathers AI and DePIN Builders for Dubai Networking Night Press release Bitcoin News"
News Link @BitcoinNews 2025-12-03T15:02Z 3.2M followers, 7531 engagements

"Seagate Technology: AIs Silent Winner Up XXX% - Heres What Happens Next (NASDAQ:STX) Seeking Alpha"
News Link @SeekingAlpha 2025-12-02T19:00Z 228.6M followers, 1666 engagements

"Google debuts AI chips with 4X performance boost secures Anthropic megadeal worth billions VentureBeat"
News Link @VentureBeat 2025-11-06T13:44Z 208.1M followers, 9303 engagements

"Analog optical computer for AI inference and combinatorial optimization Nature"
News Link @NaturePortfolio 2025-09-05T13:59Z 228.6M followers, 2581 engagements

Top Inference Social Posts

Top posts by engagements in the last XX hours


"It has been no secret that Nvidia $NVDA has dominated the AI training cycle as its data center revenue has risen 12X in ten quarters yet the forthcoming AI inference wave means the AI accelerator industry will widen beyond Nvidia and benefit other stocks such as Broadcom $AVGO"
X Link @Beth_Kindig 2025-12-11T18:45Z 169.9K followers, 19.6K engagements

"$AMD MI355X $NVDA B200 on ROCm XXX Comp🧵 @AMD Instinct MI355X GPU outperforming or matching the NVIDIA Blackwell B200 GPU in large language model (LLM) inference workloads using the open-source vLLM engine. The focus is on production-scale scenarios with high concurrency (4128 requests) emphasizing throughput latency and cost efficiency. AMD highlights hardware-software co-optimizations like AITER kernels and QuickReduce as key to MI355X's edge particularly for mixture-of-experts (MoE) models and long-context tasks. Key Takeaways: Up to 1.4x higher throughput on DeepSeek-R1 at scale."
X Link @MikeLongTerm 2025-12-11T17:48Z 19.3K followers, 3860 engagements
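
For reference, the post above compares GPUs on LLM inference served through the open-source vLLM engine. The sketch below shows vLLM's basic offline generation API in Python; the model name is an illustrative placeholder, and it does not reproduce the high-concurrency, production-scale setup described in the post.

    # Minimal vLLM offline-inference sketch (model name is a placeholder).
    from vllm import LLM, SamplingParams

    prompts = ["Explain the difference between AI training and AI inference."]
    params = SamplingParams(temperature=0.7, max_tokens=128)

    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # any Hugging Face model id
    for output in llm.generate(prompts, params):
        print(output.outputs[0].text)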

""We expect that when we launch in late 2026 this will be the most powerful combination of sensors and inference compute in consumer vehicles in North America." -@Rivian RIP FSD"
X Link @ZacksJerryRig 2025-12-11T17:36Z 920.7K followers, 40.9K engagements

"🧬 Introducing LLaDA2.0 for the first time scaled to 100B as a Discrete Diffusion LLMs (dLLM) Featuring 16B (mini) and 100B (flash) MoE versions. With 2.1x faster inference than AR models and superior performance in Code Math and Agentic tasks we prove that at scale Diffusion is not just feasibleit's stronger and faster. 🌊 #AI #LLaDA #Diffusion #OpenSource #dllm"
X Link @ant_oss 2025-12-11T17:05Z XXX followers, 216.6K engagements