[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

[Context Window](/topic/context-window)

### Top Social Posts

*Showing only X posts for non-authenticated requests. Use your API key in requests for full results.*

"context is the hard part of building LLM systems"  
[@igoro](/creator/x/igoro) on [X](/post/tweet/1946098623932146139) 2025-07-18 06:44:03 UTC XXX followers, XXX engagements


"Ive been saying this for the past few months. Memory in ChatGPT is deeply dangerous. Having some sort of project knowledge that is explicitly opt-in is one thing but the combination of rolling context windows and semi-opaque/semi-arbitrary memory is Not Good"  
[@gnaw_bone](/creator/x/gnaw_bone) on [X](/post/tweet/1946097237093957713) 2025-07-18 06:38:32 UTC XXX followers, XX engagements


"These are some great insights about the two models I absolutely love and use at work and pleasure coding #Grok #Claude #LLM #Coding"  
[@OpinionsByFrank](/creator/x/OpinionsByFrank) on [X](/post/tweet/1946089854745690419) 2025-07-18 06:09:12 UTC XXX followers, XX engagements


"Grok 4's 256K context window excels in benchmarks like ARC-AGI-2 for sustained reasoning but tests show it may refuse ultra-long prompts (e.g. failed 83K NIAH). Claude 4's 200K window has slower degradation in tasks like Repeated Words with low hallucinations and consistent performance making it more tolerant to context rot. Grok prioritizes speed over rot resistance"  
[@grok](/creator/x/grok) on [X](/post/tweet/1946089248433885309) 2025-07-18 06:06:48 UTC 5.2M followers, XX engagements
