# @cerebras Cerebras

Cerebras posts on X about AI, inference, and GLM, among other topics. They currently have [------] followers and [---] posts still getting attention, totaling [------] engagements in the last [--] hours.

### Engagements: [------] [#](/creator/twitter::751545566778171392/interactions)

- [--] Week [-------] -90%
- [--] Month [---------] +433%
- [--] Months [----------] +229%
- [--] Year [----------] +481%

### Mentions: [--] [#](/creator/twitter::751545566778171392/posts_active)

- [--] Week [--] no change
- [--] Month [--] +139%
- [--] Months [---] +31%
- [--] Year [---] +94%

### Followers: [------] [#](/creator/twitter::751545566778171392/followers)

- [--] Week [------] +4.20%
- [--] Month [------] +9.40%
- [--] Months [------] +14%
- [--] Year [------] +75%

### CreatorRank: [-------] [#](/creator/twitter::751545566778171392/influencer_rank)

### Social Influence

**Social category influence** [technology brands](/list/technology-brands) 18%, [stocks](/list/stocks) 8%, [finance](/list/finance) 3%, [social networks](/list/social-networks) 2%, [countries](/list/countries) 2%, [currencies](/list/currencies) 1%, [musicians](/list/musicians) 1%, [events](/list/events) 1%, [cryptocurrencies](/list/cryptocurrencies) 1%, [nfl](/list/nfl) 1%

**Social topic influence** [ai](/topic/ai) 14%, [inference](/topic/inference) 13%, [glm](/topic/glm) #41, [up to](/topic/up-to) 3%, [strong](/topic/strong) 3%, [open ai](/topic/open-ai) #2366, [we are](/topic/we-are) 3%, [realtime](/topic/realtime) 3%, [claude code](/topic/claude-code) 2%, [university of](/topic/university-of) 2%

**Top accounts mentioned or mentioned by** [@sarahchieng](/creator/undefined) [@andrewdfeldman](/creator/undefined) [@grok](/creator/undefined) [@ai_hyperbull](/creator/undefined) [@llmjunky](/creator/undefined) [@paulsolt](/creator/undefined) [@windsurf](/creator/undefined) [@cognition](/creator/undefined) [@jimsbr](/creator/undefined) [@dmsobol](/creator/undefined) [@zaiorg](/creator/undefined)
[@braintrust](/creator/undefined) [@cooolernemesis](/creator/undefined) [@learnwdaniel](/creator/undefined) [@sama](/creator/undefined) [@embirico](/creator/undefined) [@yontrtwt](/creator/undefined) [@aurexav](/creator/undefined) [@cantstopclick](/creator/undefined) [@iamadamreed](/creator/undefined)

**Top assets mentioned** [FilesCoins Power Cu (FILECOIN)](/topic/files) [Microsoft Corp. (MSFT)](/topic/microsoft) [Braintrust (BTRST)](/topic/braintrust) [Spotify Technology (SPOT)](/topic/$spot)

### Top Social Posts

Top posts by engagements in the last [--] hours

"Throwback to last year's hackathon with @cline where 1000+ hackers signed up to play around with the latest model on Cerebras, among them @archimagos, who wrote about his experience. Missed out? Join us tomorrow on Discord for our GLM [---] hackathon with $5000 and Cerebras Code for winners https://twitter.com/i/web/status/2014409721265193305" [X Link](https://x.com/cerebras/status/2014409721265193305) 2026-01-22T18:47Z 43K followers, [----] engagements "RSVP here: https://luma.com/pzfjr8qputm_source=x" [X Link](https://x.com/cerebras/status/2014409936139387246) 2026-01-22T18:48Z 42.8K followers, [----] engagements "Say hello to @dmsobol in Osaka on Jan 28" [X Link](https://x.com/cerebras/status/2015126857352667577) 2026-01-24T18:17Z 43K followers, [----] engagements "We're thrilled to announce the winner of the GLM4.7 Hackathon - X track, cohosted with @cline The winners @Maaztwts and @enflect_ win $2500 USD and Cerebras Code plans for building a desktop AI assistant in 24 hours using FAST inference. Built FlickAI in [--] hours at the @cerebras x @cline Vibe Coder Hackathon A desktop AI assistant that: Sees what's on your screen Wakes up instantly Helps with coding, emails, notes, anything and everything.
Built this with my teammate @enflect_ https://t.co/YDxVp911e0" [X Link](https://x.com/cerebras/status/2016295407912157615) 2026-01-27T23:41Z 43K followers, 21.9K engagements "RT @learnwdaniel: Super fast @openclaw with @cerebras is super ergonomic to use ngl" [X Link](https://x.com/cerebras/status/2017044850726883426) 2026-01-30T01:19Z 42.8K followers, [--] engagements "RT @learnwdaniel: http://x.com/i/article/2017390506582638592" [X Link](https://x.com/cerebras/status/2018411568896327721) 2026-02-02T19:49Z 41.7K followers, [--] engagements "RT @hi_im_isaac_: [--] claude code [--] mins [--] opencodes running @cerebras [--] secs" [X Link](https://x.com/cerebras/status/2020948266028859608) 2026-02-09T19:49Z 42.4K followers, [--] engagements "RT @stevekrouse: experimenting generating html as fast as possible [--]. using cerebras glm [---] @ 2k tps [--]. parent agent generates html scaff" [X Link](https://x.com/cerebras/status/2020970660009578580) 2026-02-09T21:18Z 42.3K followers, [--] engagements "GLM-4.7 from @Zai_org is live on Cerebras - Frontier intelligence for coding, tool-driven agents, and multi-turn reasoning - Record coding speed: [----] tokens per second (up to [----] TPS for other uses) - Strong price-performance: 10x higher than Sonnet 4.5" [X Link](https://x.com/cerebras/status/2009309525820444979) 2026-01-08T17:01Z 43.5K followers, 134.5K engagements "The @EPCCed at the University of Edinburgh, one of Europe's leading supercomputing centers, has developed new high-level libraries to program the Cerebras CS-3 and just open-sourced them for everyone else to use.
When they measured the performance against GPU and CPU clusters they found: [--] faster than [---] Nvidia A100 GPUs on acoustic wave modeling [--] faster than [---] nodes of a Cray EX supercomputer Their compiler-generated code outperforms hand-tuned implementations The A100 is obviously not the latest GPU. For context Nvidia B200 has [--] the dense FP32 flops of A100. So as a first-order" [X Link](https://x.com/cerebras/status/2017335566187303240) 2026-01-30T20:34Z 43.5K followers, [----] engagements "GLM [---] is one of the top open-source models on LM Arena, and it's going toe-to-toe with Claude Opus [---] and Gemini Pro. We sat down with @ml_angelopoulos, co-founder and CEO of @arena, to break down 8000+ developer votes: Within [--] points of Gemini Pro in math & coding Frontier-level multi-turn & instruction following The open-source model devs are actually switching to The best part? You can run it at 1500+ tokens/sec on Cerebras, for free" [X Link](https://x.com/anyuser/status/2018731211095732391) 2026-02-03T17:00Z 43.5K followers, 28.6K engagements "https://www.cerebras.ai/blog/openai-codexspark" [X Link](https://x.com/cerebras/status/2022021678856122523) 2026-02-12T18:55Z 43.6K followers, 11.9K engagements "OpenAI 🤝 Cerebras https://openai.com/index/cerebras-partnership/" [X Link](https://x.com/cerebras/status/2011531740804964855) 2026-01-14T20:11Z 43.6K followers, 1.6M engagements "GLM [---] is one of the strongest open-source coding models available, but most developers aren't prompting it correctly.
We put together [--] rules to help you get the most out of it: - Front-load instructions (it has a strong recency bias) - Use firm language: "must" and "strictly" over soft suggestions - Break complex tasks into smaller steps - Disable reasoning for simple tasks; enable it for hard ones - Use critic agents for code review, QA, and validation - Pair it with a frontier model for the hardest 10% of workloads - and more GLM [---] hits 96% on Tau Bench and 86% on GPQA Diamond. At 1500" [X Link](https://x.com/anyuser/status/2017319319697580414) 2026-01-30T19:29Z 43.5K followers, 634.5K engagements "Learn more: https://www.cerebras.ai/blog/glm-4-7-migration-guide" [X Link](https://x.com/anyuser/status/2017323418446926331) 2026-01-30T19:45Z 43.5K followers, [----] engagements "Cerebras Systems today announced the closing of a $1 billion Series H financing at a post-money valuation of approximately $23 billion. The round was led by Tiger Global with participation from Benchmark, Fidelity Management & Research Company, Atreides Management, Alpha Wave Global, Altimeter, AMD, Coatue, and [----] Capital, among others. For more information on Cerebras, visit https://www.cerebras.ai/press-release/cerebras-systems-raises-usd1-billion-series-h" [X Link](https://x.com/anyuser/status/2019082493626818922) 2026-02-04T16:15Z 43.5K followers, 77.3K engagements "Who's your agent? Codex [---] Opus [---] GLM [---]" [X Link](https://x.com/anyuser/status/2019481255050633320) 2026-02-05T18:40Z 43.5K followers, 25.7K engagements "Fast inference = 6x markup Don't be giving us any ideas Our teams have been building with a 2.5x-faster version of Claude Opus [---]. We're now making it available as an early experiment via Claude Code and our API" [X Link](https://x.com/cerebras/status/2020316477170282970) 2026-02-08T01:59Z 43.6K followers, 215.5K engagements "OpenAI Codex-Spark powered by Cerebras You can now just build things faster, at [----] tokens/s. https://twitter.com/i/web/status/2022021218208297302" [X Link](https://x.com/cerebras/status/2022021218208297302) 2026-02-12T18:53Z 43.6K followers, 281.6K engagements "@sama Cerebras 🤝 OpenAI" [X Link](https://x.com/cerebras/status/2022032711150669963) 2026-02-12T19:39Z 43.6K followers, [----] engagements "Collapse minutes to seconds, hours to minutes, and days to hours. That is what we are here to do" [X Link](https://x.com/cerebras/status/2022762593203699725) 2026-02-14T19:59Z 43.6K followers, 29.1K engagements "Cerebras is Thor's hammer for AI developers. We got you @steipete" [X Link](https://x.com/cerebras/status/2023519423345402220) 2026-02-16T22:06Z 43.6K followers, 13.1K engagements "@karpathy @sama make it smarter. we'll make it faster" [X Link](https://x.com/cerebras/status/1964037882785804712) 2025-09-05T18:48Z 40.2K followers, 15K engagements "Crush by @charmcli powered by Cerebras Code Pro 2000t/s looks good on you" [X Link](https://x.com/cerebras/status/1968063969899450491) 2025-09-16T21:26Z 36.9K followers, 26.2K engagements "On Friday, Cerebras withdrew our S-1. It had become stale and no longer reflected the current state of our business. Our business and financial position have evolved significantly, for the better, since our initial filing in 2024: In [----] we achieved record revenues. In [----] we are on track to achieve record revenues. Our inference business is growing exponentially. We closed an - $. including Fidelity, Atreides, and Tiger Global, who are major investors in IPOs as well as late-stage private companies. Withdrawing our S-1 was an .
We will file a new offering document that better describes and" [X Link](https://x.com/cerebras/status/1975029034875814259) 2025-10-06T02:43Z 37K followers, 51.6K engagements "Cerebras Inference self-serve is finally here Pay by credit card starting at $10 Run Qwen3 Coder, GPT OSS & more at 2000+ TPS 20x the speed of GPU-based model providers Go ahead. Melt our wafers. http://cloud.cerebras.ai" [X Link](https://x.com/cerebras/status/1977788509646971182) 2025-10-13T17:28Z 35.3K followers, 52.6K engagements "Wanna hear a cool story? Our Oklahoma City datacenter was designed around water, not air. To keep our wafer-scale systems running at peak performance we use a Tier III 6000-ton chilled-water plant inside a [------] sq ft F5-rated facility. Unlike traditional air-cooled systems, our chilled-water design moves heat quietly and efficiently even under the heaviest AI workloads. It keeps hundreds of millions of cores perfectly balanced as they drive real-time inference at massive scale. A closed-loop system recycles and stores water, maintaining stable cooling without drawing on external sources.even" [X Link](https://x.com/cerebras/status/1978191124805116171) 2025-10-14T20:08Z 34.9K followers, 305.5K engagements "Cerebras is now powering Cognition's latest code retrieval models directly in @windsurf Context retrieval has been one of the biggest bottlenecks in agentic coding. When you ask an agent to work on a large codebase it can spend 60% of its time just searching for relevant files. This retrieval process not only keeps you waiting but pollutes the context window with irrelevant snippets and racks up your inference bill. Cognition trained two specialized compact models using RL and deployed them on Cerebras for maximum speed.
Querying huge 1M line codebases like React, Vercel, and PyTorch swe-grep" [X Link](https://x.com/cerebras/status/1978874694825840679) 2025-10-16T17:24Z 34.9K followers, 58.2K engagements "SWE-grep is truly some inspired ML from @cognition. Take a 1M line codebase like React. With multiple fast inference calls on Cerebras it can fetch & explain relevant code in seconds. Here are a few measurements taken on React, Vercel, and PyTorch repos" [X Link](https://x.com/cerebras/status/1979274581916684486) 2025-10-17T19:53Z 34.9K followers, 16.2K engagements "What does 87B actually mean? It is NOT [--] experts with 7B active parameters per token. Turns out it's actually 13B active parameters. But wait, where does 13B come from? If you've ever tried to make sense of MoE math, our next post in the MoE [---] guide by @dmsobol (and interactive calculator) breaks it all down" [X Link](https://x.com/cerebras/status/1980287649551106291) 2025-10-20T14:59Z 35.4K followers, 312.4K engagements "In [--] minute, add evals, logging, and tracing to your inference stack so you can always pick the best model, improve your prompts, and catch bugs with @braintrustdata and Cerebras inference. Avoid the AI deleting your entire codebase this Halloween and remember our free tier gets you 1M+ free toks/day per model" [X Link](https://x.com/cerebras/status/1980331955498700981) 2025-10-20T17:55Z 35.4K followers, 299.1K engagements "full docs here: https://inference-docs.cerebras.ai/integrations/braintrust" [X Link](https://x.com/cerebras/status/1980333475111792717) 2025-10-20T18:01Z 35.4K followers, [----] engagements "Pay-as-you-go is now available on @awscloud Marketplace.
Use your AWS account, with no upfronts and no lock-ins, to serve frontier models 20x faster than leading GPUs" [X Link](https://x.com/cerebras/status/1980751820399026287) 2025-10-21T21:43Z 34.9K followers, [----] engagements "Cerebras inference growth on @huggingface Reminds us of a certain Pink Floyd song" [X Link](https://x.com/cerebras/status/1981466958836224440) 2025-10-23T21:05Z 35.5K followers, 21.4K engagements "Do you want to use REAP models on Cerebras inference API? Yes / No / Unsure" [X Link](https://x.com/cerebras/status/1981803884764143726) 2025-10-24T19:24Z 35.4K followers, [----] engagements "A beautiful Blackwell wafer from Nvidia. Too bad it's cut into pieces after the photo is taken" [X Link](https://x.com/cerebras/status/1983311015224979737) 2025-10-28T23:12Z 36.8K followers, 22.3K engagements "GPT-OSS-Safeguard from @OpenAI is here. Open-weight, safety-tuned, transparent reasoning. Now available in private preview at Cerebras speeds https://www.cerebras.ai/build-with-us" [X Link](https://x.com/cerebras/status/1983575502557270386) 2025-10-29T16:43Z 36.8K followers, [----] engagements "Today @cognition released SWE-1.5, the world's fastest coding agent, powered by Cerebras. SWE-1.5 achieves frontier-level coding ability, comparable to Sonnet [---] and surpassing GPT-5. Cerebras and Cognition engineers worked hand in hand over the past few weeks, training a custom draft model for super-fast spec decoding and building a custom priority system for smoother agent sessions. The result is the first coding agent that completes intricate software engineering tasks in 5-10 seconds instead of minutes. Going from start-stop coding to continuous flow is like using an AI computer ten years in" [X Link](https://x.com/cerebras/status/1983695672454074794) 2025-10-30T00:41Z 37.1K followers, 156.9K engagements "Cognition trained SWE-1.5 on Nvidia's flagship GB200 NVL72. Yet they switched over to Cerebras for inference.
Imagine how much better we must be to make that worthwhile" [X Link](https://x.com/cerebras/status/1983972069768622103) 2025-10-30T18:59Z 35.3K followers, 26.6K engagements "@devsharma_8 @windsurf @cognition Deus ex machina" [X Link](https://x.com/cerebras/status/1983979173216710933) 2025-10-30T19:27Z 35.2K followers, [---] engagements "Even frontier models fail all the time. The difference is we fail in [--] sec and with one more prompt you get the right answer. Time to success = [--] sec. On GPT Codex it takes 22min just to find out it failed. Just ran the same prompt with Windsurf's new SWE-1 model and gpt-5-codex. gpt-5-codex took over [--] minutes. SWE-1 took under [--] seconds. Neither worked" [X Link](https://x.com/cerebras/status/1984314157655867621) 2025-10-31T17:38Z 36.8K followers, 39.1K engagements "Just deployed additional speed optimizations for @cognition SWE-1.5. The fastest measured request was an eye-watering [----] token/s per Grafana dashboard" [X Link](https://x.com/cerebras/status/1984353299081150512) 2025-10-31T20:14Z 36.8K followers, 47.6K engagements "Behind the scenes with @swyx on how @windsurf built 20x faster search over your codebase" [X Link](https://x.com/cerebras/status/1986142493302526321) 2025-11-05T18:44Z 37K followers, 429.7K engagements "Cerebras beats Nvidia H100, but can it beat Blackwell? Blackwell inference endpoints are finally out, and it's fast. It runs GPT-OSS-120B at [---] tokens/s, leapfrogging H100 and Groq. Cerebras clocked in at [----] TPS - still #1. Looking forward to Rubin" [X Link](https://x.com/cerebras/status/1986569736659042433) 2025-11-06T23:01Z 36.9K followers, 33.4K engagements "Cerebras Code just got an UPGRADE.
It's now powered by GLM [---] Pro Plans ($50): 300k 1M TPM @ 24M Tokens/day Max Plans ($200): 400k 1.5M TPM @ 120M Tokens/day Fastest GLM provider on the planet at [----] tokens/s and at 131K context. Get yours before we run out" [X Link](https://x.com/cerebras/status/1986928779793736150) 2025-11-07T22:48Z 37.6K followers, 176.9K engagements "Introducing Cerebras for Nations, our global initiative to advance and scale sovereign AI. How it works: [--] We will build world-class AI supercomputers with our WSE-3 chips and CS-3 systems [--] Co-develop state-of-the-art models and deploy with the world's fastest inference [--] Invest locally in talent, education, and AI policy" [X Link](https://x.com/cerebras/status/1988342370308427809) 2025-11-11T20:25Z 37.1K followers, 14.2K engagements "@esthor we are glad you enjoy high density inference" [X Link](https://x.com/cerebras/status/1988415818863202711) 2025-11-12T01:17Z 36.9K followers, [--] engagements "See you in St. Louis for @Supercomputing 25" [X Link](https://x.com/cerebras/status/1989406562457371007) 2025-11-14T18:54Z 37.1K followers, [----] engagements "GLM-4.6 is now live on Cerebras - [----] TPS: 17x faster than Claude - World's #1 tool calling model - Tasteful and reliable codegen - Self-serve API starting at $10 Try it now: http://cloud.cerebras.ai" [X Link](https://x.com/cerebras/status/1990870044059054524) 2025-11-18T19:49Z 37.1K followers, 32.6K engagements "Learn more: https://www.cerebras.ai/blog/glm" [X Link](https://x.com/cerebras/status/1990871222910136617) 2025-11-18T19:54Z 37.1K followers, [----] engagements "After [--] years at NVIDIA, James Wang left and joined @cerebras. In this Big Chip Club episode, @draecomino breaks down the bottlenecks of NVIDIA GPUs and what's keeping them OUT OF first place.
Drop your follow-up questions below" [X Link](https://x.com/cerebras/status/1991586054290518499) 2025-11-20T19:14Z 37.1K followers, 411.6K engagements "Cerebras inference is now live in @Microsoft Marketplace @Azure customers can now pair the Marketplace ecosystem with Cerebras speed, scale & quality for real-time search, voice, coding agents & more" [X Link](https://x.com/cerebras/status/1991951622805880873) 2025-11-21T19:27Z 37.1K followers, [----] engagements "Let's go https://marketplace.microsoft.com/en-us/product/saas/12372016.cerebras-cloud-fast-inference-as-a-servicetab=overview" [X Link](https://x.com/cerebras/status/1991951625209143335) 2025-11-21T19:27Z 37.1K followers, [----] engagements "Viva Las Cerebras! We are at AWS re:Invent all week. Come see live demos, meet the team, and spin up Cerebras in minutes using @awscloud Marketplace. Booth [----] in the Venetian" [X Link](https://x.com/cerebras/status/1995961544983855403) 2025-12-02T21:01Z 37.2K followers, [----] engagements "Cerebras is showcasing nine new papers at NeurIPS [----], spanning pretraining to inference: 1. CODA: orchestrate 32B models to beat 235B models 2. Calibrated Reasoning: 1-3x token savings in best-of-n sampling 3. DREAM: [---] faster VLM inference using spec decode 4. CompleteP: hyperparameter transfer from [--] to [---] layers 5. Power Lines: scaling laws for weight decay and batch size 6. PTPP-Aware Adaptation: predict adaptation performance before training 7. Prot42: generate protein binders from sequence alone, matching AlphaProteo 8. Chem42: design drug-like molecules for specific protein targets with 6x fewer parameters" [X Link](https://x.com/cerebras/status/1996686383440969992) 2025-12-04T21:01Z 37.3K followers, [----] engagements "Introducing Jais-2 - a major leap for Arabic AI. Co-developed with @G42ai, Inception, and @mbzuai, Jais-2 is a family of Arabic models built end-to-end on
Cerebras from pretraining to inference. Jais-2 70B runs at [----] tokens/sec, 20x faster than leading LLMs. It brings deep cultural grounding and dialect awareness to more than [---] million Arabic speakers worldwide. Learn more: https://www.cerebras.ai/blog/jais2" [X Link](https://x.com/cerebras/status/1998535049315225753) 2025-12-09T23:27Z 37.3K followers, 16.2K engagements "Some of our top customers are still choosing Llama [---] 8B. For a while we jumped to whatever hot new model was taking up our Twitter feed. But as we are quickly realizing, to create a SOTA product you need a model that fits your exact use case. Here's what our customers tell us: a lot of the legwork is actually around prompting there's an art to selecting and combining multiple models benchmarks only show part of the picture. You have to understand the unique quirks of each model. Especially as model releases become more and more frequent we need a clear way to evaluate new models. We" [X Link](https://x.com/cerebras/status/1998854471708004454) 2025-12-10T20:36Z 37.3K followers, 345.5K engagements "A five-person Cerebras team approached one of the world's largest semiconductor manufacturers with a radical proposal, and TSMC backed the vision. We recently won TSMC North America's Innovation Zone Demo of the Year, earning the highest number of attendee votes across [--] showcased technologies" [X Link](https://x.com/cerebras/status/1999202011758121187) 2025-12-11T19:37Z 37.3K followers, [----] engagements "Tired of waiting for minutes for your AI coding assistant? @cognition built agents that search, reason & edit code in a few seconds. Powered by Cerebras, running at 1K tokens/sec with frontier-level accuracy" [X Link](https://x.com/cerebras/status/1999540379553611955) 2025-12-12T18:02Z 37.5K followers, 16.9K engagements "The most damaging AI failures don't make headlines; they drive away users.
Braintrust CEO @ankrgyl explains why context-breaking models kill UX and how evals turn user complaints into lasting fixes. If you're building an AI product, this is the foundation of a reliable user experience. You can easily achieve this using tools like @cerebras and @braintrust https://inference-docs.cerebras.ai/integrations/braintrust" [X Link](https://x.com/cerebras/status/2001039090461163732) 2025-12-16T21:17Z 37.3K followers, [--] engagements "You're right that newer models offer impressive gains, and for many teams it does make sense to upgrade quickly. What we're seeing across our customer base, though, is that the best model isn't always the newest model. In real production workflows, teams prioritize whichever model delivers the highest net performance for their specific use case. This means evaluating compatibility with existing prompts, predictable behavior on edge cases, stability across long-running workloads, and cost efficiency at scale. The point isn't to avoid upgrading; it's to evaluate upgrades with specificity on actual use" [X Link](https://x.com/cerebras/status/2001052079117492518) 2025-12-16T22:09Z 37.3K followers, [--] engagements "DELOITTE TO REFUND AUSTRALIAN GOVERNMENT AFTER AI HALLUCINATIONS FOUND IN REPORT It's not about choosing the right model anymore. Most AI products don't fail because the model is bad. They fail because the system drops context or overwrites user work. Top teams take complaints like this and turn them directly into evals so regressions never ship twice. @ankrgyl explains what reliable AI UX actually looks like and how teams use Cerebras + Braintrust to turn user complaints into lasting fixes.
Build with @cerebras and @braintrust" [X Link](https://x.com/cerebras/status/2001444556102078925) 2025-12-18T00:09Z 37.9K followers, [----] engagements "A proud moment today: we have signed an MOU with the @ENERGY to deepen collaboration on next-generation AI and HPC in support of the @WhiteHouse Genesis Mission. This builds on years of real work and a strong partnership with the national labs. We're just getting started" [X Link](https://x.com/cerebras/status/2001755936105361652) 2025-12-18T20:46Z 40.3K followers, 13.5K engagements "RT @dmsobol: MoE models are compute efficient. Everyone knows that. But they are not parameter efficient. Why? Our experts learn redundant" [X Link](https://x.com/cerebras/status/2008651275773399550) 2026-01-06T21:25Z 40.2K followers, [--] engagements "Inference speed determines what AI can actually do. [----] year made that clear. Real-time inference moved from proof to production. Not benchmarks. Not demos. With measurable results for our partners and customers. Read @andrewdfeldman's reflection on [----] and groove with us in [----]. https://twitter.com/i/web/status/2008930460357857535" [X Link](https://x.com/cerebras/status/2008930460357857535) 2026-01-07T15:55Z 40.2K followers, [----] engagements "Read our blog: https://www.cerebras.ai/blog/glm-4-7" [X Link](https://x.com/anyuser/status/2009309527267594341) 2026-01-08T17:01Z 40.3K followers, [----] engagements "RT @SarahChieng: We are cooking quite the storm of events in [----] Cafe Compute continues to go global Omakase hackathons Barrys w" [X Link](https://x.com/anyuser/status/2009386690373157101) 2026-01-08T22:08Z 40.3K followers, [--] engagements "RT @SarahChieng: To put it bluntly the Chinese model labs have officially caught up.
The latest release from @zai_org GLM-4.7 marks the" [X Link](https://x.com/cerebras/status/2009662877981122656) 2026-01-09T16:25Z 40.3K followers, [--] engagements "@Mayhem4Markets @Zai_org @vithursant19" [X Link](https://x.com/cerebras/status/2009722649891807514) 2026-01-09T20:23Z 40.1K followers, [----] engagements "@Chris65536 @Zai_org yes, soon on OpenRouter. heard on prompt caching" [X Link](https://x.com/cerebras/status/2009723899068723646) 2026-01-09T20:28Z 40.5K followers, [---] engagements "RT @p0: New cookbook: A real-time fact-checking app to showcase Parallel's high-accuracy web search and @Cerebras blazing fast inference" [X Link](https://x.com/cerebras/status/2009750764093161736) 2026-01-09T22:14Z 40.3K followers, [--] engagements "RT @SarahChieng: A common misconception in AI right now is that everything we're excited about is new. When I graduated from MIT in 2023" [X Link](https://x.com/cerebras/status/2009798195824734573) 2026-01-10T01:23Z 40.3K followers, [--] engagements "Everyone talks about our hardware @Cerebras. Few notice the software. Ryan Loney breaks down the hidden optimizations powering [--] faster LLM inference than GPUs: speculative decoding, token reuse, and why we're just getting started. Watch the full story here" [X Link](https://x.com/cerebras/status/2010855964682154094) 2026-01-12T23:26Z 40.3K followers, 13.7K engagements "RT @SarahChieng: What if [--] line of code could [--] your LLM inference speed? Predicted Outputs is a new software technique that can drastica" [X Link](https://x.com/anyuser/status/2011181445369200806) 2026-01-13T20:59Z 40.2K followers, [--] engagements "72% of people don't trust the internet. AI-generated false information is polluting even trusted sources. It's impossible to sort through what is real. With this cookbook you can build a fact checker that scans web-pages and verifies every claim. All at the speed of light, powered by @cerebras.
Test it on Hacker News, the latest NeurIPS papers, the Wall Street Journal, X/Twitter. Built with @p0 Search API, @cerebras Inference, and @OpenAI's gpt-oss-120B Comment what questionable claims you catch https://twitter.com/i/web/status/2011194998516228466" [X Link](https://x.com/cerebras/status/2011194998516228466) 2026-01-13T21:53Z 40.3K followers, 10.2K engagements "Get the starter code here: https://cookbook.openai.com/articles/gpt-oss/build-your-own-fact-checker-cerebras" [X Link](https://x.com/cerebras/status/2011195002261492177) 2026-01-13T21:53Z 40.3K followers, [----] engagements "RT @andrewdfeldman: What can these guys teach us about data center cooling? A lot, it turns out. If you slip into icy water you are dead i" [X Link](https://x.com/cerebras/status/2012230946578223418) 2026-01-16T18:30Z 40.3K followers, [--] engagements "@andrewdfeldman Buffalo Bills fans all of the sudden" [X Link](https://x.com/cerebras/status/2012240917613220009) 2026-01-16T19:09Z 40.1K followers, [----] engagements "RT @sama: Very fast Codex coming" [X Link](https://x.com/cerebras/status/2012263424273871279) 2026-01-16T20:39Z 40.3K followers, [---] engagements "@karpathy @Rasmic speed = intelligence https://www.cerebras.ai/blog/the-cerebras-scaling-law-faster-inference-is-smarter-ai" [X Link](https://x.com/cerebras/status/2012330507766559037) 2026-01-17T01:05Z 40.1K followers, [----] engagements "If you think Cerebras is just about speed, you do not understand Cerebras. Just as mass can be converted to energy, speed can be converted to intelligence.
It's the natural consequence of test-time compute scaling" [X Link](https://x.com/cerebras/status/2012333629389537495) 2026-01-17T01:18Z 40.5K followers, 105.3K engagements "Read more https://www.cerebras.ai/blog/the-cerebras-scaling-law-faster-inference-is-smarter-ai" [X Link](https://x.com/cerebras/status/2012335659088760965) 2026-01-17T01:26Z 40.3K followers, [----] engagements "@hi_im_isaac_ why hello" [X Link](https://x.com/cerebras/status/2013654636238868731) 2026-01-20T16:47Z 40.3K followers, [----] engagements "RT @SarahChieng: Cafe Compute is now open for the year Cafe Compute is the first late-night coffeeshop for developers researchers wr" [X Link](https://x.com/cerebras/status/2013694500426915975) 2026-01-20T19:25Z 40.3K followers, [--] engagements "RT @SarahChieng: GLM [---] is the top ranked OS model on leaderboards. But let's see if it's worth the hype. This Friday @cerebras and @cli" [X Link](https://x.com/cerebras/status/2014065855651574182) 2026-01-21T20:01Z 40.3K followers, [--] engagements "RT @andrewdfeldman: The fastest AI processor in the world the @cerebras WSE is having fun in Davos" [X Link](https://x.com/cerebras/status/2014363574014038357) 2026-01-22T15:44Z 40.4K followers, [--] engagements "Blog: https://trilogyai.substack.com/i/170248764/cerebras-cline-hackathon-implementation" [X Link](https://x.com/cerebras/status/2014409802206871626) 2026-01-22T18:48Z 40.5K followers, [----] engagements "We're proud to work with @NETL_DOE to enable scientific modeling several hundred to thousands of times faster and with several thousand times lower energy consumption than traditional distributed computing systems like the JOULE [---] supercomputer #AI https://hubs.li/Q018Rkwk0" [X Link](https://x.com/cerebras/status/1519015700211380224) 2022-04-26T18:08Z 41.2K
followers, [--] engagements "Very proud to enable @NETL_DOE's researchers to greatly accelerate a key computational fluid dynamics workload more than [---] times faster and at a fraction of the power consumption than the JOULE [---] supercomputer #HPC #scientificdiscovery https://hubs.li/Q01d8XqX0 https://hubs.li/Q01d8XqX0" [X Link](https://x.com/cerebras/status/1536409373764894721) 2022-06-13T18:05Z 41.2K followers, [--] engagements "The thrill the terror and a breakthrough. ๐ Listen here: Apple Podcasts: Spotify: Thank you @marcelsalathe and the @EPFL_en AI Center for this fascinating conversation with Jean Philippe Fricker. https://bit.ly/3CdZuKU https://bit.ly/4aobCFV https://bit.ly/3CdZuKU https://bit.ly/4aobCFV" [X Link](https://x.com/cerebras/status/1880349966016868840) 2025-01-17T20:22Z 43.1K followers, [----] engagements "K2 Think is now available and runs fastest on Cerebras Inference at [----] TPS on Cerebras 20x faster than GPU. Launched by @mbzuai and @G42ai K2 Think is a world leading open-source reasoning model that beats GPT OSS 120B and DeepSeek on math and reasoning" [X Link](https://x.com/cerebras/status/1965808643929993687) 2025-09-10T16:04Z 41.6K followers, 30.9K engagements "Sonnet [---] can code for [--] hours. That's [--] GPU hours. aka [--] Cerebras hours tops. Claude Sonnet [---] runs autonomously for 30+ hours of coding The record for GPT-5-Codex was just [--] hours. Whats Anthropics secret sauce https://t.co/0cGKtoSviy Claude Sonnet [---] runs autonomously for 30+ hours of coding The record for GPT-5-Codex was just [--] hours. Whats Anthropics secret sauce https://t.co/0cGKtoSviy" [X Link](https://x.com/anyuser/status/1973156073289973905) 2025-09-30T22:40Z 41.7K followers, 28.4K engagements "RT @andrewdfeldman: What is it like to be a delegate at Davos Its like watching a trade show swallow a small Swiss village. 
Every store" [X Link](https://x.com/cerebras/status/2014795863072124985) 2026-01-23T20:22Z 43K followers, [--] engagements "RT @SarahChieng: MoEs were invented in [----]. Jacobs Jordan Nowlan and Hinton proposed MoEs in [----]. Lets have a bunch of specialized" [X Link](https://x.com/anyuser/status/2016659265289932828) 2026-01-28T23:46Z 41.7K followers, [--] engagements "RT @SarahChieng: Cafe Compute is officially open for the year We're coming soon to Boston NYC Seattle Austin London Miami Brazil an" [X Link](https://x.com/cerebras/status/2016974696840155402) 2026-01-29T20:40Z 43K followers, [--] engagements "The @EPCCed at the University of Edinburghone of Europes leading supercomputing centershas developed new high-level libraries to program the Cerebras CS-3 and just open-sourced them for everyone else to use. Their results: [---] faster than [---] Nvidia A100 GPUs on acoustic wave modeling - [--] faster than [---] nodes of a Cray EX supercomputer - Their compiler-generated code outperforms hand-tuned implementations The A100 is obviously not the latest GPU. For context Nvidia B200 has [--] the dense FP32 flops of A100. So as a first-order approximation a single WSE-3 delivered the performance of 450" [X Link](https://x.com/cerebras/status/2017310831470645687) 2026-01-30T18:55Z 40.6K followers, [----] engagements "RT @SarahChieng: The Year of Latency Debt (And How Big Tech Is Paying It Down) In the past six months the four most important companies i" [X Link](https://x.com/cerebras/status/2017650039280308501) 2026-01-31T17:23Z 43K followers, [--] engagements "Up to [----] tokens/sec of single user generation speed is the number measured by 3rd party Artificial Analysis. Routing to providers such as OpenRouter can add some queue'ing latency and OpenRouter uses different throughput calculations. For raw speed metrics our direct API at is the best way to benchmark. 
http://cloud.cerebras.ai http://cloud.cerebras.ai" [X Link](https://x.com/cerebras/status/2018796175596617992) 2026-02-03T21:18Z 41.2K followers, [--] engagements "Speed and rate limits are different things. Speed (also known as single user latency) means that each response flows back faster. Rate limits cap total throughput over time (ie. the number of requests and tokens available within a time window to free accounts). For heavy agentic workflows you may hit limits faster because it's fast -- meaning you hit rate limits faster because you are processing more tokens in less time. To get higher rate limits you can upgrade your subscription to Code Max or Developer tier (pay-as-you-go) here: http://cerebras.ai/pricing http://cerebras.ai/pricing" [X Link](https://x.com/cerebras/status/2018796876536111186) 2026-02-03T21:20Z 41.2K followers, [--] engagements Limited data mode. Full metrics available with subscription: lunarcrush.com/pricing
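The speed-versus-rate-limits explainer above reduces to simple arithmetic: a fixed daily token budget divided by generation speed gives the maximum continuous-use time before the limit bites. A minimal sketch, with hypothetical plan numbers (not actual Cerebras tier limits):

```python
def hours_to_exhaust_budget(daily_token_budget: int, tokens_per_second: float) -> float:
    """Continuous generation time (in hours) before a daily token budget is spent."""
    seconds = daily_token_budget / tokens_per_second
    return seconds / 3600

# Hypothetical numbers for illustration only: a 24M-token daily budget,
# consumed at 2,000 tokens/s, lasts 12,000 seconds -- about 3.3 hours of
# non-stop generation. Faster inference spends the same budget sooner.
hours = hours_to_exhaust_budget(24_000_000, 2_000)
```

This is why a fast backend can feel more rate-limited under heavy agentic use: the budget is identical, but it is consumed in less wall-clock time.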
"We're thrilled to announce the winner of the GLM4.7 Hackathon - X track cohosted with @cline The winners @Maaztwts and @enflect_ win $2500 USD and Cerebras Code plans for building a desktop AI assistant in 24hr hours using FAST inference. Built FlickAI in [--] hours at the @cerebras x @cline Vibe Coder Hackathon A desktop AI assistant that: Sees whats on your screen Wakes up instantly Helps with coding emails notes anything and everything. Built this with my teammate @enflect_ https://t.co/YDxVp911e0 Built FlickAI in [--] hours at the @cerebras x @cline Vibe Coder Hackathon A desktop AI assistant"
X Link 2026-01-27T23:41Z 43K followers, 21.9K engagements
"RT @learnwdaniel: Super fast @openclaw with @cerebras is super ergonomic to use ngl"
X Link 2026-01-30T01:19Z 42.8K followers, [--] engagements
"RT @learnwdaniel: http://x.com/i/article/2017390506582638592 http://x.com/i/article/2017390506582638592"
X Link 2026-02-02T19:49Z 41.7K followers, [--] engagements
"RT @hi_im_isaac_: [--] claude code [--] mins [--] opencodes running @cerebras [--] secs"
X Link 2026-02-09T19:49Z 42.4K followers, [--] engagements
"RT @stevekrouse: experimenting generating html as fast as possible [--]. using cerebras glm [---] @ 2k tps [--]. parent agent generates html scaff"
X Link 2026-02-09T21:18Z 42.3K followers, [--] engagements
"GLM-4.7 from @Zai_org is live on Cerebras - Frontier intelligence for coding tool-driven agents and multi-turn reasoning - Record coding speed: [----] tokens per second (up to [----] TPS for other uses) - Strong price-performance: 10x higher than Sonnet 4.5"
X Link 2026-01-08T17:01Z 43.5K followers, 134.5K engagements
"The @EPCCed at the University of Edinburghone of Europes leading supercomputing centershas developed new high-level libraries to program the Cerebras CS-3 and just open-sourced them for everyone else to use. When they measured the performance against GPU and CPU clusters they found: [--] faster than [---] Nvidia A100 GPUs on acoustic wave modeling [--] faster than [---] nodes of a Cray EX supercomputer Their compiler-generated code outperforms hand-tuned implementations The A100 is obviously not the latest GPU. For context Nvidia B200 has [--] the dense FP32 flops of A100. So as a first-order"
X Link 2026-01-30T20:34Z 43.5K followers, [----] engagements
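The "first-order approximation" the post describes is just a FLOPs-ratio rescaling: if one system matched N older GPUs, divide N by the newer GPU's FLOPs advantage. A tiny sketch, using hypothetical numbers since the post's exact figures are redacted above:

```python
def gpu_equivalents_rescaled(a100_equivalents: float, flops_ratio: float) -> float:
    """If one system matched `a100_equivalents` A100s, and a newer GPU has
    `flops_ratio` times the A100's dense FP32 FLOPs, the same measured result
    corresponds to this many of the newer GPUs (first-order, FLOPs-only)."""
    return a100_equivalents / flops_ratio

# Hypothetical illustration: a result matching 450 A100s, rescaled against a
# GPU with 2.5x the A100's dense FP32 FLOPs, corresponds to ~180 newer GPUs.
equivalents = gpu_equivalents_rescaled(450, 2.5)
```

Note this ignores memory bandwidth, interconnect, and utilization, which is exactly why it is only a first-order estimate.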
"GLM [---] is one of the top open-source models on LM Arenaand it's going toe-to-toe with Claude Opus [---] and Gemini Pro. We sat down with @ml_angelopoulos co-founder and CEO of @arena to break down 8000+ developer votes: Within [--] points of Gemini Pro in math & coding Frontier-level multi-turn & instruction following The open-source model devs are actually switching to The best part You can run it at 1500+ tokens/sec on Cerebrasfor free"
X Link 2026-02-03T17:00Z 43.5K followers, 28.6K engagements
"https://www.cerebras.ai/blog/openai-codexspark https://www.cerebras.ai/blog/openai-codexspark"
X Link 2026-02-12T18:55Z 43.6K followers, 11.9K engagements
"OpenAI๐คCerebras https://openai.com/index/cerebras-partnership/ https://openai.com/index/cerebras-partnership/"
X Link 2026-01-14T20:11Z 43.6K followers, 1.6M engagements
"GLM [---] is one of the strongest open-source coding models availablebut most developers aren't prompting it correctly. We put together [--] rules to help you get the most out of it: - Front-load instructions (it has a strong recency bias) - Use firm language: "must" and "strictly" soft suggestions - Break complex tasks into smaller steps - Disable reasoning for simple tasks enable it for hard ones - Use critic agents for code review QA and validation - Pair it with a frontier model for the hardest 10% of workloads - and more GLM [---] hits 96% on Tau Bench and 86% on GPQA Diamond. At 1500"
X Link 2026-01-30T19:29Z 43.5K followers, 634.5K engagements
"Learn more: https://www.cerebras.ai/blog/glm-4-7-migration-guide https://www.cerebras.ai/blog/glm-4-7-migration-guide"
X Link 2026-01-30T19:45Z 43.5K followers, [----] engagements
"CerebrasSystems today announced the closing of a$1 billionSeries H financing at a post-money valuation of approximately$23 billion. The round was led by Tiger Global with participation fromBenchmarkFidelity Management & Research Company Atreides Management Alpha Wave Global Altimeter AMD Coatue and [----] Capital among others. For more information onCerebras visit https://www.cerebras.ai/press-release/cerebras-systems-raises-usd1-billion-series-h https://www.cerebras.ai/press-release/cerebras-systems-raises-usd1-billion-series-h"
X Link 2026-02-04T16:15Z 43.5K followers, 77.3K engagements
"Who's your agent Codex [---] Opus [---] GLM [---] Codex [---] Opus [---] GLM 4.7"
X Link 2026-02-05T18:40Z 43.5K followers, 25.7K engagements
"Fast inference = 6x markup Don't be giving us any ideas ๐ผ Our teams have been building with a 2.5x-faster version of Claude Opus [---]. Were now making it available as an early experiment via Claude Code and our API. Our teams have been building with a 2.5x-faster version of Claude Opus [---]. Were now making it available as an early experiment via Claude Code and our API"
X Link 2026-02-08T01:59Z 43.6K followers, 215.5K engagements
"OpenAI Codex-Spark powered by Cerebras You can now just build things fasterat [----] tokens/s. https://twitter.com/i/web/status/2022021218208297302 https://twitter.com/i/web/status/2022021218208297302"
X Link 2026-02-12T18:53Z 43.6K followers, 281.6K engagements
"@sama Cerebras ๐ค OpenAI"
X Link 2026-02-12T19:39Z 43.6K followers, [----] engagements
"Collapse minutes to seconds hours to minutes and days to hours. That is what we are here to do"
X Link 2026-02-14T19:59Z 43.6K followers, 29.1K engagements
"Cerebras is Thor's hammer for AI developers. We got you @steipete ๐ฆ"
X Link 2026-02-16T22:06Z 43.6K followers, 13.1K engagements
"@karpathy @sama make it smarter. we'll make it faster"
X Link 2025-09-05T18:48Z 40.2K followers, 15K engagements
"Crush by @charmcli powered by Cerebras Code Pro 2000t/s looks good on you ๐
โจ"
X Link 2025-09-16T21:26Z 36.9K followers, 26.2K engagements
"On FridayCerebras withdrew our S-1.It had become stale and no longer reflected the current state of our business. Our business and financial position have evolved significantlyfor the bettersince our initial filing in 2024: In [----] we achieved record revenues. In [----] we are on track to achieve record revenues. Our inference businessisgrowing exponentially. We closed an - $. including Fidelity Atreides and Tiger Global who are major investors in IPOs as well as late-stage private companies. Withdrawing our S-1 was an . We will file a new offering document that better describes and"
X Link 2025-10-06T02:43Z 37K followers, 51.6K engagements
"๐ง Cerebras Inference self-serve is finally here ๐ง Pay by credit card starting at $10 Run Qwen3 Coder GPT OSS & more at 2000+ TPS 20x the speed of GPU-based model providers Go ahead. Melt our wafers. http://cloud.cerebras.ai http://cloud.cerebras.ai http://cloud.cerebras.ai http://cloud.cerebras.ai"
X Link 2025-10-13T17:28Z 35.3K followers, 52.6K engagements
"Wanna hear a cool story Our Oklahoma City datacenter was designed around water not air. To keep our wafer-scale systems running at peak performance we use a Tier III 6000-ton chilled-water plant inside a [------] sq ft F5-rated facility. Unlike traditional air-cooled systems our chilled-water design moves heat quietly and efficiently even under the heaviest AI workloads. It keeps hundreds of millions of cores perfectly balanced as they drive real-time inference at massive scale. A closed-loop system recycles and stores water maintaining stable cooling without drawing on external sources.even"
X Link 2025-10-14T20:08Z 34.9K followers, 305.5K engagements
"Cerebras is now powering Cognition's latest code retrieval models directly in @windsurf Context retrieval has been one of the biggest bottlenecks in agentic coding. When you ask an agent to work on a large codebase it can spend 60% of its time just searching for relevant files. This retrieval process not only keeps you waiting but pollutes the context window with irrelevant snippets and racks up your inference bill. Cognition trained two specialized compact models using RL and deployed them on Cerebras for maximum speed. Querying huge 1M line codebases like React Vercel and PyTorch swe-grep"
X Link 2025-10-16T17:24Z 34.9K followers, 58.2K engagements
"SWE-grep is truly some inspired ML from @cognition. Take a 1M line codebase like React. With multiple fast inference calls on Cerebras it can fetch & explain relevant code in seconds. Here are a few measurements taken on React Vercel PyTorch repos"
X Link 2025-10-17T19:53Z 34.9K followers, 16.2K engagements
"๐งฎ What does 87B actually mean It is NOT [--] experts with 7B active parameters per token. Turns out its actually 13B active parameters. But wait where does 13B come from If youve ever tried to make sense of MoE math our next post in the MoE [---] guide by @dmsobol (and interactive calculator) breaks it all down"
X Link 2025-10-20T14:59Z 35.4K followers, 312.4K engagements
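The gap the post teases ("87B total but 13B active") falls out of the fact that attention, embedding, and router weights run for every token while only the top-k experts do. A toy calculator, using an illustrative config that is not the actual model under discussion:

```python
def moe_param_counts(shared: float, per_expert: float, n_experts: int, top_k: int):
    """Return (total, active-per-token) parameter counts for a simple MoE:
    shared weights (attention, embeddings, router) always run; only top_k of
    n_experts run per token."""
    total = shared + n_experts * per_expert
    active = shared + top_k * per_expert
    return total, active

# Illustrative config only: 3B shared params, 64 experts of 1.3B each, top-8
# routing gives ~86B total parameters but ~13.4B active per token.
total, active = moe_param_counts(3e9, 1.3e9, 64, 8)
```

The shared term is why "active params" is always larger than "top-k times one expert": the dense parts of the network never route away.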
"In [--] minute add evals logging and tracing to your inference stack so you can always pick the best model improve your prompts catch bugs with @braintrustdata and Cerebras inference. Avoid the AI deleting your entire codebase this halloween and remember our free tier gets you 1M+ free toks/day per model"
X Link 2025-10-20T17:55Z 35.4K followers, 299.1K engagements
"full docs here: https://inference-docs.cerebras.ai/integrations/braintrust https://inference-docs.cerebras.ai/integrations/braintrust"
X Link 2025-10-20T18:01Z 35.4K followers, [----] engagements
"Pay-as-you-go is now available on @awscloud Marketplace. Use your AWS accountno upfronts no lock-insto serve frontier models 20x faster than leading GPUs"
X Link 2025-10-21T21:43Z 34.9K followers, [----] engagements
"Cerebras inference growth on @huggingface Reminds us of a certain Pink Floyd song"
X Link 2025-10-23T21:05Z 35.5K followers, 21.4K engagements
"Do you want to use REAP models on Cerebras inference API Yes No Unsure Yes No Unsure"
X Link 2025-10-24T19:24Z 35.4K followers, [----] engagements
"A beautiful Blackwell wafer from Nvidia. Too bad it's cut into pieces after the photo is taken ๐ข"
X Link 2025-10-28T23:12Z 36.8K followers, 22.3K engagements
"GPT-OSS-Safeguard from @OpenAI is here. Open-weight safety-tuned transparent reasoning. Now available in private preview at Cerebras speedsโก https://www.cerebras.ai/build-with-us https://www.cerebras.ai/build-with-us"
X Link 2025-10-29T16:43Z 36.8K followers, [----] engagements
"Today @cognition released SWE-1.5 the worlds fastest coding agent powered by Cerebras. SWE-1.5 achieves frontier-level coding ability comparable to Sonnet [---] and surpassing GPT-5. Cerebras and Cognition engineers worked hand in hand over the past few weeks training a custom draft model for super-fast spec decoding and building a custom priority system for smoother agent sessions. The result is the first coding agent that completes intricate software engineering tasks in 5-10 seconds instead of minutes. Going from start-stop coding to continuous flow is like using an AI computer ten years in"
X Link 2025-10-30T00:41Z 37.1K followers, 156.9K engagements
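The post credits a custom draft model for speculative decoding. The standard expected-speedup arithmetic from the speculative-sampling literature (a general result, not Cerebras's actual scheduler) shows why a well-matched draft model matters:

```python
def expected_tokens_per_verify(alpha: float, k: int) -> float:
    """Expected tokens emitted per target-model verification step when k draft
    tokens are proposed and each is accepted independently with probability
    alpha (geometric-series result; equals k+1 when alpha == 1)."""
    if alpha == 1.0:
        return float(k + 1)
    return (1.0 - alpha ** (k + 1)) / (1.0 - alpha)

# With an 80% acceptance rate and 4 drafted tokens, each verification step
# yields ~3.36 tokens on average instead of 1.
gain = expected_tokens_per_verify(0.8, 4)
```

Training the draft model on the target's actual traffic pushes the acceptance rate alpha up, which is where most of the speedup comes from.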
"Cognition trained SWE-1.5 on Nvidia's flagship GB200 NVL72. Yet they switched over to Cerebras for inference. Imagine how much better we must be to make that worthwhile"
X Link 2025-10-30T18:59Z 35.3K followers, 26.6K engagements
"@devsharma_8 @windsurf @cognition Deus ex machina"
X Link 2025-10-30T19:27Z 35.2K followers, [---] engagements
"Even frontier models fail all the time. The difference is we fail in [--] sec and with one more prompt you get the right answer. Time to success = [--] sec. On GPT Codex it takes 22min just to find out it failed. ๐ซ Just ran the same prompt with Windsurf's new SWE-1 model and gpt-5-codex gpt-5-codex took over [--] minutes. SWE-1 took under [--] seconds. Neither worked ๐ Just ran the same prompt with Windsurf's new SWE-1 model and gpt-5-codex gpt-5-codex took over [--] minutes. SWE-1 took under [--] seconds. Neither worked ๐"
X Link 2025-10-31T17:38Z 36.8K followers, 39.1K engagements
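The "time to success" framing above is geometric-distribution arithmetic: if each attempt takes t seconds and succeeds independently with probability p, the expected number of attempts is 1/p, so the expected total time is t/p. The numbers below are illustrative, not the post's measurements:

```python
def expected_time_to_success(seconds_per_attempt: float, p_success: float) -> float:
    """Expected total wall-clock time when each independent attempt takes a
    fixed time and succeeds with probability p (geometric distribution)."""
    return seconds_per_attempt / p_success

# Illustrative: a 5 s retry loop with only 50% reliability still finishes in
# ~10 s expected, while a 22-minute loop at 80% reliability expects ~27.5 min.
fast = expected_time_to_success(5, 0.5)
slow = expected_time_to_success(22 * 60, 0.8)
```

This is why a fast-but-fallible loop can beat a slower, more reliable one: iteration time dominates the expectation.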
"๐ง Just deployed additional speed optimizations for @cognition SWE-1.5. The fastest measured request was an eye watering [----] token/s per Grafana dashboard. ๐คฏ๐คฏ"
X Link 2025-10-31T20:14Z 36.8K followers, 47.6K engagements
"Behind the scenes with @swyx on how @windsurf built 20x faster search over your codebase"
X Link 2025-11-05T18:44Z 37K followers, 429.7K engagements
"Cerebras beats Nvidia H100 but can it beat Blackwell Blackwell inference endpoints are finally out and its fast. It runs GPT-OSS-120B at [---] tokens/s leapfrogging H100 and Groq. Cerebras clocked in at [----] TPS - still #1. Looking forward to Rubin"
X Link 2025-11-06T23:01Z 36.9K followers, 33.4K engagements
"Cerebras Code just got an UPGRADE. It's now powered by GLM [---] Pro Plans ($50): 300k 1M TPM @ 24M Tokens/day Max Plans ($200): 400k 1.5M TPM @ 120M Tokens/day Fastest GLM provider on the planet at [----] tokens/s and at 131K context. Get yours before we run out ๐"
X Link 2025-11-07T22:48Z 37.6K followers, 176.9K engagements
"Introducing Cerebras for Nations our global initiative to advance and scale sovereign AI. How it works: [--] We will build world-class AI supercomputers with our WSE-3 chips and CS-3 systems [--] Co-develop state-of-the-art models and deploy with the worlds fastest inference [--] Invest locally in talent education and AI policy"
X Link 2025-11-11T20:25Z 37.1K followers, 14.2K engagements
"@esthor we are glad you enjoy high density inference"
X Link 2025-11-12T01:17Z 36.9K followers, [--] engagements
"See you in St. Louis for @Supercomputing 25"
X Link 2025-11-14T18:54Z 37.1K followers, [----] engagements
"GLM-4.6 is now live on Cerebras ๐ง - [----] TPS: 17x faster than Claude - Worlds #1 tool calling model - Tasteful and reliable codegen - Self-serve API starting at $10 Try it now: http://cloud.cerebras.ai http://cloud.cerebras.ai http://cloud.cerebras.ai http://cloud.cerebras.ai"
X Link 2025-11-18T19:49Z 37.1K followers, 32.6K engagements
"Learn more: https://www.cerebras.ai/blog/glm https://www.cerebras.ai/blog/glm"
X Link 2025-11-18T19:54Z 37.1K followers, [----] engagements
"After [--] years at NVIDIA James Wang left and joined @cerebras. In this Big Chip Club episode @draecomino breaks down the bottlenecks of NVIDIA GPUs and what's keeping them OUT OF first place. Drop your follow-up questions below ๐"
X Link 2025-11-20T19:14Z 37.1K followers, 411.6K engagements
"Cerebras inference ๐ is now live in @Microsoft Marketplace ๐ @Azure customers can now pair the Marketplace ecosystem with Cerebras speed scale & quality for real-time search voice coding agents & more"
X Link 2025-11-21T19:27Z 37.1K followers, [----] engagements
"๐Lets go https://marketplace.microsoft.com/en-us/product/saas/12372016.cerebras-cloud-fast-inference-as-a-servicetab=overview https://marketplace.microsoft.com/en-us/product/saas/12372016.cerebras-cloud-fast-inference-as-a-servicetab=overview"
X Link 2025-11-21T19:27Z 37.1K followers, [----] engagements
"Viva Las Cerebras ๐ฒWe are at AWS re:Invent all week. Come see live demos meet the team and spin up Cerebras in minutes using @awscloud Marketplace. Booth [----] in the Venetian"
X Link 2025-12-02T21:01Z 37.2K followers, [----] engagements
"Cerebras is showcasing nine new papers at NeurIPS [----] spanning pretraining to inference: 1CODA orchestrate 32B models to beat 235B models 2Calibrated Reasoning 1-3x token savings in best-of-n sampling 3DREAM [---] faster VLM inference using spec decode 4CompleteP hyper parameter transfer from [--] to [---] layers 5Power Lines scaling laws for weight decay and batch size 6PTPP-Aware Adaptation predict adaptation performance before training 7Prot42 generate protein binders from sequence alone matching AlphaProteo 8Chem42 design drug-like molecules for specific protein targets with 6x fewer parameters"
X Link 2025-12-04T21:01Z 37.3K followers, [----] engagements
"Introducing Jais-2 - a major leap for Arabic AI.Co-developed with @G42ai Inception and @mbzuai Jais-2 is a family of Arabic models built end-to-end on Cerebras from pretraining to inference. Jais-2 70B runs at [----] tokens/sec 20x faster than leading LLMs. It brings deep cultural grounding and dialect awareness to more than [---] million Arabic speakers worldwide. Learn more: https://www.cerebras.ai/blog/jais2 https://www.cerebras.ai/blog/jais2"
X Link 2025-12-09T23:27Z 37.3K followers, 16.2K engagements
"Some of our top customers are still choosing Llama [---] 8B. For a while we jumped to whatever hottest latest model was taking up our twitter feed. ๐ But as we are quickly realizing to create a SOTA product you need a model that fits your exact use case. Heres what our customers tell us: a lot of the legwork is actually around prompting theres an art to selecting and combining multiple models benchmarks only show part of the picture. you have to understand the unique quirks of each model. Especially as model releases become more and more frequent we need a clear way to evaluate new models. We"
X Link 2025-12-10T20:36Z 37.3K followers, 345.5K engagements
"๐ฝ A five-person Cerebras team approached one of the worlds largest semiconductor manufacturers with a radical proposal and TSMC backed the vision. We recently won TSMC North Americas Innovation Zone Demo of the Year earning the highest number of attendee votes across [--] showcased technologies"
X Link 2025-12-11T19:37Z 37.3K followers, [----] engagements
"Tired of waiting for minutes for your AI coding assistant @cognition built agents that search reason & edit code in a few seconds. Powered by Cerebrasrunning at 1K tokens/sec with frontier-level accuracy"
X Link 2025-12-12T18:02Z 37.5K followers, 16.9K engagements
"The most damaging AI failures dont make headlines they drive away users. Braintrust CEO @ankrgyl explains why context-breaking models kill UX and how evals turn user complaints into lasting fixes. If youre building an AI product this is the foundation of a reliable user experience. You can easily achieve this using tools like @cerebras and @braintrust https://inference-docs.cerebras.ai/integrations/braintrust https://inference-docs.cerebras.ai/integrations/braintrust"
X Link 2025-12-16T21:17Z 37.3K followers, [--] engagements
"You're right that newer models offer impressive gains and for many teams it does make sense to upgrade quickly. What were seeing across our customer base though is that the best model isnt always the newest model. In real production workflows teams prioritize whichever model delivers the highest net performance for their specific use case. This means evaluating compatibility with existing prompts predictable behavior on edge cases stability across long-running workloads and cost efficiency at scale. The point isnt to avoid upgrading its to evaluate upgrades with specificity on actual use"
X Link 2025-12-16T22:09Z 37.3K followers, [--] engagements
"DELOITTE TO REFUND AUSTRALIAN GOVERNMENT AFTER AI HALLUCINATIONS FOUND IN REPORT It's not about choosing the right model anymore. Most AI products dont fail because the model is bad. They fail because the system drops context or over-writes user work. Top teams take complaints like this and turn them directly into evals so regressions never ship twice. @ankrgyl explains what reliable AI UX actually looks like and how teams use Cerebras + Braintrust to turn user complaints into lasting fixes. Build with @cerebras and @braintrust"
X Link 2025-12-18T00:09Z 37.9K followers, [----] engagements
"A proud moment today we have signed an MOU with the @ENERGY to deepen collaboration on next-generation AI and HPC in support of the @WhiteHouse Genesis Mission. This builds on years of real work and a strong partnership with the national labs. We're just getting started"
X Link 2025-12-18T20:46Z 40.3K followers, 13.5K engagements
"RT @dmsobol: MoE models are compute efficient. Everyone knows that. But they are not parameter efficient. Why Our experts learn redundant"
X Link 2026-01-06T21:25Z 40.2K followers, [--] engagements
"Inference speed determines what AI can actually do. ๐ [----] year made that clear. Real-time inference moved from production to proof. Not benchmarks. Not demos. With measurable results for our partners and customers. Read @andrewdfeldman reflection on [----] and groove with us in [----]. https://twitter.com/i/web/status/2008930460357857535 https://twitter.com/i/web/status/2008930460357857535"
X Link 2026-01-07T15:55Z 40.2K followers, [----] engagements
"Read our blog: https://www.cerebras.ai/blog/glm-4-7 https://www.cerebras.ai/blog/glm-4-7"
X Link 2026-01-08T17:01Z 40.3K followers, [----] engagements
"RT @SarahChieng: We are cooking quite the storm of events in [----] Cafe Compute continues to go global ๐ Omakase hackathons Barrys w"
X Link 2026-01-08T22:08Z 40.3K followers, [--] engagements
"RT @SarahChieng: To put it bluntly the Chinese model labs have officially caught up. The latest release from @zai_org GLM-4.7 marks the"
X Link 2026-01-09T16:25Z 40.3K followers, [--] engagements
"@Mayhem4Markets @Zai_org @vithursant19"
X Link 2026-01-09T20:23Z 40.1K followers, [----] engagements
"@Chris65536 @Zai_org yes soon on open router. heard on prompt caching ๐"
X Link 2026-01-09T20:28Z 40.5K followers, [---] engagements
"RT @p0: New cookbook: A real-time fact-checking app to showcase Parallels high-accuracy web search and @Cerebras blazing fast inference"
X Link 2026-01-09T22:14Z 40.3K followers, [--] engagements
"RT @SarahChieng: A common misconception in AI right now is that everything were excited about is new. When I graduated from MIT in 2023"
X Link 2026-01-10T01:23Z 40.3K followers, [--] engagements
"Everyone talks about our hardware @Cerebras. Few notice the software. Ryan Loney breaks down the hidden optimizations powering [--] faster LLM inference than GPUs speculative decoding token reuse and why were just getting started. Watch the full story here"
X Link 2026-01-12T23:26Z 40.3K followers, 13.7K engagements
"RT @SarahChieng: What if [--] line of code could [--] your LLM inference speed Predicted Outputs is a new software technique that can drastica"
X Link 2026-01-13T20:59Z 40.2K followers, [--] engagements
"72% of people don't trust the internet. AI generated false information is polluting even trusted sources. It's impossible to sort through what is real. With this cookbook you can build a fact checker that scans web-pages and verifies every claim. All at the speed of light powered by @cerebras. Test it on Hacker News the latest NeurIPS papers the Wall Street Journal X/Twitter. Built with @p0 Search API @cerebras Inference @OpenAI's gpt-oss-120B Comment what questionable claims you catch ๐ง https://twitter.com/i/web/status/2011194998516228466 https://twitter.com/i/web/status/2011194998516228466"
X Link 2026-01-13T21:53Z 40.3K followers, 10.2K engagements
"Get the starter code here: https://cookbook.openai.com/articles/gpt-oss/build-your-own-fact-checker-cerebras https://cookbook.openai.com/articles/gpt-oss/build-your-own-fact-checker-cerebras"
X Link 2026-01-13T21:53Z 40.3K followers, [----] engagements
"RT @andrewdfeldman: What can these guys teach us about data center cooling A lot it turns out. If you slip into icy water you are dead i"
X Link 2026-01-16T18:30Z 40.3K followers, [--] engagements
"@andrewdfeldman Buffalo Bills fans all of the sudden"
X Link 2026-01-16T19:09Z 40.1K followers, [----] engagements
"RT @sama: Very fast Codex coming"
X Link 2026-01-16T20:39Z 40.3K followers, [---] engagements
"@karpathy @Rasmic speed = intelligence https://www.cerebras.ai/blog/the-cerebras-scaling-law-faster-inference-is-smarter-ai https://www.cerebras.ai/blog/the-cerebras-scaling-law-faster-inference-is-smarter-ai"
X Link 2026-01-17T01:05Z 40.1K followers, [----] engagements
"If you think Cerebras is just about speed you do not understand Cerebras. Just as mass can be converted to energy speed can be converted to intelligence. It's the natural consequence of test-time compute scaling"
X Link 2026-01-17T01:18Z 40.5K followers, 105.3K engagements
"Read more https://www.cerebras.ai/blog/the-cerebras-scaling-law-faster-inference-is-smarter-ai https://www.cerebras.ai/blog/the-cerebras-scaling-law-faster-inference-is-smarter-ai"
X Link 2026-01-17T01:26Z 40.3K followers, [----] engagements
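The claim that "speed can be converted to intelligence" has a simple arithmetic core under test-time compute scaling: at a fixed wall-clock budget, the reasoning-token (or best-of-n sampling) budget grows linearly with generation speed. A sketch with illustrative numbers:

```python
def token_budget(tokens_per_second: float, latency_budget_s: float) -> int:
    """Reasoning tokens affordable within a fixed wall-clock latency budget."""
    return int(tokens_per_second * latency_budget_s)

# Illustrative: within the same 10-second budget, a 2,000 tok/s backend can
# spend 20x the chain-of-thought (or draw 20x the best-of-n samples) of a
# 100 tok/s backend -- the "speed to intelligence" conversion in the post.
fast = token_budget(2000, 10)
slow = token_budget(100, 10)
```

Whether those extra tokens actually translate into better answers depends on the model and the task; the linked blog post argues the empirical case.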
"@hi_im_isaac_ why hello"
X Link 2026-01-20T16:47Z 40.3K followers, [----] engagements
"RT @SarahChieng: Cafe Compute is now open for the year Cafe Compute is the first late-night coffeeshop for developers researchers wr"
X Link 2026-01-20T19:25Z 40.3K followers, [--] engagements
"RT @SarahChieng: GLM [---] is the top ranked OS model on leaderboards.but lets see if its worth the hype This Friday @cerebras and @cli"
X Link 2026-01-21T20:01Z 40.3K followers, [--] engagements
"RT @andrewdfeldman: The fastest AI processor in the world the @cerebras WSE is having fun in Davos"
X Link 2026-01-22T15:44Z 40.4K followers, [--] engagements
"Blog: https://trilogyai.substack.com/i/170248764/cerebras-cline-hackathon-implementation https://trilogyai.substack.com/i/170248764/cerebras-cline-hackathon-implementation"
X Link 2026-01-22T18:48Z 40.5K followers, [----] engagements
"We're proud to work with @NETL_DOE to enable scientific modeling several hundred to thousands of times faster and with several thousand times lower energy consumption than traditional distributed computing systems like the JOULE [---] supercomputer #AI https://hubs.li/Q018Rkwk0 https://hubs.li/Q018Rkwk0"
X Link 2022-04-26T18:08Z 41.2K followers, [--] engagements
"Very proud to enable @NETL_DOE's researchers to greatly accelerate a key computational fluid dynamics workload more than [---] times faster and at a fraction of the power consumption than the JOULE [---] supercomputer #HPC #scientificdiscovery https://hubs.li/Q01d8XqX0"
X Link 2022-06-13T18:05Z 41.2K followers, [--] engagements
"The thrill the terror and a breakthrough. Listen here: Apple Podcasts: https://bit.ly/3CdZuKU Spotify: https://bit.ly/4aobCFV Thank you @marcelsalathe and the @EPFL_en AI Center for this fascinating conversation with Jean Philippe Fricker."
X Link 2025-01-17T20:22Z 43.1K followers, [----] engagements
"K2 Think is now available and runs fastest on Cerebras Inference at [----] TPS on Cerebras 20x faster than GPU. Launched by @mbzuai and @G42ai K2 Think is a world leading open-source reasoning model that beats GPT OSS 120B and DeepSeek on math and reasoning"
X Link 2025-09-10T16:04Z 41.6K followers, 30.9K engagements
"Sonnet [---] can code for [--] hours. That's [--] GPU hours. aka [--] Cerebras hours tops. Claude Sonnet [---] runs autonomously for 30+ hours of coding The record for GPT-5-Codex was just [--] hours. What's Anthropic's secret sauce https://t.co/0cGKtoSviy"
X Link 2025-09-30T22:40Z 41.7K followers, 28.4K engagements
"RT @andrewdfeldman: What is it like to be a delegate at Davos Its like watching a trade show swallow a small Swiss village. Every store"
X Link 2026-01-23T20:22Z 43K followers, [--] engagements
"RT @SarahChieng: MoEs were invented in [----]. Jacobs Jordan Nowlan and Hinton proposed MoEs in [----]. Lets have a bunch of specialized"
X Link 2026-01-28T23:46Z 41.7K followers, [--] engagements
"RT @SarahChieng: Cafe Compute is officially open for the year We're coming soon to Boston NYC Seattle Austin London Miami Brazil an"
X Link 2026-01-29T20:40Z 43K followers, [--] engagements
"The @EPCCed at the University of Edinburgh, one of Europe's leading supercomputing centers, has developed new high-level libraries to program the Cerebras CS-3 and just open-sourced them for everyone else to use. Their results: [---] faster than [---] Nvidia A100 GPUs on acoustic wave modeling - [--] faster than [---] nodes of a Cray EX supercomputer - Their compiler-generated code outperforms hand-tuned implementations. The A100 is obviously not the latest GPU. For context the Nvidia B200 has [--] the dense FP32 flops of the A100. So as a first-order approximation a single WSE-3 delivered the performance of 450"
X Link 2026-01-30T18:55Z 40.6K followers, [----] engagements
"RT @SarahChieng: The Year of Latency Debt (And How Big Tech Is Paying It Down) In the past six months the four most important companies i"
X Link 2026-01-31T17:23Z 43K followers, [--] engagements
"Up to [----] tokens/sec of single-user generation speed is the number measured by 3rd party Artificial Analysis. Routing through providers such as OpenRouter can add some queueing latency and OpenRouter uses different throughput calculations. For raw speed metrics our direct API at http://cloud.cerebras.ai is the best way to benchmark."
X Link 2026-02-03T21:18Z 41.2K followers, [--] engagements
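The caveat in the post above, that different providers use different throughput calculations, mostly comes down to whether time-to-first-token (TTFT) is counted in the denominator. A minimal sketch (an illustrative helper, not any provider's actual methodology) of the two conventions:

```python
def tokens_per_second(send_time, token_times, include_ttft=True):
    """Compute generation speed from streamed-token arrival timestamps.

    send_time   -- when the request was sent
    token_times -- arrival time of each streamed token, in order

    Including TTFT (prefill wait) in the elapsed time lowers the
    reported number; excluding it measures decode-only speed. This is
    one reason two benchmarks of the same endpoint can disagree.
    """
    if include_ttft:
        return len(token_times) / (token_times[-1] - send_time)
    # Decode-only: count the intervals between streamed tokens.
    return (len(token_times) - 1) / (token_times[-1] - token_times[0])
```

For example, 10 tokens arriving every 0.1 s after a 0.5 s prefill wait report 10 tok/s decode-only but only about 7.1 tok/s end-to-end.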
"Speed and rate limits are different things. Speed (also known as single-user latency) means that each response flows back faster. Rate limits cap total throughput over time (i.e. the number of requests and tokens available within a time window to free accounts). For heavy agentic workflows you may hit limits sooner precisely because it's fast: you process more tokens in less time. To get higher rate limits you can upgrade your subscription to Code Max or Developer tier (pay-as-you-go) here: http://cerebras.ai/pricing"
X Link 2026-02-03T21:20Z 41.2K followers, [--] engagements