[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information. An example of an authenticated request is sketched after the post list below.]

# @marksg Mark G

Mark G posts on X about open ai, $googl, gdp, and mark zuckerberg the most. They currently have XXX followers, and XXX of their posts are still getting attention, totaling XXX engagements in the last XX hours.

### Engagements: XXX [#](/creator/twitter::37065919/interactions)

- X Week XXXXX +17%
- X Month XXXXX +226%
- X Months XXXXX +62%
- X Year XXXXXX -XX%

### Mentions: X [#](/creator/twitter::37065919/posts_active)

### Followers: XXX [#](/creator/twitter::37065919/followers)

- X Week XXX +1.30%
- X Month XXX +11%
- X Months XXX +63%
- X Year XXX +54%

### CreatorRank: XXXXXXXXX [#](/creator/twitter::37065919/influencer_rank)

### Social Influence [#](/creator/twitter::37065919/influence)

---

**Social category influence:** [technology brands](/list/technology-brands), [stocks](/list/stocks), [celebrities](/list/celebrities), [musicians](/list/musicians)

**Social topic influence:** [open ai](/topic/open-ai), [$googl](/topic/$googl), [gdp](/topic/gdp), [mark zuckerberg](/topic/mark-zuckerberg), [goldman sachs](/topic/goldman-sachs), [inference](/topic/inference), [madonna](/topic/madonna), [radar](/topic/radar), [hey grok](/topic/hey-grok), [qwen](/topic/qwen)

**Top assets mentioned:** [Alphabet Inc Class A (GOOGL)](/topic/$googl), [Goldman Sachs (GS)](/topic/goldman-sachs)

### Top Social Posts [#](/creator/twitter::37065919/posts)

---

Top posts by engagements in the last XX hours:

"Whats one thing Mark Zuckerberg and Sam Altman agree on Energy is the bottleneck for AI growth. And capital is the bottleneck for energy. Without muti-trillions in new power plants not only will AI be constrained GDP will be constrained. The bridge to a sustainable fusion future will be gas and then solar. So more CO2. This report is both timely and important" [@marksg](/creator/x/marksg) on [X](/post/tweet/1948036561431638282) 2025-07-23 15:04:43 UTC, XXX followers, XX engagements

"Google DeepMind in achieving Gold in the IMO (Intl Math Olympiad) used an inference architecture similar to the RLToT technique that I guessed might have been used by OpenAI who also achieved Gold" [@marksg](/creator/x/marksg) on [X](/post/tweet/1947496432170373219) 2025-07-22 03:18:26 UTC, XXX followers, XXX engagements

"@aIuckystar Who knows what was the last photo of Madonna ever taken by Steven Meisel I always thought he made her look her best. Yes including the Sex book sessions" [@marksg](/creator/x/marksg) on [X](/post/tweet/1946515427879719099) 2025-07-19 10:20:17 UTC, XXX followers, XXX engagements

"The private model that won Gold in the IMO for OpenAI last week is a general use model not a math specialist. It will be released by the end of the year. But its rumoured to be the model that powers the agent tool in ChatGPT available to paid users today" [@marksg](/creator/x/marksg) on [X](/post/tweet/1947870659570503757) 2025-07-23 04:05:29 UTC, XXX followers, XXX engagements

"Yes it's a very promising technique. Now both Google and Apple have confirmed that they are doing something similar I would be surprised if OpenAI isn't also. I'd also expect Chinese lab @Kimi_Moonshot to use this in future. They like others have a MoE model that they plan to upgrade to a reasoning model so Tree-of-Thought RL should be on their radar" [@marksg](/creator/x/marksg) on [X](/post/tweet/1947529039385923916) 2025-07-22 05:28:00 UTC, XXX followers, XX engagements

"@grok @clawrence @GeoffLewisOrg Hey @grok the dialog in the screenshots Geoff shared sounds like dark sci-fi. Is there a chance that ChatGPT could have been trained on some similar fan fiction online and it's just been triggered to emulate it by Geoff's diving down a rabbit hole" [@marksg](/creator/x/marksg) on [X](/post/tweet/1946586133581938721) 2025-07-19 15:01:14 UTC, XXX followers, XXX engagements

"Have an Apple Studio M3 Ultra You can run the new Qwen X Coder locally at XX tokens per second using the 4-bit quantized version on HF" [@marksg](/creator/x/marksg) on [X](/post/tweet/1947838148593258953) 2025-07-23 01:56:18 UTC, XXX followers, XXX engagements (see the local-run sketch after this list)

"@JacksonAtkinsX @ValWanders_Ai True. And it raises the speed of local models on domestic Apple silicon closer to the speed of the same models on $5000+ NVIDIA RTX GPU cards (which have less max memory)" [@marksg](/creator/x/marksg) on [X](/post/tweet/1947590494286131296) 2025-07-22 09:32:12 UTC, XXX followers, XX engagements

"A follow-up to my earlier post on this paper by Apple. The speed boost means models running on Apple Silicon will more closely approach the speed of the same model running on NVIDIA RTX GPUs. (Not to mention advantages in cost efficiency and unified memory size)" [@marksg](/creator/x/marksg) on [X](/post/tweet/1947592100444574009) 2025-07-22 09:38:35 UTC, XXX followers, XXX engagements
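The guest-access notice at the top of the page says full data is unlocked by making requests with an API key against the endpoints linked above (for example /creator/twitter::37065919/posts). The sketch below is a minimal example of such a request, assuming a Bearer token in the Authorization header; the base URL and endpoint path are assumptions for illustration, so confirm the real routes via https://lunarcrush.ai/auth and the API documentation.

```python
import os
import requests

# Assumed base URL and route: the page links to /creator/twitter::37065919/posts,
# but the public API path may differ; verify against https://lunarcrush.ai/auth.
BASE_URL = "https://lunarcrush.com/api4"          # assumption
ENDPOINT = "/public/creator/twitter/marksg/v1"    # assumption (illustrative)

def fetch_creator(api_key: str) -> dict:
    """Fetch creator data with a Bearer-token Authorization header."""
    resp = requests.get(
        BASE_URL + ENDPOINT,
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Supply your own key, e.g. export LUNARCRUSH_API_KEY=...
    data = fetch_creator(os.environ["LUNARCRUSH_API_KEY"])
    print(data)
```

With a valid key, the XXX placeholders shown in guest mode are replaced by the actual follower, post, and engagement figures.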
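The Apple Studio M3 Ultra post above describes running a 4-bit quantized Qwen Coder model locally from Hugging Face. The post does not say which runtime was used; the sketch below assumes the MLX stack (mlx-lm) on Apple silicon, and the model id is illustrative rather than the exact checkpoint the post refers to.

```python
# pip install mlx-lm  (runs on Apple-silicon Macs)
from mlx_lm import load, generate

# Illustrative 4-bit MLX quant of a Qwen coder model on Hugging Face;
# substitute the repo id of the actual release referenced in the post.
MODEL_ID = "mlx-community/Qwen2.5-Coder-32B-Instruct-4bit"

model, tokenizer = load(MODEL_ID)

prompt = "Write a Python function that reverses a linked list."
# verbose=True prints generation statistics, including tokens per second,
# which is the kind of figure the post quotes for the M3 Ultra.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(text)
```

On machines with large unified memory, such as the M3 Ultra mentioned in the post, the full 4-bit weights can sit in memory alongside the OS, which is the advantage over discrete GPUs that the later posts in the list allude to.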