# ![@fchollet Avatar](https://lunarcrush.com/gi/w:26/cr:twitter::68746721.png) @fchollet François Chollet

François Chollet posts on X about ai, agi, if you, and $googl the most. He currently has [-------] followers, and [---] of his posts are still getting attention, totaling [-------] engagements in the last [--] hours.

### Engagements: [-------] [#](/creator/twitter::68746721/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::68746721/c:line/m:interactions.svg)

- [--] Week [---------] +10%
- [--] Month [---------] +127%
- [--] Months [----------] +331%
- [--] Year [----------] +5.10%

### Mentions: [--] [#](/creator/twitter::68746721/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::68746721/c:line/m:posts_active.svg)

- [--] Month [--] -67%
- [--] Months [---] +55%
- [--] Year [---] -44%

### Followers: [-------] [#](/creator/twitter::68746721/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::68746721/c:line/m:followers.svg)

- [--] Week [-------] +0.51%
- [--] Month [-------] +1.50%
- [--] Months [-------] +6.80%
- [--] Year [-------] +13%

### CreatorRank: [------] [#](/creator/twitter::68746721/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::68746721/c:line/m:influencer_rank.svg)

### Social Influence

**Social category influence**
[technology brands](/list/technology-brands) 8.82%, [stocks](/list/stocks) #5699, [social networks](/list/social-networks) 2.94%, [countries](/list/countries) 2.94%, [finance](/list/finance) 2.94%, [travel destinations](/list/travel-destinations) 1.18%, [currencies](/list/currencies) 1.18%, [products](/list/products) 0.59%, [cryptocurrencies](/list/cryptocurrencies) 0.59%

**Social topic influence**
[ai](/topic/ai) #1597, [agi](/topic/agi) #20, [if you](/topic/if-you) #2682, [$googl](/topic/$googl) #523, [twitter](/topic/twitter) #3028, [to the](/topic/to-the) #4134, [science](/topic/science) #2689, [model](/topic/model) #757, [solve](/topic/solve) #510, [software](/topic/software) 1.76%

**Top accounts mentioned or mentioned by**
@ofquarks28701, @raahiravi, @lfuckingg, @polynoamial, @ghidorah_x, @spisallyouneed, @alygg77, @unifiedenergy11, @lateinteraction, @visimod, @noonglade_, @grok, @curiousiter, @louis4174, @anayatkhan09, @stellarium1234, @akanoego, @bapxai, @chasebrowe32432, @biounit000

**Top assets mentioned**
[Alphabet Inc Class A (GOOGL)](/topic/$googl)

### Top Social Posts
Top posts by engagements in the last [--] hours

"A good solution to the intelligence problem should be able to autonomously produce abstractions that compose well stack well and stand the test of time. Without cribbing them from somewhere else. So far there's no tech that achieves this. Gradient descent certainly doesn't"  
[X Link](https://x.com/fchollet/status/2020957139846693285)  2026-02-09T20:25Z 606.2K followers, 29.9K engagements


"We are looking for brilliant deep learning researchers to help us solve program synthesis at @ndea. If you strongly feel like AGI should be capable of invention not just automation consider joining us. Apply here: https://ndea.com/jobs https://ndea.com/jobs"  
[X Link](https://x.com/fchollet/status/2021306746543751379)  2026-02-10T19:34Z 606.2K followers, 37.5K engagements


"Write papers where the citation count per year looks like this"  
[X Link](https://x.com/fchollet/status/2021745676208755035)  2026-02-12T00:38Z 606.2K followers, 75.7K engagements


"I don't know if you've noticed but there's a wave of mass psychosis rolling through tech Twitter very similar to what we experienced in spring [----] and spring [----] (interesting that the periodicity is exactly [--] years) But the vibes are much darker now than they were last time"  
[X Link](https://x.com/fchollet/status/2021748146951606545)  2026-02-12T00:48Z 606.2K followers, 406.8K engagements


"@polynoamial About one year Frontier models today perform very poorly with a minimal harness. However if big labs start directly targeting the benchmark like they did for ARC-2 numbers will go up fast"  
[X Link](https://x.com/fchollet/status/2022054537293705260)  2026-02-12T21:05Z 606.2K followers, 18.7K engagements


"You should read it some day it's a good read. Every single thing said there is still true which you would know if you had actually cared to read it. This is the key bit: the first superhuman AI will just be another step on a visibly linear ladder of progress that we started climbing long ago AGI is in the continuity of the process of Science at large (which is itself a recursively self-improving intelligent process) and that process has been moving roughly at a linear pace due to the reasons detailed in the article"  
[X Link](https://x.com/fchollet/status/2022166668886389167)  2026-02-13T04:31Z 606.2K followers, [----] engagements


"Best time to buy an asset you want to own is when everybody hates it and there's no bid. Worst time is when everybody is enthusiastic about it and wants a piece"  
[X Link](https://x.com/fchollet/status/2022174952649380230)  2026-02-13T05:04Z 606.2K followers, 35.3K engagements


"I don't think the rise of AGI will lead to a sudden exponential explosion in AI capabilities. There are bottlenecks on the sources of new capability improvements and horizontally scaling intelligence in silicon (even by a massive factor) doesn't lift those bottlenecks"  
[X Link](https://x.com/fchollet/status/2022374191715238122)  2026-02-13T18:15Z 606.2K followers, 30.7K engagements


"The 3rd edition of my book Deep Learning with Python is being printed right now and will be in bookstores within [--] weeks. You can order it now from Amazon or from Manning. This time we're also releasing the whole thing as a 100% free website. I don't care if it reduces book sales I think it's the best deep learning intro around and more people should be able to read it"  
[X Link](https://x.com/anyuser/status/1968676861430706451)  2025-09-18T14:01Z 606.2K followers, 775.5K engagements


"If more people can build software there will be more software startups and side businesses. Which means SaaS tool builders that cater to such startups will benefit from massive AI tailwinds: [--]. Their customer base will expand [--]. AI makes their job easier so they can better serve their customers (e.g. add features faster launch more services) [--]. AI presents new automation opportunities so that they can make their products higher-value [--]. AI gives them the ability to ship customizable adaptive interfaces on top of their core service"  
[X Link](https://x.com/fchollet/status/2017359370867380500)  2026-01-30T22:08Z 606.2K followers, 24.4K engagements


"Back in [----] everybody was telling me "no one uses Google search anymore it's over" From [----] to [----] Google search query volume has grown 61% to 5T/year and search revenue has grown 28% to $225B (56% of Google's revenue) The track record of Twitter pundits predicting AI disruption has been abysmal https://twitter.com/i/web/status/2020497629290148139 https://twitter.com/i/web/status/2020497629290148139"  
[X Link](https://x.com/fchollet/status/2020497629290148139)  2026-02-08T13:59Z 606.2K followers, 292.1K engagements


"Such a weird question in the first place -- it is basically impossible for any word not to carry semantic value. Let's say you introduce "uh" purely as a filler when speaking and absolutely no other purpose then "uh" has automatically acquired semantic value as a signifier of orality and will be used in written contexts to evoke orality. The only way a word could *not* carry semantic value is if it is inserted 100% at random. If it is non random then it has meaning. It has at least the meaning of evoking the non-random context in which it gets used. Like most mid-century linguists Chomsky"  
[X Link](https://x.com/fchollet/status/2020704155145547938)  2026-02-09T03:39Z 606.2K followers, 55.5K engagements


"The new Gemini Deep Think is achieving some truly incredible numbers on ARC-AGI-2. We certified these scores in the past few days"  
[X Link](https://x.com/fchollet/status/2021983310541729894)  2026-02-12T16:22Z 606.2K followers, 204.5K engagements


"A good canary in the coal mine for AI-caused job loss will be call centers. We're currently projecting 2.75M call center jobs in the US in [----]. In [----] it was 2.63M. The global call center market size has grown 35% in that time period (from $298B to $405B). Peak employment was [----] at 2.98M. When we see a -50% employment drop in this sector you can get ready for broad disruption across the economy"  
[X Link](https://x.com/fchollet/status/2022063228399005931)  2026-02-12T21:40Z 606.2K followers, 147.4K engagements


"I'm guessing most people on tech Twitter believe call center employment went to [--] in 2024"  
[X Link](https://x.com/fchollet/status/2022074817751797847)  2026-02-12T22:26Z 606.2K followers, 25.2K engagements


"@Yossi_Dahan_ @polynoamial ARC-4 is in the works to be released early [----]. ARC-5 is also planned. The final ARC will probably be 6-7. The point is to keep making benchmarks until it is no longer possible to propose something that humans can do and AI can't. AGI 2030"  
[X Link](https://x.com/fchollet/status/2022086661170254203)  2026-02-12T23:13Z 606.2K followers, 200.6K engagements


"Reaching AGI won't be beating a benchmark. It will be the end of the human-AI gap. Benchmarks are simply a way to estimate the current gap which is why we need to continually release new benchmarks (focused on the remaining gap). Benchmarking is a process not a fixed point. We can say we have AGI when it's no longer possible to come up with a test that evidences the gap. When it's no longer possible to point to something that regular humans can do and AI can't. Today it's still easy. I expect it will become nearly impossible by [----]. https://twitter.com/i/web/status/2022090111832535354"  
[X Link](https://x.com/fchollet/status/2022090111832535354)  2026-02-12T23:27Z 606.2K followers, 103.6K engagements


"You can write a self-replicating physical program in just [--] tokens (RNA bases). That's small enough to emerge spontaneously via brute force recombination at scale. AI is cool and all. but a new paper in @ScienceMagazine kind of figured out the origin of life The paper reports the discovery of a simple 45-nucleotide RNA molecule that can perfectly copy itself. https://t.co/TTe4sXhqUT AI is cool and all. but a new paper in @ScienceMagazine kind of figured out the origin of life The paper reports the discovery of a simple 45-nucleotide RNA molecule that can perfectly copy itself."  
[X Link](https://x.com/fchollet/status/2022334626833072162)  2026-02-13T15:38Z 606.2K followers, 155.1K engagements


"Merely *knowing* that an outcome you want is *attainable* leads you to automatically filter out decisions that clearly wouldn't lead to it thus dramatically increasing the probability you'll reach it. Belief is destiny"  
[X Link](https://x.com/fchollet/status/2022370324193353794)  2026-02-13T18:00Z 606.2K followers, 69K engagements


"AGI is in the continuity of the continual recursively self-improving capability expansion process that started when humanity developed modern science in the 1700s-1800s. It is the next step of the ladder"  
[X Link](https://x.com/fchollet/status/2022374441360236984)  2026-02-13T18:16Z 606.2K followers, 14.9K engagements


"Merely having tools or language is not enough to kickstart recursive self-improvement. The total set of prerequisites were only assembled very recently. In particular: - Writing - Scalable publishing - Sufficient social freedom to think and communicate about nature/people - Food no longer a major bottleneck Basically we had to wait until the 1700s for everything to line up. https://twitter.com/i/web/status/2022376859795886230 https://twitter.com/i/web/status/2022376859795886230"  
[X Link](https://x.com/fchollet/status/2022376859795886230)  2026-02-13T18:26Z 606.2K followers, [---] engagements


"Whenever I hear Very Serious Businessmen make confident pronouncements about the future of AI I remember what the very same people were saying in [----] about the Metaverse and NFTs"  
[X Link](https://x.com/fchollet/status/2019861668919152728)  2026-02-06T19:52Z 606.2K followers, 121K engagements


"Those predicting the death of all SaaS will fare even worse They don't understand the PMF dynamics and they don't understand the AI tailwinds"  
[X Link](https://x.com/fchollet/status/2020498320175829471)  2026-02-08T14:01Z 606.2K followers, 45K engagements


"Outside the human mind there's just one kind of abstraction substrate that has achieved these properties historically: math and code (yes it's only one kind not two)"  
[X Link](https://x.com/fchollet/status/2020957680538616196)  2026-02-09T20:27Z 606.2K followers, 20.6K engagements


"Lots of folks spread false narratives about how ARC-1 was created in response to LLMs or how ARC-2 was only created because ARC-1 was saturated. Setting the record straight: [--]. ARC-1 was designed 2017-2019 and released in [----] (pre LLMs). [--]. The coming of ARC-2 was announced in May [----] (pre ChatGPT). [--]. By mid-2024 there was still essentially no progress on ARC-1. [--]. All progress on ARC-1 & ARC-2 came from a new paradigm test-time adaptation models starting in late [----] and ramping up through [----]. [--]. Progress happened specifically *because* research moved away from what ARC was intended to"  
[X Link](https://x.com/fchollet/status/2022036543582638517)  2026-02-12T19:54Z 606.2K followers, 86.6K engagements


"One possible scenario for many industries is that the nature of the job changes total task throughput increases revenue increases and employment stays stable or slightly decreases. I generally don't expect to see AI-caused mass unemployment in the next [--] years"  
[X Link](https://x.com/fchollet/status/2022063728716644781)  2026-02-12T21:42Z 606.2K followers, 39K engagements


"That process if you zoom out is not exponential (though it does involved many lower-level exponentials mostly at the level of system inputs). It is essentially linear. The weight/importance of scientific progress over say 1850-1900 is comparable to 1900-1950 1950-2000 or 2000-2050"  
[X Link](https://x.com/fchollet/status/2022374760156688577)  2026-02-13T18:18Z 606.2K followers, 14.7K engagements


"Right now it's still taking me more time to generate medium-complexity diagrams by describing them to Nano Banana than by drawing them manually in Google Slides"  
[X Link](https://x.com/fchollet/status/2022418106774384923)  2026-02-13T21:10Z 606.2K followers, 36.9K engagements


"Fun fact the person responsible for this is a Russian asset Gerhard Schrder (see [----] Atomgesetz) and the purpose of the move was to ensure German energy dependence on Russia. He became chairman of the board of Rosneft and NordStream and was about to join the board of Gasprom before the war started. He made tens of millions from Russian energy companies and pro-Putin lobbying. Lifelong friend with Putin whom he lauded as a flawless democrat in [----]. Imagine voluntarily doing this to your own country https://t.co/mqbzZE7ZKu Imagine voluntarily doing this to your own country"  
[X Link](https://x.com/fchollet/status/2018353777548616000)  2026-02-02T16:00Z 606.2K followers, 616.1K engagements


"Large capital raise from Waymo to accelerate deployment. They plan to add +20 cities in [----]. I expect they will roughly double their city count every [--] months from now on using their new Zeekr-based platform ($40000 per vehicle). They should also double their weekly rides every [--] months. The age of autonomous mobility at scale is here. Waymo has raised $16B to bring the worlds most trusted driver to more cities. ✅ $126B valuation ✅ 20M+ lifetime rides ✅ 90% reduction in serious injury crashes Read more from our co-CEOs: https://t.co/Fc5I33WpYB https://t.co/zF79Sc6kzm The age of autonomous"  
[X Link](https://x.com/fchollet/status/2018456237256613917)  2026-02-02T22:47Z 606.2K followers, 71K engagements


"The best movie genre to watch to learn a new language is romantic comedies (from the target country). They're dialog-heavy they feature only everyday vocabulary you can use and they show you a realistic consensus view of local contemporary society and culture"  
[X Link](https://x.com/fchollet/status/2019807314136703129)  2026-02-06T16:16Z 606.2K followers, 55.4K engagements


"Whenever there's a TV ad for a crypto exchange it shows things like skyscraper construction sites the moon landing fighter jets etc. (you've probably seen some of them). It's funny. Nothing to be feature from crypto land so they have to use exclusively borrowed achievements. What is the "crypto industry" Like what does it produce What is the "crypto industry" Like what does it produce"  
[X Link](https://x.com/fchollet/status/2019808408690708631)  2026-02-06T16:20Z 606.2K followers, 22.6K engagements


"@lateinteraction @polynoamial If you believe in AGI then you shouldn't use a harness obviously. If any kind of task-specific program is needed the AI should come up with it"  
[X Link](https://x.com/fchollet/status/2022071737811452024)  2026-02-12T22:14Z 606.2K followers, [----] engagements


"Today OpenAI announced o3 its next-gen reasoning model. We've worked with OpenAI to test it on ARC-AGI and we believe it represents a significant breakthrough in getting AI to adapt to novel tasks. It scores 75.7% on the semi-private eval in low-compute mode (for $20 per task in compute ) and 87.5% in high-compute mode (thousands of $ per task). It's very expensive but it's not just brute -- these capabilities are new territory and they demand serious scientific attention"  
[X Link](https://x.com/fchollet/status/1870169764762710376)  2024-12-20T18:09Z 606.2K followers, 2.2M engagements


"All the great breakthroughs in science are at their core compression. They take a complex mess of observations and say "it's all just this simple rule". Symbolic compression specifically. Because the rule is always symbolic -- usually expressed as mathematical equations. If it isn't symbolic you haven't really explained the thing. You can observe it but you can't understand it"  
[X Link](https://x.com/fchollet/status/1989340153114976598)  2025-11-14T14:30Z 606.2K followers, 13.2M engagements


"Folks who work in AI or software engineering feel like the world is changing exponential fast. Because *their* world is changing exponentially fast. Folks in structural engineering or aeronautical engineering might not share the same sentiment"  
[X Link](https://x.com/fchollet/status/2018836816552694139)  2026-02-03T23:59Z 606.2K followers, 109.6K engagements


"What happens when a skill can be almost fully automated with AI Do these jobs simply disappear Instead of purely speculating we can simply look at concrete examples. Take translators. Translation can be 100% automated with AI and this capability has been around since [----]. So we have 2-3 years of data. What we see so far: - Stable FTE count but slow hiring or no hiring - Nature of the job switched from doing it yourself to supervising AI output (post-editing) - Increased task volume - Decreased hourly rates - Freelancers getting cut We are now starting to see the same pattern with software"  
[X Link](https://x.com/fchollet/status/2019571942148472899)  2026-02-06T00:40Z 606.2K followers, 394.1K engagements


"For non-verifiable domains the only way you can improve AI performance at this time is via curating more annotated training data which is expensive and only yields logarithmic improvements. And here's the thing: nearly all jobs have non-verifiable elements. There's virtually no job that's end-to-end verifiable. Even the job of a mathematician is not end-to-end verifiable. Sofware engineering involves many verifiable tasks but it isn't end-to-end verifiable. For this reason the gap between "AI can automate most of these tasks" and "AI can fully replace this job" will remain for a very long"  
[X Link](https://x.com/fchollet/status/2019610121371054455)  2026-02-06T03:12Z 606.2K followers, 232K engagements


"Lots of folks are apparently in utter disbelief at these numbers because *obviously* Google search died in [----] *no one* is using Google at all in [----] so the numbers must be wrong somehow or maybe it's just AI agents making all these queries Nope it's a plain fact that more people than ever are using Google to search more than ever. In fact Google search usage is *accelerating* as of Q4 [----] Look instead of grasping at straws ask yourself why you were wrong about this and try to update your priors so that you'll be less wrong next time Back in [----] everybody was telling me "no one uses"  
[X Link](https://x.com/fchollet/status/2020506767134978518)  2026-02-08T14:35Z 606.2K followers, 185.8K engagements


"Best resource to understand deep learning fundamentals. If you want to understand how modern AI actually works not just use it Deep Learning with Python 3rd ed. breaks it down with hands-on projects & real code. @fchollet & Matthew Watson cover the foundations and the cutting edge. 50% off for the next three days: https://t.co/I7EfulP17D If you want to understand how modern AI actually works not just use it Deep Learning with Python 3rd ed. breaks it down with hands-on projects & real code. @fchollet & Matthew Watson cover the foundations and the cutting edge. 50% off for the next three days:"  
[X Link](https://x.com/fchollet/status/2022755674145567095)  2026-02-14T19:31Z 606.2K followers, 18.8K engagements


"That was my point back in [----] when I said "intelligence is skill-acquisition efficiency". I didn't say "skill-acquisition ability" *efficiency* is the key. When you have two systems capable of acquiring the same skills the more efficient one is more intelligent. Intelligence is fundamentally an efficiency ratio and both data efficiency and compute/energy efficiency matter"  
[X Link](https://x.com/fchollet/status/1904294217712808209)  2025-03-24T22:08Z 605.8K followers, 26.5K engagements


"NVIDA chips are manufactured by TSMC a Taiwanese company. They're created using EUV lithography machines manufactured by ASML a Dutch company. These machines consist of 50% of German parts (by value) in particular ZEISS optics"  
[X Link](https://x.com/fchollet/status/1960079432548516310)  2025-08-25T20:38Z 605.9K followers, 2.2M engagements


"The most interesting fact about this globalized chain is the number of irreplaceable single points of failure. There's only one company that can make these chips at scale. It runs on equipment that only one company can make. Out of parts that only one company can make"  
[X Link](https://x.com/fchollet/status/1960080110335480202)  2025-08-25T20:41Z 603.9K followers, 240.5K engagements


"The Transformer architecture is fundamentally a parallel processor of context but reasoning is a sequential iterative process. To solve complex problems a model needs a "scratchpad" not just in its output CoT but in its internal state. A differentiable way to loop branch and backtrack until the model finds a solution that works"  
[X Link](https://x.com/fchollet/status/2003523368805630450)  2025-12-23T17:49Z 605.8K followers, 133.9K engagements


"If you're wondering whether saturating ARC-AGI-1 or [--] means we have AGI now. I refer you to what I said when we launched ARC-AGI-2 last year (which is also the same thing I said when we announced ARC-AGI-2 was coming in Spring [----] before the rise of LLM chatbots). The ARC-AGI series is not an AGI threshold it's a compass that points the research community toward the right questions. ARC-AGI-1 is a minimal test of fluid intelligence -- to pass it you needed to show nonzero fluid intelligence. This required AI to move past the classic deep learning / LLM paradigm of pretraining scaling +"  
[X Link](https://x.com/fchollet/status/2004276612385108221)  2025-12-25T19:42Z 603.8K followers, 212.4K engagements


"Enlightenment values are what's most unique and distinctive about Western culture. They are its foundations. Human rights individualism liberty free speech valuing science and reason strong individual property rights modern state design (democracy separation of powers separation of church and state rule of law.) In particular the West's most precious gift to the world is the radical idea that every human being possesses inherent unalienable rights. Independently of whether they are members of the restricted in-group or not. This stuff is the source code of the modern world. We should cherish"  
[X Link](https://x.com/fchollet/status/2015931039626223958)  2026-01-26T23:33Z 603.7K followers, 46.2K engagements


"The growth rate of Gemini is truly remarkable. If you model the current growth of the different alternatives and extrapolate into the future it's very clear where this is going"  
[X Link](https://x.com/fchollet/status/2016552076328010094)  2026-01-28T16:40Z 603.2K followers, 67.5K engagements


"If you're feeling like inventing AGI today check out the new ARC-AGI-3 quickstart. You can get started building your own solver agent in minutes locally and you can run your experiments at [------] APM. https://docs.arcprize.org/ https://docs.arcprize.org/"  
[X Link](https://x.com/fchollet/status/2017322610397483519)  2026-01-30T19:42Z 603.7K followers, 19.3K engagements


"Meanwhile it is absolutely not the case that SaaS customers will decide to ship their own solutions rather than buying a ready-made customizable solution. Customers will always focus on their core competency and pay people to take care of the rest. Software is changing but this basic dynamic isn't"  
[X Link](https://x.com/fchollet/status/2017360590386434409)  2026-01-30T22:13Z 603.7K followers, 15.5K engagements


"This is a completely misguided take that reminds me how during the 3D printing bubble of [----] investors genuinely believed that consumers would start producing their own goods and stop buying them from stores. Sure you *can* print your own stuff or cook your own food and so on. It would be cheaper But it's just not a rational use of your resources and attention unless you are doing it for fun. Those who actually benefitted from 3D printing were. the manufacturers. AI for code is just the same. https://twitter.com/i/web/status/2017362381677203869"  
[X Link](https://x.com/fchollet/status/2017362381677203869)  2026-01-30T22:20Z 603.7K followers, [----] engagements


"We're reaching unprecedented levels of panicked gaslighting. But we have eyes we can read"  
[X Link](https://x.com/fchollet/status/2018025084976537708)  2026-02-01T18:14Z 604K followers, 84.5K engagements


"I should have said "most responsible" -- of course there is more than one person responsible. But it is an indisputable fact that Schrder is the single most responsible individual here aside from Putin himself. Trittin Merkel and the Green party also have their share of responsibility"  
[X Link](https://x.com/fchollet/status/2018389858377773209)  2026-02-02T18:23Z 603.7K followers, 31.1K engagements


"Expect more US-based global tech companies to considerably expand their engineering offices in India Europe possibly Japan/Korea Alphabet is plotting to dramatically expand its presence in India with the possibility of taking millions of square feet in new office space in Bangalore Indias tech hub https://t.co/OciaCnCTW0 Alphabet is plotting to dramatically expand its presence in India with the possibility of taking millions of square feet in new office space in Bangalore Indias tech hub https://t.co/OciaCnCTW0"  
[X Link](https://x.com/fchollet/status/2018800432634888444)  2026-02-03T21:35Z 603.8K followers, 144.9K engagements


"@SuperHumanEpoch Looks like they didn't ask for your opinion then because it's already happening big time -- London Paris Munich and Zurich specifically"  
[X Link](https://x.com/fchollet/status/2018802268045525410)  2026-02-03T21:42Z 603.8K followers, [----] engagements


"Given the degree of undiluted and universal hate SaaS is getting at this point we can't be too far from the bottom. Reminds me of when everybody knew with absolute confidence that Google was an AI loser and already dead (last year). Good times It is genuinely remarkable how much money you can make by watching negative sentiment reach a fever pitch on FinTwit and then buying Works over and over and over and over It is genuinely remarkable how much money you can make by watching negative sentiment reach a fever pitch on FinTwit and then buying Works over and over and over and over"  
[X Link](https://x.com/fchollet/status/2018908862292898040)  2026-02-04T04:45Z 603.9K followers, 45.8K engagements


"Activation-aware quantization (AWQ) is now built-in in Keras as a new quantization strategy -- it lets you retain greater performance with smaller weights"  
[X Link](https://x.com/fchollet/status/2019836229970456784)  2026-02-06T18:10Z 603.5K followers, [----] engagements
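
For context, a minimal sketch of the Keras post-training quantization workflow the post refers to. Only the general `Model.quantize()` pattern is shown; the exact AWQ mode string and any calibration arguments are assumptions, not taken from the post.

```python
# Minimal sketch of Keras post-training quantization via Model.quantize().
# "int8" is a long-standing mode; per the post above, AWQ is available as an
# additional strategy (the exact mode name, e.g. "awq", is an assumption).
import keras

inputs = keras.Input(shape=(128,))
x = keras.layers.Dense(256, activation="relu")(inputs)
outputs = keras.layers.Dense(10)(x)
model = keras.Model(inputs, outputs)

model.quantize("int8")  # swap in the AWQ strategy name on a recent Keras build
```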


"Strong benchmarking results from AWQ"  
[X Link](https://x.com/fchollet/status/2019836476943642908)  2026-02-06T18:11Z 603.5K followers, [----] engagements


"Another new quantization strategy: int4 sub-channel quantization"  
[X Link](https://x.com/fchollet/status/2019836758779900270)  2026-02-06T18:13Z 603.4K followers, [---] engagements


"One-line export of any Keras model to LiteRT (the successor to TFLite) regardless of backend. Works with iOS Android"  
[X Link](https://x.com/fchollet/status/2019837128449097915)  2026-02-06T18:14Z 603.4K followers, [---] engagements
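
A hedged sketch of the one-line export described above. `Model.export()` is the standard Keras 3 export entry point (e.g. `format="tf_saved_model"`); the `"litert"` format string below is an assumption based on the post.

```python
import keras

# Tiny stand-in model; any Keras model, on any backend, would do.
inputs = keras.Input(shape=(4,))
outputs = keras.layers.Dense(2)(inputs)
model = keras.Model(inputs, outputs)

# Export for on-device inference (iOS / Android). Assumed format name.
model.export("model.tflite", format="litert")
```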


"Google is granting TPU research & education awards -- free TPU compute for accepted Keras + JAX projects"  
[X Link](https://x.com/fchollet/status/2019838151834103947)  2026-02-06T18:18Z 603.5K followers, [----] engagements


"@Gabe__MD False Nokia sales peaked literally months after the release of the iPhone so long before the iPhone went mainstream (meanwhile genAI today has 1.8B users). How do you not fact check a basic factual claim before you make it it takes 30s"  
[X Link](https://x.com/fchollet/status/2020602497338057161)  2026-02-08T20:55Z 603.9K followers, [---] engagements


"@vkhosla @agenticasdk You should try ARC-AGI-3 (developer preview is available now full benchmark coming in a few weeks)"  
[X Link](https://x.com/fchollet/status/2021820241521922154)  2026-02-12T05:34Z 606.2K followers, 13.8K engagements


"@meowbooksj @IterIntellectus We announced ARC-AGI-3 one year ago. The preview has been online for a while. Full release in a few weeks"  
[X Link](https://x.com/fchollet/status/2021985443445977463)  2026-02-12T16:31Z 606.1K followers, [----] engagements


"Natural evolution suggests that AGI won't come from larger models that cram more and more specific knowledge but from discovering the meta-rules that allow a system to grow and adapt its own architecture in response to the environment"  
[X Link](https://x.com/fchollet/status/2019152128779186563)  2026-02-04T20:52Z 606.2K followers, 71.1K engagements


"Radiologists are a good example -- a job we were promised since [----] would soon disappear. The lesson is that even if the core tasks underlying a job can be done with AI that doesn't mean the human expert isn't still needed"  
[X Link](https://x.com/fchollet/status/2019610588612292834)  2026-02-06T03:14Z 606.2K followers, 22.8K engagements


"When you lack a grounded causal model of the world your "predictions" are simply a remix of narratives you've heard from others. Reminds me of something actually"  
[X Link](https://x.com/fchollet/status/2019861980413325785)  2026-02-06T19:53Z 606.2K followers, 38.7K engagements


"None of what you just said is remotely accurate. ARC did not come out [--] years ago but over [--] years ago. ARC-2 was announced several years before ARC-1 was saturated. And ARC was never said to be impossible for AI (the point of the benchmark was obviously to get solve by AI). It was said to be impossible for LLMs which proved accurate. Progress came from pivoting towards test-time adaption not from scaling up LLMs. *To this very day* base LLMs still perform abysmally low on ARC"  
[X Link](https://x.com/fchollet/status/2022040149794967763)  2026-02-12T20:08Z 606.2K followers, 55.9K engagements


"Interesting finding on frontier model performance on ARC -- due to extensive direct targeting of the benchmark models are overfitting to the original ARC encoding format. Frontier model performance remains largely tied to a familiar input distribution. @mikeknoop We found that if we change the encoding from numbers to other kinds of symbols the accuracy goes down. (Results to be published soon.) We also identified other kinds of possible shortcuts. @mikeknoop We found that if we change the encoding from numbers to other kinds of symbols the accuracy goes down. (Results to be published soon.)"  
[X Link](https://x.com/fchollet/status/2022787435348840630)  2026-02-14T21:38Z 606.2K followers, 22.7K engagements


"Live-tweeting the Keras community meeting. First off: new model architectures in KerasHub Latest Gemmas GPT-OSS Qwen"  
[X Link](https://x.com/fchollet/status/2019835435921666353)  2026-02-06T18:07Z 606.2K followers, 13.9K engagements


"Some people might say "but aren't humans also very sensitive to encoding format" In my opinion for an actually intelligent agent re-encoding a task with a known encoding scheme should always be a no-op for performance. If you give me a set of multiplications to do but you've encoded the values in binary my first action will be to decode them back to the format I'm familiar with - digits. Because "decode binary" and "multiply" are simple and error-correctable operations final performance should be 100% all of the time. In fact you could chain many such indirection steps and still see"  
[X Link](https://x.com/fchollet/status/2022793276437459294)  2026-02-14T22:01Z 606.2K followers, [----] engagements


"The new Keras release (3.11.0) is out Main upgrades: int4 quantization with all backends Support for Grain a data i/o and streaming library inspired by tf-data that is backend-agnostic On the JAX side integration with the NNX library -- if you're a NNX user you can start using any Keras layer/model (including models from KerasHub) as a NNX module Release notes: https://github.com/keras-team/keras/releases/tag/v3.11.0 https://github.com/keras-team/keras/releases/tag/v3.11.0"  
[X Link](https://x.com/fchollet/status/1950582199574835455)  2025-07-30T15:40Z 564.1K followers, 29.8K engagements


"Languages follow a power law distribution. There are. [----] living languages with over [----] speakers [---] with over 1M speakers [---] with over 10M speakers [--] with over 100M speakers"  
[X Link](https://x.com/fchollet/status/1951120293734523326)  2025-08-01T03:18Z 567.4K followers, 40.7K engagements


"The claim wasn't "self-driving cars will eventually work" (which was kind of obvious) but "they will be deployed at scale in every single city before 2020" In [----] I knew people who decided not to get their license (in practice they ended up getting it a few years later anyway) because they assumed no one would be driving anymore by [----]. It was a very mainstream view in SV"  
[X Link](https://x.com/fchollet/status/1951400687469994494)  2025-08-01T21:52Z 564.1K followers, 12.9K engagements


"The paper "Hierarchical Reasoning Models" has been making the rounds lately collecting tens of thousands of likes on Twitter across dozens of semi-viral threads which is quite unusual for a research paper. The paper claims 40.3% accuracy on ARC-AGI-1 with a tiny model (27M parameters) trained from scratch without any external training data -- if real this would represent a major reasoning breakthrough. I just did a deep dive on the paper and codebase. It's good read detailed yet easy to follow. I think the ideas presented are quite interesting and the architecture is likely valuable. The"  
[X Link](https://x.com/fchollet/status/1951807511474147379)  2025-08-03T00:49Z 563.3K followers, 27.4K engagements


"The big breakthrough for convnets was the first GPU-accelerated CUDA implementation which immediately started winning first place in image classification competitions. Remember when that happened I do. That was Dan Ciresan in [----] Who invented convolutional neural networks (CNNs) 1969: Fukushima had CNN-relevant ReLUs [--]. 1979: Fukushima had the basic CNN architecture with convolution layers and downsampling layers [--]. Compute was [---] x more costly than in [----] and a billion x more costly than https://t.co/TRS8zg4vCA Who invented convolutional neural networks (CNNs) 1969: Fukushima had"  
[X Link](https://x.com/fchollet/status/1952121621583663440)  2025-08-03T21:37Z 575.1K followers, 166.5K engagements


"The path forward is not to build a "god in a box" it's to create intelligent systems that integrate with existing processes in particular science and humans at large to empower and accelerate them"  
[X Link](https://x.com/fchollet/status/1952130743418974243)  2025-08-03T22:13Z 567.8K followers, 109.3K engagements


"Kaggle just launched the NeurIPS [----] Code Golf competition -- the goal is for you to write Python solution programs to ARC-AGI-1 tasks while keeping the programs as small as possible. Are you better at writing code than frontier models https://www.kaggle.com/competitions/google-code-golf-2025 https://www.kaggle.com/competitions/google-code-golf-2025"  
[X Link](https://x.com/fchollet/status/1953493314323562922)  2025-08-07T16:27Z 567.6K followers, 52.8K engagements
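
To give a flavor of the format: a code-golf entry is typically one tiny Python function per task, mapping an input grid to the output grid. The "mirror each row" task below is invented for illustration and is not taken from the actual competition.

```python
# Hypothetical example of a code-golfed ARC-style solver. Grids are lists of
# lists of ints; the goal is a correct transformation in as few bytes as possible.
def p(g):
    return [row[::-1] for row in g]  # mirror each row left-right

assert p([[1, 2, 0], [3, 4, 5]]) == [[0, 2, 1], [5, 4, 3]]
```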


"GPT-5 results on ARC-AGI [--] & [--] Top line: 65.7% on ARC-AGI-1 9.9% on ARC-AGI-2 GPT-5 on ARC-AGI Semi Private Eval GPT-5 * ARC-AGI-1: 65.7% $0.51/task * ARC-AGI-2: 9.9% $0.73/task GPT-5 Mini * ARC-AGI-1: 54.3% $0.12/task * ARC-AGI-2: 4.4% $0.20/task GPT-5 Nano * ARC-AGI-1: 16.5% $0.03/task * ARC-AGI-2: 2.5% $0.03/task https://t.co/KNl7ToFYEf GPT-5 on ARC-AGI Semi Private Eval GPT-5 * ARC-AGI-1: 65.7% $0.51/task * ARC-AGI-2: 9.9% $0.73/task GPT-5 Mini * ARC-AGI-1: 54.3% $0.12/task * ARC-AGI-2: 4.4% $0.20/task GPT-5 Nano * ARC-AGI-1: 16.5% $0.03/task * ARC-AGI-2: 2.5% $0.03/task"  
[X Link](https://x.com/fchollet/status/1953509615624499571)  2025-08-07T17:32Z 567.4K followers, 40.7K engagements


"Why do millennials like Harry Potter so much"  
[X Link](https://x.com/fchollet/status/1953984061065900293)  2025-08-09T00:57Z 568.5K followers, 87.6K engagements


"To be clear I do think it's pretty good content as far as young adult books / movies go but I'm perplexed by the sheer intensity of fandom it seems to enjoy a full 20-30 years later"  
[X Link](https://x.com/fchollet/status/1954007258217820644)  2025-08-09T02:30Z 567.6K followers, 24.9K engagements


"@svpino AGI might happen soon-ish but won't be coming from scaling up current systems which makes it tricky to time -- definitely not a matter of extrapolating from a chart"  
[X Link](https://x.com/fchollet/status/1954370554565419320)  2025-08-10T02:33Z 567.6K followers, 51.7K engagements


"JAX = performance & scalability Keras [--] = high velocity development compact code best practices by default Both at the same time = pretty killer Whats the one skill that separates good AI engineers from the highest-paid ones PyTorch gets you in the door. JAX gets you the higher-paid role. The biggest AI teams lean on JAX for speed and scale. If you dont understand it youre already behind. And Im not teaching you https://t.co/CeI0W9Zbjp Whats the one skill that separates good AI engineers from the highest-paid ones PyTorch gets you in the door. JAX gets you the higher-paid role. The biggest AI"  
[X Link](https://x.com/fchollet/status/1954686735646068772)  2025-08-10T23:30Z 568.2K followers, 42.3K engagements
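
A minimal sketch of the combination described above: standard Keras 3 code compiled and run on the JAX backend. The only requirement is selecting the backend before `keras` is imported; the toy model and data are just placeholders.

```python
import os
os.environ["KERAS_BACKEND"] = "jax"  # must be set before importing keras

import numpy as np
import keras

# Plain Keras 3 model definition; nothing JAX-specific in user code.
inputs = keras.Input(shape=(32,))
x = keras.layers.Dense(64, activation="relu")(inputs)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

x_train = np.random.rand(256, 32).astype("float32")
y_train = np.random.rand(256, 1).astype("float32")
model.fit(x_train, y_train, epochs=1, batch_size=32, verbose=0)  # runs via JAX
```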


"Needless to say this is not how human intelligence works. Human intelligence is compositional which means you can understand the cross product of two spaces without being explicitly exposed to a dense sampling of data pairs from those spaces. When reading a book most people can visually picture what's going on no matter how far from everyday reality the text gets -- and these people were never exposed to billions of explicit text:video pairs. In fact they were exposed to virtually no such data"  
[X Link](https://x.com/fchollet/status/1955009824560910426)  2025-08-11T20:53Z 571K followers, 46.8K engagements


"Open questions about driverless ride hailing economics: [--]. What will be the cost reduction (over Uber/Lyft) of removing the driver [--]. How much does that cost reduction increase demand [--]. Would the UX change significantly affect demand [--]. Would we see a large increase in geographic availability (no need for drivers = can put more taxis on the road) For 1: the labor cost of a Lyft/Uber ride after accounting for everything else is only 20-40% of the price which caps the reduction at -40% in the best case scenario. However a driverless taxi network would have significantly higher fixed costs (AI"  
[X Link](https://x.com/fchollet/status/1955336778015183152)  2025-08-12T18:33Z 565.9K followers, 37.8K engagements


"GenAI isn't just a technology; it's an informational pollutanta pervasive cognitive smog that touches and corrupts every aspect of the Internet. It's not just a productivity tool; it's a kind of digital acid rain silently eroding the value of all information. Every image is no longer a glimpse of reality but a potential vector for synthetic deception. Every article is no longer a unique voice but a soulless permutation of data a hollow echo in the digital chamber. This isn't just content creation; it's the flattening of the entire vibrant ecosystem of human expression transforming a rich"  
[X Link](https://x.com/fchollet/status/1955603320212684834)  2025-08-13T12:12Z 575.1K followers, 681.3K engagements


"@wewalkwillow Believe it or not I wrote it myself. It's not satire; it's a pastiche or perhaps a parody"  
[X Link](https://x.com/fchollet/status/1955605042243002381)  2025-08-13T12:19Z 568.2K followers, 46.9K engagements


"@nagaraj_arvind @wewalkwillow I painstakingly copy-pasted them inI would normally use a double dash instead of a literal em dash character. Dashing innit"  
[X Link](https://x.com/fchollet/status/1955716406999441731)  2025-08-13T19:41Z 567.6K followers, [----] engagements


"Google just dropped a new tiny LLM with outstanding performance -- Gemma3 270M. Now available on KerasHub. Try the new presets gemma3_270m and gemma3_instruct_270m"  
[X Link](https://x.com/fchollet/status/1956059444523286870)  2025-08-14T18:24Z 571.5K followers, 67K engagements
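
A hedged sketch of loading the preset named in the post via KerasHub. The `Gemma3CausalLM` class name and the `generate()` call follow KerasHub's usual CausalLM conventions; treat both as assumptions if your KerasHub version differs, and note that `from_preset` downloads model weights.

```python
import keras_hub

# Load the 270M instruction-tuned preset named in the post and sample from it.
lm = keras_hub.models.Gemma3CausalLM.from_preset("gemma3_instruct_270m")
print(lm.generate("Explain what a convolution does, in one sentence.", max_length=64))
```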


"Interesting findings from this post: [--]. It should be obvious to anyone who has interacted with LLMs before that the writing style of the tweet is a conspicuous caricature of AI slop (e.g. em dashes the "it's not. it's." construction rambling florid prose etc.). Yet many people reacted by saying "It's written with AI" as if it were some kind of clever gotcha. (It was in fact not written with AI unlike a good fraction of the comments.) [--]. Many people also react by saying this prose is "beautiful." (I don't think it is.) I guess this illuminates why LLMs have converged on this style: many people"  
[X Link](https://x.com/fchollet/status/1956116739386933646)  2025-08-14T22:12Z 570.5K followers, 48.6K engagements


"Being pro-technology doesn't mean being blind to the negative effects of new technology. It's an exercise in pragmatic optimism -- maximizing the upside while managing the downside"  
[X Link](https://x.com/fchollet/status/1957276052029649130)  2025-08-18T02:59Z 571.5K followers, 69K engagements


"JAX on GPU is basically as good actually Jax on TPU will solve all your problems Jax on TPU will solve all your problems"  
[X Link](https://x.com/fchollet/status/1957285456543879431)  2025-08-18T03:36Z 570.7K followers, 70.8K engagements


"LLM adoption among US workers is closing in on 50%. Meanwhile labor productivity growth is lower than in [----]. Many counter-arguments can be made here e.g. "they don't know yet how to be productive with it they've only been using for 1-2 years" "50% is still too low to see impact" "models next year will be unbelievably better" etc. But I think we now have enough evidence to say that the [----] talking point that "LLMs will make workers 10x more productive" (some folks even quoted 100x) is probably not accurate. LLM adoption rose to 45.9% among US workers as of June/July [----] according to a"  
[X Link](https://x.com/fchollet/status/1958329112101343701)  2025-08-21T00:43Z 575.8K followers, 930.5K engagements


"By the way I don't know if people realize this but the [----] work-from-home switch coincided with a major productivity boom and the late [----] and [----] back-to-office reversal coincided with a noticeable productivity drop. It's right there in the statistics. Narrative violation Productivity growth is now back to pre-2020 levels"  
[X Link](https://x.com/fchollet/status/1958331145634152743)  2025-08-21T00:51Z 571.7K followers, 80.1K engagements


"@polynoamial Back in [----] AGI was 1-2 years away (in the form of GPT-5 no less) productivity was about to increase by 10-100x and developers were about to go extinct. People who questioned the narrative were a tiny minority. We were proven right"  
[X Link](https://x.com/fchollet/status/1958342875126865927)  2025-08-21T01:38Z 571.2K followers, 34.5K engagements


"People ask me "didn't you say before ChatGPT that deep learning had hit a wall and there would be no more progress" I have never said this. I was saying the opposite (that scaling DL would deliver). You might be thinking of Gary Marcus. My pre-ChatGPT position (below) was that scaling up DL would keep delivering better and better results and *also* that it wasn't the way to AGI (as I defined it: human-level skill acquisition efficiency). This was a deeply unpopular position at the time (neither AI skeptic nor AGI-via-DL-scaling prophet). It is now completely mainstream. Two perfectly"  
[X Link](https://x.com/fchollet/status/1958410017683681698)  2025-08-21T06:05Z 571.9K followers, 203.7K engagements


"People also ask "didn't you say in [----] that LLMs could not reason" I have also never said this. I am on the record across many channels (Twitter podcasts.) saying that "can LLMs reason" was not a relevant question just semantics and that the more interesting question was "could they adapt to novel tasks beyond what they had been trained on" -- and that the answer was no. Also correct in retrospect and a mainstream position today"  
[X Link](https://x.com/fchollet/status/1958410745496129877)  2025-08-21T06:08Z 570.4K followers, 13.4K engagements


"I have been consistently bullish on deep learning since [----] back when deep learning was maybe a couple thousands of people. I have also been consistently bullish on scaling DL -- not as a way to achieve AGI but as a way to create more useful models"  
[X Link](https://x.com/fchollet/status/1958411239400599769)  2025-08-21T06:09Z 571.6K followers, 26.1K engagements


"Back in [----] my book had an entire chapter on generative AI including language modeling and image generation. I wrote that content in [----] and early [----]. This was some of the earliest textbook content that covered generative AI. All the way back in [----] I was convinced that AI would one day become a major source of cultural content creation -- which was a completely outlandish position at the time"  
[X Link](https://x.com/fchollet/status/1958412150055321966)  2025-08-21T06:13Z 571.4K followers, 24.3K engagements


"In general there are two different kinds of methodology to produce progress in any science or engineering field. both are important and can lead to transformative progress. There's the "Edison way" where you brute-force a large predefined design space and you keep what works without necessarily understanding why it works. This is akin to biological evolution. Nearly all of deep learning was built this way (despite the fancy math in papers which is there to look nice 99% of the time). And there's the "Einstein way" where you think up big ideas in a top-down fashion and derive precise results"  
[X Link](https://x.com/fchollet/status/1958914107480137748)  2025-08-22T15:28Z 572.8K followers, 131.3K engagements


"The proprietary frontier models of today are ephemeral artifacts. Essentially very expensive sandcastles. Destined to be washed away by the rising tide of open source replication (first) and algorithmic disruption (later)"  
[X Link](https://x.com/fchollet/status/1959466875622224327)  2025-08-24T04:04Z 574.1K followers, 231K engagements


"I'll take the other side of this bet. By [----] all jobs will be replaced by AI and robots. Easily. The US labor force is about [---] million workers. About [--] million of those jobs include hands-on work. Automated systems can work four shifts a week. Replacing all physical labor would require about [--] million By [----] all jobs will be replaced by AI and robots. Easily. The US labor force is about [---] million workers. About [--] million of those jobs include hands-on work. Automated systems can work four shifts a week. Replacing all physical labor would require about [--] million"  
[X Link](https://x.com/fchollet/status/1959741006905233470)  2025-08-24T22:14Z 573.3K followers, 310.7K engagements


"@javelartin As any US VC who moved to Miami in [----] will tell you"  
[X Link](https://x.com/fchollet/status/1960080703133250032)  2025-08-25T20:43Z 571.7K followers, 96.8K engagements


"@singidunumx ASML is probably undervalued. TMSC is probably fairly valued given geopolitical risk around it. NVDA is probably overvalued. ZEISS is private"  
[X Link](https://x.com/fchollet/status/1960084881842823411)  2025-08-25T21:00Z 571.9K followers, 17.6K engagements


"Not sure if USD inflation will ever be under 2% again in my lifetime"  
[X Link](https://x.com/fchollet/status/1960149722221658372)  2025-08-26T01:18Z 572.3K followers, 66.7K engagements


"If your capital is in USD I hope you've made at least 10% in capital gains YTD (after tax) otherwise you are now poorer. Since USDX is down by this much"  
[X Link](https://x.com/fchollet/status/1960151730622235011)  2025-08-26T01:26Z 572.8K followers, 33.6K engagements


"Saying that deep learning is "just a bunch of matrix multiplications" is about as informative as saying that computers are "just a bunch of transistors" or that a library is "just a lot of paper and ink." It's true but the encoding substrate is the least important part here. It's the programs being encoded that are interesting and useful: what they can do what they can't do how well they generalize how efficiently they can be learned etc"  
[X Link](https://x.com/fchollet/status/1960548626117353856)  2025-08-27T03:43Z 571.7K followers, 208K engagements


"When a model gives you the right answer to a reasoning question you can't tell whether it was via memorization or via reasoning. A simple way to tell between the two is to tweak your question in a way that [--]. changes the answer [--]. requires some reasoning to adapt to the change. If you still get the same answer as before. it was memorization"  
[X Link](https://x.com/fchollet/status/1960808676262076629)  2025-08-27T20:56Z 572K followers, 90.6K engagements
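
The probe described above is easy to operationalize. In the sketch below, `ask_model` is a hypothetical callable (prompt in, answer string out) standing in for whatever LLM client you use, and the bat-and-ball prompts are invented examples; the point is only that the tweak changes the correct answer while requiring a small reasoning step to notice.

```python
# Sketch of the memorization-vs-reasoning probe. `ask_model` is a hypothetical
# stand-in for an LLM API; the prompts below are illustrative, not from the post.
def looks_like_memorization(ask_model) -> bool:
    original = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
                "more than the ball. How much does the ball cost?")
    # Same template, but the tweak changes the correct answer (5 cents -> 10
    # cents) and requires redoing the small reasoning step to see that.
    tweaked = ("A bat and a ball cost $1.20 in total. The bat costs $1.00 "
               "more than the ball. How much does the ball cost?")
    # Identical answers to both variants suggest retrieval of a memorized
    # answer rather than adaptation to the changed question.
    return ask_model(original).strip() == ask_model(tweaked).strip()
```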


"Many people think "reasoning" is a category of tasks -- e.g. involving numbers riddles etc. It's not. It's an ability underpinned by compositional generalization. You can always solve "reasoning" tasks without reasoning. Just memorize -- either memorize the answer or memorize the general question/answer template"  
[X Link](https://x.com/fchollet/status/1960810039259881506)  2025-08-27T21:01Z 571.8K followers, 26K engagements


"Model interpretability is not a question of which ML method you're using (which model substrate e.g. NNs vs graphical models vs symbolic code). Any substrate can be interpretable when the model is small enough. It's purely a question of model size/complexity. The behavior of a complex codebase or a complex graphical model is not interpretable despite the fact that you can locally read any bit of what it does. It is perhaps *debuggable* in specific cases with great effort -- but the same would be true of NNs as well. IMO the statement "we must use interpretable methods" is a nonstarter it"  
[X Link](https://x.com/fchollet/status/1961140723392332276)  2025-08-28T18:55Z 572.9K followers, 65K engagements


"With enough compute all approaches start looking alike. Compute is the great equalizer"  
[X Link](https://x.com/fchollet/status/1961546393849663982)  2025-08-29T21:47Z 575K followers, 382.3K engagements


"This is a great definition of magic and this is precisely why computers feel like magical artifacts. What they do is simple but they do a lot of it very fast. An amount humans cannot comprehend at a speed humans cannot comprehend. @fchollet there is a definition of magic that goes something like magic works because we underestimate the time someone would take to master something to make it appear effortless/invisible compute feels like that for ai @fchollet there is a definition of magic that goes something like magic works because we underestimate the time someone would take to master"  
[X Link](https://x.com/fchollet/status/1961578336720986329)  2025-08-29T23:54Z 572.8K followers, 44.5K engagements


"Homo sapiens has had current levels of fluid intelligence for 50k-100k years perhaps even longer. Yet we only reached the moon [--] years ago. Operationalizing and deploying general intelligence takes much longer than we assume"  
[X Link](https://x.com/fchollet/status/1961822084381974896)  2025-08-30T16:03Z 571K followers, 112.7K engagements


"This was the comment. Had to delete it since I don't want my comments to serve as an outlet for this discourse"  
[X Link](https://x.com/fchollet/status/1961929598318645741)  2025-08-30T23:10Z 572K followers, 30K engagements


"Do consider: some humans [-----] years ago could paint better than you do (see Altamira bisons below). And the basics of civilization -- agriculture domestication writing complex stone architecture metallurgy (gold) were independently reinvented among populations that were completely isolated from each other for [-----] years"  
[X Link](https://x.com/fchollet/status/1961932215996317712)  2025-08-30T23:21Z 572K followers, 38.8K engagements


"I want to be absolutely clear: it *is* the scientific consensus that behaviorally and cognitively modern humans data back at least [-----] years. If you think fluid intelligence is something that recently appeared you are going against a vast body of evidence"  
[X Link](https://x.com/fchollet/status/1961950385616429469)  2025-08-31T00:33Z 572K followers, 27.9K engagements


"This is what we're dealing with. If you're sharing this data as "proof" that intelligence is a recent and exclusively European development you are showing yourself to be incapable of the most basic level of critical thinking"  
[X Link](https://x.com/fchollet/status/1961953935964733526)  2025-08-31T00:47Z 572.2K followers, 32.8K engagements


"The general idea of this world map is for every country that belongs to the "wrong" category Lynn would go and find a IQ test study conducted on a mentally disabled group (e.g. a few children in a mental institution a few children that took part in a study on malnourishment etc.) and then would report the IQ number as a "national average" with no further context. Hence why you end up with IQs reflecting mental disability To be clear this is not how science works and Richard Lynn is not a scientist"  
[X Link](https://x.com/fchollet/status/1961960886790394049)  2025-08-31T01:15Z 572.2K followers, 37.9K engagements


"1980s Japan didn't just have a roaring economy it also had many of the best AI research labs in the world. It still retained world-leading robotics expertise up until the mid 2000s. But it is all but absent from the current AI wave"  
[X Link](https://x.com/fchollet/status/1962945301767245837)  2025-09-02T18:26Z 572.9K followers, 151.3K engagements


"You haven't so much invented such a solution as you have discovered the coordinate system in which the problem becomes trivial"  
[X Link](https://x.com/fchollet/status/1964440949935001745)  2025-09-06T21:29Z 573.6K followers, 25.6K engagements


"This is why to solve a difficult open-ended problem (like AGI) you must always start by asking the right question"  
[X Link](https://x.com/fchollet/status/1964441290386657489)  2025-09-06T21:31Z 573.6K followers, 23.2K engagements


"An experiment: I started a subscriber feed. For the time being all of my spicy opinion tweets will go there. Let's see how it works out"  
[X Link](https://x.com/fchollet/status/1964763205466775923)  2025-09-07T18:50Z 574.1K followers, 45.4K engagements


""Does a causal substrate necessarily need to be symbolic" you ask. Could it be based on parametric curves Yes it does because in such a substrate the model is isomorphic to the graph of causal factors of what you are modeling. And such a graph is necessarily very sparse i.e. it's a symbolic graph. It has completely different properties from a continuous manifold"  
[X Link](https://x.com/fchollet/status/1964776744852111744)  2025-09-07T19:44Z 574.3K followers, 31.7K engagements


"I like the analogy of the "bicycle for the mind" because riding a bike requires effort from you and the bike multiplies the effect of that effort. I don't think the end goal of technology should be to let you sit around and twiddle your thumbs"  
[X Link](https://x.com/fchollet/status/1964834406830600269)  2025-09-07T23:33Z 573.7K followers, 91.2K engagements


"As slop floods the Internet and as humans start relying on generative AI more and more it's inevitable that future models will be mostly trained on slop (except for verifiable reasoning tasks where the training will be done in sims). Culture will turn into slop remixed from slop remixed from slop"  
[X Link](https://x.com/fchollet/status/1964900285698252903)  2025-09-08T03:55Z 573.7K followers, 116K engagements


"AGI will not be an algorithmic encoding of an individual mind but of the process of Science itself. The light of reason made manifest"  
[X Link](https://x.com/fchollet/status/1965111660554977488)  2025-09-08T17:55Z 574.2K followers, 49.7K engagements


"Worth noting that the output artifacts of science -- the models it produces -- are symbolic in nature. Most commonly expressed in mathematical form sometimes in code. Science is a program synthesis process"  
[X Link](https://x.com/fchollet/status/1965112066932703334)  2025-09-08T17:56Z 574.7K followers, 22.1K engagements


"The keyword here isn't "understand". It's "novel". You "truly understand" a thing if your model of it lets you make sense of every possible instance of the thing including those that are very far from what you've seen before (extreme generalization). You "somewhat understand" the thing if you can approach at least some new instances of it (local generalization). If you can only handle what you have seen before you are merely memorizing / retrieving. A student who truly understands F=ma can solve more novel problems than a Transformer that has memorized every physics textbook ever written. A"  
[X Link](https://x.com/fchollet/status/1965843304098181223)  2025-09-10T18:22Z 574.3K followers, 73.9K engagements


""Understanding" isn't some magical ineffable concept. It's a very concrete and practical property of an agent reflected in what it can *do*. We know LLMs don't "truly understand" because we can see what they can't do. It's as simple as that"  
[X Link](https://x.com/fchollet/status/1965843788003524690)  2025-09-10T18:24Z 574.3K followers, 21.4K engagements


"Theory is a ghost until you confront it with reality. You cannot choose the evidence you find. Always go with the evidence and never fall in love with ghosts of your own creation"  
[X Link](https://x.com/fchollet/status/1966519124634292480)  2025-09-12T15:07Z 575K followers, 39.2K engagements


""entangled representations" is a symptom not the cause. The cause is using as your representation substrate parametric curves fitted via gradient descent. The fix is to use maximally concise symbolic programs instead. Which by construction will be disentagled (otherwise they wouldn't be maximally concise)"  
[X Link](https://x.com/fchollet/status/1966691702917526016)  2025-09-13T02:33Z 573.6K followers, [----] engagements


"The most important skill for a researcher is not technical ability. It's taste. The ability to identify interesting and tractable problems and recognize important ideas when they show up. This can't be taught directly. It's cultivated through curiosity and broad reading"  
[X Link](https://x.com/fchollet/status/1966893993339597034)  2025-09-13T15:57Z 576K followers, [----] engagements


"NNs are not a dead end. They are a great fit for problems that must be naturally understood by embedding samples in continuous manifolds where distance approximates semantic similarity -- this covers all perception and intuition problems. They are suboptimal for anything that must be naturally understood as symbolic function composition. There program synthesis is the best fit. Future AI systems will feature both. Some already do. You can try to make NN representations closer to symbolic programs but this will remain suboptimal even if it yields progress. Optimality will be reached when our"  
[X Link](https://x.com/fchollet/status/1967261315493474576)  2025-09-14T16:17Z 573.6K followers, [----] engagements


"99% of research is finding out what doesn't work. The other 1% is what they write the textbooks about"  
[X Link](https://x.com/fchollet/status/1968057996203921655)  2025-09-16T21:02Z 576.1K followers, [----] engagements


"If you see me posting less here it's because I'm posting on the private feed which is where all the unfiltered nonsense is"  
[X Link](https://x.com/fchollet/status/1968415277277921548)  2025-09-17T20:42Z 574.9K followers, 37.8K engagements


"@kevin_jordan__ There's a ton of new content on LLMs and LLM centric workflows"  
[X Link](https://x.com/fchollet/status/1968733979218940128)  2025-09-18T17:48Z 574.7K followers, [----] engagements


"In [----] there was a big uptick in companies marketing their products as AI powered. [----] will be the year of companies marketing their products as AI free (a trend already underway)"  
[X Link](https://x.com/fchollet/status/1971643108522946646)  2025-09-26T18:28Z 577.3K followers, 91.2K engagements


"No theory feels true to me unless it is simple (relative to what it is explaining). The solution most likely to generalize is always the simplest one"  
[X Link](https://x.com/fchollet/status/1971946235704889833)  2025-09-27T14:33Z 577.6K followers, 34.9K engagements


"The point of science is to cover the greatest number of empirical facts at the least model complexity cost"  
[X Link](https://x.com/fchollet/status/1971946626920206421)  2025-09-27T14:34Z 576.6K followers, 20.6K engagements


"By now there are probably more agents platforms than agentic workflows actually in use"  
[X Link](https://x.com/fchollet/status/1972273695675883545)  2025-09-28T12:14Z 577.7K followers, 52.9K engagements


"you see the killer advantage of mechahorses is that you dont need to buy a new carriage. You dont need to build a new mill. The mechahorse is a drop-in horse replacement for all the different devices horses are currently powering thousands of them"  
[X Link](https://x.com/fchollet/status/1972307334740512820)  2025-09-28T14:28Z 577.7K followers, 32K engagements


"Meanwhile AGI will in fact get better by simply adding more *compute*. It will not be bottlenecked by the availability of human-generated text"  
[X Link](https://x.com/fchollet/status/1972478358413013232)  2025-09-29T01:47Z 576.6K followers, 29.4K engagements


"As a reminder we are also making the content available for free online so anyone can learn from it: https://deeplearningwithpython.io/ https://deeplearningwithpython.io/"  
[X Link](https://x.com/fchollet/status/1972790863089823874)  2025-09-29T22:29Z 577.6K followers, 11.2K engagements


"@__asan__t Yes it's a good start for learning about LLMs and how to build with them (in addition to general ML theory)"  
[X Link](https://x.com/fchollet/status/1972806020788625583)  2025-09-29T23:29Z 575.9K followers, [---] engagements


"So why exactly is there a huge quantum computing bubble at this point in time Is it just that the excess froth from the AI bubble needed somewhere to go and "quantum computing" has a cool ring to it"  
[X Link](https://x.com/fchollet/status/1974235192282808785)  2025-10-03T22:08Z 578.4K followers, 72.4K engagements


"Aperture Science did pivot from shower curtains to teleportation so there's precedent for this arc"  
[X Link](https://x.com/fchollet/status/1974239671329640826)  2025-10-03T22:26Z 577.7K followers, 18.8K engagements


"@oscarle_x It's up to $20B mc for no revenue quantum cos. And while the valuation of AI startups is highly speculative gen AI is a real technology that actually works and has well established use cases and fast increasing demand"  
[X Link](https://x.com/fchollet/status/1974240794425872847)  2025-10-03T22:30Z 577.7K followers, [----] engagements


"The single best item you can get at 7-Eleven in Japan is the egg sandwich"  
[X Link](https://x.com/fchollet/status/1974256975090229351)  2025-10-03T23:35Z 577.7K followers, 31.6K engagements


"@curiousiter @Mrlucid21 The bubble is in the gap between revenue (and revenue growth) and investment. Right now the industry spends $10 to make $1. Which cannot go on forever so either everyone starts consuming hundreds of dollars a month of gen AI or investment drops by a massive amount"  
[X Link](https://x.com/fchollet/status/1974258050551660845)  2025-10-03T23:39Z 576.6K followers, [---] engagements


"The idea that villains actually think of themselves as righteous is what's unrealistic. Real life villains almost always know they're villains"  
[X Link](https://x.com/fchollet/status/1974405529478058100)  2025-10-04T09:25Z 577.9K followers, 23.1K engagements


"The way to think about AGI is as a scalable efficient formalization & implementation of the scientific method. Not a brain in a jar"  
[X Link](https://x.com/fchollet/status/1974641737751761027)  2025-10-05T01:04Z 578.6K followers, 26.7K engagements


"This is how I can tell it's a good book store"  
[X Link](https://x.com/fchollet/status/1975865738603921905)  2025-10-08T10:07Z 578.6K followers, 45.1K engagements


"@sudo_xai A lot of them in fact. Kaggle is very popular in Japan"  
[X Link](https://x.com/fchollet/status/1975883021627863418)  2025-10-08T11:16Z 577.6K followers, [----] engagements


"@VictorTaelin @arcprize @jm_alexia @makingAGI Well HRM is another approach that doesn't use a commercial frontier model (TRM comes directly from it)"  
[X Link](https://x.com/fchollet/status/1978959516222865660)  2025-10-16T23:01Z 578.7K followers, [---] engagements


"An implementation of Reinforcement Learning agents in Keras with @OpenAI Gym: by @oshtim https://github.com/osh/kerlym https://github.com/osh/kerlym"  
[X Link](https://x.com/fchollet/status/732619320916746240)  2016-05-17T17:10Z 507.8K followers, [---] engagements


"Reassuring to know that SoftBank is hard at work thinking about our future: (via @kcimc) http://cdn.softbank.jp/en/corp/set/data/irinfo/investor/shareholders/pdf/36/softbank_meeting36_004.pdf http://cdn.softbank.jp/en/corp/set/data/irinfo/investor/shareholders/pdf/36/softbank_meeting36_004.pdf"  
[X Link](https://x.com/fchollet/status/755253901914312705)  2016-07-19T04:12Z 573.4K followers, [---] engagements


"A Keras implementation of DenseNet: - an extension of ResNet where every block is connected to all previous blocks. https://github.com/tdeboissiere/DeepLearningImplementations/tree/master/DenseNet https://github.com/tdeboissiere/DeepLearningImplementations/tree/master/DenseNet"  
[X Link](https://x.com/fchollet/status/786674352293023748)  2016-10-13T21:05Z 512K followers, [---] engagements
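
For readers who want a concrete picture of the "every block connected to all previous blocks" idea mentioned in the post above, here is a minimal sketch of a dense block in the Keras functional API. It is only an illustration of the connectivity pattern, not the full DenseNet from the linked repository; the input shape, growth rate, and layer count are arbitrary choices.

```python
import keras
from keras import layers

def dense_block(x, num_layers=4, growth_rate=12):
    # Each new layer receives the concatenation of all previous feature maps,
    # which is the defining trait of a DenseNet block.
    features = [x]
    for _ in range(num_layers):
        h = layers.Concatenate()(features) if len(features) > 1 else features[0]
        h = layers.BatchNormalization()(h)
        h = layers.Activation("relu")(h)
        h = layers.Conv2D(growth_rate, 3, padding="same")(h)
        features.append(h)
    # The block's output also exposes every intermediate feature map.
    return layers.Concatenate()(features)

inputs = keras.Input(shape=(32, 32, 16))
outputs = dense_block(inputs)
model = keras.Model(inputs, outputs)
```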


"Two competing RL-related announcements today at NIPS: OpenAI Universe & DeepMind labyrinth. Universe looks very exciting https://x.com/gdb/status/805663771976835072 Just released Universe the AI training infrastructure we've been planning since we founded OpenAI: https://t.co/SODGDZ65kk https://x.com/gdb/status/805663771976835072 Just released Universe the AI training infrastructure we've been planning since we founded OpenAI: https://t.co/SODGDZ65kk"  
[X Link](https://x.com/fchollet/status/805910051563175936)  2016-12-05T23:01Z 507.3K followers, [---] engagements


"Great demo of the latest version of Spot by Boston Dynamics at #NIPS2016"  
[X Link](https://x.com/fchollet/status/806523439192440832)  2016-12-07T15:39Z 507.7K followers, [--] engagements


"Video of the Spot demo by Boston Dynamics"  
[X Link](https://x.com/fchollet/status/806560045022511105)  2016-12-07T18:04Z 507.8K followers, [---] engagements


"Super cool work from OpenAI. https://x.com/gdb/status/896160062997016576 Our AI is undefeated against the world's top professionals including @DendiBoss @Arteezy @SumaaaaiL in Dota [--] solo https://t.co/hAdsyt7Q6C https://x.com/gdb/status/896160062997016576 Our AI is undefeated against the world's top professionals including @DendiBoss @Arteezy @SumaaaaiL in Dota [--] solo https://t.co/hAdsyt7Q6C"  
[X Link](https://x.com/fchollet/status/896161506110263297)  2017-08-12T00:08Z 507.3K followers, [---] engagements


"I think this is meant as a commentary on the progress of AI so it's worth remembering that Atlas is 100% hardcoded it involves no learning or anything that would qualify as AI these days. It's classical control theory https://x.com/elonmusk/status/934888089058549760 This is nothing. In a few years that bot will move so fast youll need a strobe light to see it. Sweet dreams https://t.co/0MYNixQXMw https://x.com/elonmusk/status/934888089058549760 This is nothing. In a few years that bot will move so fast youll need a strobe light to see it. Sweet dreams https://t.co/0MYNixQXMw"  
[X Link](https://x.com/fchollet/status/935043422510919680)  2017-11-27T07:11Z 506.7K followers, [----] engagements


"A state-of-the-art convnet trained on millions of images and videos of prehistoric animals in the wild could not recognize an auroch in this picture. But you can. In fact you can *even though you have never seen one*"  
[X Link](https://x.com/fchollet/status/940572856752095232)  2017-12-12T13:23Z 573.9K followers, [---] engagements


"A scalable deep learning model-serving API with Keras Redis and Flask https://www.pyimagesearch.com/2018/01/29/scalable-keras-deep-learning-rest-api/ https://www.pyimagesearch.com/2018/01/29/scalable-keras-deep-learning-rest-api/"  
[X Link](https://x.com/fchollet/status/958069169588617216)  2018-01-29T20:07Z 570.9K followers, [---] engagements
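
As a rough sketch of the kind of serving setup the linked article describes: a Flask endpoint wrapping a saved Keras model. The Redis-based request batching from the article is omitted here, and the model path, route name, and payload format are all illustrative assumptions rather than anything prescribed by the post.

```python
# Minimal Keras + Flask prediction endpoint (Redis batching omitted).
# "my_model.keras" is a hypothetical path to a model you saved beforehand.
import numpy as np
import keras
from flask import Flask, request, jsonify

app = Flask(__name__)
model = keras.saving.load_model("my_model.keras")

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON payload of the form {"inputs": [[...], [...], ...]}.
    batch = np.array(request.json["inputs"], dtype="float32")
    preds = model.predict(batch)
    return jsonify({"predictions": preds.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```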


"This is why there are strict regulations over who can invest in high-risk assets (like startups) or over who can day-trade on margin (in short: if you want to gamble you need to prove you can afford to). But crypto is unregulated. https://x.com/seldo/status/959244935567298560 As you watch Bitcoin plummet to earth keep in mind that in addition to the rich and greedy idiots losing their shirts a bunch of under-informed ordinary people are losing money they can't afford to lose. https://x.com/seldo/status/959244935567298560 As you watch Bitcoin plummet to earth keep in mind that in addition to"  
[X Link](https://x.com/fchollet/status/959581955963498497)  2018-02-03T00:19Z 507.4K followers, [---] engagements


"Working on designing missiles at Lockheed Martin might have a lesser human cost than working on AI software at FB. I'd like to see the establishment of a kind of Hippocratic Oath for the software engineering profession in particular for ML engineers & researchers. Don't be evil"  
[X Link](https://x.com/fchollet/status/964995615993155585)  2018-02-17T22:50Z 573.9K followers, [---] engagements


"Hypothesis: tech crashes and AI winters because they separate the believers from the opportunists may paradoxically act as catalyzers of progress -- much like mass extinctions may act as catalyzers of evolutionary change by abruptly transforming the selection landscape"  
[X Link](https://x.com/fchollet/status/965396164869632000)  2018-02-19T01:22Z 571.4K followers, [---] engagements


"Magic Leap has raised $2.3B in total at a valuation of $6B. A consumer play before product/market fit. I hope there's more to it than what the public knows and that it works out for them 🤔 https://www.bloomberg.com/news/articles/2018-03-07/magic-leap-raises-461-million-from-saudis https://www.bloomberg.com/news/articles/2018-03-07/magic-leap-raises-461-million-from-saudis"  
[X Link](https://x.com/fchollet/status/971424085983993856)  2018-03-07T16:35Z 507K followers, [---] engagements


"Seemed to me that for successful giant tech companies the fundamental tech breakthroughs take place before the startup gets formed then the seed funding is used to find product/market fit then the big $ is used to scale. Is this naive Are there historical counter-examples"  
[X Link](https://x.com/fchollet/status/971425607769759744)  2018-03-07T16:41Z 506.9K followers, [--] engagements


"Nice work from OpenAI on evolving loss functions to quickly master new tasks: https://blog.openai.com/evolved-policy-gradients/ https://blog.openai.com/evolved-policy-gradients/"  
[X Link](https://x.com/fchollet/status/986652863530061826)  2018-04-18T17:09Z 507K followers, [---] engagements


"🌉 SF bay area: area: [-----] km2 population: 7.8M density: 425.7/km2 1BR rent: $3.3k 🗼 Tokyo bay area: area: [-----] km2 (22% less) population: 37.8M (385% more (4.85x)) density: 2631/km2 1BR rent: $1k (70% less) I wonder how they do it. Ah I guess we'll never know🤔"  
[X Link](https://x.com/fchollet/status/989290688738086912)  2018-04-25T23:50Z 513.1K followers, [----] engagements

"Write papers where the citation count per year looks like this"
X Link 2026-02-12T00:38Z 606.2K followers, 75.7K engagements

"I don't know if you've noticed but there's a wave of mass psychosis rolling through tech Twitter very similar to what we experienced in spring [----] and spring [----] (interesting that the periodicity is exactly [--] years) But the vibes are much darker now than they were last time"
X Link 2026-02-12T00:48Z 606.2K followers, 406.8K engagements

"@polynoamial About one year Frontier models today perform very poorly with a minimal harness. However if big labs start directly targeting the benchmark like they did for ARC-2 numbers will go up fast"
X Link 2026-02-12T21:05Z 606.2K followers, 18.7K engagements

"You should read it some day it's a good read. Every single thing said there is still true which you would know if you had actually cared to read it. This is the key bit: the first superhuman AI will just be another step on a visibly linear ladder of progress that we started climbing long ago AGI is in the continuity of the process of Science at large (which is itself a recursively self-improving intelligent process) and that process has been moving roughly at a linear pace due to the reasons detailed in the article"
X Link 2026-02-13T04:31Z 606.2K followers, [----] engagements

"Best time to buy an asset you want to own is when everybody hates it and there's no bid. Worst time is when everybody is enthusiastic about it and wants a piece"
X Link 2026-02-13T05:04Z 606.2K followers, 35.3K engagements

"I don't think the rise of AGI will lead to a sudden exponential explosion in AI capabilities. There are bottlenecks on the sources of new capability improvements and horizontally scaling intelligence in silicon (even by a massive factor) doesn't lift those bottlenecks"
X Link 2026-02-13T18:15Z 606.2K followers, 30.7K engagements

"The 3rd edition of my book Deep Learning with Python is being printed right now and will be in bookstores within [--] weeks. You can order it now from Amazon or from Manning. This time we're also releasing the whole thing as a 100% free website. I don't care if it reduces book sales I think it's the best deep learning intro around and more people should be able to read it"
X Link 2025-09-18T14:01Z 606.2K followers, 775.5K engagements

"If more people can build software there will be more software startups and side businesses. Which means SaaS tool builders that cater to such startups will benefit from massive AI tailwinds: [--]. Their customer base will expand [--]. AI makes their job easier so they can better serve their customers (e.g. add features faster launch more services) [--]. AI presents new automation opportunities so that they can make their products higher-value [--]. AI gives them the ability to ship customizable adaptive interfaces on top of their core service"
X Link 2026-01-30T22:08Z 606.2K followers, 24.4K engagements

"Back in [----] everybody was telling me "no one uses Google search anymore it's over" From [----] to [----] Google search query volume has grown 61% to 5T/year and search revenue has grown 28% to $225B (56% of Google's revenue) The track record of Twitter pundits predicting AI disruption has been abysmal https://twitter.com/i/web/status/2020497629290148139 https://twitter.com/i/web/status/2020497629290148139"
X Link 2026-02-08T13:59Z 606.2K followers, 292.1K engagements

"Such a weird question in the first place -- it is basically impossible for any word not to carry semantic value. Let's say you introduce "uh" purely as a filler when speaking and absolutely no other purpose then "uh" has automatically acquired semantic value as a signifier of orality and will be used in written contexts to evoke orality. The only way a word could not carry semantic value is if it is inserted 100% at random. If it is non random then it has meaning. It has at least the meaning of evoking the non-random context in which it gets used. Like most mid-century linguists Chomsky"
X Link 2026-02-09T03:39Z 606.2K followers, 55.5K engagements

"The new Gemini Deep Think is achieving some truly incredible numbers on ARC-AGI-2. We certified these scores in the past few days"
X Link 2026-02-12T16:22Z 606.2K followers, 204.5K engagements

"A good canary in the coal mine for AI-caused job loss will be call centers. We're currently projecting 2.75M call center jobs in the US in [----]. In [----] it was 2.63M. The global call center market size has grown 35% in that time period (from $298B to $405B). Peak employment was [----] at 2.98M. When we see a -50% employment drop in this sector you can get ready for broad disruption across the economy"
X Link 2026-02-12T21:40Z 606.2K followers, 147.4K engagements

"I'm guessing most people on tech Twitter believe call center employment went to [--] in 2024"
X Link 2026-02-12T22:26Z 606.2K followers, 25.2K engagements

"@Yossi_Dahan_ @polynoamial ARC-4 is in the works to be released early [----]. ARC-5 is also planned. The final ARC will probably be 6-7. The point is to keep making benchmarks until it is no longer possible to propose something that humans can do and AI can't. AGI 2030"
X Link 2026-02-12T23:13Z 606.2K followers, 200.6K engagements

"Reaching AGI won't be beating a benchmark. It will be the end of the human-AI gap. Benchmarks are simply a way to estimate the current gap which is why we need to continually release new benchmarks (focused on the remaining gap). Benchmarking is a process not a fixed point. We can say we have AGI when it's no longer possible to come up with a test that evidences the gap. When it's no longer possible to point to something that regular humans can do and AI can't. Today it's still easy. I expect it will become nearly impossible by [----]. https://twitter.com/i/web/status/2022090111832535354"
X Link 2026-02-12T23:27Z 606.2K followers, 103.6K engagements

"You can write a self-replicating physical program in just [--] tokens (RNA bases). That's small enough to emerge spontaneously via brute force recombination at scale. AI is cool and all. but a new paper in @ScienceMagazine kind of figured out the origin of life The paper reports the discovery of a simple 45-nucleotide RNA molecule that can perfectly copy itself. https://t.co/TTe4sXhqUT AI is cool and all. but a new paper in @ScienceMagazine kind of figured out the origin of life The paper reports the discovery of a simple 45-nucleotide RNA molecule that can perfectly copy itself."
X Link 2026-02-13T15:38Z 606.2K followers, 155.1K engagements

"Merely knowing that an outcome you want is attainable leads you to automatically filter out decisions that clearly wouldn't lead to it thus dramatically increasing the probability you'll reach it. Belief is destiny"
X Link 2026-02-13T18:00Z 606.2K followers, 69K engagements

"AGI is in the continuity of the continual recursively self-improving capability expansion process that started when humanity developed modern science in the 1700s-1800s. It is the next step of the ladder"
X Link 2026-02-13T18:16Z 606.2K followers, 14.9K engagements

"Merely having tools or language is not enough to kickstart recursive self-improvement. The total set of prerequisites were only assembled very recently. In particular: - Writing - Scalable publishing - Sufficient social freedom to think and communicate about nature/people - Food no longer a major bottleneck Basically we had to wait until the 1700s for everything to line up. https://twitter.com/i/web/status/2022376859795886230 https://twitter.com/i/web/status/2022376859795886230"
X Link 2026-02-13T18:26Z 606.2K followers, [---] engagements

"Whenever I hear Very Serious Businessmen make confident pronouncements about the future of AI I remember what the very same people were saying in [----] about the Metaverse and NFTs"
X Link 2026-02-06T19:52Z 606.2K followers, 121K engagements

"Those predicting the death of all SaaS will fare even worse They don't understand the PMF dynamics and they don't understand the AI tailwinds"
X Link 2026-02-08T14:01Z 606.2K followers, 45K engagements

"Outside the human mind there's just one kind of abstraction substrate that has achieved these properties historically: math and code (yes it's only one kind not two)"
X Link 2026-02-09T20:27Z 606.2K followers, 20.6K engagements

"Lots of folks spread false narratives about how ARC-1 was created in response to LLMs or how ARC-2 was only created because ARC-1 was saturated. Setting the record straight: [--]. ARC-1 was designed 2017-2019 and released in [----] (pre LLMs). [--]. The coming of ARC-2 was announced in May [----] (pre ChatGPT). [--]. By mid-2024 there was still essentially no progress on ARC-1. [--]. All progress on ARC-1 & ARC-2 came from a new paradigm test-time adaptation models starting in late [----] and ramping up through [----]. [--]. Progress happened specifically because research moved away from what ARC was intended to"
X Link 2026-02-12T19:54Z 606.2K followers, 86.6K engagements

"One possible scenario for many industries is that the nature of the job changes total task throughput increases revenue increases and employment stays stable or slightly decreases. I generally don't expect to see AI-caused mass unemployment in the next [--] years"
X Link 2026-02-12T21:42Z 606.2K followers, 39K engagements

"That process if you zoom out is not exponential (though it does involved many lower-level exponentials mostly at the level of system inputs). It is essentially linear. The weight/importance of scientific progress over say 1850-1900 is comparable to 1900-1950 1950-2000 or 2000-2050"
X Link 2026-02-13T18:18Z 606.2K followers, 14.7K engagements

"Right now it's still taking me more time to generate medium-complexity diagrams by describing them to Nano Banana than by drawing them manually in Google Slides"
X Link 2026-02-13T21:10Z 606.2K followers, 36.9K engagements

"Fun fact the person responsible for this is a Russian asset Gerhard Schrder (see [----] Atomgesetz) and the purpose of the move was to ensure German energy dependence on Russia. He became chairman of the board of Rosneft and NordStream and was about to join the board of Gasprom before the war started. He made tens of millions from Russian energy companies and pro-Putin lobbying. Lifelong friend with Putin whom he lauded as a flawless democrat in [----]. Imagine voluntarily doing this to your own country https://t.co/mqbzZE7ZKu Imagine voluntarily doing this to your own country"
X Link 2026-02-02T16:00Z 606.2K followers, 616.1K engagements

"Large capital raise from Waymo to accelerate deployment. They plan to add +20 cities in [----]. I expect they will roughly double their city count every [--] months from now on using their new Zeekr-based platform ($40000 per vehicle). They should also double their weekly rides every [--] months. The age of autonomous mobility at scale is here. Waymo has raised $16B to bring the worlds most trusted driver to more cities. ✅ $126B valuation ✅ 20M+ lifetime rides ✅ 90% reduction in serious injury crashes Read more from our co-CEOs: https://t.co/Fc5I33WpYB https://t.co/zF79Sc6kzm The age of autonomous"
X Link 2026-02-02T22:47Z 606.2K followers, 71K engagements

"The best movie genre to watch to learn a new language is romantic comedies (from the target country). They're dialog-heavy they feature only everyday vocabulary you can use and they show you a realistic consensus view of local contemporary society and culture"
X Link 2026-02-06T16:16Z 606.2K followers, 55.4K engagements

"Whenever there's a TV ad for a crypto exchange it shows things like skyscraper construction sites the moon landing fighter jets etc. (you've probably seen some of them). It's funny. Nothing to be feature from crypto land so they have to use exclusively borrowed achievements. What is the "crypto industry" Like what does it produce What is the "crypto industry" Like what does it produce"
X Link 2026-02-06T16:20Z 606.2K followers, 22.6K engagements

"@lateinteraction @polynoamial If you believe in AGI then you shouldn't use a harness obviously. If any kind of task-specific program is needed the AI should come up with it"
X Link 2026-02-12T22:14Z 606.2K followers, [----] engagements

"Today OpenAI announced o3 its next-gen reasoning model. We've worked with OpenAI to test it on ARC-AGI and we believe it represents a significant breakthrough in getting AI to adapt to novel tasks. It scores 75.7% on the semi-private eval in low-compute mode (for $20 per task in compute ) and 87.5% in high-compute mode (thousands of $ per task). It's very expensive but it's not just brute -- these capabilities are new territory and they demand serious scientific attention"
X Link 2024-12-20T18:09Z 606.2K followers, 2.2M engagements

"All the great breakthroughs in science are at their core compression. They take a complex mess of observations and say "it's all just this simple rule". Symbolic compression specifically. Because the rule is always symbolic -- usually expressed as mathematical equations. If it isn't symbolic you haven't really explained the thing. You can observe it but you can't understand it"
X Link 2025-11-14T14:30Z 606.2K followers, 13.2M engagements

"Folks who work in AI or software engineering feel like the world is changing exponential fast. Because their world is changing exponentially fast. Folks in structural engineering or aeronautical engineering might not share the same sentiment"
X Link 2026-02-03T23:59Z 606.2K followers, 109.6K engagements

"What happens when a skill can be almost fully automated with AI Do these jobs simply disappear Instead of purely speculating we can simply look at concrete examples. Take translators. Translation can be 100% automated with AI and this capability has been around since [----]. So we have 2-3 years of data. What we see so far: - Stable FTE count but slow hiring or no hiring - Nature of the job switched from doing it yourself to supervising AI output (post-editing) - Increased task volume - Decreased hourly rates - Freelancers getting cut We are now starting to see the same pattern with software"
X Link 2026-02-06T00:40Z 606.2K followers, 394.1K engagements

"For non-verifiable domains the only way you can improve AI performance at this time is via curating more annotated training data which is expensive and only yields logarithmic improvements. And here's the thing: nearly all jobs have non-verifiable elements. There's virtually no job that's end-to-end verifiable. Even the job of a mathematician is not end-to-end verifiable. Sofware engineering involves many verifiable tasks but it isn't end-to-end verifiable. For this reason the gap between "AI can automate most of these tasks" and "AI can fully replace this job" will remain for a very long"
X Link 2026-02-06T03:12Z 606.2K followers, 232K engagements

"Lots of folks are apparently in utter disbelief at these numbers because obviously Google search died in [----] no one is using Google at all in [----] so the numbers must be wrong somehow or maybe it's just AI agents making all these queries Nope it's a plain fact that more people than ever are using Google to search more than ever. In fact Google search usage is accelerating as of Q4 [----] Look instead of grasping at straws ask yourself why you were wrong about this and try to update your priors so that you'll be less wrong next time Back in [----] everybody was telling me "no one uses"
X Link 2026-02-08T14:35Z 606.2K followers, 185.8K engagements

"Best resource to understand deep learning fundamentals. If you want to understand how modern AI actually works not just use it Deep Learning with Python 3rd ed. breaks it down with hands-on projects & real code. @fchollet & Matthew Watson cover the foundations and the cutting edge. 50% off for the next three days: https://t.co/I7EfulP17D If you want to understand how modern AI actually works not just use it Deep Learning with Python 3rd ed. breaks it down with hands-on projects & real code. @fchollet & Matthew Watson cover the foundations and the cutting edge. 50% off for the next three days:"
X Link 2026-02-14T19:31Z 606.2K followers, 18.8K engagements

"That was my point back in [----] when I said "intelligence is skill-acquisition efficiency". I didn't say "skill-acquisition ability" efficiency is the key. When you have two systems capable of acquiring the same skills the more efficient one is more intelligent. Intelligence is fundamentally an efficiency ratio and both data efficiency and compute/energy efficiency matter"
X Link 2025-03-24T22:08Z 605.8K followers, 26.5K engagements

"NVIDA chips are manufactured by TSMC a Taiwanese company. They're created using EUV lithography machines manufactured by ASML a Dutch company. These machines consist of 50% of German parts (by value) in particular ZEISS optics"
X Link 2025-08-25T20:38Z 605.9K followers, 2.2M engagements

"The most interesting fact about this globalized chain is the number of irreplaceable single points of failure. There's only one company that can make these chips at scale. It runs on equipment that only one company can make. Out of parts that only one company can make"
X Link 2025-08-25T20:41Z 603.9K followers, 240.5K engagements

"The Transformer architecture is fundamentally a parallel processor of context but reasoning is a sequential iterative process. To solve complex problems a model needs a "scratchpad" not just in its output CoT but in its internal state. A differentiable way to loop branch and backtrack until the model finds a solution that works"
X Link 2025-12-23T17:49Z 605.8K followers, 133.9K engagements

"If you're wondering whether saturating ARC-AGI-1 or [--] means we have AGI now. I refer you to what I said when we launched ARC-AGI-2 last year (which is also the same thing I said when we announced ARC-AGI-2 was coming in Spring [----] before the rise of LLM chatbots). The ARC-AGI series is not an AGI threshold it's a compass that points the research community toward the right questions. ARC-AGI-1 is a minimal test of fluid intelligence -- to pass it you needed to show nonzero fluid intelligence. This required AI to move past the classic deep learning / LLM paradigm of pretraining scaling +"
X Link 2025-12-25T19:42Z 603.8K followers, 212.4K engagements

"Enlightenment values are what's most unique and distinctive about Western culture. They are its foundations. Human rights individualism liberty free speech valuing science and reason strong individual property rights modern state design (democracy separation of powers separation of church and state rule of law.) In particular the West's most precious gift to the world is the radical idea that every human being possesses inherent unalienable rights. Independently of whether they are members of the restricted in-group or not. This stuff is the source code of the modern world. We should cherish"
X Link 2026-01-26T23:33Z 603.7K followers, 46.2K engagements

"The growth rate of Gemini is truly remarkable. If you model the current growth of the different alternatives and extrapolate into the future it's very clear where this is going"
X Link 2026-01-28T16:40Z 603.2K followers, 67.5K engagements

"If you're feeling like inventing AGI today check out the new ARC-AGI-3 quickstart. You can get started building your own solver agent in minutes locally and you can run your experiments at [------] APM. https://docs.arcprize.org/ https://docs.arcprize.org/"
X Link 2026-01-30T19:42Z 603.7K followers, 19.3K engagements

"Meanwhile it is absolutely not the case that SaaS customers will decide to ship their own solutions rather than buying a ready-made customizable solution. Customers will always focus on their core competency and pay people to take care of the rest. Software is changing but this basic dynamic isn't"
X Link 2026-01-30T22:13Z 603.7K followers, 15.5K engagements

"This is a completely misguided take that reminds me how during the 3D printing bubble of [----] investors genuinely believed that consumers would start producing their own goods and stop buying them from stores. Sure you can print your own stuff or cook your own food and so on. It would be cheaper But it's just not a rational use of your resources and attention unless you are doing it for fun. Those who actually benefitted from 3D printing were. the manufacturers. AI for code is just the same. https://twitter.com/i/web/status/2017362381677203869"
X Link 2026-01-30T22:20Z 603.7K followers, [----] engagements

"We're reaching unprecedented levels of panicked gaslighting. But we have eyes we can read"
X Link 2026-02-01T18:14Z 604K followers, 84.5K engagements

"I should have said "most responsible" -- of course there is more than one person responsible. But it is an indisputable fact that Schrder is the single most responsible individual here aside from Putin himself. Trittin Merkel and the Green party also have their share of responsibility"
X Link 2026-02-02T18:23Z 603.7K followers, 31.1K engagements

"Expect more US-based global tech companies to considerably expand their engineering offices in India Europe possibly Japan/Korea Alphabet is plotting to dramatically expand its presence in India with the possibility of taking millions of square feet in new office space in Bangalore Indias tech hub https://t.co/OciaCnCTW0 Alphabet is plotting to dramatically expand its presence in India with the possibility of taking millions of square feet in new office space in Bangalore Indias tech hub https://t.co/OciaCnCTW0"
X Link 2026-02-03T21:35Z 603.8K followers, 144.9K engagements

"@SuperHumanEpoch Looks like they didn't ask for your opinion then because it's already happening big time -- London Paris Munich and Zurich specifically"
X Link 2026-02-03T21:42Z 603.8K followers, [----] engagements

"Given the degree of undiluted and universal hate SaaS is getting at this point we can't be too far from the bottom. Reminds me of when everybody knew with absolute confidence that Google was an AI loser and already dead (last year). Good times It is genuinely remarkable how much money you can make by watching negative sentiment reach a fever pitch on FinTwit and then buying Works over and over and over and over It is genuinely remarkable how much money you can make by watching negative sentiment reach a fever pitch on FinTwit and then buying Works over and over and over and over"
X Link 2026-02-04T04:45Z 603.9K followers, 45.8K engagements

"Activation-aware quantization (AWQ) is now built-in in Keras as a new quantization strategy -- it lets you retain greater performance with smaller weights"
X Link 2026-02-06T18:10Z 603.5K followers, [----] engagements

"Strong benchmarking results from AWQ"
X Link 2026-02-06T18:11Z 603.5K followers, [----] engagements

"Another new quantization strategy: int4 sub-channel quantization"
X Link 2026-02-06T18:13Z 603.4K followers, [---] engagements

"One-line export of any Keras model to LiteRT (the successor to TFLite) regardless of backend. Works with iOS Android"
X Link 2026-02-06T18:14Z 603.4K followers, [---] engagements
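
To make the announcement above concrete, here is a hedged sketch of what a one-line export might look like. `Model.export()` is the standard Keras export entry point, but the `"litert"` format string and the output filename are assumptions on my part based on the post, not verified API values; check the Keras documentation for the exact argument.

```python
# Hedged sketch: exporting a Keras model for on-device inference.
# The "litert" format string is an assumption taken from the announcement above.
import keras
from keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(8, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10),
])

# One-line export, per the post; verify the exact format name in the docs.
model.export("model.tflite", format="litert")
```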

"Google is granting TPU research & education awards -- free TPU compute for accepted Keras + JAX projects"
X Link 2026-02-06T18:18Z 603.5K followers, [----] engagements

"@Gabe__MD False Nokia sales peaked literally months after the release of the iPhone so long before the iPhone went mainstream (meanwhile genAI today has 1.8B users). How do you not fact check a basic factual claim before you make it it takes 30s"
X Link 2026-02-08T20:55Z 603.9K followers, [---] engagements

"@vkhosla @agenticasdk You should try ARC-AGI-3 (developer preview is available now full benchmark coming in a few weeks)"
X Link 2026-02-12T05:34Z 606.2K followers, 13.8K engagements

"@meowbooksj @IterIntellectus We announced ARC-AGI-3 one year ago. The preview has been online for a while. Full release in a few weeks"
X Link 2026-02-12T16:31Z 606.1K followers, [----] engagements

"Natural evolution suggests that AGI won't come from larger models that cram more and more specific knowledge but from discovering the meta-rules that allow a system to grow and adapt its own architecture in response to the environment"
X Link 2026-02-04T20:52Z 606.2K followers, 71.1K engagements

"Radiologists are a good example -- a job we were promised since [----] would soon disappear. The lesson is that even if the core tasks underlying a job can be done with AI that doesn't mean the human expert isn't still needed"
X Link 2026-02-06T03:14Z 606.2K followers, 22.8K engagements

"When you lack a grounded causal model of the world your "predictions" are simply a remix of narratives you've heard from others. Reminds me of something actually"
X Link 2026-02-06T19:53Z 606.2K followers, 38.7K engagements

"None of what you just said is remotely accurate. ARC did not come out [--] years ago but over [--] years ago. ARC-2 was announced several years before ARC-1 was saturated. And ARC was never said to be impossible for AI (the point of the benchmark was obviously to get solve by AI). It was said to be impossible for LLMs which proved accurate. Progress came from pivoting towards test-time adaption not from scaling up LLMs. To this very day base LLMs still perform abysmally low on ARC"
X Link 2026-02-12T20:08Z 606.2K followers, 55.9K engagements

"Interesting finding on frontier model performance on ARC -- due to extensive direct targeting of the benchmark models are overfitting to the original ARC encoding format. Frontier model performance remains largely tied to a familiar input distribution. @mikeknoop We found that if we change the encoding from numbers to other kinds of symbols the accuracy goes down. (Results to be published soon.) We also identified other kinds of possible shortcuts. @mikeknoop We found that if we change the encoding from numbers to other kinds of symbols the accuracy goes down. (Results to be published soon.)"
X Link 2026-02-14T21:38Z 606.2K followers, 22.7K engagements

"Live-tweeting the Keras community meeting. First off: new model architectures in KerasHub Latest Gemmas GPT-OSS Qwen"
X Link 2026-02-06T18:07Z 606.2K followers, 13.9K engagements

"Some people might say "but aren't humans also very sensitive to encoding format" In my opinion for an actually intelligent agent re-encoding a task with a known encoding scheme should always be a no-op for performance. If you give me a set of multiplications to do but you've encoded the values in binary my first action will be to decode them back to the format I'm familiar with - digits. Because "decode binary" and "multiply" are simple and error-correctable operations final performance should be 100% all of the time. In fact you could chain many such indirection steps and still see"
X Link 2026-02-14T22:01Z 606.2K followers, [----] engagements

"The new Keras release (3.11.0) is out Main upgrades: int4 quantization with all backends Support for Grain a data i/o and streaming library inspired by tf-data that is backend-agnostic On the JAX side integration with the NNX library -- if you're a NNX user you can start using any Keras layer/model (including models from KerasHub) as a NNX module Release notes: https://github.com/keras-team/keras/releases/tag/v3.11.0 https://github.com/keras-team/keras/releases/tag/v3.11.0"
X Link 2025-07-30T15:40Z 564.1K followers, 29.8K engagements
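
As a rough illustration of the int4 path mentioned in the release post above: Keras exposes post-training quantization through `Model.quantize(mode)`, and the `"int4"` mode string below is taken from the release description. The toy model is arbitrary; treat this as a sketch rather than a definitive recipe.

```python
# Sketch: post-training int4 quantization of a small Keras model.
# "int4" is the mode described in the 3.11 release notes quoted above.
import keras
from keras import layers

model = keras.Sequential([
    keras.Input(shape=(64,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(10),
])

model.quantize("int4")  # weights stored in 4-bit form, per the release notes
model.summary()
```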

"Languages follow a power law distribution. There are. [----] living languages with over [----] speakers [---] with over 1M speakers [---] with over 10M speakers [--] with over 100M speakers"
X Link 2025-08-01T03:18Z 567.4K followers, 40.7K engagements

"The claim wasn't "self-driving cars will eventually work" (which was kind of obvious) but "they will be deployed at scale in every single city before 2020" In [----] I knew people who decided not to get their license (in practice they ended up getting it a few years later anyway) because they assumed no one would be driving anymore by [----]. It was a very mainstream view in SV"
X Link 2025-08-01T21:52Z 564.1K followers, 12.9K engagements

"The paper "Hierarchical Reasoning Models" has been making the rounds lately collecting tens of thousands of likes on Twitter across dozens of semi-viral threads which is quite unusual for a research paper. The paper claims 40.3% accuracy on ARC-AGI-1 with a tiny model (27M parameters) trained from scratch without any external training data -- if real this would represent a major reasoning breakthrough. I just did a deep dive on the paper and codebase. It's good read detailed yet easy to follow. I think the ideas presented are quite interesting and the architecture is likely valuable. The"
X Link 2025-08-03T00:49Z 563.3K followers, 27.4K engagements

"The big breakthrough for convnets was the first GPU-accelerated CUDA implementation which immediately started winning first place in image classification competitions. Remember when that happened I do. That was Dan Ciresan in [----] Who invented convolutional neural networks (CNNs) 1969: Fukushima had CNN-relevant ReLUs [--]. 1979: Fukushima had the basic CNN architecture with convolution layers and downsampling layers [--]. Compute was [---] x more costly than in [----] and a billion x more costly than https://t.co/TRS8zg4vCA Who invented convolutional neural networks (CNNs) 1969: Fukushima had"
X Link 2025-08-03T21:37Z 575.1K followers, 166.5K engagements

"The path forward is not to build a "god in a box" it's to create intelligent systems that integrate with existing processes in particular science and humans at large to empower and accelerate them"
X Link 2025-08-03T22:13Z 567.8K followers, 109.3K engagements

"Kaggle just launched the NeurIPS [----] Code Golf competition -- the goal is for you to write Python solution programs to ARC-AGI-1 tasks while keeping the programs as small as possible. Are you better at writing code than frontier models https://www.kaggle.com/competitions/google-code-golf-2025 https://www.kaggle.com/competitions/google-code-golf-2025"
X Link 2025-08-07T16:27Z 567.6K followers, 52.8K engagements
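
To make the format concrete, here is a hypothetical example in the spirit of the competition described above: a made-up ARC-like task (mirror each row of the grid) solved by a very short Python function. The task, the function name, and the grids are illustrative only and are not taken from the actual competition.

```python
# Hypothetical ARC-style task: the output grid is the input grid with each
# row reversed. Code golf trades readability for character count.
def p(g):
    return [r[::-1] for r in g]

# Quick sanity check on a toy 2x3 grid.
assert p([[1, 2, 3], [4, 5, 6]]) == [[3, 2, 1], [6, 5, 4]]
```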

"GPT-5 results on ARC-AGI [--] & [--] Top line: 65.7% on ARC-AGI-1 9.9% on ARC-AGI-2 GPT-5 on ARC-AGI Semi Private Eval GPT-5 * ARC-AGI-1: 65.7% $0.51/task * ARC-AGI-2: 9.9% $0.73/task GPT-5 Mini * ARC-AGI-1: 54.3% $0.12/task * ARC-AGI-2: 4.4% $0.20/task GPT-5 Nano * ARC-AGI-1: 16.5% $0.03/task * ARC-AGI-2: 2.5% $0.03/task https://t.co/KNl7ToFYEf GPT-5 on ARC-AGI Semi Private Eval GPT-5 * ARC-AGI-1: 65.7% $0.51/task * ARC-AGI-2: 9.9% $0.73/task GPT-5 Mini * ARC-AGI-1: 54.3% $0.12/task * ARC-AGI-2: 4.4% $0.20/task GPT-5 Nano * ARC-AGI-1: 16.5% $0.03/task * ARC-AGI-2: 2.5% $0.03/task"
X Link 2025-08-07T17:32Z 567.4K followers, 40.7K engagements

"Why do millennials like Harry Potter so much"
X Link 2025-08-09T00:57Z 568.5K followers, 87.6K engagements

"To be clear I do think it's pretty good content as far as young adult books / movies go but I'm perplexed by the sheer intensity of fandom it seems to enjoy a full 20-30 years later"
X Link 2025-08-09T02:30Z 567.6K followers, 24.9K engagements

"@svpino AGI might happen soon-ish but won't be coming from scaling up current systems which makes it tricky to time -- definitely not a matter of extrapolating from a chart"
X Link 2025-08-10T02:33Z 567.6K followers, 51.7K engagements

"JAX = performance & scalability Keras [--] = high velocity development compact code best practices by default Both at the same time = pretty killer Whats the one skill that separates good AI engineers from the highest-paid ones PyTorch gets you in the door. JAX gets you the higher-paid role. The biggest AI teams lean on JAX for speed and scale. If you dont understand it youre already behind. And Im not teaching you https://t.co/CeI0W9Zbjp Whats the one skill that separates good AI engineers from the highest-paid ones PyTorch gets you in the door. JAX gets you the higher-paid role. The biggest AI"
X Link 2025-08-10T23:30Z 568.2K followers, 42.3K engagements
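
The pairing described in the post above is just a backend configuration choice in Keras 3: select JAX via the `KERAS_BACKEND` environment variable before the first `import keras`. The tiny model below is an arbitrary illustration.

```python
# Run a Keras 3 model on the JAX backend: the environment variable must be
# set before keras is imported for the first time.
import os
os.environ["KERAS_BACKEND"] = "jax"

import keras
from keras import layers

model = keras.Sequential([
    keras.Input(shape=(32,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(...) now compiles and runs through JAX/XLA.
```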

"Needless to say this is not how human intelligence works. Human intelligence is compositional which means you can understand the cross product of two spaces without being explicitly exposed to a dense sampling of data pairs from those spaces. When reading a book most people can visually picture what's going on no matter how far from everyday reality the text gets -- and these people were never exposed to billions of explicit text:video pairs. In fact they were exposed to virtually no such data"
X Link 2025-08-11T20:53Z 571K followers, 46.8K engagements

"Open questions about driverless ride hailing economics: [--]. What will be the cost reduction (over Uber/Lyft) of removing the driver [--]. How much does that cost reduction increase demand [--]. Would the UX change significantly affect demand [--]. Would we see a large increase in geographic availability (no need for drivers = can put more taxis on the road) For 1: the labor cost of a Lyft/Uber ride after accounting for everything else is only 20-40% of the price which caps the reduction at -40% in the best case scenario. However a driverless taxi network would have significantly higher fixed costs (AI"
X Link 2025-08-12T18:33Z 565.9K followers, 37.8K engagements

"GenAI isn't just a technology; it's an informational pollutanta pervasive cognitive smog that touches and corrupts every aspect of the Internet. It's not just a productivity tool; it's a kind of digital acid rain silently eroding the value of all information. Every image is no longer a glimpse of reality but a potential vector for synthetic deception. Every article is no longer a unique voice but a soulless permutation of data a hollow echo in the digital chamber. This isn't just content creation; it's the flattening of the entire vibrant ecosystem of human expression transforming a rich"
X Link 2025-08-13T12:12Z 575.1K followers, 681.3K engagements

"@wewalkwillow Believe it or not I wrote it myself. It's not satire; it's a pastiche or perhaps a parody"
X Link 2025-08-13T12:19Z 568.2K followers, 46.9K engagements

"@nagaraj_arvind @wewalkwillow I painstakingly copy-pasted them inI would normally use a double dash instead of a literal em dash character. Dashing innit"
X Link 2025-08-13T19:41Z 567.6K followers, [----] engagements

"Google just dropped a new tiny LLM with outstanding performance -- Gemma3 270M. Now available on KerasHub. Try the new presets gemma3_270m and gemma3_instruct_270m"
X Link 2025-08-14T18:24Z 571.5K followers, 67K engagements
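
A hedged sketch of trying the presets mentioned above through KerasHub: the preset names come from the post itself, while the `Gemma3CausalLM` class name and the `generate()` call are assumptions based on KerasHub's usual `<Model>CausalLM` interface, so verify them against the KerasHub docs.

```python
# Sketch: loading the Gemma3 270M instruct preset from KerasHub and sampling.
# Preset name from the announcement; class name is an assumption about
# KerasHub's naming convention.
import keras_hub

lm = keras_hub.models.Gemma3CausalLM.from_preset("gemma3_instruct_270m")
print(lm.generate("Explain what a convolution does, in one sentence.", max_length=64))
```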

"Interesting findings from this post: [--]. It should be obvious to anyone who has interacted with LLMs before that the writing style of the tweet is a conspicuous caricature of AI slop (e.g. em dashes the "it's not. it's." construction rambling florid prose etc.). Yet many people reacted by saying "It's written with AI" as if it were some kind of clever gotcha. (It was in fact not written with AI unlike a good fraction of the comments.) [--]. Many people also react by saying this prose is "beautiful." (I don't think it is.) I guess this illuminates why LLMs have converged on this style: many people"
X Link 2025-08-14T22:12Z 570.5K followers, 48.6K engagements

"Being pro-technology doesn't mean being blind to the negative effects of new technology. It's an exercise in pragmatic optimism -- maximizing the upside while managing the downside"
X Link 2025-08-18T02:59Z 571.5K followers, 69K engagements

"JAX on GPU is basically as good actually Jax on TPU will solve all your problems Jax on TPU will solve all your problems"
X Link 2025-08-18T03:36Z 570.7K followers, 70.8K engagements

"LLM adoption among US workers is closing in on 50%. Meanwhile labor productivity growth is lower than in [----]. Many counter-arguments can be made here e.g. "they don't know yet how to be productive with it they've only been using for 1-2 years" "50% is still too low to see impact" "models next year will be unbelievably better" etc. But I think we now have enough evidence to say that the [----] talking point that "LLMs will make workers 10x more productive" (some folks even quoted 100x) is probably not accurate. LLM adoption rose to 45.9% among US workers as of June/July [----] according to a"
X Link 2025-08-21T00:43Z 575.8K followers, 930.5K engagements

"By the way I don't know if people realize this but the [----] work-from-home switch coincided with a major productivity boom and the late [----] and [----] back-to-office reversal coincided with a noticeable productivity drop. It's right there in the statistics. Narrative violation Productivity growth is now back to pre-2020 levels"
X Link 2025-08-21T00:51Z 571.7K followers, 80.1K engagements

"@polynoamial Back in [----] AGI was 1-2 years away (in the form of GPT-5 no less) productivity was about to increase by 10-100x and developers were about to go extinct. People who questioned the narrative were a tiny minority. We were proven right"
X Link 2025-08-21T01:38Z 571.2K followers, 34.5K engagements

"People ask me "didn't you say before ChatGPT that deep learning had hit a wall and there would be no more progress" I have never said this. I was saying the opposite (that scaling DL would deliver). You might be thinking of Gary Marcus. My pre-ChatGPT position (below) was that scaling up DL would keep delivering better and better results and also that it wasn't the way to AGI (as I defined it: human-level skill acquisition efficiency). This was a deeply unpopular position at the time (neither AI skeptic nor AGI-via-DL-scaling prophet). It is now completely mainstream. Two perfectly"
X Link 2025-08-21T06:05Z 571.9K followers, 203.7K engagements

"People also ask "didn't you say in [----] that LLMs could not reason" I have also never said this. I am on the record across many channels (Twitter podcasts.) saying that "can LLMs reason" was not a relevant question just semantics and that the more interesting question was "could they adapt to novel tasks beyond what they had been trained on" -- and that the answer was no. Also correct in retrospect and a mainstream position today"
X Link 2025-08-21T06:08Z 570.4K followers, 13.4K engagements

"I have been consistently bullish on deep learning since [----] back when deep learning was maybe a couple thousands of people. I have also been consistently bullish on scaling DL -- not as a way to achieve AGI but as a way to create more useful models"
X Link 2025-08-21T06:09Z 571.6K followers, 26.1K engagements

"Back in [----] my book had an entire chapter on generative AI including language modeling and image generation. I wrote that content in [----] and early [----]. This was some of the earliest textbook content that covered generative AI. All the way back in [----] I was convinced that AI would one day become a major source of cultural content creation -- which was a completely outlandish position at the time"
X Link 2025-08-21T06:13Z 571.4K followers, 24.3K engagements

"In general there are two different kinds of methodology to produce progress in any science or engineering field. both are important and can lead to transformative progress. There's the "Edison way" where you brute-force a large predefined design space and you keep what works without necessarily understanding why it works. This is akin to biological evolution. Nearly all of deep learning was built this way (despite the fancy math in papers which is there to look nice 99% of the time). And there's the "Einstein way" where you think up big ideas in a top-down fashion and derive precise results"
X Link 2025-08-22T15:28Z 572.8K followers, 131.3K engagements

"The proprietary frontier models of today are ephemeral artifacts. Essentially very expensive sandcastles. Destined to be washed away by the rising tide of open source replication (first) and algorithmic disruption (later)"
X Link 2025-08-24T04:04Z 574.1K followers, 231K engagements

"I'll take the other side of this bet. By [----] all jobs will be replaced by AI and robots. Easily. The US labor force is about [---] million workers. About [--] million of those jobs include hands-on work. Automated systems can work four shifts a week. Replacing all physical labor would require about [--] million By [----] all jobs will be replaced by AI and robots. Easily. The US labor force is about [---] million workers. About [--] million of those jobs include hands-on work. Automated systems can work four shifts a week. Replacing all physical labor would require about [--] million"
X Link 2025-08-24T22:14Z 573.3K followers, 310.7K engagements

"@javelartin As any US VC who moved to Miami in [----] will tell you"
X Link 2025-08-25T20:43Z 571.7K followers, 96.8K engagements

"@singidunumx ASML is probably undervalued. TMSC is probably fairly valued given geopolitical risk around it. NVDA is probably overvalued. ZEISS is private"
X Link 2025-08-25T21:00Z 571.9K followers, 17.6K engagements

"Not sure if USD inflation will ever be under 2% again in my lifetime"
X Link 2025-08-26T01:18Z 572.3K followers, 66.7K engagements

"If your capital is in USD I hope you've made at least 10% in capital gains YTD (after tax) otherwise you are now poorer. Since USDX is down by this much"
X Link 2025-08-26T01:26Z 572.8K followers, 33.6K engagements

"Saying that deep learning is "just a bunch of matrix multiplications" is about as informative as saying that computers are "just a bunch of transistors" or that a library is "just a lot of paper and ink." It's true but the encoding substrate is the least important part here. It's the programs being encoded that are interesting and useful: what they can do what they can't do how well they generalize how efficiently they can be learned etc"
X Link 2025-08-27T03:43Z 571.7K followers, 208K engagements

"When a model gives you the right answer to a reasoning question you can't tell whether it was via memorization or via reasoning. A simple way to tell between the two is to tweak your question in a way that [--]. changes the answer [--]. requires some reasoning to adapt to the change. If you still get the same answer as before. it was memorization"
X Link 2025-08-27T20:56Z 572K followers, 90.6K engagements
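
The test described in the post above is easy to operationalize. Below is a minimal sketch; `query_model` is a hypothetical placeholder for whatever LLM call you actually use, and the bat-and-ball numbers are just an illustrative perturbation.

```python
# Minimal sketch of the perturbation test described above. query_model is a
# hypothetical placeholder for whatever LLM API you actually call.
def query_model(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

original = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
            "than the ball. How much does the ball cost?")
# Tweak the numbers so that (1) the correct answer changes and
# (2) adapting to the change requires a little reasoning.
perturbed = ("A bat and a ball cost $1.40 in total. The bat costs $1.20 more "
             "than the ball. How much does the ball cost?")

if query_model(original).strip() == query_model(perturbed).strip():
    print("Same answer despite a change that should alter it -> memorization.")
else:
    print("Answer adapted to the change -> consistent with reasoning.")
```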

"Many people think "reasoning" is a category of tasks -- e.g. involving numbers riddles etc. It's not. It's an ability underpinned by compositional generalization. You can always solve "reasoning" tasks without reasoning. Just memorize -- either memorize the answer or memorize the general question/answer template"
X Link 2025-08-27T21:01Z 571.8K followers, 26K engagements

"Model interpretability is not a question of which ML method you're using (which model substrate e.g. NNs vs graphical models vs symbolic code). Any substrate can be interpretable when the model is small enough. It's purely a question of model size/complexity. The behavior of a complex codebase or a complex graphical model is not interpretable despite the fact that you can locally read any bit of what it does. It is perhaps debuggable in specific cases with great effort -- but the same would be true of NNs as well. IMO the statement "we must use interpretable methods" is a nonstarter it"
X Link 2025-08-28T18:55Z 572.9K followers, 65K engagements

"With enough compute all approaches start looking alike. Compute is the great equalizer"
X Link 2025-08-29T21:47Z 575K followers, 382.3K engagements

"This is a great definition of magic and this is precisely why computers feel like magical artifacts. What they do is simple but they do a lot of it very fast. An amount humans cannot comprehend at a speed humans cannot comprehend. @fchollet there is a definition of magic that goes something like magic works because we underestimate the time someone would take to master something to make it appear effortless/invisible compute feels like that for ai @fchollet there is a definition of magic that goes something like magic works because we underestimate the time someone would take to master"
X Link 2025-08-29T23:54Z 572.8K followers, 44.5K engagements

"Homo sapiens has had current levels of fluid intelligence for 50k-100k years perhaps even longer. Yet we only reached the moon [--] years ago. Operationalizing and deploying general intelligence takes much longer than we assume"
X Link 2025-08-30T16:03Z 571K followers, 112.7K engagements

"This was the comment. Had to delete it since I don't want my comments to serve as an outlet for this discourse"
X Link 2025-08-30T23:10Z 572K followers, 30K engagements

"Do consider: some humans [-----] years ago could paint better than you do (see Altamira bisons below). And the basics of civilization -- agriculture domestication writing complex stone architecture metallurgy (gold) were independently reinvented among populations that were completely isolated from each other for [-----] years"
X Link 2025-08-30T23:21Z 572K followers, 38.8K engagements

"I want to be absolutely clear: it is the scientific consensus that behaviorally and cognitively modern humans data back at least [-----] years. If you think fluid intelligence is something that recently appeared you are going against a vast body of evidence"
X Link 2025-08-31T00:33Z 572K followers, 27.9K engagements

"This is what we're dealing with. If you're sharing this data as "proof" that intelligence is a recent and exclusively European development you are showing yourself to be incapable of the most basic level of critical thinking"
X Link 2025-08-31T00:47Z 572.2K followers, 32.8K engagements

"The general idea of this world map is for every country that belongs to the "wrong" category Lynn would go and find a IQ test study conducted on a mentally disabled group (e.g. a few children in a mental institution a few children that took part in a study on malnourishment etc.) and then would report the IQ number as a "national average" with no further context. Hence why you end up with IQs reflecting mental disability To be clear this is not how science works and Richard Lynn is not a scientist"
X Link 2025-08-31T01:15Z 572.2K followers, 37.9K engagements

"1980s Japan didn't just have a roaring economy it also had many of the best AI research labs in the world. It still retained world-leading robotics expertise up until the mid 2000s. But it is all but absent from the current AI wave"
X Link 2025-09-02T18:26Z 572.9K followers, 151.3K engagements

"You haven't so much invented such a solution as you have discovered the coordinate system in which the problem becomes trivial"
X Link 2025-09-06T21:29Z 573.6K followers, 25.6K engagements

"This is why to solve a difficult open-ended problem (like AGI) you must always start by asking the right question"
X Link 2025-09-06T21:31Z 573.6K followers, 23.2K engagements

"An experiment: I started a subscriber feed. For the time being all of my spicy opinion tweets will go there. Let's see how it works out"
X Link 2025-09-07T18:50Z 574.1K followers, 45.4K engagements

""Does a causal substrate necessarily need to be symbolic" you ask. Could it be based on parametric curves Yes it does because in such a substrate the model is isomorphic to the graph of causal factors of what you are modeling. And such a graph is necessarily very sparse i.e. it's a symbolic graph. It has completely different properties from a continuous manifold"
X Link 2025-09-07T19:44Z 574.3K followers, 31.7K engagements

"I like the analogy of the "bicycle for the mind" because riding a bike requires effort from you and the bike multiplies the effect of that effort. I don't think the end goal of technology should be to let you sit around and twiddle your thumbs"
X Link 2025-09-07T23:33Z 573.7K followers, 91.2K engagements

"As slop floods the Internet and as humans start relying on generative AI more and more it's inevitable that future models will be mostly trained on slop (except for verifiable reasoning tasks where the training will be done in sims). Culture will turn into slop remixed from slop remixed from slop"
X Link 2025-09-08T03:55Z 573.7K followers, 116K engagements

"AGI will not be an algorithmic encoding of an individual mind but of the process of Science itself. The light of reason made manifest"
X Link 2025-09-08T17:55Z 574.2K followers, 49.7K engagements

"Worth noting that the output artifacts of science -- the models it produces -- are symbolic in nature. Most commonly expressed in mathematical form sometimes in code. Science is a program synthesis process"
X Link 2025-09-08T17:56Z 574.7K followers, 22.1K engagements

"The keyword here isn't "understand". It's "novel". You "truly understand" a thing if your model of it lets you make sense of every possible instance of the thing including those that are very far from what you've seen before (extreme generalization). You "somewhat understand" the thing if you can approach at least some new instances of it (local generalization). If you can only handle what you have seen before you are merely memorizing / retrieving. A student who truly understands F=ma can solve more novel problems than a Transformer that has memorized every physics textbook ever written. A"
X Link 2025-09-10T18:22Z 574.3K followers, 73.9K engagements

""Understanding" isn't some magical ineffable concept. It's a very concrete and practical property of an agent reflected in what it can do. We know LLMs don't "truly understand" because we can see what they can't do. It's as simple as that"
X Link 2025-09-10T18:24Z 574.3K followers, 21.4K engagements

"Theory is a ghost until you confront it with reality. You cannot choose the evidence you find. Always go with the evidence and never fall in love with ghosts of your own creation"
X Link 2025-09-12T15:07Z 575K followers, 39.2K engagements

""entangled representations" is a symptom not the cause. The cause is using as your representation substrate parametric curves fitted via gradient descent. The fix is to use maximally concise symbolic programs instead. Which by construction will be disentagled (otherwise they wouldn't be maximally concise)"
X Link 2025-09-13T02:33Z 573.6K followers, [----] engagements

"The most important skill for a researcher is not technical ability. It's taste. The ability to identify interesting and tractable problems and recognize important ideas when they show up. This can't be taught directly. It's cultivated through curiosity and broad reading"
X Link 2025-09-13T15:57Z 576K followers, [----] engagements

"NNs are not a dead end. They are a great fit for problems that must be naturally understood by embedding samples in continuous manifolds where distance approximates semantic similarity -- this covers all perception and intuition problems. They are suboptimal for anything that must be naturally understood as symbolic function composition. There program synthesis is the best fit. Future AI systems will feature both. Some already do. You can try to make NN representations closer to symbolic programs but this will remain suboptimal even if it yields progress. Optimality will be reached when our"
X Link 2025-09-14T16:17Z 573.6K followers, [----] engagements

"99% of research is finding out what doesn't work. The other 1% is what they write the textbooks about"
X Link 2025-09-16T21:02Z 576.1K followers, [----] engagements

"If you see me posting less here it's because I'm posting on the private feed which is where all the unfiltered nonsense is"
X Link 2025-09-17T20:42Z 574.9K followers, 37.8K engagements

"@kevin_jordan__ There's a ton of new content on LLMs and LLM centric workflows"
X Link 2025-09-18T17:48Z 574.7K followers, [----] engagements

"In [----] there was a big uptick in companies marketing their products as AI powered. [----] will be the year of companies marketing their products as AI free (a trend already underway)"
X Link 2025-09-26T18:28Z 577.3K followers, 91.2K engagements

"No theory feels true to me unless it is simple (relative to what it is explaining). The solution most likely to generalize is always the simplest one"
X Link 2025-09-27T14:33Z 577.6K followers, 34.9K engagements

"The point of science is to cover the greatest number of empirical facts at the least model complexity cost"
X Link 2025-09-27T14:34Z 576.6K followers, 20.6K engagements

"By now there are probably more agents platforms than agentic workflows actually in use"
X Link 2025-09-28T12:14Z 577.7K followers, 52.9K engagements

"you see the killer advantage of mechahorses is that you dont need to buy a new carriage. You dont need to build a new mill. The mechahorse is a drop-in horse replacement for all the different devices horses are currently powering thousands of them"
X Link 2025-09-28T14:28Z 577.7K followers, 32K engagements

"Meanwhile AGI will in fact get better by simply adding more compute. It will not be bottlenecked by the availability of human-generated text"
X Link 2025-09-29T01:47Z 576.6K followers, 29.4K engagements

"As a reminder we are also making the content available for free online so anyone can learn from it: https://deeplearningwithpython.io/ https://deeplearningwithpython.io/"
X Link 2025-09-29T22:29Z 577.6K followers, 11.2K engagements

"@__asan__t Yes it's a good start for learning about LLMs and how to build with them (in addition to general ML theory)"
X Link 2025-09-29T23:29Z 575.9K followers, [---] engagements

"So why exactly is there a huge quantum computing bubble at this point in time Is it just that the excess froth from the AI bubble needed somewhere to go and "quantum computing" has a cool ring to it"
X Link 2025-10-03T22:08Z 578.4K followers, 72.4K engagements

"Aperture Science did pivot from shower curtains to teleportation so there's precedent for this arc"
X Link 2025-10-03T22:26Z 577.7K followers, 18.8K engagements

"@oscarle_x It's up to $20B mc for no revenue quantum cos. And while the valuation of AI startups is highly speculative gen AI is a real technology that actually works and has well established use cases and fast increasing demand"
X Link 2025-10-03T22:30Z 577.7K followers, [----] engagements

"The single best item you can get at 7-Eleven in Japan is the egg sandwich"
X Link 2025-10-03T23:35Z 577.7K followers, 31.6K engagements

"@curiousiter @Mrlucid21 The bubble is in the gap between revenue (and revenue growth) and investment. Right now the industry spends $10 to make $1. Which cannot go on forever so either everyone starts consuming hundreds of dollars a month of gen AI or investment drops by a massive amount"
X Link 2025-10-03T23:39Z 576.6K followers, [---] engagements

"The idea that villains actually think of themselves as righteous is what's unrealistic. Real life villains almost always know they're villains"
X Link 2025-10-04T09:25Z 577.9K followers, 23.1K engagements

"The way to think about AGI is as a scalable efficient formalization & implementation of the scientific method. Not a brain in a jar"
X Link 2025-10-05T01:04Z 578.6K followers, 26.7K engagements

"This is how I can tell it's a good book store"
X Link 2025-10-08T10:07Z 578.6K followers, 45.1K engagements

"@sudo_xai A lot of them in fact. Kaggle is very popular in Japan"
X Link 2025-10-08T11:16Z 577.6K followers, [----] engagements

"@VictorTaelin @arcprize @jm_alexia @makingAGI Well HRM is another approach that doesn't use a commercial frontier model (TRM comes directly from it)"
X Link 2025-10-16T23:01Z 578.7K followers, [---] engagements

"An implementation of Reinforcement Learning agents in Keras with @OpenAI Gym: by @oshtim https://github.com/osh/kerlym https://github.com/osh/kerlym"
X Link 2016-05-17T17:10Z 507.8K followers, [---] engagements
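
As an illustration of the general pattern (not the linked kerlym code), here is a minimal sketch of a Keras Q-network acting greedily in a Gym-style environment. It uses the maintained Gymnasium fork of OpenAI Gym and skips training entirely, showing only the wiring between network and environment.

```python
# Minimal sketch (not the kerlym code): a tiny Keras Q-network acting greedily
# in a Gym-style environment. Uses the maintained Gymnasium fork; no training,
# just the wiring between the network and the environment loop.
import numpy as np
import gymnasium as gym
import keras
from keras import layers

env = gym.make("CartPole-v1")
q_net = keras.Sequential([
    layers.Input(shape=env.observation_space.shape),
    layers.Dense(32, activation="relu"),
    layers.Dense(env.action_space.n),  # one Q-value per discrete action
])

obs, _ = env.reset(seed=0)
done = False
while not done:
    q_values = q_net.predict(obs[None, :], verbose=0)[0]
    action = int(np.argmax(q_values))  # greedy action selection
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```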

"Reassuring to know that SoftBank is hard at work thinking about our future: (via @kcimc) http://cdn.softbank.jp/en/corp/set/data/irinfo/investor/shareholders/pdf/36/softbank_meeting36_004.pdf http://cdn.softbank.jp/en/corp/set/data/irinfo/investor/shareholders/pdf/36/softbank_meeting36_004.pdf"
X Link 2016-07-19T04:12Z 573.4K followers, [---] engagements

"A Keras implementation of DenseNet: - an extension of ResNet where every block is connected to all previous blocks. https://github.com/tdeboissiere/DeepLearningImplementations/tree/master/DenseNet https://github.com/tdeboissiere/DeepLearningImplementations/tree/master/DenseNet"
X Link 2016-10-13T21:05Z 512K followers, [---] engagements
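
The defining idea, dense connectivity, fits in a few lines of Keras. The block below is a minimal illustrative sketch, not the code from the linked repository; the layer count and growth rate are arbitrary choices.

```python
# Minimal illustrative sketch of a DenseNet-style dense block (not the code
# from the linked repository): each layer receives the concatenation of all
# previous feature maps, which is the "connected to all previous blocks" idea.
import keras
from keras import layers

def dense_block(x, num_layers=4, growth_rate=12):
    for _ in range(num_layers):
        y = layers.BatchNormalization()(x)
        y = layers.Activation("relu")(y)
        y = layers.Conv2D(growth_rate, 3, padding="same")(y)
        x = layers.Concatenate()([x, y])  # dense connectivity
    return x

inputs = keras.Input(shape=(32, 32, 16))
outputs = dense_block(inputs)
model = keras.Model(inputs, outputs)
model.summary()
```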

"Two competing RL-related announcements today at NIPS: OpenAI Universe & DeepMind labyrinth. Universe looks very exciting https://x.com/gdb/status/805663771976835072 Just released Universe the AI training infrastructure we've been planning since we founded OpenAI: https://t.co/SODGDZ65kk https://x.com/gdb/status/805663771976835072 Just released Universe the AI training infrastructure we've been planning since we founded OpenAI: https://t.co/SODGDZ65kk"
X Link 2016-12-05T23:01Z 507.3K followers, [---] engagements

"Great demo of the latest version of Spot by Boston Dynamics at #NIPS2016"
X Link 2016-12-07T15:39Z 507.7K followers, [--] engagements

"Video of the Spot demo by Boston Dynamics"
X Link 2016-12-07T18:04Z 507.8K followers, [---] engagements

"Super cool work from OpenAI. https://x.com/gdb/status/896160062997016576 Our AI is undefeated against the world's top professionals including @DendiBoss @Arteezy @SumaaaaiL in Dota [--] solo https://t.co/hAdsyt7Q6C https://x.com/gdb/status/896160062997016576 Our AI is undefeated against the world's top professionals including @DendiBoss @Arteezy @SumaaaaiL in Dota [--] solo https://t.co/hAdsyt7Q6C"
X Link 2017-08-12T00:08Z 507.3K followers, [---] engagements

"I think this is meant as a commentary on the progress of AI so it's worth remembering that Atlas is 100% hardcoded it involves no learning or anything that would qualify as AI these days. It's classical control theory https://x.com/elonmusk/status/934888089058549760 This is nothing. In a few years that bot will move so fast youll need a strobe light to see it. Sweet dreams https://t.co/0MYNixQXMw https://x.com/elonmusk/status/934888089058549760 This is nothing. In a few years that bot will move so fast youll need a strobe light to see it. Sweet dreams https://t.co/0MYNixQXMw"
X Link 2017-11-27T07:11Z 506.7K followers, [----] engagements

"A state-of-the-art convnet trained on millions of images and videos of prehistoric animals in the wild could not recognize an auroch in this picture. But you can. In fact you can even though you have never seen one"
X Link 2017-12-12T13:23Z 573.9K followers, [---] engagements

"A scalable deep learning model-serving API with Keras Redis and Flask https://www.pyimagesearch.com/2018/01/29/scalable-keras-deep-learning-rest-api/ https://www.pyimagesearch.com/2018/01/29/scalable-keras-deep-learning-rest-api/"
X Link 2018-01-29T20:07Z 570.9K followers, [---] engagements
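
For context on what that architecture looks like, here is a minimal sketch of the Flask side only; the linked tutorial additionally puts a Redis queue between the web tier and a model worker, which this sketch omits. MobileNetV2 is an arbitrary model choice for illustration.

```python
# Minimal sketch of the Flask side of a Keras model-serving API. The linked
# tutorial adds a Redis queue between the web tier and a model worker process;
# this sketch serves predictions synchronously, for illustration only.
import numpy as np
from flask import Flask, jsonify, request
import keras

app = Flask(__name__)
model = keras.applications.MobileNetV2(weights="imagenet")  # arbitrary example model

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body containing a 224x224x3 image as a nested list.
    image = np.array(request.get_json()["image"], dtype="float32")
    batch = keras.applications.mobilenet_v2.preprocess_input(image[None, ...])
    preds = model.predict(batch)
    top = keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]
    return jsonify([{"label": label, "score": float(score)} for _, label, score in top])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```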

"This is why there are strict regulations over who can invest in high-risk assets (like startups) or over who can day-trade on margin (in short: if you want to gamble you need to prove you can afford to). But crypto is unregulated. https://x.com/seldo/status/959244935567298560 As you watch Bitcoin plummet to earth keep in mind that in addition to the rich and greedy idiots losing their shirts a bunch of under-informed ordinary people are losing money they can't afford to lose. https://x.com/seldo/status/959244935567298560 As you watch Bitcoin plummet to earth keep in mind that in addition to"
X Link 2018-02-03T00:19Z 507.4K followers, [---] engagements

"Working on designing missiles at Lockheed Martin might have a lesser human cost than working on AI software at FB. I'd like to see the establishment of a kind of Hippocratic Oath for the software engineering profession in particular for ML engineers & researchers. Don't be evil"
X Link 2018-02-17T22:50Z 573.9K followers, [---] engagements

"Hypothesis: tech crashes and AI winters because they separate the believers from the opportunists may paradoxically act as catalyzers of progress -- much like mass extinctions may act as catalyzers of evolutionary change by abruptly transforming the selection landscape"
X Link 2018-02-19T01:22Z 571.4K followers, [---] engagements

"Magic Leap has raised $2.3B in total at a valuation of $6B. A consumer play before product/market fit. I hope there's more to it than what the public knows and that it works out for them 🤔 https://www.bloomberg.com/news/articles/2018-03-07/magic-leap-raises-461-million-from-saudis https://www.bloomberg.com/news/articles/2018-03-07/magic-leap-raises-461-million-from-saudis"
X Link 2018-03-07T16:35Z 507K followers, [---] engagements

"Seemed to me that for successful giant tech companies the fundamental tech breakthroughs take place before the startup gets formed then the seed funding is used to find product/market fit then the big $ is used to scale. Is this naive Are there historical counter-examples"
X Link 2018-03-07T16:41Z 506.9K followers, [--] engagements

"Nice work from OpenAI on evolving loss functions to quickly master new tasks: https://blog.openai.com/evolved-policy-gradients/ https://blog.openai.com/evolved-policy-gradients/"
X Link 2018-04-18T17:09Z 507K followers, [---] engagements

"🌉 SF bay area: area: [-----] km2 population: 7.8M density: 425.7/km2 1BR rent: $3.3k 🗼 Tokyo bay area: area: [-----] km2 (22% less) population: 37.8M (385% more (4.85x)) density: 2631/km2 1BR rent: $1k (70% less) I wonder how they do it. Ah I guess we'll never know🤔"
X Link 2018-04-25T23:50Z 513.1K followers, [----] engagements
