[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# @bookwormengr GDP

GDP posts on X most often about open ai, bytedance, and india. They currently have XXXXX followers and XX posts still getting attention, totaling XXXXX engagements in the last XX hours.

### Engagements: XXXXX [#](/creator/twitter::171962385/interactions)

- X Week XXXXXXX +47%
- X Month XXXXXXX +114%
- X Months XXXXXXX -XX%
- X Year XXXXXXXXX +16,317%

### Mentions: XX [#](/creator/twitter::171962385/posts_active)

- X Week XX -XX%
- X Month XX +67%
- X Months XXX +64%
- X Year XXX +1,023%

### Followers: XXXXX [#](/creator/twitter::171962385/followers)

- X Week XXXXX +1.20%
- X Month XXXXX +3.20%
- X Months XXXXX +14%
- X Year XXXXX +1,748%

### CreatorRank: XXXXXXX [#](/creator/twitter::171962385/influencer_rank)

### Social Influence [#](/creator/twitter::171962385/influence)

---

**Social category influence**

[technology brands](/list/technology-brands) XXXXX%, [countries](/list/countries) XXXX%, [stocks](/list/stocks) XXXX%, [travel destinations](/list/travel-destinations) XXXX%, [finance](/list/finance) XXXX%

**Social topic influence**

[open ai](/topic/open-ai) 8.82%, [bytedance](/topic/bytedance) 5.88%, [india](/topic/india) 2.94%, [has been](/topic/has-been) 2.94%, [lynx](/topic/lynx) 2.94%, [singapore](/topic/singapore) 2.94%, [next to](/topic/next-to) 2.94%, [te](/topic/te) 2.94%, [seoul](/topic/seoul) 2.94%, [japan](/topic/japan) XXXX%

**Top accounts mentioned or mentioned by**

[@teortaxestex](/creator/undefined) [@tszzl](/creator/undefined) [@bit3snake](/creator/undefined) [@yilengyao1](/creator/undefined) [@semianalysis](/creator/undefined) [@dravidasishu](/creator/undefined) [@aiforsuccess](/creator/undefined) [@ozenhati](/creator/undefined) [@zaiorg](/creator/undefined) [@antifundvc](/creator/undefined) [@geoffreywoo](/creator/undefined) [@markchen90](/creator/undefined) [@billpeeb](/creator/undefined) [@venturetwins](/creator/undefined) [@zephyrz9](/creator/undefined) [@coreweave](/creator/undefined) [@dwarkeshsp](/creator/undefined) [@cognition](/creator/undefined) [@windsurf](/creator/undefined) [@scottwu46](/creator/undefined)

### Top Social Posts [#](/creator/twitter::171962385/posts)

---

Top posts by engagements in the last XX hours

"@dravidasishu India has massive deficit of doctors that need to filled. It will take couple of decades for it at current rate"
[X Link](https://x.com/bookwormengr/status/1980526434180559136) [@bookwormengr](/creator/x/bookwormengr) 2025-10-21T06:47Z 8733 followers, XXX engagements

"Why Andrej likes this DeepSeek-OCR paper As you can see in this architecture diagram - a page with lot of text that would result in thousands of tokens is just represented by XXX tokens and still the LLM is able to understand the representation and convert it to proper OCR output. It does it with more than XX% accuracy. So the obvious conclusion is that this system has been able to compress far many more tokens in just XXX tokens. However this lunch is not totally free. You have to a SAM (Segment Anything Model) a simple convolution layer to reduce token count and CLIP encoder for the"
[X Link](https://x.com/bookwormengr/status/1980662349762818499) [@bookwormengr](/creator/x/bookwormengr) 2025-10-21T15:47Z 8733 followers, 3856 engagements
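The DeepSeek-OCR post above describes compressing a text-dense page into a small number of vision tokens before an LLM decodes it back to text. The sketch below illustrates only that compression idea; it is not the DeepSeek-OCR code, and the patch grid size, embedding width, and 8x strided convolution are illustrative assumptions.

```python
# Illustrative sketch only (not DeepSeek-OCR): a strided convolution
# shrinks a page's patch-feature grid into far fewer "vision tokens".
import torch
import torch.nn as nn

patch_grid = 64      # assumed 64x64 grid of image patches (4096 patches)
hidden_dim = 256     # assumed embedding width

# Stand-in for SAM/CLIP patch features: one embedding per patch.
patch_features = torch.randn(1, hidden_dim, patch_grid, patch_grid)

# An 8x8 strided convolution reduces the grid 8x per side:
# 4096 patch embeddings -> 64 compressed vision tokens.
compressor = nn.Conv2d(hidden_dim, hidden_dim, kernel_size=8, stride=8)

tokens = compressor(patch_features)          # (1, 256, 8, 8)
tokens = tokens.flatten(2).transpose(1, 2)   # (1, 64, 256): 64 tokens
print(tokens.shape)                          # torch.Size([1, 64, 256])
```

The post's point is that these few compressed tokens still carry enough information for the decoder to reconstruct the page text with high accuracy.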
"@venturetwins That technology exists. This is from Lynx paper by ByteDance:"
[X Link](https://x.com/bookwormengr/status/1976488825129193939) [@bookwormengr](/creator/x/bookwormengr) 2025-10-10T03:23Z 8736 followers, XXX engagements

"OpenAi's 2024 revenue was less than 4B If Epoch AI is right their inference bill was 2B So it means inferencing that only XX% margin - this is quite to contrary of what @SemiAnalysis_ reported (their projection was upwards of 80-90% if I am not mistaken)"
[X Link](https://x.com/bookwormengr/status/1977260456906493994) [@bookwormengr](/creator/x/bookwormengr) 2025-10-12T06:30Z 8736 followers, 42.6K engagements
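The post above argues that OpenAI's reported revenue and estimated inference bill imply a far lower inference margin than the 80-90% projection it cites. The exact percentage is scrambled in guest mode; the snippet below only reproduces the post's back-of-the-envelope arithmetic using its own round figures, which are the post's assumptions rather than verified numbers.

```python
# Back-of-the-envelope arithmetic from the post (assumed figures:
# under $4B 2024 revenue, ~$2B inference cost per Epoch AI).
revenue_b = 4.0          # upper bound on 2024 revenue, in $B
inference_cost_b = 2.0   # estimated inference bill, in $B

gross_margin = (revenue_b - inference_cost_b) / revenue_b
print(f"Implied inference gross margin: at most {gross_margin:.0%}")
```

Because the revenue figure is stated as an upper bound, the implied margin would sit at or below this result, well under the 80-90% range the post contrasts it with.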
"Ever wondered how SORA X can generate consistent multishot video 🧵 Read SeeDance paper by ByteDance Seed. - The model is trained on XX second clips that are subdivided into 'shot'. - Each shot (1-12 seconds adding up to 12) has dense captions. Since model has seen this data during training it learns to generate similar videos based on captions describing multiple successive shots. They use DiT (pretty standard these days) to de-noise the image tokens starting from base noise. We will see data prep on the next tweet"
[X Link](https://x.com/bookwormengr/status/1978688413491417130) [@bookwormengr](/creator/x/bookwormengr) 2025-10-16T05:04Z 8734 followers, 1957 engagements

"Poolside guys aim to over do Sama. CoreWeave will be a supplier and a customer"
[X Link](https://x.com/bookwormengr/status/1978768607233339881) [@bookwormengr](/creator/x/bookwormengr) 2025-10-16T10:22Z 8733 followers, 1041 engagements

"Took Pune metro. It is outstanding. Will write a thread with videos. Exceeded all my expectation. And mind you I have a house Singapore next to the best line in Singapore TE (though I don't take it much). So I have seen the best (including Schenzen Seoul etc.) Within the constraints Pune MRT is outstanding and super duper clean. Most people are very well behaved"
[X Link](https://x.com/bookwormengr/status/1979101200839774373) [@bookwormengr](/creator/x/bookwormengr) 2025-10-17T08:24Z 8733 followers, 26.2K engagements

"Low Information bandwidth of vibe coding vs High Information bandwidth of doing it yourself. Andrej Karpathy give precise dense wording to what all of us experience. Andrej explains that typing out what you want in English to an AI model is too much typing and that directly navigating to the relevant part of the code and using autocomplete provides a very high information bandwidth way to specify what you want. This is so true. That is why LLM coding is great for something that is common across most applications but it is not say great for something specific you want. You would have write so"
[X Link](https://x.com/bookwormengr/status/1979726892509937894) [@bookwormengr](/creator/x/bookwormengr) 2025-10-19T01:50Z 8733 followers, 2680 engagements

"This rack scale server from AMD is in direct competition to Nvidia's NVL72. These servers allow the memory of all the 72GPUs to act as a single giant memory allowing users to run long context inference with very large Mixture of Expert (MoE) models. This was missing offering from AMD now they have it The game begins"
[X Link](https://x.com/bookwormengr/status/1978141066067873931) [@bookwormengr](/creator/x/bookwormengr) 2025-10-14T16:49Z 8733 followers, 15.2K engagements

"@vikramchandra These papers are targeted at boomer audience. Since human learning and LLM learning are not same; you can not apply lessons across"
[X Link](https://x.com/bookwormengr/status/1980563717520757068) [@bookwormengr](/creator/x/bookwormengr) 2025-10-21T09:16Z 8733 followers, XX engagements

"It is year 2030 Nvidia is X generations past Blackwell. Andrej: excited to release new repo: NanoAGI. As for training runs: If you spend 100$ (run it while finishing a meal)- it will be as smart as a PhD. If you spend 1000$ (basically over night) - it will help solve nuclear fusion. Did not use openAI AGI to write the code for NanoAGI. It was too out of distribution"
[X Link](https://x.com/bookwormengr/status/1977780527835787535) [@bookwormengr](/creator/x/bookwormengr) 2025-10-13T16:56Z 8733 followers, 331.7K engagements

"Must listen. @OpenAI VP of Research @MillionInt crediting DeepSeek for teaching non-OpenAI American labs how to build reasoning models. @tszzl & @DarioAmodei mentioned DeepSeek GRPO's impact to be minimal. That definitely is not the case - we all know. Credit where credit has due. OpenAI Anthropic are amazing & path breaking; so is DeepSeek with their limited resources. Listen to great podcast with @MillionInt. I greatly appreciate his humility. Also great insights about OpenAI. Listen around 49:59"
[X Link](https://x.com/bookwormengr/status/1979019834638696941) [@bookwormengr](/creator/x/bookwormengr) 2025-10-17T03:01Z 8733 followers, 73.4K engagements

"Your daily reminder most of the fentanyl seized in United States in manufactured in Mexico by Mexican cartels (more than 96%). These cartels import ingredient from China (which are nothing but components of legitimate medicinal drugs) and turn them into fentanyl. Accusing China unfairly is misdiagnosing the problem that will ensure there is no proper solution. This is from BBC"
[X Link](https://x.com/bookwormengr/status/1979892823152337024) [@bookwormengr](/creator/x/bookwormengr) 2025-10-19T12:50Z 8733 followers, 3642 engagements

"Beautiful capture. Happy Diwali my friends. Feels so good to be home"
[X Link](https://x.com/bookwormengr/status/1980668993204809951) [@bookwormengr](/creator/x/bookwormengr) 2025-10-21T16:14Z 8733 followers, XXX engagements