[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]
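
For reference, the scrambled figures on this page unlock with an authenticated request. Below is a minimal sketch in Python, assuming a Bearer-token header; the endpoint path shown is a hypothetical example, so check https://lunarcrush.ai/auth for the actual routes and auth scheme.

```python
import os
import requests

# Keep the key out of source control; read it from the environment.
API_KEY = os.environ["LUNARCRUSH_API_KEY"]

resp = requests.get(
    # Hypothetical creator endpoint path, shown for illustration only.
    "https://lunarcrush.com/api4/public/creator/twitter/AIatMeta/v1",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # unscrambled follower, engagement, and post metrics
```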

# ![@AIatMeta Avatar](https://lunarcrush.com/gi/w:26/cr:twitter::1034844617261248512.png) @AIatMeta AI at Meta

AI at Meta posts on X most often about meta, ai, the first, and future. They currently have XXXXXXX followers and XXX posts still getting attention, totaling XXXXX engagements in the last XX hours.

### Engagements: XXXXX [#](/creator/twitter::1034844617261248512/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1034844617261248512/c:line/m:interactions.svg)

- X Week XXXXXX -XX%
- X Month XXXXXXXXX +880%
- X Months XXXXXXXXX -XX%
- X Year XXXXXXXXXX -XX%

### Mentions: XX [#](/creator/twitter::1034844617261248512/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1034844617261248512/c:line/m:posts_active.svg)

- X Week XX -XX%
- X Month XX +130%
- X Months XX -XX%
- X Year XXX -XX%

### Followers: XXXXXXX [#](/creator/twitter::1034844617261248512/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1034844617261248512/c:line/m:followers.svg)

- X Week XXXXXXX +0.17%
- X Month XXXXXXX +1.10%
- X Months XXXXXXX +7.80%
- X Year XXXXXXX +20%

### CreatorRank: XXXXXXX [#](/creator/twitter::1034844617261248512/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::1034844617261248512/c:line/m:influencer_rank.svg)

### Social Influence

**Social category influence**
[technology brands](/list/technology-brands)  [fashion brands](/list/fashion-brands)  [cryptocurrencies](/list/cryptocurrencies)  [stocks](/list/stocks)  [social networks](/list/social-networks) 

**Social topic influence**
[meta](/topic/meta) #3361, [ai](/topic/ai), [the first](/topic/the-first), [future](/topic/future), [collection](/topic/collection) #4499, [to the](/topic/to-the), [rayban](/topic/rayban) #115, [asr](/topic/asr) #33, [tribe](/topic/tribe), [open source](/topic/open-source)

**Top assets mentioned**
[Microsoft Corp. (MSFT)](/topic/microsoft)

### Top Social Posts
Top posts by engagements in the last XX hours

"πŸ† We're thrilled to announce that Meta FAIRs Brain & AI team won 1st place at the prestigious Algonauts 2025 brain modeling competition. Their 1B parameter model TRIBE (Trimodal Brain Encoder) is the first deep neural network trained to predict brain responses to stimuli across multiple modalities cortical areas and individuals. The approach combines pretrained representations of several foundational models from Meta text (Llama 3.2) audio (Wav2Vec2-BERT from Seamless) and video (V-JEPA 2) to predict a very large amount (80 hours per subject) of spatio-temporal fMRI brain responses to movies"  
[X Link](https://x.com/AIatMeta/status/1954865388749205984)  2025-08-11T11:20Z 734.3K followers, 1.1M engagements
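
The post above describes fusing pretrained text, audio, and video representations to regress spatio-temporal fMRI responses. A minimal sketch of that fuse-then-regress pattern follows; the dimensions, module names, and fusion choice are illustrative assumptions, not the actual TRIBE architecture.

```python
import torch
import torch.nn as nn

class TrimodalRegressorSketch(nn.Module):
    """Illustration only: project time-aligned per-modality features into a
    shared space, fuse them, and regress per-voxel responses. All sizes are
    placeholders, not TRIBE's real configuration."""

    def __init__(self, d_text=4096, d_audio=1024, d_video=1408, d_model=512, n_voxels=1000):
        super().__init__()
        self.proj_text = nn.Linear(d_text, d_model)
        self.proj_audio = nn.Linear(d_audio, d_model)
        self.proj_video = nn.Linear(d_video, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.fuse = nn.TransformerEncoder(layer, num_layers=2)
        self.readout = nn.Linear(d_model, n_voxels)

    def forward(self, text_feats, audio_feats, video_feats):
        # Inputs: (batch, time, d_modality) features from frozen foundation models.
        x = self.proj_text(text_feats) + self.proj_audio(audio_feats) + self.proj_video(video_feats)
        x = self.fuse(x)
        return self.readout(x)  # (batch, time, n_voxels) predicted fMRI responses

model = TrimodalRegressorSketch()
out = model(torch.randn(2, 16, 4096), torch.randn(2, 16, 1024), torch.randn(2, 16, 1408))
print(out.shape)  # torch.Size([2, 16, 1000])
```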


"We built this place on open source Meta Chief Product Officer Chris Cox took to the stage to kick off LlamaCon 2025 reflecting on our long legacy of open source contributions. 🧡"  
[X Link](https://x.com/AIatMeta/status/1917353526088589409)  2025-04-29T23:01Z 734.2K followers, 34.9K engagements


""Exo's use of Llama 405B and consumer-grade devices to run inference at scale on the edge shows that the future of AI is open source and decentralized." - @mo_baioumy"  
[X Link](https://x.com/AIatMeta/status/1834633042339741961)  2024-09-13T16:39Z 734.3K followers, 868.7K engagements


"Today is the start of a new era of natively multimodal AI innovation. Today were introducing the first Llama X models: Llama X Scout and Llama X Maverick our most advanced models yet and the best in their class for multimodality. Llama X Scout 17B-active-parameter model with XX experts. Industry-leading context window of 10M tokens. Outperforms Gemma X Gemini XXX Flash-Lite and Mistral XXX across a broad range of widely accepted benchmarks. Llama X Maverick 17B-active-parameter model with XXX experts. Best-in-class image grounding with the ability to align user prompts with relevant visual"  
[X Link](https://x.com/AIatMeta/status/1908598456144531660)  2025-04-05T19:11Z 734.3K followers, 3.7M engagements


"Your first look at whats coming up for LlamaCon on April 29th Mark will be sitting down with Microsoft Chairman and CEO @satyanadella to discuss the latest trends in AI for devs; and with @databricks Co-Founder and CEO Ali Ghodsi on open source AI + advice for founders"  
[X Link](https://x.com/AIatMeta/status/1910361356500685206)  2025-04-10T15:56Z 734.3K followers, 170.1K engagements


"Today Mark shared Metas vision for the future of personal superintelligence for everyone. Read his full letter here:"  
[X Link](https://x.com/AIatMeta/status/1950543458609037550)  2025-07-30T13:06Z 734.3K followers, 2.2M engagements


"Introducing DINOv3: a state-of-the-art computer vision model trained with self-supervised learning (SSL) that produces powerful high-resolution image features. For the first time a single frozen vision backbone outperforms specialized solutions on multiple long-standing dense prediction tasks. Learn more about DINOv3 here:"  
[X Link](https://x.com/AIatMeta/status/1956027795051831584)  2025-08-14T16:19Z 734.3K followers, 896.2K engagements
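
The DINOv3 post's central claim is that one frozen vision backbone can serve many downstream tasks. Here is a minimal sketch of the frozen-backbone-plus-lightweight-head pattern, using the publicly available DINOv2 hub entrypoint as a stand-in because DINOv3 loading details aren't given on this page.

```python
import torch
import torch.nn as nn

# Stand-in backbone: the public DINOv2 ViT-B/14 hub entrypoint. A DINOv3
# checkpoint would presumably slot in the same way via its own loader.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad_(False)  # the backbone stays frozen

head = nn.Linear(768, 21)  # e.g. a 21-class linear probe; only this part trains

images = torch.randn(2, 3, 224, 224)  # placeholder batch of RGB images
with torch.no_grad():
    feats = backbone(images)  # (2, 768) global image features from the frozen backbone
logits = head(feats)
print(logits.shape)  # torch.Size([2, 21])
```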


"Today were excited to unveil a new generation of Segment Anything Models: X SAM X enables detecting segmenting and tracking of objects across images and videos now with short text phrases and exemplar prompts. πŸ”— Learn more about SAM 3: X SAM 3D brings the model collection into the 3rd dimension to enable precise reconstruction of 3D objects and people from a single 2D image. πŸ”— Learn more about SAM 3D: These models offer innovative capabilities and unique tools for developers and researchers to create experiment and uplevel media workflows"  
[X Link](https://x.com/AIatMeta/status/1991178519557046380)  2025-11-19T16:15Z 734.3K followers, 1.1M engagements


"Introducing SAM 3D the newest addition to the SAM collection bringing common sense 3D understanding of everyday images. SAM 3D includes two models: πŸ›‹ SAM 3D Objects for object and scene reconstruction πŸ§‘πŸ€πŸ§‘ SAM 3D Body for human pose and shape estimation Both models achieve state-of-the-art performance transforming static 2D images into vivid accurate reconstructions. πŸ”— Learn more:"  
[X Link](https://x.com/AIatMeta/status/1991184188402237877)  2025-11-19T16:37Z 734.3K followers, 841.8K engagements


"Meet SAM X a unified model that enables detection segmentation and tracking of objects across images and videos. SAM X introduces some of our most highly requested features like text and exemplar prompts to segment all objects of a target category. Learnings from SAM X will help power new features in Instagram Edits and Vibes bringing advanced segmentation capabilities directly to creators. πŸ”—Learn"  
[X Link](https://x.com/AIatMeta/status/1991191525867270158)  2025-11-19T17:07Z 734.3K followers, 102.2K engagements
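
SAM 3 is described above as taking short text phrases to find every object of a target category. Its own interface isn't documented on this page, so the sketch below uses OWL-ViT, a publicly available open-vocabulary detector in Hugging Face Transformers, purely to illustrate what text-prompted detection looks like in practice.

```python
from transformers import pipeline

# OWL-ViT as a stand-in for text-prompted detection; this is not SAM 3's API,
# just the same prompting idea demonstrated with a public model.
detector = pipeline("zero-shot-object-detection", model="google/owlvit-base-patch32")

results = detector(
    "street_scene.jpg",  # placeholder image path
    candidate_labels=["a yellow taxi", "a traffic light", "a pedestrian"],
)
for r in results:
    print(r["label"], round(r["score"], 3), r["box"])  # label, confidence, bounding box
```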


"Collecting a high quality dataset with 4M unique phrases and 52M corresponding object masks helped SAM X achieve 2x the performance of baseline models. Kate a researcher on SAM X explains how the data engine made this leap possible. πŸ”— Read the SAM X research paper:"  
[X Link](https://x.com/AIatMeta/status/1991640180185317644)  2025-11-20T22:49Z 734.3K followers, 35.3K engagements


"Were advancing on-device AI with ExecuTorch now deployed across devices including Meta Quest X Ray-Ban Meta Oakley Meta Vanguard and Meta Ray-Ban Display. By eliminating conversion steps and supporting pre-deployment validation in PyTorch ExecuTorch accelerates the path from research to production ensuring consistent efficient AI across a diverse hardware ecosystem. Read the full technical deep dive:"  
[X Link](https://x.com/AIatMeta/status/1991901746579509542)  2025-11-21T16:09Z 734.3K followers, 71.6K engagements
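
The ExecuTorch post is about going from PyTorch research code to an on-device runtime without separate conversion tooling. A minimal sketch of the documented export-to-.pte flow with a toy module follows; exact module paths and APIs can vary across ExecuTorch versions.

```python
import torch
from executorch.exir import to_edge  # ExecuTorch lowering entry point

class TinyNet(torch.nn.Module):
    """Toy stand-in for a real model."""
    def forward(self, x):
        return torch.nn.functional.relu(x) * 2.0

model = TinyNet().eval()
example_inputs = (torch.randn(1, 8),)

exported = torch.export.export(model, example_inputs)  # standard PyTorch 2 export
edge_program = to_edge(exported)                        # lower to the Edge dialect
et_program = edge_program.to_executorch()               # produce the runtime program

with open("tinynet.pte", "wb") as f:
    f.write(et_program.buffer)  # artifact loaded by the on-device ExecuTorch runtime
```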


"The Segment Anything Playground is a new way to interact with media. Experiment with Metas most advanced segmentation models including SAM X + SAM 3D and discover how these capabilities can transform your creative projects and technical workflows. Check out some inspo and tips in the 🧡 below then head over to the Playground to get started:"  
[X Link](https://x.com/AIatMeta/status/1991942484633821553)  2025-11-21T18:51Z 734.3K followers, 20.2K engagements


"SAM 3s ability to precisely detect and track objects is helping @ConservationX measure the survival of animal species around the world and prevent their extinction. πŸ”— Learn more about the work:"  
[X Link](https://x.com/AIatMeta/status/1993020997721899473)  2025-11-24T18:16Z 734.3K followers, 22.9K engagements


"We partnered with @ConservationX to build the SA-FARI dataset with 10000+ annotated videos including over XXX species of animals. Were sharing this dataset to help with conservation efforts around the globe. πŸ”— Find it here:"  
[X Link](https://x.com/AIatMeta/status/1993020999286263869)  2025-11-24T18:16Z 734.3K followers, 10K engagements


"SAM 3D is helping advance the future of rehabilitation. See how researchers at @CarnegieMellon are using SAM 3D to capture and analyze human movement in clinical settings opening the doors to personalized data-driven insights in the recovery process. πŸ”— Learn more about SAM 3D:"  
[X Link](https://x.com/AIatMeta/status/1993386243170714073)  2025-11-25T18:28Z 734.3K followers, 60.2K engagements


"Introducing Meta Omnilingual Automatic Speech Recognition (ASR) a suite of models providing ASR capabilities for over 1600 languages including XXX low-coverage languages never before served by any ASR system. While most ASR systems focus on a limited set of languages that are well-represented on the internet this release marks a major step toward building a truly universal transcription system. πŸ”— Learn more:"  
[X Link](https://x.com/AIatMeta/status/1987946571439444361)  2025-11-10T18:12Z 734.3K followers, 529.6K engagements
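
The Omnilingual ASR checkpoints aren't referenced by name here, so the sketch below shows the generic transcription-pipeline pattern using Meta's earlier MMS multilingual model as a publicly available stand-in; swap in the Omnilingual ASR release artifacts where your setup supports them.

```python
from transformers import pipeline

# MMS (Meta's earlier massively multilingual ASR model) as a stand-in;
# the Omnilingual ASR release may ship with its own loading path.
asr = pipeline("automatic-speech-recognition", model="facebook/mms-1b-all")

result = asr("clip.wav")  # placeholder path to a 16 kHz mono audio file
print(result["text"])
```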


"Were in San Diego this week for #NeurIPS2025 Stop by the Meta booth (#1223) to meet our team and check out: πŸ”Ž Demos of our latest research including DINOv3 and UMA ⚑ Lightning talks from researchers behind SAM X Omnilingual ASR and more (see schedule below) πŸ‘“ Hands-on demos with our latest AI glasses including the Meta Ray-Ban Display Our team is also sharing 19+ papers and 13+ workshops this week. We hope to see you there"  
[X Link](https://x.com/AIatMeta/status/1995531733391798452)  2025-12-01T16:33Z 734.3K followers, 76.6K engagements
