[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# @Marktechpost Marktechpost AI Dev News ⚡

Marktechpost AI Dev News ⚡ posts on X most about uc, compact, $googl, and ace. They currently have XXXXX followers and XX posts still getting attention, totaling XXXXX engagements in the last XX hours.

### Engagements: XXXXX [#](/creator/twitter::717930546391687170/interactions)

- X Week XXXXXX +129%
- X Month XXXXXX +163%
- X Months XXXXXXX -XX%
- X Year XXXXXXX +87%

### Mentions: X [#](/creator/twitter::717930546391687170/posts_active)

- X Week XX -XXXX%
- X Month XX +50%
- X Months XXX +11%
- X Year XXX +14%

### Followers: XXXXX [#](/creator/twitter::717930546391687170/followers)

- X Week XXXXX +0.41%
- X Month XXXXX +1.80%
- X Months XXXXX +11%
- X Year XXXXX +33%

### CreatorRank: XXXXXXX [#](/creator/twitter::717930546391687170/influencer_rank)

### Social Influence [#](/creator/twitter::717930546391687170/influence)

---

**Social category influence** [technology brands](/list/technology-brands) XXXXX%, [stocks](/list/stocks) XXXXX%, [cryptocurrencies](/list/cryptocurrencies) XXXX%, [travel destinations](/list/travel-destinations) XXXX%

**Social topic influence** [uc](/topic/uc) 6.06%, [compact](/topic/compact) 6.06%, [$googl](/topic/$googl) 6.06%, [ace](/topic/ace) 3.03%, [instead of](/topic/instead-of) 3.03%, [playbook](/topic/playbook) 3.03%, [llm](/topic/llm) 3.03%, [agentic](/topic/agentic) 3.03%, [university of](/topic/university-of) 3.03%, [open ai](/topic/open-ai) XXXX%

**Top accounts mentioned or mentioned by** [@stanford](/creator/undefined) [@sfresearch](/creator/undefined) [@servicenow](/creator/undefined) [@servicenowrsrch](/creator/undefined) [@googleaidevs](/creator/undefined) [@googleresearch](/creator/undefined) [@googleai](/creator/undefined) [@nvidia](/creator/undefined) [@nvidiaai](/creator/undefined)
[@nvidianewsroom](/creator/undefined) [@alibabaqwen](/creator/undefined) [@langchainai](/creator/undefined) [@sambanovaai](/creator/undefined) [@wenhaocha1](/creator/undefined) [@zhoushang](/creator/undefined) [@zihanzheng71803](/creator/undefined) [@databricks](/creator/undefined) [@dbrxmosaicai](/creator/undefined) [@splunk](/creator/undefined) [@aiatmeta](/creator/undefined)

**Top assets mentioned** [Alphabet Inc Class A (GOOGL)](/topic/$googl) [IBM (IBM)](/topic/ibm) [ServiceNow Inc (NOW)](/topic/servicenow)

### Top Social Posts [#](/creator/twitter::717930546391687170/posts)

---

Top posts by engagements in the last XX hours

"Agentic Context Engineering (ACE): Self-Improving LLMs via Evolving Contexts, Not Fine-Tuning. TL;DR: A team of researchers from Stanford University, SambaNova Systems, and UC Berkeley introduce the ACE framework, which improves LLM performance by editing and growing the input context instead of updating model weights. Context is treated as a living playbook maintained by three roles (Generator, Reflector, Curator), with small delta items merged incrementally to avoid brevity bias and context collapse. Reported gains: +10.6% on AppWorld agent tasks, +8.6% on finance reasoning, and XXXX% average latency" [X Link](https://x.com/Marktechpost/status/1976614553002930678) [@Marktechpost](/creator/x/Marktechpost) 2025-10-10T11:43Z 9872 followers, 6434 engagements

"Are your LLM code benchmarks actually rejecting wrong-complexity solutions and interactive-protocol violations, or are they passing under-specified unit tests? A team of researchers from UCSD, NYU, University of Washington, Princeton University, Canyon Crest Academy, OpenAI, UC Berkeley, MIT, University of Waterloo, and Sentient Labs introduce AutoCode, a new AI framework that lets LLMs create and verify competitive programming problems, mirroring the workflow of human problem setters. AutoCode reframes evaluation for code-reasoning models by treating problem setting (not only problem solving) as the" [X Link](https://x.com/Marktechpost/status/1979473678065905847) [@Marktechpost](/creator/x/Marktechpost) 2025-10-18T09:04Z 9871 followers, XXX engagements

"Alibaba's Qwen AI Releases Compact, Dense Qwen3-VL 4B/8B (Instruct & Thinking) With FP8 Checkpoints. Qwen introduced compact dense Qwen3-VL models at 4B and 8B, each in Instruct and Thinking variants, plus first-party FP8 checkpoints that use fine-grained FP8 (block size 128) and report near-BF16 quality for materially lower VRAM. The release retains the full capability surface (long-document and video understanding, 32-language OCR, spatial grounding) and supports a 256K context window extensible to 1M, positioning these SKUs for single-GPU and edge deployments without sacrificing multimodal breadth." [X Link](https://x.com/Marktechpost/status/1978294944537415777) [@Marktechpost](/creator/x/Marktechpost) 2025-10-15T03:00Z 9862 followers, 1264 engagements

"Google + Yale release C2S-Scale 27B (Gemma-based model): converts scRNA-seq into cell sentences for LLM-native single-cell analysis. Dual-context virtual screen across 4000 compounds targets interferon-conditional antigen presentation. Model flags CK2 inhibition (silmitasertib) + low-dose IFN MHC-I boost; prediction validated in living cells. Open weights on Hugging Face enable replication and benchmarking. Full analysis: Paper: Model on HF: GitHub Repo: @googleaidevs @GoogleResearch @GoogleAI" [X Link](https://x.com/Marktechpost/status/1979091268484633031) [@Marktechpost](/creator/x/Marktechpost) 2025-10-17T07:45Z 9861 followers, 2463 engagements

"IBM has released two new embedding models, granite-embedding-english-r2 (149M) and granite-embedding-small-english-r2 (47M), built on ModernBERT with support for 8192-token context, optimized attention mechanisms, and FlashAttention X.
Both models deliver strong performance on benchmarks like MTEB, BEIR, CoIR, and MLDR while maintaining high throughput on GPUs and CPUs, making them ideal for large-scale retrieval and RAG pipelines. Crucially, they are released under the Apache XXX license, ensuring unrestricted commercial use. full analysis: paper: granite-embedding-small-english-r2:" [X Link](https://x.com/Marktechpost/status/1966714502235525123) [@Marktechpost](/creator/x/Marktechpost) 2025-09-13T04:04Z 9868 followers, XXX engagements

"Andrej Karpathy Releases nanochat: A Minimal End-to-End ChatGPT-Style Pipeline You Can Train in X Hours for $XXX. Andrej Karpathy's nanochat is an 8K-LOC, dependency-light, full-stack ChatGPT-style pipeline that you can run end-to-end on a single 8xH100 node, producing a usable chat model and Web UI in X hours for roughly $XXX. The stack includes a Rust BPE tokenizer, base pretraining on FineWeb-EDU, mid-training (SmolTalk/MMLU aux/GSM8K with tool-use tags), SFT, optional simplified GRPO on GSM8K, a thin inference engine (KV cache, prefill/decode, Python-interpreter tool), and an auto-generated with" [X Link](https://x.com/Marktechpost/status/1978155416162083035) [@Marktechpost](/creator/x/Marktechpost) 2025-10-14T17:46Z 9863 followers, 3843 engagements

"Weak-for-Strong (W4S): A Novel Reinforcement Learning Algorithm that Trains a Weak Meta-Agent to Design Agentic Workflows with Stronger LLMs. TL;DR: (1) W4S trains a 7B weak meta-agent with RLAO to write Python workflows that harness stronger executors, modeled as a multi-turn MDP. (2) On HumanEval with GPT-4o mini as executor, W4S reaches Pass@1 of XXXX with about XX minutes of optimization and about XXX dollars total cost, beating automated baselines under the same executor.
(3) Across XX benchmarks, W4S improves over the strongest baseline by XXX% to XXXX% while avoiding fine-tuning of the strong" [X Link](https://x.com/Marktechpost/status/1979803547173794180) [@Marktechpost](/creator/x/Marktechpost) 2025-10-19T06:55Z 9870 followers, 1713 engagements

"Meet LangChain's DeepAgents Library, and a Practical Example to See How DeepAgents Actually Work in Action. While a basic Large Language Model (LLM) agent, one that repeatedly calls external tools, is easy to create, these agents often struggle with long and complex tasks because they lack the ability to plan ahead and manage their work over time. They can be considered shallow in their execution. The deepagents library is designed to overcome this limitation by implementing a general architecture inspired by advanced applications like Deep Research and Claude Code. Full Analysis and Implementation:" [X Link](https://x.com/Marktechpost/status/1980257744029687887) [@Marktechpost](/creator/x/Marktechpost) 2025-10-20T13:00Z 9862 followers, XXX engagements

"Meet OpenTSLM: A Family of Time-Series Language Models (TSLMs) Revolutionizing Medical Time-Series Analysis. A significant development is set to transform AI in healthcare. Researchers at Stanford University, in collaboration with ETH Zurich and tech leaders including Google Research and Amazon, have introduced OpenTSLM, a novel family of Time-Series Language Models (TSLMs).
This breakthrough addresses a critical limitation in current LLMs by enabling them to interpret and reason over complex, continuous medical time-series data such as ECGs, EEGs, and wearable sensor streams, a feat where even" [X Link](https://x.com/Marktechpost/status/1977146374572626063) [@Marktechpost](/creator/x/Marktechpost) 2025-10-11T22:56Z 9871 followers, XXX engagements

"ServiceNow AI Research Releases DRBench, a Realistic Enterprise Deep-Research Benchmark. DRBench is a reproducible, enterprise-grade benchmark and environment for evaluating deep research agents on open-ended tasks that require synthesizing evidence from both public web sources and private organizational data (documents, emails, chats, cloud files). The initial release includes XX tasks across XX domains, distributes relevant and distractor insights across multiple applications, and scores outputs on Insight Recall, Distractor Avoidance, Factuality, and Report Quality. A baseline DRBench Agent (DRBA)" [X Link](https://x.com/Marktechpost/status/1978003687059722627) [@Marktechpost](/creator/x/Marktechpost) 2025-10-14T07:43Z 9871 followers, 1486 engagements

"DeepSeek Just Released a 3B OCR Model: A 3B VLM Designed for High-Performance OCR and Structured Document Conversion. DeepSeek AI releases DeepSeek OCR, a 3B vision-language model for document understanding. It encodes pages into compact vision tokens, then decodes with a MoE decoder to recover text. This design cuts sequence length and memory growth on long documents. Reported results show about XX% decoding precision at near 10x compression on Fox.
The research team also reports strong efficiency on OmniDocBench, surpassing GOT OCR XXX using about XXX vision tokens and outperforming MinerU 2.0" [X Link](https://x.com/Marktechpost/status/1980434402875437178) [@Marktechpost](/creator/x/Marktechpost) 2025-10-21T00:42Z 9871 followers, XXX engagements

"The Local AI Revolution: Expanding Generative AI with GPT-OSS-20B and the NVIDIA RTX AI PC. The landscape of AI is expanding. Today, many of the most powerful LLMs (large language models) reside primarily in the cloud, offering incredible capabilities but also concerns about privacy and limitations around how many files you can upload or how long they stay loaded. Now a powerful new paradigm is emerging. This is the dawn of local, private AI. This switch to local PCs is catalyzed by the release of powerful open models like OpenAI's new gpt-oss and supercharged by accelerations provided by NVIDIA RTX AI" [X Link](https://x.com/Marktechpost/status/1980309356496429338) [@Marktechpost](/creator/x/Marktechpost) 2025-10-20T16:25Z 9870 followers, 33.7K engagements

"How I Built an Intelligent Multi-Agent System with AutoGen, LangChain, and Hugging Face to Demonstrate Practical Agentic AI Workflows. In this tutorial, we dive into the essence of Agentic AI by uniting LangChain, AutoGen, and Hugging Face into a single, fully functional framework that runs without paid APIs. We begin by setting up a lightweight open-source pipeline and then progress through structured reasoning, multi-step workflows, and collaborative agent interactions.
As we move from LangChain chains to simulated multi-agent systems, we experience how reasoning, planning, and execution can" [X Link](https://x.com/Marktechpost/status/1980862797291688265) [@Marktechpost](/creator/x/Marktechpost) 2025-10-22T05:04Z 9871 followers, 1068 engagements

"Anthrogen Introduces Odyssey: A 102B-Parameter Protein Language Model that Replaces Attention with Consensus and Trains with Discrete Diffusion. Odyssey is Anthrogen's multimodal protein language model family that fuses sequence tokens, FSQ structure tokens, and functional context for generation, editing, and conditional design. It replaces self-attention with Consensus, which scales as O(L), and reports improved training stability. It trains and samples with discrete diffusion for joint sequence and structure denoising. It ships in production variants from 1.2B to 102B parameters. It claims about 10x" [X Link](https://x.com/Marktechpost/status/1981253059776033199) [@Marktechpost](/creator/x/Marktechpost) 2025-10-23T06:55Z 9870 followers, XXX engagements
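The ACE post above describes context treated as a "living playbook" into which small delta items are merged incrementally rather than rewriting the whole context. A minimal sketch of that merge idea in Python, where all class names, fields, and the merge policy are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class DeltaItem:
    key: str        # short identifier for a lesson or strategy (assumed structure)
    text: str       # the playbook entry itself
    helpful: int = 0

@dataclass
class Playbook:
    items: dict = field(default_factory=dict)

    def merge(self, delta: DeltaItem) -> None:
        # Incremental merge: update the matching entry in place instead of
        # regenerating the whole context, which is the failure mode the post
        # calls "context collapse" / "brevity bias".
        if delta.key in self.items:
            existing = self.items[delta.key]
            existing.helpful += 1
            existing.text = delta.text
        else:
            self.items[delta.key] = delta

    def render(self) -> str:
        # The rendered playbook would be prepended to the model's input.
        return "\n".join(f"- {item.text}" for item in self.items.values())

# Curator merging deltas proposed by a Reflector over several episodes:
playbook = Playbook()
playbook.merge(DeltaItem("retry", "Retry API calls that time out."))
playbook.merge(DeltaItem("units", "Normalize currencies before comparing amounts."))
playbook.merge(DeltaItem("retry", "Retry timed-out API calls at most twice."))
print(playbook.render())
```

In this sketch the Generator and Reflector roles are implicit (they would produce the `DeltaItem`s from task traces); only the Curator's merge step is shown, since that is the mechanism the post credits with avoiding full-context rewrites.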