[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# ![@Marktechpost Avatar](https://lunarcrush.com/gi/w:26/cr:twitter::717930546391687170.png) @Marktechpost Marktechpost AI Dev News ⚡

Marktechpost AI Dev News ⚡ posts on X most often about $googl, servicenow, samsung, and outer. They currently have XXXXX followers, and XXX posts are still receiving attention, totaling XXX engagements in the last XX hours.

### Engagements: XXX [#](/creator/twitter::717930546391687170/interactions)
![Engagements Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::717930546391687170/c:line/m:interactions.svg)

- X Week XXXXXX -XX%
- X Month XXXXXX +177%
- X Months XXXXXXX -XX%
- X Year XXXXXXX +83%

### Mentions: X [#](/creator/twitter::717930546391687170/posts_active)
![Mentions Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::717930546391687170/c:line/m:posts_active.svg)

- X Week XX no change
- X Month XX +42%
- X Months XXX +5%
- X Year XXX +17%

### Followers: XXXXX [#](/creator/twitter::717930546391687170/followers)
![Followers Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::717930546391687170/c:line/m:followers.svg)

- X Week XXXXX +0.47%
- X Month XXXXX +1.80%
- X Months XXXXX +12%
- X Year XXXXX +33%

### CreatorRank: XXXXXXXXX [#](/creator/twitter::717930546391687170/influencer_rank)
![CreatorRank Line Chart](https://lunarcrush.com/gi/w:600/cr:twitter::717930546391687170/c:line/m:influencer_rank.svg)

### Social Influence [#](/creator/twitter::717930546391687170/influence)
---

**Social category influence**
[technology brands](/list/technology-brands) XXXX%, [stocks](/list/stocks) XXXX%, [finance](/list/finance) XXX%, [cryptocurrencies](/list/cryptocurrencies) XXX%, [travel destinations](/list/travel-destinations) XXX%

**Social topic influence**
[$googl](/topic/$googl) 1.79%, [servicenow](/topic/servicenow) 1.19%, [samsung](/topic/samsung) 0.6%, [outer](/topic/outer) 0.6%, [a very](/topic/a-very) 0.6%, [cest](/topic/cest) 0.6%, [token](/topic/token) 0.6%, [ema](/topic/ema) 0.6%, [ace](/topic/ace) 0.6%, [instead of](/topic/instead-of) XXX%

**Top accounts mentioned or mentioned by**
@ipfconline1 @sfresearch @nvidia @nvidiaai @bytedancetalk @aiatmeta @microsoft @googledeepmind @googleai @ibm @alibabaqwen @openai @ibmresearch @meta @servicenow @servicenowrsrch @mistralai @googleaidevs @stanford @togethercompute

**Top assets mentioned**
[Alphabet Inc Class A (GOOGL)](/topic/$googl) [ServiceNow Inc (NOW)](/topic/servicenow)

### Top Social Posts [#](/creator/twitter::717930546391687170/posts)
---
Top posts by engagements in the last XX hours

"Samsung introduced a tiny X Million parameter model that just beat DeepSeek-R1 Gemini XXX pro and o3-mini at reasoning on both ARG-AGI X and ARC-AGI X Samsungs Tiny Recursive Model (TRM) is a 7M-parameter two-layer solver that replaces token-by-token decoding with an iterative draft latent-think revise loop: X scratchpad updates per outer step unrolled up to XX steps with full backprop through the recursion. On public protocols it reports XX% on ARC-AGI-1 and X% (two-try) on ARC-AGI-2 and also XXXX% on Sudoku-Extreme and XXXX% on Maze-Hard. Code is available on GitHub. full analysis: paper:"  
[X Link](https://x.com/Marktechpost/status/1976405546157961356) [@Marktechpost](/creator/x/Marktechpost) 2025-10-09T21:52Z 9851 followers, 1182 engagements
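
As a rough illustration of the draft / latent-think / revise recursion that post describes, here is a minimal PyTorch-style sketch; the module sizes, update counts, and residual-update form are assumptions for illustration, not TRM's published architecture:

```python
import torch
import torch.nn as nn

class TinyRecursiveSolver(nn.Module):
    """Sketch of a TRM-style solver: one small core network is reused to
    (a) refine a latent scratchpad several times, then (b) revise the
    answer draft. Dimensions and step counts are illustrative."""

    def __init__(self, d=256, n_latent_updates=6, n_outer_steps=16):
        super().__init__()
        self.core = nn.Sequential(nn.Linear(3 * d, d), nn.ReLU(), nn.Linear(d, d))
        self.n_latent_updates = n_latent_updates
        self.n_outer_steps = n_outer_steps

    def forward(self, x):
        # x: encoded puzzle, shape (batch, d)
        y = torch.zeros_like(x)   # current answer draft
        z = torch.zeros_like(x)   # latent scratchpad
        for _ in range(self.n_outer_steps):          # unrolled outer loop
            for _ in range(self.n_latent_updates):   # "think": update scratchpad
                z = z + self.core(torch.cat([x, y, z], dim=-1))
            y = y + self.core(torch.cat([x, y, z], dim=-1))  # revise the answer
        return y  # gradients flow through every recursion step (full backprop)
```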


"QeRL: NVFP4-Quantized Reinforcement Learning (RL) Brings 32B LLM Training to a Single H100While Improving Exploration TL;DR: QeRL open-sources a quantization-enhanced RL pipeline that runs 4-bit NVFP4 weights with LoRA updates to accelerate the rollout bottleneck. QeRL reports XXX rollout speedups parity or gains over 16-bit LoRA/QLoRA on math reasoning and the first RL training of a 32B policy on a single H100-80GB. Adaptive Quantization Noise schedules channel-wise perturbations to raise policy entropy and improve exploration during training. NVFP4 provides a hardware-optimized 4-bit"  
[X Link](https://x.com/Marktechpost/status/1978681811795636718) [@Marktechpost](/creator/x/Marktechpost) 2025-10-16T04:38Z 9853 followers, 10.3K engagements
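
A minimal sketch of the "Adaptive Quantization Noise" idea as the post describes it: channel-wise perturbations whose scale is annealed over training. The schedule shape and magnitudes below are hypothetical, not QeRL's published values:

```python
import torch

def adaptive_quantization_noise(weight, step, total_steps,
                                sigma_start=0.05, sigma_end=0.005):
    """Channel-wise noise on (quantized) weight rows, annealed over training.
    Larger early noise raises policy entropy (more exploration); it decays
    as training converges. All scales here are illustrative."""
    frac = step / max(total_steps, 1)
    sigma = sigma_start + (sigma_end - sigma_start) * frac  # linear anneal
    # one noise scale per output channel, broadcast across each row
    noise = torch.randn(weight.shape[0], 1) * sigma
    return weight + noise * weight.abs().mean(dim=1, keepdim=True)
```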


"ServiceNow AI Releases Apriel-1.5-15B-Thinker: An Open-Weights Multimodal Reasoning Model that Hits Frontier-Level Performance on a Single-GPU Budget ServiceNow AI Researchs Apriel-1.5-15B-Thinker is a 15-billion-parameter open-weights multimodal reasoning model trained via mid-training (continual pretraining) plus supervised fine-tuningwith no reinforcement learningthat achieves an Artificial Analysis Intelligence Index (AAI) score of XX and discloses task results of AIME 2025 XX GPQA Diamond XX LiveCodeBench XX Instruction-Following Benchmark XX and Tau-squared Bench (Telecom) 68; it is"  
[X Link](https://x.com/Marktechpost/status/1973632373263929812) [@Marktechpost](/creator/x/Marktechpost) 2025-10-02T06:13Z 9842 followers, XXX engagements


"Here is a very interesting upcoming AI webinar from deepset Topic: Scaling AI with Haystack Enterprise: A Developers Guide When: October XX 2025 10am ET 3pm BST 4pm CEST In this webinarJulian RischandBilge Ycelwill show howHaystack Enterprisehelps developers bridge that gap bringing the speed and flexibility of open source together with the support enterprises need. Youll learn how to: (1) Extend your expertisewith direct access to the Haystack engineering team through private support and consultation hours. (2) Deploy with confidenceusing Helm charts and best-practice guides for secure"  
[X Link](https://x.com/Marktechpost/status/1976389104842748262) [@Marktechpost](/creator/x/Marktechpost) 2025-10-09T20:47Z 9843 followers, XXX engagements


"Google Introduces Speech-to-Retrieval (S2R) Approach that Maps a Spoken Query Directly to an Embedding and Retrieves Information without First Converting Speech to Text Googles Speech-to-Retrieval (S2R) replaces the ASRtextretrieval cascade with a dual-encoder system that maps spoken queries directly to audio embeddings and retrieves matching document embeddings; in production its now powering Voice Search in multiple languages where evaluations show S2R beating the cascade baseline and approaching an upper bound built from human-verified transcripts while Google has released the Simple Voice"  
[X Link](https://x.com/Marktechpost/status/1977557715582108153) [@Marktechpost](/creator/x/Marktechpost) 2025-10-13T02:11Z 9843 followers, XXX engagements
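
The dual-encoder retrieval pattern the post describes can be sketched as follows; the encoder choices, dimensions, and cosine-similarity scoring are assumptions, not Google's production system:

```python
import torch
import torch.nn as nn

class DualEncoderS2R(nn.Module):
    """Sketch: one tower embeds spoken queries (audio features), another
    embeds documents; retrieval is nearest-neighbor in the shared space.
    No ASR transcript is produced anywhere in the path."""

    def __init__(self, d_audio=80, d_text=512, d_shared=256):
        super().__init__()
        self.audio_encoder = nn.GRU(d_audio, d_shared, batch_first=True)
        self.doc_encoder = nn.Linear(d_text, d_shared)  # stand-in for a text tower

    def embed_query(self, audio_frames):          # (batch, time, d_audio)
        _, h = self.audio_encoder(audio_frames)
        return nn.functional.normalize(h[-1], dim=-1)

    def retrieve(self, audio_frames, doc_embs, k=5):
        q = self.embed_query(audio_frames)        # (batch, d_shared)
        scores = q @ nn.functional.normalize(doc_embs, dim=-1).T
        return scores.topk(k, dim=-1).indices     # top-k document ids
```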


"NVIDIA Researchers Propose Reinforcement Learning Pretraining (RLP): Reinforcement as a Pretraining Objective for Building Reasoning During Pretraining RLP makes think-before-predict a pretraining objective: it samples a short chain-of-thought as an action and rewards it by information gainthe log-likelihood improvement of the next token versus a no-think EMA teacheryielding a verifier-free dense position-wise signal that works on ordinary text streams at scale; empirically RLP lifts Qwen3-1.7B math+science averages by +19% vs Base and +17% vs compute-matched CPT with gains persisting after"  
[X Link](https://x.com/Marktechpost/status/1978038591138979874) [@Marktechpost](/creator/x/Marktechpost) 2025-10-14T10:02Z 9841 followers, 1069 engagements
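
A toy rendering of the information-gain reward as the post defines it (next-token log-likelihood improvement over a no-think EMA teacher); the tensor shapes and usage values are illustrative:

```python
import torch

def rlp_information_gain_reward(policy_logp_next, teacher_logp_next):
    """Reward per the post's description: how much a sampled chain-of-thought
    improves the log-prob of the ground-truth next token, relative to a
    no-think EMA teacher. Inputs: per-position log-probs, shape (batch, seq).
    The result is a dense, position-wise signal needing no external verifier."""
    return policy_logp_next - teacher_logp_next

# hypothetical usage: reward > 0 where "thinking" helped predict the token
r = rlp_information_gain_reward(torch.tensor([[-1.2, -0.4]]),
                                torch.tensor([[-1.5, -0.9]]))
print(r)  # tensor([[0.3000, 0.5000]])
```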


"Agentic Context Engineering (ACE): Self-Improving LLMs via Evolving Contexts Not Fine-Tuning TL;DR: A team of researchers from Stanford University SambaNova Systems and UC Berkeley introduce ACE framework that improves LLM performance by editing and growing the input context instead of updating model weights. Context is treated as a living playbook maintained by three rolesGenerator Reflector Curatorwith small delta items merged incrementally to avoid brevity bias and context collapse. Reported gains: +10.6% on AppWorld agent tasks +8.6% on finance reasoning and XXXX% average latency"  
[X Link](https://x.com/Marktechpost/status/1976614553002930678) [@Marktechpost](/creator/x/Marktechpost) 2025-10-10T11:43Z 9851 followers, 6408 engagements
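
A minimal sketch of the Generator/Reflector/Curator cycle and incremental delta merging described above; the three callables stand in for LLM calls whose prompts and signatures are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Playbook:
    """Sketch of ACE's 'living playbook': context items are appended as
    small deltas rather than rewritten wholesale, the mechanism the post
    credits with avoiding brevity bias and context collapse."""
    items: list = field(default_factory=list)

    def merge_delta(self, delta_items):
        self.items.extend(delta_items)  # incremental merge, no overwrite

def ace_step(task, playbook, generate, reflect, curate):
    """One Generator -> Reflector -> Curator cycle over the playbook."""
    attempt = generate(task, playbook.items)   # act using the current context
    lessons = reflect(task, attempt)           # extract what worked / failed
    playbook.merge_delta(curate(lessons))      # keep only useful delta items
    return attempt
```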


"ServiceNow AI Research Releases DRBench a Realistic Enterprise Deep-Research Benchmark DRBench is a reproducible enterprise-grade benchmark and environment for evaluating deep research agents on open-ended tasks that require synthesizing evidence from both public web sources and private organizational data (documents emails chats cloud files). The initial release includes XX tasks across XX domains distributes relevant and distractor insights across multiple applications and scores outputs on Insight Recall Distractor Avoidance Factuality and Report Quality. A baseline DRBench Agent (DRBA)"  
[X Link](https://x.com/Marktechpost/status/1978003687059722627) [@Marktechpost](/creator/x/Marktechpost) 2025-10-14T07:43Z 9850 followers, 1377 engagements
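
As a toy illustration of two of the listed metrics, here are naive exact-match versions of Insight Recall and Distractor Avoidance; DRBench's actual scoring is presumably more sophisticated (e.g., semantic matching), so treat these definitions as assumptions:

```python
def insight_recall(report_insights, relevant_insights):
    """Toy recall: fraction of planted relevant insights that the agent's
    report recovered (exact membership used here for simplicity)."""
    found = sum(1 for ins in relevant_insights if ins in report_insights)
    return found / max(len(relevant_insights), 1)

def distractor_avoidance(report_insights, distractor_insights):
    """Toy avoidance: fraction of planted distractors NOT in the report."""
    hit = sum(1 for ins in distractor_insights if ins in report_insights)
    return 1 - hit / max(len(distractor_insights), 1)
```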


"Meet OpenTSLM: A Family of Time-Series Language Models (TSLMs) Revolutionizing Medical Time-Series Analysis A significant development is set to transform AI in healthcare. Researchers at Stanford University in collaboration with ETH Zurich and tech leaders including Google Research and Amazon have introduced OpenTSLM a novel family of Time-Series Language Models (TSLMs). This breakthrough addresses a critical limitation in current LLMs by enabling them to interpret and reason over complex continuous medical time-series data such as ECGs EEGs and wearable sensor streams a feat where even"  
[X Link](https://x.com/Marktechpost/status/1977146374572626063) [@Marktechpost](/creator/x/Marktechpost) 2025-10-11T22:56Z 9850 followers, XXX engagements


"Andrej Karpathy Releases nanochat: A Minimal End-to-End ChatGPT-Style Pipeline You Can Train in X Hours for $XXX Andrej Karpathys nanochat is a 8K-LOC dependency-light full-stack ChatGPT-style pipeline that you can run end-to-end on a single 8H100 node via producing a usable chat model and Web UI in X hours for roughly $XXX. The stack includes a Rust BPE tokenizer base pretraining on FineWeb-EDU mid-training (SmolTalk/MMLU aux/GSM8K with tool-use tags) SFT optional simplified GRPO on GSM8K a thin inference Engine (KV cache prefill/decode Python-interpreter tool) and an auto-generated with"  
[X Link](https://x.com/Marktechpost/status/1978155416162083035) [@Marktechpost](/creator/x/Marktechpost) 2025-10-14T17:46Z 9853 followers, 3744 engagements


"Google + Yale release C2S-Scale 27B (Gemma based model): converts scRNA-seq into cell sentences for LLM-native single-cell analysis. Dual-context virtual screen across 4000 compounds targets interferon-conditional antigen presentation. Model flags CK2 inhibition (silmitasertib) + low-dose IFN MHC-I boost; prediction validated in living cells. Open weights on Hugging Face enable replication and benchmarking. Full analysis: Paper: Model on HF: GitHub Repo: @googleaidevs @GoogleResearch @GoogleAI"  
[X Link](https://x.com/Marktechpost/status/1979091268484633031) [@Marktechpost](/creator/x/Marktechpost) 2025-10-17T07:45Z 9853 followers, 2348 engagements
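
A minimal sketch of the "cell sentence" conversion the post mentions, following the general cell2sentence recipe of ordering gene symbols by expression; the top-k cutoff and exact preprocessing are assumptions and may differ from C2S-Scale's pipeline:

```python
import numpy as np

def cell_to_sentence(expression, gene_names, top_k=100):
    """Represent a cell's scRNA-seq profile as gene symbols ordered by
    expression (highest first), so an LLM can consume the cell as text."""
    order = np.argsort(expression)[::-1][:top_k]   # highest-expressed first
    return " ".join(gene_names[i] for i in order)

# hypothetical toy usage
expr = np.array([0.0, 5.2, 1.1, 3.3])
genes = ["GENE_A", "GENE_B", "GENE_C", "GENE_D"]
print(cell_to_sentence(expr, genes, top_k=3))  # "GENE_B GENE_D GENE_C"
```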


"Are your LLM code benchmarks actually rejecting wrong-complexity solutions and interactive-protocol violations or are they passing under-specified unit tests A team of researchers from UCSD NYU University of Washington Princeton University Canyon Crest Academy OpenAI UC Berkeley MIT University of Waterloo and Sentient Labs introduce AutoCode a new AI framework that lets LLMs create and verify competitive programming problems mirroring the workflow of human problem setters. AutoCode reframes evaluation for code-reasoning models by treating problem setting (not only problem solving) as the"  
[X Link](https://x.com/Marktechpost/status/1979473678065905847) [@Marktechpost](/creator/x/Marktechpost) 2025-10-18T09:04Z 9853 followers, XXX engagements


"Weak-for-Strong (W4S): A Novel Reinforcement Learning Algorithm that Trains a weak Meta Agent to Design Agentic Workflows with Stronger LLMs TL;DR (1) W4S trains a 7B weak meta agent with RLAO to write Python workflows that harness stronger executors modeled as a multi turn MDP. (2) On HumanEval with GPT 4o mini as executor W4S reaches Pass@1 of XXXX with about XX minutes optimization and about XXX dollars total cost beating automated baselines under the same executor. (3) Across XX benchmarks W4S improves over the strongest baseline by XXX% to XXXX% while avoiding fine tuning of the strong"  
[X Link](https://x.com/Marktechpost/status/1979803547173794180) [@Marktechpost](/creator/x/Marktechpost) 2025-10-19T06:55Z 9853 followers, 1397 engagements
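
A minimal sketch of the multi-turn loop the TL;DR describes: a weak meta agent proposes a Python workflow, a stronger executor runs it, and validation feedback feeds the next turn. All components here are hypothetical stand-ins for the paper's:

```python
def w4s_optimization_loop(meta_agent, strong_executor, task, n_turns=8):
    """Sketch of the W4S multi-turn MDP: the meta agent emits workflow code,
    the stronger executor evaluates it, and (workflow, score, feedback)
    becomes the observation for the next turn."""
    history = []
    for _ in range(n_turns):
        workflow = meta_agent(task, history)               # write workflow code
        score, feedback = strong_executor(workflow, task)  # run on validation set
        history.append((workflow, score, feedback))        # state for next turn
    return max(history, key=lambda h: h[1])[0]             # best workflow found
```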
