@Yesterday_work_
"Bloomberg Terminal: $24000/year Professional research: $10000/year Gemini XXX Pro: Free Same quality analysis. 100x cheaper. The financial analysis hack. Heres an exact mega prompt we use for stock research and investments:"
X Link @Yesterday_work_ 2025-08-28T10:00Z 10.8K followers, 1.9M engagements
"Holy shit.Stanford just built a system that converts research papers into working AI agents. Its called Paper2Agent and it literally: Recreates the method in the paper Applies it to your own dataset Answers questions like the author This changes how we do science forever. Let me explain"
X Link @Yesterday_work_ 2025-10-09T10:36Z 10.8K followers, 298.3K engagements
"Stanford just pulled off something wild 🤯 They made models smarter without touching a single weight. The papers called Agentic Context Engineering (ACE) and it flips the whole fine-tuning playbook. Instead of retraining the model rewrites itself. It runs a feedback loop write reflect edit until its own prompt becomes a living system. Think of it as giving the LLM memory but without changing the model. Just evolving the context. Results are stupid good: +10.6% better than GPT-4 agents on AppWorld +8.6% on finance reasoning XXXX% lower cost and latency The trick Everyones been obsessed with"
X Link @Yesterday_work_ 2025-10-11T10:36Z 10.8K followers, 39.1K engagements
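To make that write → reflect → edit loop concrete, here is a minimal sketch, assuming a generic `llm()` completion helper; the prompts and function names are illustrative, not the ACE paper's actual code:

```python
# Minimal sketch of an ACE-style write -> reflect -> edit loop.
# The `llm` helper and all prompts are illustrative assumptions.

def llm(prompt: str) -> str:
    # Stand-in for any chat/completion API call.
    return "(model output)"

def ace_step(playbook: str, task: str) -> str:
    # Write: attempt the task with the current evolving context.
    attempt = llm(f"Context:\n{playbook}\n\nTask: {task}\nAnswer:")
    # Reflect: critique the attempt and extract reusable lessons.
    lessons = llm(
        f"Task: {task}\nAttempt: {attempt}\n"
        "List concrete lessons that would improve future attempts."
    )
    # Edit: fold the lessons into the playbook as a small delta;
    # the model weights are never touched.
    return llm(
        f"Playbook:\n{playbook}\n\nNew lessons:\n{lessons}\n"
        "Return the playbook updated with these lessons."
    )

playbook = ""  # the "living" prompt that evolves instead of the weights
for task in ["task 1", "task 2"]:
    playbook = ace_step(playbook, task)
```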
"😳 Meta just broke the entire paradigm of how we train AI agents. No expert demonstrations. No reward engineering. No expensive human feedback loops. Just pure learning from experience and it destroys everything we thought was necessary. They're calling it Early Experience and it's the first approach that makes agents smarter by letting them fuck around and find out. Here's what everyone's been doing wrong: Training AI agents meant either copying human experts (doesn't scale) or chasing carefully designed reward signals (expensive and breaks constantly). Both approaches have the same fatal"
X Link @Yesterday_work_ 2025-10-20T10:20Z 10.8K followers, 26.8K engagements
"🚨 This MIT paper just broke everything we thought we knew about AI reasoning. These researchers built something called Tensor Logic that turns logical reasoning into pure mathematics. Not symbolic manipulation. Not heuristic search. Just tensor algebra. Here's how it works: Logical propositions become vectors. Inference rules become tensor operations. Truth values propagate through continuous transformations. Translation Deduction and neural computation finally speak the same language. This isn't symbolic AI bolted onto deep learning. It's not deep learning pretending to do logic. It's a"
X Link @Yesterday_work_ 2025-10-20T13:31Z 10.8K followers, 232.5K engagements
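To ground the claim that inference rules become tensor operations, here is a toy example of our own (not from the paper): a Datalog-style rule evaluated as a tensor contraction over boolean relation matrices:

```python
import numpy as np

# Relation parent(x, y) as a boolean matrix: P[x, y] = 1 iff x is y's parent.
# Indices: 0 = ann, 1 = bob, 2 = cam.
P = np.zeros((3, 3))
P[0, 1] = 1  # ann is bob's parent
P[1, 2] = 1  # bob is cam's parent

# The rule grandparent(x, z) :- parent(x, y), parent(y, z) becomes a
# tensor contraction over the shared variable y.
G = np.minimum(np.einsum("xy,yz->xz", P, P), 1.0)  # clamp to truth values

assert G[0, 2] == 1.0  # deduced: ann is cam's grandparent
```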
"So how does it work Their agentic pipeline PWAgent turns a PDF into an interactive site in X stages: X. Paper decomposition extract structured assets X. MCP ingestion build relational metadata X. Iterative refinement fix layout hierarchy & UX Each stage uses LLMs with real-time visual inspection"
X Link @Yesterday_work_ 2025-10-22T10:52Z 10.8K followers, XXX engagements
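A skeleton of that three-stage flow might look like the sketch below; every function name is a hypothetical stand-in, since the paper's actual API is not shown here:

```python
# Illustrative skeleton of the three-stage PWAgent-style pipeline.
# All names are hypothetical stand-ins, not the paper's API.

def decompose_paper(pdf_path: str) -> dict:
    # Stage 1: extract structured assets (sections, figures, tables).
    return {"sections": [], "figures": [], "tables": []}

def ingest_metadata(assets: dict) -> dict:
    # Stage 2: build relational metadata linking the assets (the MCP step).
    return {"assets": assets, "links": []}

def refine_site(site: dict, rounds: int = 3) -> dict:
    # Stage 3: iterate with an LLM that visually inspects the rendered
    # page each round, patching layout, hierarchy, and UX.
    for _ in range(rounds):
        pass  # render -> screenshot -> critique -> patch HTML/CSS
    return site

site = refine_site(ingest_metadata(decompose_paper("paper.pdf")))
```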
"Then comes evaluation. Paper2Web introduces metrics that actually understand design: - Connectivity (link depth & coherence) - Completeness (section coverage) - Aesthetics + Interactivity (via MLLM-as-a-Judge) PaperQuiz can readers retain knowledge from screenshots This isnt about pretty pages its about understanding"
X Link @Yesterday_work_ 2025-10-22T10:52Z 10.8K followers, XXX engagements
"Fine-tuning updates weights. ACE updates understanding. Its cheaper interpretable and reversible. You can literally watch how your AI learns one context delta at a time. This is the start of agentic self-learning where prompts become the new model weights"
X Link @Yesterday_work_ 2025-10-11T10:36Z 10.8K followers, XXX engagements
"This is wild 🤯 You can now fine-tune Gemma X 270M with custom data and ship it in under an hour. No cloud GPUs needed. No OpenAI. No lock-in. AI sovereignty just became real. Here's the complete guide"
X Link @Yesterday_work_ 2025-10-16T10:47Z 10.8K followers, 5617 engagements
"Gemma is Google DeepMinds open sibling to Gemini the same architecture scaled down for accessibility. The wild stat: 250M downloads 85000 community variations This things becoming the Linux of LLMs"
X Link @Yesterday_work_ 2025-10-16T10:47Z 10.8K followers, XXX engagements
"Step 1: Fine-tune Use QLoRA (Quantized Low-Rank Adaptation). You only update a few adapter weights instead of retraining billions. That means you can fine-tune Gemma X 270M on a free Colab T4 GPU in minutes"
X Link @Yesterday_work_ 2025-10-16T10:47Z 10.8K followers, XX engagements
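A minimal QLoRA setup with Hugging Face transformers, peft, and bitsandbytes could look like this; the model id and hyperparameters are assumptions to verify against the official guide:

```python
# Hedged sketch of QLoRA fine-tuning with transformers + peft +
# bitsandbytes. The model id is an assumption; check the HF Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "google/gemma-3-270m"  # assumption: verify the exact hub name

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                    # the "Q" in QLoRA
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16, # fp16 compute suits a T4
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Low-rank adapters: only these small matrices are trained, which is
# why the whole job fits on a free Colab T4.
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# From here, train with your usual Trainer/SFTTrainer loop on custom data.
```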
"The problem with current AI agents is brutal. Imitation Learning: Agents only see expert demos. When they mess up they can't recover because they never learned what happens when you take wrong actions. RL: Needs verifiable rewards. Most real-world environments don't have them.Early Experience solves both"
X Link @Yesterday_work_ 2025-10-20T10:20Z 10.8K followers, XXX engagements
"Here's how Self-Reflection actually works: 1/ Agent sees an expert action at each state 2/ Agent proposes X alternative actions 3/ Environment shows what happens with each 4/ LLM generates reasoning: "Why was the expert choice better" 5/ Agent trains on this reasoning It's learning from contrast not just copying"
X Link @Yesterday_work_ 2025-10-20T10:20Z 10.8K followers, XXX engagements
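The five steps translate into a simple data-collection loop. The sketch below stubs out the environment and LLM calls; all names are assumptions, not the paper's code:

```python
# Illustrative sketch of the self-reflection recipe described above.

def propose_alternatives(state, k):
    return [f"alt_action_{i}" for i in range(k)]   # stand-in proposals

def env_step(state, action):
    return f"outcome of {action} in {state}"       # stand-in simulator

def llm_explain(state, expert_action, expert_outcome, alt_outcomes):
    return "rationale: the expert action reaches the goal; alternatives stall"

def collect_reflection_data(expert_trajectory, k=3):
    data = []
    for state, expert_action in expert_trajectory:
        alts = propose_alternatives(state, k)                    # steps 1-2
        alt_outcomes = [(a, env_step(state, a)) for a in alts]   # step 3
        expert_outcome = env_step(state, expert_action)
        rationale = llm_explain(state, expert_action,
                                expert_outcome, alt_outcomes)    # step 4
        data.append((state, rationale, expert_action))           # step 5
    return data  # fine-tune the agent on these contrastive examples

print(collect_reflection_data([("s0", "expert_action")]))
```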
"Here's how Self-Reflection actually works: 1/ Agent sees an expert action at each state 2/ Agent proposes X alternative actions 3/ Environment shows what happens with each 4/ LLM generates reasoning: "Why was the expert choice better" 5/ Agent trains on this reasoning It's learning from contrast not just copying"
X Link @Yesterday_work_ 2025-10-14T09:47Z 10.8K followers, 4957 engagements
"Meta just did the unthinkable. They figured out how to train AI agents without rewards human demos or supervision and it actually works better than both. Its called 'Early Experience' and it quietly kills the two biggest pain points in agent training: Human demonstrations that dont scale Reinforcement learning thats expensive and unstable Instead of copying experts or chasing reward signals agents now: - Take their own actions - Observe what happens - Learn directly from consequences no external rewards needed The numbers are wild: ✅ +18.4% on web navigation (WebShop) ✅ +15.0% on complex"
X Link @Yesterday_work_ 2025-10-14T09:47Z 10.8K followers, 183.3K engagements
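Read literally, the act / observe / learn recipe is a supervised loop over the agent's own rollouts. A hedged sketch, with next-state prediction as one plausible reading of "learn from consequences" and every name a stand-in, not Meta's code:

```python
# Hedged sketch: the agent branches off its own actions and trains on
# the observed outcomes, with no reward signal and no human demo.

def policy(state):
    return "alt_action"                 # stand-in for sampling the agent

def env_step(state, action):
    return f"state after {action}"      # stand-in for the real environment

def collect_early_experience(start_states, k=2):
    examples = []
    for state in start_states:
        for _ in range(k):
            action = policy(state)
            next_state = env_step(state, action)  # the consequence
            # Supervised target: given (state, action), predict outcome.
            examples.append((state, action, next_state))
    return examples

print(collect_early_experience(["s0"]))
```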
"🚨 MIT just built an AI that rewrites its own code to evolve Its called SEAL (Self-Adapting Language Model) and it might be the first real step toward self-improving intelligence. Instead of waiting for humans to fine-tune it SEAL: - reads new information - rewrites it in its own words - performs gradient updates on itself Literally it learns how to learn. The results are insane: ✅ +40% boost in factual recall ✅ Outperforms GPT-4.1 using data it generated itself ✅ Masters new tasks without human input LLMs that train themselves arent sci-fi anymore. We just entered the era of self-evolving"
X Link @Yesterday_work_ 2025-10-15T10:14Z 10.8K followers, 29.9K engagements
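One way to picture the read / rewrite / update cycle is the sketch below; `generate`, `finetune_on`, and the accept-if-better check are assumed stand-ins, not MIT's implementation:

```python
# Minimal sketch of a SEAL-style self-edit step: restate new information
# in the model's own words, take a gradient update on that restatement,
# and keep the result only if evaluation improves.

def generate(model, prompt):
    return "fact 1. fact 2. implication 3."  # stand-in for sampling

def finetune_on(model, text):
    return model  # stand-in for a lightweight gradient update

def seal_update(model, new_passage, eval_fn):
    # 1. Rewrite: produce a "self-edit" restating the passage.
    self_edit = generate(model, f"Rewrite as standalone facts:\n{new_passage}")
    # 2. Update: gradient steps on the model's own restatement.
    candidate = finetune_on(model, self_edit)
    # 3. Keep the edit only if downstream performance improves; this
    #    outer loop is what teaches the model how to learn.
    return candidate if eval_fn(candidate) >= eval_fn(model) else model

base_model = object()
model = seal_update(base_model, "Some new passage.", eval_fn=lambda m: 0.0)
```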
"Every paper in the dataset was labeled as static multimedia or interactive. The findings are wild: Only XXX% of academic websites are interactive. Over XX% are still just static text dumps. Meaning: the research web is still trapped in 2005. Paper2Web is the first system to quantify why and fix it"
X Link @Yesterday_work_ 2025-10-22T10:52Z 10.8K followers, XXX engagements
"PWAgent hits Pareto-front performance best quality at moderate cost. It beats GPT-4o Gemini and alphaXiv on interactivity usability and knowledge transfer. For the first time a model doesnt just summarize research. It publishes it interactively. The paper made PDFs obsolete"
X Link @Yesterday_work_ 2025-10-22T10:53Z 10.8K followers, XXX engagements