# @techNmak Tech with Mak
Tech with Mak posts on X most often about ai, llm, check, and model. They currently have [------] followers and [---] posts still getting attention, totaling [------] engagements in the last [--] hours.
### Engagements: [------] [#](/creator/twitter::1818381581897412608/interactions)

- [--] Week [---------] +360%
- [--] Month [---------] +32%
- [--] Months [---------] +547%
- [--] Year [----------] +1,432%
### Mentions: [--] [#](/creator/twitter::1818381581897412608/posts_active)

- [--] Week [--] +42%
- [--] Month [---] +22%
- [--] Months [---] +190%
- [--] Year [---] +762%
### Followers: [------] [#](/creator/twitter::1818381581897412608/followers)

- [--] Week [------] +6.20%
- [--] Month [------] +14%
- [--] Months [------] +153%
- [--] Year [------] +760%
### CreatorRank: [-------] [#](/creator/twitter::1818381581897412608/influencer_rank)

### Social Influence
**Social category influence**
[technology brands](/list/technology-brands) 18% [stocks](/list/stocks) 9% [finance](/list/finance) 5% [social networks](/list/social-networks) 2% [celebrities](/list/celebrities) 1%
**Social topic influence**
[ai](/topic/ai) #4019, [llm](/topic/llm) #55, [check](/topic/check) 8%, [model](/topic/model) #2582, [microsoft](/topic/microsoft) 5%, [core](/topic/core) 5%, [agents](/topic/agents) #1297, [claude code](/topic/claude-code) 4%, [just a](/topic/just-a) 4%, [how to](/topic/how-to) 4%
**Top accounts mentioned or mentioned by**
[@anayatkhan09](/creator/undefined) [@therealzongi](/creator/undefined) [@nicoeft](/creator/undefined) [@grok](/creator/undefined) [@ghumare64](/creator/undefined) [@ritikaagrawal08](/creator/undefined) [@jordan0cl](/creator/undefined) [@yarnavid5872](/creator/undefined) [@_gannon_](/creator/undefined) [@stellarmanatee](/creator/undefined) [@saksama_my](/creator/undefined) [@pageman](/creator/undefined) [@huggingface](/creator/undefined) [@chipro](/creator/undefined) [@agenticgirl](/creator/undefined) [@orchidsapp](/creator/undefined) [@squidcorpink](/creator/undefined) [@a2axdev](/creator/undefined) [@elliotarledge](/creator/undefined) [@wwb16277](/creator/undefined)
**Top assets mentioned**
[Microsoft Corp. (MSFT)](/topic/microsoft) [Alphabet Inc Class A (GOOGL)](/topic/$googl) [Spotify Technology (SPOT)](/topic/$spot)
### Top Social Posts
Top posts by engagements in the last [--] hours
"10x isn't a talent anymore. It's a config file for Claude Code"
[X Link](https://x.com/techNmak/status/2018712721538081170) 2026-02-03T15:46Z 30.2K followers, 251.4K engagements
"This isn't just a bookshelf its a $300k/year survival kit"
[X Link](https://x.com/techNmak/status/2020529005729087789) 2026-02-08T16:03Z 30.2K followers, 72.1K engagements
"- by Jay Alammar and Maarten Grootendorst"
[X Link](https://x.com/techNmak/status/2022749875319001396) 2026-02-14T19:08Z 30.2K followers, [---] engagements
"LLM Course Check here: https://huggingface.co/learn/llm-course/chapter1/1 https://huggingface.co/learn/llm-course/chapter1/1"
[X Link](https://x.com/techNmak/status/2000841907975675975) 2025-12-16T08:14Z 29.5K followers, [----] engagements
"MCP Course Check here: https://huggingface.co/learn/mcp-course/unit0/introduction https://huggingface.co/learn/mcp-course/unit0/introduction"
[X Link](https://x.com/techNmak/status/2000841910517485909) 2025-12-16T08:14Z 29.5K followers, [----] engagements
"AI Agents Course Check here: https://huggingface.co/learn/agents-course/unit0/introduction https://huggingface.co/learn/agents-course/unit0/introduction"
[X Link](https://x.com/techNmak/status/2000841912702615994) 2025-12-16T08:14Z 29.5K followers, [----] engagements
"Deep RL Course Check here: https://huggingface.co/learn/deep-rl-course/unit0/introduction https://huggingface.co/learn/deep-rl-course/unit0/introduction"
[X Link](https://x.com/techNmak/status/2000841915290542486) 2025-12-16T08:14Z 29.5K followers, [----] engagements
"It is dangerously easy to build a neural network today without actually understanding how it works. We live in an era of 'import torch'. You can train a model in three lines of code but the moment you need to debug a collapsing loss function or a vanishing gradient syntax won't save you. You need first principles. I recently went through this notebook collection by Simon J.D. Prince and it is the antidote to tutorial hell. Instead of just showing you the code it forces you to visualize the mechanics: 1./ The Math = It builds the intuition for shallow networks and regions before adding"
[X Link](https://x.com/techNmak/status/2016760212980715878) 2026-01-29T06:27Z 29.8K followers, 51.8K engagements
"Imagine trying to teach someone how to swim just by letting them read books about water. That is how we have been training AI on physics using text descriptions. To really learn you need to get in the water. "The Well" is that water. Polymathic AI has released a massive 15TB open-source library of physics simulations. It allows AI models to experience physical phenomena directly. Instead of reading about a supernova the model processes the actual data of the explosion. Instead of reading about aerodynamics it analyzes the fluid flow. This moves us from Generative AI (making things up) to"
[X Link](https://x.com/techNmak/status/2017594513620111590) 2026-01-31T13:43Z 29.8K followers, 91.5K engagements
"After what felt like forever the books Ive been waiting for finally arrived. Cant wait to dig in ππ"
[X Link](https://x.com/techNmak/status/2017903245268840621) 2026-02-01T10:09Z 29.8K followers, 41.1K engagements
"GitHub is the new Harvard. The most starred AI repos that 99% of people still haven't explored. Bookmark this now. π§΅"
[X Link](https://x.com/techNmak/status/2018059257770610810) 2026-02-01T20:29Z 29.9K followers, 56.3K engagements
"Microsoft : ML for Beginners https://github.com/microsoft/ML-For-Beginners https://github.com/microsoft/ML-For-Beginners"
[X Link](https://x.com/techNmak/status/2018059272152891676) 2026-02-01T20:29Z 29.3K followers, [----] engagements
"Stable Diffusion https://github.com/CompVis/stable-diffusion https://github.com/CompVis/stable-diffusion"
[X Link](https://x.com/techNmak/status/2018059275726434619) 2026-02-01T20:29Z 29.2K followers, [----] engagements
"Stop paying $$$ for LLM Bootcamps. π The official code for the O'Reilly book "Hands-On Large Language Models" is FREE on GitHub. It covers the entire lifecycle of an LLM application. Chapter 1: Introduction to Language Models Chapter 2: Tokens and Embeddings Chapter 3: Looking Inside Transformer LLMs Chapter 4: Text Classification Chapter 5: Text Clustering and Topic Modeling Chapter 6: Prompt Engineering Chapter 7: Advanced Text Generation Techniques and Tools Chapter 8: Semantic Search and Retrieval-Augmented Generation Chapter 9: Multimodal Large Language Models Chapter 10: Creating Text"
[X Link](https://x.com/techNmak/status/2018557986374144209) 2026-02-03T05:31Z 30K followers, 25K engagements
"1 Unsloth Probably the fastest way to fine-tune LLMs today. β
Up to [--] faster fine-tuning β
70% less VRAM usage β
Works on Gemma Qwen LLaMA Mistral & more β
Runs on consumer GPUs (even Colab/Kaggle 3GB VRAM π€―) https://github.com/unslothai/unsloth https://github.com/unslothai/unsloth"
[X Link](https://x.com/techNmak/status/2019122927816569287) 2026-02-04T18:56Z 30.1K followers, [----] engagements
"Agents are 10x harder to test than chatbots. And almost no one is doing it right. Think about the architecture: Chatbot: Input Output Done. Agent: Input Think Tool Think Tool Output. One hallucinated parameter or one bad tool choice in the middle of that chain = complete failure. If you are only checking the final output you're missing 90% of the failure surface. DeepEval has moved beyond "black-box" testing. It now evaluates the entire agent trajectory by analyzing the execution trace: 1./ Tool Correctness Did it pick the optimal tool for the sub-task 2./ Argument Correctness Did it pass"
[X Link](https://x.com/techNmak/status/2019408317790244874) 2026-02-05T13:50Z 30K followers, [----] engagements
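The trajectory-evaluation idea in the post above can be sketched with a toy checker that inspects every step of an execution trace, not just the final output. To be clear, this is not DeepEval's actual API; the `EXPECTED_TOOLS` registry and the trace format are invented for illustration:

```python
# Generic trajectory check: validate tool choice and arguments at each step.
EXPECTED_TOOLS = {"lookup_order": {"order_id"}, "send_email": {"to", "body"}}

def check_trajectory(trace):
    """trace: list of {'tool': name, 'args': {...}} steps. Returns per-step issues."""
    issues = []
    for i, step in enumerate(trace):
        tool, args = step["tool"], step["args"]
        if tool not in EXPECTED_TOOLS:
            issues.append((i, f"unknown tool {tool!r}"))  # tool correctness
        elif set(args) != EXPECTED_TOOLS[tool]:
            # argument correctness: wrong or missing parameters
            issues.append((i, f"bad args for {tool!r}: {sorted(args)}"))
    return issues

trace = [
    {"tool": "lookup_order", "args": {"order_id": "A1"}},
    {"tool": "send_email", "args": {"to": "x@y.com"}},  # missing 'body'
]
print(check_trajectory(trace))  # flags only step 1
```

A final-output check would miss the bad `send_email` call entirely; the trace check catches it mid-chain.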
"Chatbot: You ask. It answers. RAG: You ask. It retrieves. It answers. RPA: You trigger. It executes a script. Agent: You give a goal. It figures out the rest. That's the difference. An agent has: Memory (learns from interactions) Planning (breaks down complex goals) Tool selection (chooses what to use not scripted) Feedback loops (adjusts based on results) Multi-agent coordination (delegates to specialists) Most "agents" in production are RAG pipelines with a for-loop. Real agentic AI has an orchestrator that thinks deciding which tools which sub-agents which approach and when to change"
[X Link](https://x.com/techNmak/status/2019538266580742601) 2026-02-05T22:26Z 30K followers, 56.7K engagements
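The loop the post describes (goal in, think/act/observe cycle, answer out) can be sketched in a few lines. Everything here is a hypothetical stand-in, not any real framework: `fake_llm` plays the role of the model, and `calculator` is the only registered tool.

```python
def calculator(expr: str) -> str:
    # Toy tool: evaluate an arithmetic expression with builtins disabled.
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_llm(goal, history):
    """Stand-in for a model call: pick the next action from goal + memory."""
    if not history:
        return {"action": "calculator", "input": "6 * 7"}
    return {"action": "finish", "input": history[-1][1]}

def run_agent(goal, max_steps=5):
    history = []                               # memory of (tool, observation)
    for _ in range(max_steps):                 # feedback loop
        decision = fake_llm(goal, history)     # planning / tool selection
        if decision["action"] == "finish":
            return decision["input"]
        observation = TOOLS[decision["action"]](decision["input"])
        history.append((decision["action"], observation))
    return None

print(run_agent("what is 6 times 7?"))  # → 42
```

The contrast with a chatbot is the loop itself: the model is called repeatedly, sees its own tool observations, and decides when it is done.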
"A founder messaged me last month. "Our agent costs are 4x what we budgeted." I asked him to trace a single workflow. [--] steps. The optimal path was [--]. The agent wasn't failing. It was succeeding expensively - calling the same APIs repeatedly looping through failed approaches backtracking through decisions it had already made. Task completion rate: 94% Efficiency rate: 22% DeepEval's Step Efficiency metric would have caught this before production. 1./ @ observe Decorator Wrap your agent workflow. Captures every LLM call tool invocation decision point. 2./ Trace Analysis Compares actual"
[X Link](https://x.com/techNmak/status/2019763638220026254) 2026-02-06T13:22Z 29.5K followers, [----] engagements
"I still cant believe this is free. Most bootcamps are charging $3000 to teach you outdated material. Meanwhile @huggingface is giving away the state-of-the-art curriculum for $0. This guide has you covered: 1./ NLP Teach machines to understand language. 2./ MCP Master orchestration for tool-using agents. 3./ LLM Unlock the power of GPT and diffusion models. 4./ RL Build decision-making agents for robotics and games. 5./ CV Create AI that sees and understands images. 6./ Audio Let machines listen and speak. Pick one. Build. Iterate. Expand. https://twitter.com/i/web/status/2019833223543681027"
[X Link](https://x.com/techNmak/status/2019833223543681027) 2026-02-06T17:59Z 30.1K followers, 27.3K engagements
"LLM Course Check here: https://huggingface.co/learn/llm-course/chapter1/1 https://huggingface.co/learn/llm-course/chapter1/1"
[X Link](https://x.com/techNmak/status/2019833226978865528) 2026-02-06T17:59Z 29.5K followers, [----] engagements
"MCP Course Check here: https://huggingface.co/learn/mcp-course/unit0/introduction https://huggingface.co/learn/mcp-course/unit0/introduction"
[X Link](https://x.com/techNmak/status/2019833229323431981) 2026-02-06T17:59Z 29.5K followers, [----] engagements
"AI Agents Course Check here: https://huggingface.co/learn/agents-course/unit0/introduction https://huggingface.co/learn/agents-course/unit0/introduction"
[X Link](https://x.com/techNmak/status/2019833231462498691) 2026-02-06T17:59Z 29.5K followers, [---] engagements
"Deep RL Course Check here: https://huggingface.co/learn/deep-rl-course/unit0/introduction https://huggingface.co/learn/deep-rl-course/unit0/introduction"
[X Link](https://x.com/techNmak/status/2019833233685487915) 2026-02-06T17:59Z 29.5K followers, [---] engagements
"Robotics Course Check here: https://huggingface.co/learn/robotics-course/unit0/1 https://huggingface.co/learn/robotics-course/unit0/1"
[X Link](https://x.com/techNmak/status/2019833235933655469) 2026-02-06T17:59Z 29.5K followers, [---] engagements
"smol Course Check here: https://huggingface.co/learn/smol-course/unit0/1 https://huggingface.co/learn/smol-course/unit0/1"
[X Link](https://x.com/techNmak/status/2019833238076920161) 2026-02-06T17:59Z 29.5K followers, [---] engagements
"Computer Vision Course Check here: https://huggingface.co/learn/computer-vision-course/unit0/welcome/welcome https://huggingface.co/learn/computer-vision-course/unit0/welcome/welcome"
[X Link](https://x.com/techNmak/status/2019833240299925983) 2026-02-06T17:59Z 29.5K followers, [---] engagements
"Learn AI for free with these repos. [--] GitHub repos every AI agent developer needs The ecosystem is overwhelming. Tools techniques and best practices are scattered across the web. Here's your shortcut. This list covers everything: 1./ Core Skills - AI-ML-Roadmap-from-scratch - Context Engineering A to https://t.co/XL8SK2l7To [--] GitHub repos every AI agent developer needs The ecosystem is overwhelming. Tools techniques and best practices are scattered across the web. Here's your shortcut. This list covers everything: 1./ Core Skills - AI-ML-Roadmap-from-scratch - Context Engineering A to"
[X Link](https://x.com/techNmak/status/2019840739639980423) 2026-02-06T18:28Z 29.5K followers, [----] engagements
"If you care about open-source AI youll want this bookmarked. One of the smarter AI tools Ive seen lately π Chip (@chipro) built a tracker that: monitors 14000+ open-source AI repos from 145000+ contributors scans daily using [---] AI keywords surfaces projects gaining traction auto-categorizes everything shows where AI builders are globally https://twitter.com/i/web/status/2020045074323894462 https://twitter.com/i/web/status/2020045074323894462"
[X Link](https://x.com/techNmak/status/2020045074323894462) 2026-02-07T08:00Z 29.8K followers, [----] engagements
"My cheat sheet on AI Agents ------ Everyone's building AI agents. Few understand how they actually work. Here's the breakdown - from architecture to deployment: What is an AI Agent An AI Agent isn't just a chatbot. It's a system that: Understands your goal Plans the steps to achieve it Takes actions using external tools Delivers results Core components: Language Model The "thinking brain" Tools The "hands" that interact with the real world Orchestration Layer The "decision-maker" that coordinates everything Language Models: Know the Difference LLMs = GPT-4o Gemini [---] DeepSeek-V3 = Complex"
[X Link](https://x.com/techNmak/status/2020081022696825264) 2026-02-07T10:23Z 30.2K followers, 20.9K engagements
"Close Claude Code. Open Claude Code. It remembers everything. Claude-Mem. Persistent memory across sessions. Automatically captures tool observations Generates semantic summaries Context appears on restart No manual saves. No /memory commands. Just installed it. Restarted Claude Code. Previous session context was already there. 100% opensource. GitHub link in comments. https://twitter.com/i/web/status/2020415690784567746 https://twitter.com/i/web/status/2020415690784567746"
[X Link](https://x.com/techNmak/status/2020415690784567746) 2026-02-08T08:33Z 29.5K followers, 13.7K engagements
"You don't need a $10000 bootcamp to learn AI. Microsoft just released a complete 12-week AI curriculum. For free. On GitHub. [--] lessons. Hands-on labs. Real projects. β‘The Reality of AI Education: Bootcamps charge $10000-$20000 for AI courses. Online platforms charge $500-$1000 per course. Universities charge $50000+ for AI degrees. Microsoft is giving it away for free. β‘What You Get: A structured 12-week curriculum designed by Microsoft Cloud Advocates: AI fundamentals Machine learning basics Neural networks and deep learning Computer vision Natural language processing TensorFlow and PyTorch"
[X Link](https://x.com/techNmak/status/2021041433637945452) 2026-02-10T02:00Z 30.2K followers, [----] engagements
"A curated list of ML System Design case studies I recently came across a repo that compiles 300+ real-world battle-tested ML system case studies from [--] companies including Spotify Netflix Microsoft and more. And heres the key insight: Some of the tech is outdated. That doesnt matter. What does matter is how decisions were made. These case studies teach you: How teams identify bottlenecks before they explode How failures are detected early (and fixed without chaos) How business requirements get translated into actual system design This is the difference between: π "I trained a model" and π"
[X Link](https://x.com/techNmak/status/2021101248175653275) 2026-02-10T05:57Z 29.9K followers, [----] engagements
"@orchidsapp This is huge. One place to build and deploy any app across stacks with freedom to use your own models and keys. Exactly what the AI builder ecosystem needs. Congrats on Orchids [---] π"
[X Link](https://x.com/techNmak/status/2021253704200953949) 2026-02-10T16:03Z 29.9K followers, [---] engagements
"Been thinking about why AI app builders feel so limiting. It comes down to two architectural choices: 1/ App and stack constraints. Most builders are optimized for one output - websites. Maybe mobile. Try building a Chrome extension or a Slack bot. It just doesn't work. 2/ Cost. Every AI app builder routes your AI calls through their system. They add markup with no model picker and no transparency. Feels great that Orchids solves both. You can build anything and use your own subscriptions/API key. Genuinely excited to try this out. Introducing Orchids [---] - the first AI app builder to build"
[X Link](https://x.com/techNmak/status/2021254872373526928) 2026-02-10T16:08Z 29.9K followers, [----] engagements
"@ghumare64 Yeah thats what excites me too. Feels like it opens the door to more thoughtful polished UX instead of one-size-fits-all layouts. π"
[X Link](https://x.com/techNmak/status/2021257287013367844) 2026-02-10T16:17Z 29.9K followers, [--] engagements
"The real reason devs love Claude Code. Reflecting on what engineers love about Claude Code one thing that jumps out is its customizability: hooks plugins LSPs MCPs skills effort custom agents status lines output styles etc. Every engineer uses their tools differently. We built Claude Code from the ground up Reflecting on what engineers love about Claude Code one thing that jumps out is its customizability: hooks plugins LSPs MCPs skills effort custom agents status lines output styles etc. Every engineer uses their tools differently. We built Claude Code from the ground up"
[X Link](https://x.com/techNmak/status/2021961937496723904) 2026-02-12T14:57Z 29.9K followers, [---] engagements
"OpenClaw is bloated. Nanobot argues it doesnt have to be. Nanobot is a personal AI assistant that claims to fit the core agent loop into 4k lines of code mostly Python with a thin TypeScript bridge where it makes sense. What stood out to me skimming the repo: the agent logic isnt buried under layers of abstraction startup is basically instant because theres very little there the architecture is modular almost micro-kernel-ish instead of one big framework The point isnt more features. Its that you can actually read the code reason about it and change it in an afternoon. If youre researching"
[X Link](https://x.com/techNmak/status/2018410563358130382) 2026-02-02T19:45Z 30.2K followers, 43.4K engagements
"There are [--] career paths in AI right now: The API Caller: Knows how to use an API. (Low leverage first to be automated $150k salary). The Architect: Knows how to build the API. (High leverage builds the tools $500k+ salary). Bootcamps train you to be an API Caller. This free 17-video Stanford course trains you to be an Architect. It's CS336: Language Modeling from Scratch. The syllabus is pure signal no noise: β‘ Data Collection & Curation (Lec 13-14) β‘ Building Transformers & MoE (Lec 3-4) β‘ Making it fast (Lec 5-8: GPUs Kernels Parallelism) β‘ Making it work (Lec 10: Inference) β‘ Making it"
[X Link](https://x.com/techNmak/status/1990802817305477448) 2025-11-18T15:22Z 30.2K followers, 655.3K engagements
"If youre building AI agents in [----] start here. https://t.co/PqGEcSahx8 https://t.co/PqGEcSahx8"
[X Link](https://x.com/techNmak/status/2009640095293550619) 2026-01-09T14:55Z 30.2K followers, 720.6K engagements
"Let's learn - LLM Generation Parameters These are the primary controls used to influence the output of a Large Language Model. 1./ Temperature Controls the randomness of token sampling by scaling the models probability distribution for the next token. Low Temperature (e.g. 0.2): Makes the output more deterministic by strongly favoring higher-probability tokens. This is ideal for factual tasks such as summarization code generation and direct Q&A. High Temperature (e.g. 1.0): Flattens the probability distribution increasing the chance of selecting lower-probability tokens. This leads to more"
[X Link](https://x.com/techNmak/status/2019014710658818482) 2026-02-04T11:46Z 30.2K followers, [----] engagements
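The temperature behavior described above can be illustrated with a minimal sampler in plain Python (no real model involved, just a hand-picked logit vector):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random.Random(0)):
    """Scale logits by 1/temperature, softmax, then sample one index.

    Low temperature sharpens the distribution (more deterministic);
    high temperature flattens it (more diverse output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                             # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r, acc = rng.random(), 0.0                  # inverse-CDF sampling
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i, probs
    return len(probs) - 1, probs

logits = [2.0, 1.0, 0.1]
_, cold = sample_with_temperature(logits, temperature=0.2)
_, hot = sample_with_temperature(logits, temperature=2.0)
# cold[0] is near 1.0 (top token dominates); hot spreads mass across tokens
```

At temperature 0.2 the top token here gets over 99% of the probability mass; at 2.0 it drops to roughly half, which is exactly the deterministic-versus-diverse tradeoff the post describes.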
"Fine-tuning LLMs doesnt have to be slow expensive or GPU-hungry anymore. Open-source tooling has leveled up HARD. You can now fine-tune powerful LLMs without enterprise hardware. Here are [--] must-know libraries making LLM fine-tuning faster & cheaper π§΅"
[X Link](https://x.com/techNmak/status/2019122925119631472) 2026-02-04T18:56Z 30.2K followers, 36.2K engagements
"This GitHub README is better than most $999 AI courses. Chip Huyen's AI Engineering companion repo. Free chapter summaries. Study notes. Resources. Case studies. What you get for $0: π [--] chapter summaries foundation models training evaluation prompting rag agents finetuning datasets inference architecture π study notes detailed notes for each chapter π ai engineering resources curated tools and frameworks π¬ prompt examples real prompts from production systems π case studies how companies build ai π§ ml theory fundamentals the math and concepts you need β misalignment ai understanding ai"
[X Link](https://x.com/techNmak/status/2019670439153528917) 2026-02-06T07:12Z 30.2K followers, 19.3K engagements
"Here's the paper: https://arxiv.org/pdf/2504.17033 https://arxiv.org/pdf/2504.17033"
[X Link](https://x.com/techNmak/status/2021623972115468434) 2026-02-11T16:34Z 30.2K followers, 17.4K engagements
"Follow @technmak for more such posts/insights. https://x.com/techNmak https://x.com/techNmak"
[X Link](https://x.com/techNmak/status/2022180414497206507) 2026-02-13T05:25Z 30.2K followers, [----] engagements
"by Chip Huyen"
[X Link](https://x.com/techNmak/status/2022749859007320471) 2026-02-14T19:08Z 30.2K followers, [---] engagements
"by Louis-Franois Bouchard and Louie Peters"
[X Link](https://x.com/techNmak/status/2022749867173695759) 2026-02-14T19:08Z 30.2K followers, [--] engagements
"What is RAG What is Agentic RAG Retrieval-Augmented Generation (RAG) ---------------------------------------------- Retrieval-Augmented Generation (RAG) is an architecture that enhances a language models outputs by grounding them in external knowledge sources at inference time. Instead of relying solely on parameters learned during training RAG systems dynamically retrieve relevant information and inject it into the models context before generation. = Canonical RAG workflow A user submits a query. The query is embedded and matched against a pre-indexed corpus (commonly stored in a vector"
[X Link](https://x.com/techNmak/status/2020734843597017384) 2026-02-09T05:41Z 30.2K followers, 32.1K engagements
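The canonical workflow above can be sketched end to end with a toy corpus and a bag-of-words stand-in for real embeddings. This is illustrative only; production RAG would use a trained embedding model and a vector store:

```python
import math
from collections import Counter

CORPUS = [
    "RAG retrieves documents and injects them into the model's context.",
    "Agentic RAG lets the model decide when and what to retrieve.",
    "Fine-tuning updates model weights on new data.",
]

def embed(text):
    return Counter(text.lower().split())  # toy bag-of-words "embedding"

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank the pre-indexed corpus by similarity to the embedded query.
    q = embed(query)
    ranked = sorted(CORPUS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    # Inject retrieved text into the context before generation.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using the context."

print(build_prompt("RAG documents context"))
```

The agentic variant the post contrasts this with would put `retrieve` behind a tool call, letting the model decide whether and when to invoke it instead of retrieving unconditionally.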
"Google just killed the document extraction industry. LangExtract: Open-source. Free. Better than $50K enterprise tools. What it does: Extracts structured data from unstructured text Maps EVERY entity to its exact source location Handles 100+ page documents with high recall Generates interactive HTML for verification Works with Gemini Ollama local models What it replaces: Regex pattern matching Custom NER pipelines Expensive extraction APIs Manual data entry Define your task with a few examples. Point it at any document. Get structured verifiable results. No fine-tuning. No complex setup."
[X Link](https://x.com/techNmak/status/2020867240753819983) 2026-02-09T14:27Z 30.2K followers, 735.7K engagements
"Here's the GitHub: https://github.com/google/langextract https://github.com/google/langextract"
[X Link](https://x.com/techNmak/status/2020867243857326124) 2026-02-09T14:27Z 30.2K followers, 40.6K engagements
"For [--] years computer scientists believed Dijkstra's algorithm was optimal for sparse graphs. The logic seemed airtight: Dijkstra sorts vertices by distance. Sorting has a lower bound of O(n log n). Therefore shortest paths can't be faster. [--] researchers proved the assumption wrong. The trick: combine Dijkstra's priority queue with Bellman-Ford's dynamic programming. Divide and conquer on vertex sets. Shrink the frontier. Result: O(m log(2/3) n) First improvement for directed graphs since Fibonacci heap in [----]. Tsinghua. Stanford. Max Planck. [--] pages"
[X Link](https://x.com/techNmak/status/2021623968546115959) 2026-02-11T16:34Z 30.2K followers, 236K engagements
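For reference, the classical baseline the post compares against is textbook Dijkstra with a binary heap, roughly O((n + m) log n):

```python
import heapq

def dijkstra(graph, source):
    """Classic Dijkstra: graph maps node -> list of (neighbor, weight),
    weights non-negative. Returns shortest distances from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already settled with a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```

The priority queue here is exactly the "sorting" step the new result works around by mixing in Bellman-Ford-style relaxation over shrinking vertex sets.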
"@SquidCorp_ink To be fair VS Code is great at what it was designed for. I just dont think AI fits neatly into the "sidebar assistant" model. Running agents in parallel headless or inside CI feels way more natural once you try it"
[X Link](https://x.com/techNmak/status/2022349631695503810) 2026-02-13T16:38Z 30.2K followers, [---] engagements
"@RitikaAgrawal08 Totally. The UI is nice but the isolated state is the real win. It maps cleanly to how we already think about branches/tasks. One idea one agent. No weird cross-talk. Thats what makes parallel work actually viable"
[X Link](https://x.com/techNmak/status/2022349928618758481) 2026-02-13T16:39Z 30.2K followers, [---] engagements
"' by Paul Iusztin and Maxime Labonne"
[X Link](https://x.com/techNmak/status/2022749863063261233) 2026-02-14T19:08Z 30.2K followers, [---] engagements
"Follow @techNmak for more such insights :) Dont forget to enjoy your weekend. Slow down and take some rest. https://x.com/techNmak https://x.com/techNmak"
[X Link](https://x.com/techNmak/status/2022749878812582119) 2026-02-14T19:08Z 30.2K followers, [---] engagements
"These are literally the kind of LLM interview questions most candidates wish they had seen earlier. A curated list of LLM interview questions - shared by Hao Hoang Want this doc Follow @techNmak and comment LLM - Ill send it over"
[X Link](https://x.com/techNmak/status/2002057926341767193) 2025-12-19T16:46Z 30.2K followers, 420.7K engagements
"The foundation of data science. Bayes' Theorem Spam filters. Medical diagnosis. Any time you update probability with new info. OLS Cost Linear regression. Predicting house prices. Minimizing how wrong you are. Entropy Decision trees. Information gain. Measuring how mixed your data is. Normal Distribution A/B testing. Confidence intervals. Assumes most things cluster around the mean. F1-Score Imbalanced datasets. Fraud detection. When accuracy lies to you. Sigmoid Logistic regression. Neural network outputs. Turning anything into a probability. Know the formula. Know when to use it."
[X Link](https://x.com/techNmak/status/2021704600668320120) 2026-02-11T21:55Z 30.2K followers, [----] engagements
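Three of the formulas listed above, as minimal Python (assuming log base 2 for entropy and binary classification counts for F1):

```python
import math

def sigmoid(x):
    # Squashes any real number into (0, 1): logistic regression, NN outputs.
    return 1.0 / (1.0 + math.exp(-x))

def entropy(probs):
    # Shannon entropy in bits: 0 for a pure split, 1 for a 50/50 binary split.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall: the metric for imbalanced data.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(sigmoid(0))             # 0.5
print(entropy([0.5, 0.5]))    # 1.0
print(f1_score(tp=8, fp=2, fn=4))
```

The F1 example shows why accuracy can lie: a classifier with 80% precision and 67% recall scores about 0.73, regardless of how many easy negatives pad the accuracy number.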
"Elliot's book - CUDA for Deep Learning https://www.manning.com/books/cuda-for-deep-learning https://www.manning.com/books/cuda-for-deep-learning"
[X Link](https://x.com/techNmak/status/2022180412408443009) 2026-02-13T05:25Z 30.2K followers, [----] engagements
"Found a UI library that made me mass mass mass mass mass angry. Angry that this isn't how everything works. Oat: 6KB CSS + 2.2KB JS Zero dependencies No framework required No build step Semantic HTML only You write button. It looks good. You write dialog. It looks good. You write input. It looks good. No className="px-4 py-2 rounded-md bg-blue-500" No Button variant="primary" size="md" Just HTML. Accessible. Keyboard navigable. Dark mode included. Built by Kailash Nadh (CTO @ Zerodha) https://twitter.com/i/web/status/2022287404128973056 https://twitter.com/i/web/status/2022287404128973056"
[X Link](https://x.com/techNmak/status/2022287404128973056) 2026-02-13T12:31Z 30.2K followers, 295.5K engagements
"@nicoeft @A2AxDev Totally fair. I dont think CLI replaces extensions for everyone. A lot of devs live in VS Code all day and want AI right there. My point is more that the process model unlocks workflows extensions cant especially around parallelism and CI. Different tools for different habits"
[X Link](https://x.com/techNmak/status/2022350256072200284) 2026-02-13T16:40Z 30.2K followers, [---] engagements
"5 practical AI books worth reading β― by Chip Huyen β« Comprehensive guide for building AI applications with foundation models covering system design deployment and scaling. β― ' by Paul Iusztin and Maxime Labonne β« Practical manual for LLM development covering data engineering fine-tuning RAG pipelines and production deployment with AWS implementations and downloadable code. β― by Louis-Franois Bouchard and Louie Peters β« Focuses on deploying LLMs in production environments using prompt engineering fine-tuning and RAG techniques for scalable AI applications. β― ( ) by Sebastian Raschka PhD"
[X Link](https://x.com/techNmak/status/2022749855047897541) 2026-02-14T19:08Z 30.2K followers, 15K engagements
"$10 hardware. 10MB RAM. [--] second boot. That's a full AI assistant. Just found PicoClaw. While everyone's running AI on $599 Mac Minis with 1GB+ RAM these folks are running it on a $9.90 LicheeRV Nano. The comparison: OpenClaw: TypeScript 1GB RAM 500s startup $599 NanoBot: Python 100MB RAM 30s startup $50 PicoClaw: Go 10MB RAM 1s startup $10 99% less memory. 98% cheaper. 400x faster boot. Single binary. Runs on RISC-V ARM x86. The wildest part 95% of the code was written by an AI agent. They used AI to bootstrap the entire Go migration. GitHub in comments."
[X Link](https://x.com/techNmak/status/2022957086922186899) 2026-02-15T08:52Z 30.2K followers, [----] engagements
""Programmers will automate themselves out of existence." That's what they said. And programmers laughed so hard they couldn't respond. Two years into the AI revolution here's what actually happened: What the doomers predicted: 1./ AI writes all the code 2./ Developers become obsolete 3./ Only managers and marketers survive 4./ Programming becomes a dead career What actually happened: 1./ AI writes code that needs debugging by developers 2./ Developers spend more time reviewing AI output than writing from scratch 3./ The skill gap between good and bad developers got WIDER not narrower 4./"
[X Link](https://x.com/techNmak/status/2016163707580207586) 2026-01-27T14:57Z 30.2K followers, 127.4K engagements
"Web Scraping is dead. Web Agenting is here. Writing selectors (div .class span) breaks every time a site updates. Building custom bots for every new target is a waste of engineering hours. TinyFish turns the entire real-time web into a single API. Input: Natural Language ("Find availability for X"). Target: [--] or [---] URLs. Output: Structured JSON. This isn't a simulation. It visits the Real-Time Web. 1./ One API Many Sites - Same contract whether you hit [--] URL or [--]. You focus on the Goal (Business Logic). TinyFish handles the How (Navigation Clicks Inputs). 2./ Real Automation - It doesn't"
[X Link](https://x.com/techNmak/status/2017281169004609637) 2026-01-30T16:58Z 30.2K followers, 122.2K engagements
"Unpopular opinion: We're celebrating vibe coding while ignoring its inevitable collapse. Saw this Reddit post: 30+ Python files Code is "super disorganized" Duplicate loops everywhere Claude keeps forgetting basic imports Asking to fix bugs "breaks everything" The developer's confession: "I have [--] knowledge about Python." This isn't an AI limitation problem. This is a technical debt problem created by someone who doesn't understand what they built. AI coding tools are incredible. I use them daily. But they're accelerators not replacements. They accelerate good developers into great ones. They"
[X Link](https://x.com/techNmak/status/2020125645951681001) 2026-02-07T13:21Z 30.2K followers, [----] engagements
"You're wasting hours every week. Re-teaching your AI things it already learned. Rebuilding skills for each new tool. Working in isolation from your team's AI knowledge. SkillKit fixes all three: 1./ Memory AI remembers across sessions 2./ Primer Skills work across 30+ agents 3./ Mesh Team shares learnings automatically One CLI. Universal platform. 100% Open Source Stop re-doing work your AI already did. π SkillKit just launched on Product Hunt. If you've ever been frustrated re-teaching your AI go upvote this. Links in comments"
[X Link](https://x.com/techNmak/status/2020227586128015751) 2026-02-07T20:06Z 30.2K followers, 12.1K engagements
"I have one interview question I use to find real ML engineers: "Explain Backpropagation. No not the concept. The math. From scratch." [--] out of [--] candidates can't. They can use a library. They can't build one. The 1/10 who can They've all built the foundation. This 26-video playlist is that foundation. For free. While everyone else is chasing the newest "AI agent" or prompt hack they're building on a foundation of sand. This free course from Professor Bryce is the foundation. It's a full university-level curriculum on the math that actually makes AI work. The syllabus is pure signal no noise:"
[X Link](https://x.com/techNmak/status/2020316657575477376) 2026-02-08T02:00Z 30.2K followers, 67K engagements
"These are literally the kind of LLM interview questions most candidates wish they had seen earlier. A curated list of [--] LLM interview questions - shared by Hao Hoang. What's covered: Fundamentals: Tokenization and why it matters Attention mechanisms in transformers Context windows and their tradeoffs Embeddings and initialization Positional encodings Fine-tuning & Efficiency: LoRA vs QLoRA PEFT to prevent catastrophic forgetting Model distillation Adaptive Softmax for large vocabularies Generation & Decoding: Beam search vs greedy decoding Temperature top-k top-p sampling Autoregressive vs"
[X Link](https://x.com/techNmak/status/2021459379564970353) 2026-02-11T05:40Z 30.2K followers, 82.9K engagements
"Finally a lightweight VLM that beats the giants at OCR. (1.7B parameters SOTA on OmniDocBench) dots. ocr is a new multilingual document parser that proves you don't need massive models for perfect document understanding. Current SOTA models are often massive (72B+) or require expensive API calls (GPT-4o). dots. ocr changes the game. It is built on a compact 1.7B LLM foundation but outperforms much larger models like Qwen2-VL-72B and GPT-4o on key benchmarks. What makes it special Unified Architecture: Handles text tables formulas and layout detection in one pass. Top-Tier Precision: Achieves"
[X Link](https://x.com/techNmak/status/2021852034526515596) 2026-02-12T07:41Z 30.2K followers, 16.7K engagements
"If you are preparing for your System Design Interview these resources will be very helpful for you. π Read it. Bookmark it"
[X Link](https://x.com/techNmak/status/2022028785701597699) 2026-02-12T19:23Z 30.2K followers, 42.1K engagements
"This free CUDA course is worth more than most CS degrees. [--] hours that separate library users from GPU engineers. I watched senior devs struggle with concepts taught in hour [--]. What makes it different: No hand-waving. No "just use this library." You build an MLP trainer FOUR times: PyTorch (the easy way) NumPy (getting harder) C (now we're cooking) CUDA (chef's kiss) Same model. Same dataset. Four implementations. By the end you understand WHY PyTorch is fast. The curriculum nobody else teaches: β‘ GPU architecture (not just "it's parallel") β‘ Writing kernels that don't suck β‘ Profiling at"
[X Link](https://x.com/techNmak/status/2022180404627640480) 2026-02-13T05:25Z 30.2K followers, 62.1K engagements
"What Oat has: Accordion Alert Badge Button Card Dialog Dropdown Form elements Meter Progress Spinner Skeleton Sidebar Switch Table Tabs Tooltip Toast Grid What Oat doesn't have: Dependencies Build step Framework requirement npm install 8KB total. That's the whole library. https://twitter.com/i/web/status/2022287407685746701 https://twitter.com/i/web/status/2022287407685746701"
[X Link](https://x.com/techNmak/status/2022287407685746701) 2026-02-13T12:31Z 30.2K followers, 33.9K engagements
"Check here: https://github.com/knadh/oat https://github.com/knadh/oat"
[X Link](https://x.com/techNmak/status/2022287410126848333) 2026-02-13T12:31Z 30.2K followers, 31.8K engagements
"Unpopular opinion: VS Code is holding back AI coding. Ive been spinning up multiple Cline instances across tmux panes and terminal tabs. Each agent has its own isolated state and can run a different task branch or idea in parallel. They just keep going while I focus on something else no context collisions no babysitting a single chat thread. You can pipe input chain output or run them headless in CI/CD. It fits into the workflow I already use in the terminal. Getting started literally took me [--] seconds: npm install -g cline AI is finally just another process in my workflow instead of a panel"
[X Link](https://x.com/techNmak/status/2022341951224484133) 2026-02-13T16:07Z 30.2K followers, 42.7K engagements
"Bookmark this. You'll need it when your AI project grows past one file. config/ YAML configs prompt templates logging src/llm/ Separate clients for Claude GPT etc. src/prompt_engineering/ Templates few-shot chaining src/utils/ Cache rate limiting token counting data/ Prompts cache outputs embeddings examples/ basic_completion.py chat_session.py notebooks/ Testing analysis experimentation Modular. Scalable. Maintainable. https://twitter.com/i/web/status/2022426803298800077 https://twitter.com/i/web/status/2022426803298800077"
[X Link](https://x.com/techNmak/status/2022426803298800077) 2026-02-13T21:44Z 30.2K followers, 14.9K engagements
"( ) by Sebastian Raschka"
[X Link](https://x.com/techNmak/status/2022749871439282490) 2026-02-14T19:08Z 30.2K followers, [---] engagements
"This guy built GPT from scratch in pure C. No PyTorch. No TensorFlow. No libraries. Just raw C code. What he implemented: Custom random number generator (xorshift) Character-level tokenizer Multi-head self-attention RMS normalization Softmax from scratch Full backpropagation Adam optimizer The model: [--] embedding dimensions [--] attention heads [--] transformer layers [--] token context window This is how you actually understand transformers. Not by importing torch.nn.Transformer. By writing every matrix multiplication yourself. https://t.co/dPziIqNdQX https://t.co/dPziIqNdQX"
[X Link](https://x.com/techNmak/status/2023049943909593241) 2026-02-15T15:01Z 30.2K followers, 15.7K engagements
"Prompting isnt just asking the AI a question. Its a deliberate engineered input design process and a critical skill when working with Large Language Models (LLMs). Let's breakdown the prompting techniques. β
[--]. Core Prompting Techniques Zero-shot - No examples provided. Just the task. One-shot - One example shown before the task. Few-shot - A handful of examples used to teach patterns. π§ [--]. Reasoning-Enhancing Techniques Chain-of-Thought (CoT) - Encourage step-by-step reasoning. Self-Consistency - Sample multiple CoTs; choose the best. Tree-of-Thought (ToT) - Explore multiple reasoning paths"
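The core techniques above are mostly prompt construction. A hypothetical sketch of what few-shot and zero-shot chain-of-thought prompting look like as code (the template wording is my own, not from the post):

```python
def few_shot_prompt(task, examples, query):
    # Few-shot: show input/output pairs so the model infers the pattern.
    lines = [task, ""]
    for inp, out in examples:
        lines += [f"Input: {inp}", f"Output: {out}", ""]
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

def chain_of_thought(question):
    # Zero-shot CoT: one appended instruction elicits step-by-step reasoning.
    return f"{question}\nLet's think step by step."
```

Self-consistency then amounts to sampling `chain_of_thought(...)` completions several times and majority-voting the final answers; tree-of-thought branches the intermediate steps instead.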
[X Link](https://x.com/techNmak/status/2014904145497751832) 2026-01-24T03:32Z 27.8K followers, 19.8K engagements
"We built MCP so we could build MCP"
[X Link](https://x.com/techNmak/status/2015035423505145899) 2026-01-24T12:14Z 27.3K followers, [----] engagements
"Code is becoming a commodity. Engineering judgment is the new scarcity. Most developers use AI to type faster. The best engineers use AI to think deeper. AI is not a junior developer. It is an infinite stochastic reasoning engine with zero understanding of "why." It offers Velocity (speed). You provide Vector (direction). Velocity without Vector is just crashing faster. Here is the framework for high-leverage Human-AI Engineering: 1./ The Divergence/Convergence Protocol AI is a divergence engine (generating options). Humans are convergence engines (selecting the truth). The Trap = Accepting"
[X Link](https://x.com/techNmak/status/2015137422351442304) 2026-01-24T18:59Z 27.7K followers, 14.5K engagements
"LLM observability is where API monitoring was in [----]. Everyone knows they need it. Nobody knows how to do it. The problem: We're using [----] tools for [----] problems. Here's what traditional APM gives you: Request/response logs Latency metrics Error rates Uptime monitoring Here's what you need for LLMs: Was the output accurate Did it hallucinate Did it use the context correctly Why did the agent make this decision Totally different questions. Traditional tools can't answer them. The gap: Built an AI agent last month. Works great in testing. Production: It's making decisions I can't explain."
[X Link](https://x.com/techNmak/status/2015724546716770561) 2026-01-26T09:52Z 27.4K followers, [----] engagements
"You dont need a degree or a bootcamp to learn AI Microsoft already put the entire 12-week playbook on GitHub for free. You don't need a $10000 bootcamp to learn AI. Microsoft just released a complete 12-week AI curriculum. For free. On GitHub. [--] lessons. Hands-on labs. Real projects. https://t.co/ShBeTD6dwy You don't need a $10000 bootcamp to learn AI. Microsoft just released a complete 12-week AI curriculum. For free. On GitHub. [--] lessons. Hands-on labs. Real projects. https://t.co/ShBeTD6dwy"
[X Link](https://x.com/techNmak/status/2015833975802233167) 2026-01-26T17:07Z 27.6K followers, 20.2K engagements
"Let's Understand GraphRAG Everyone thinks GraphRAG just means swapping your Vector DB for a Graph DB. It doesn't. GraphRAG (specifically the microsoft research implementation) is a fundamental shift in how data is indexed to solve the one problem Standard Vector RAG struggles with: Holistic Reasoning. 1./ the problem = Vector RAG is 'Myopic' - Standard Vector RAG retrieves chunks based on semantic similarity. Query = "What does this dataset say about Apple" Vector RAG = Finds the top [--] chunks containing "Apple". Success. Query = "What are the top [--] recurring themes in this entire dataset""
[X Link](https://x.com/techNmak/status/2016018956868583884) 2026-01-27T05:22Z 27.8K followers, 14K engagements
"Recruiters be like: We want one engineer who can replace Google Cloud Netflix and NASA combined"
[X Link](https://x.com/techNmak/status/2016477718096068727) 2026-01-28T11:45Z 27.7K followers, [----] engagements
"Build LLMs from Scratchπ Found this gem by Vizuara Technologies a 43-lecture series that actually delivers on its promise: building Large Language Models from the ground up. What's inside: Transformer architecture GPT internals Tokenization (BPE) Attention mechanisms Complete Python implementations Perfect for ML engineers and developers who want to understand what's really happening under the hood of ChatGPT Claude and similar models. π Playlist link in comments https://twitter.com/i/web/status/2015404407438090521 https://twitter.com/i/web/status/2015404407438090521"
[X Link](https://x.com/techNmak/status/2015404407438090521) 2026-01-25T12:40Z 28.3K followers, 33K engagements
"AI Engineering Roadmap [----] AI Engineering is becoming less about the AI and more about the Engineering. Ive been looking at how the requirements for AI roles are evolving for [----] and this roadmap captures the transition perfectly. We are moving towards an era of rigorous engineering. https://t.co/zJSxfuh8UK AI Engineering is becoming less about the AI and more about the Engineering. Ive been looking at how the requirements for AI roles are evolving for [----] and this roadmap captures the transition perfectly. We are moving towards an era of rigorous engineering. https://t.co/zJSxfuh8UK"
[X Link](https://x.com/techNmak/status/2015450857413001665) 2026-01-25T15:45Z 28.1K followers, 44.7K engagements
"Clawdbot (now Moltbot) In case you don't know. 1./ Moltbot is a self-hosted AI assistant designed to run on your own machines (macOS Windows Linux) rather than a closed hosted service. It aims to be always-on customizable and integrated with your workflows. 2./ It connects with messaging platforms like WhatsApp Telegram Slack Discord Google Chat Signal iMessage and others letting you interact with it like a chat contact. 3./ Moltbot isnt just for answering questions it can perform actions on your behalf such as automating tasks running scripts handling Cron-like scheduled jobs browsing the"
[X Link](https://x.com/techNmak/status/2016147964113416206) 2026-01-27T13:55Z 28.2K followers, 17.3K engagements
"The [----] guide for building modern UI in the agentic era. CopilotKit dropped the blueprint for how AI will finally break out of the chat box and generate the UI layer on demand. In case you need this 14-page guide on AG-UI A2UI and MCP Apps comment "Gen UI""
[X Link](https://x.com/techNmak/status/2016946399523602444) 2026-01-29T18:47Z 28K followers, [----] engagements
"You can now run "Claude Code" without Claude. Let me explain. We typically view Claude Code as a proprietary product locked behind Anthropic's paywall. But architecturally it is just a highly polished Agentic CLI (the interface) that sends instructions to an LLM API (the intelligence). Until now that API URL was hardcoded to Anthropics servers. Ollama v0.14 just changed the architecture. They have implemented full Anthropic Messages API Compatibility. This creates a "Drop-in Replacement" endpoint. You can now trick the claude CLI into believing your local machine is the Anthropic Cloud. Why"
[X Link](https://x.com/techNmak/status/2017636759371448672) 2026-01-31T16:31Z 28.2K followers, 36.2K engagements
"Read here: https://ollama.com/blog/claude https://ollama.com/blog/claude"
[X Link](https://x.com/techNmak/status/2017636820511821845) 2026-01-31T16:31Z 28K followers, [----] engagements
"Here's the GitHub: https://github.com/HKUDS/nanobot https://github.com/HKUDS/nanobot"
[X Link](https://x.com/techNmak/status/2018410565945954715) 2026-02-02T19:45Z 28.3K followers, [----] engagements
"2 LLaMA Factory Your all-in-one fine-tuning toolkit. β Supports 100+ models β CLI + WebUI (beginner friendly) β LoRA QLoRA full & frozen FT (28 bit) β Built-in datasets training monitors & exports https://github.com/hiyouga/LlamaFactory https://github.com/hiyouga/LlamaFactory"
[X Link](https://x.com/techNmak/status/2019122930496729332) 2026-02-04T18:56Z 28.3K followers, [----] engagements
"Teenager: "I cleaned my room." You: "Why is there water on the ceiling" Teenager: "You didn't say HOW to clean it." Agent: "Task completed." You: "Why did you call the payment API [--] times" Agent: "You didn't say how MANY times." The instructions were clear to you. The interpretation was creative. This is why task completion metrics lie. They tell you the room is clean. They don't tell you about the ceiling. DeepEval traces the full decision path: Every tool call Every reasoning step Every loop and backtrack You see exactly how the agent "cleaned the room." Then you can actually fix the"
[X Link](https://x.com/techNmak/status/2019710905072619931) 2026-02-06T09:52Z 28.2K followers, [----] engagements
"Elon's boldest prediction: "In [--] months the most economically compelling place to put AI will be space." Not [--] years. Not [--] years. [--] months. His math: Solar panels are 5x more effective in space No day/night cycle no clouds no atmosphere No batteries needed No permits required "It's always sunny in space." The only place you can truly scale is space. Once you think in terms of what percentage of the Sun's power you're harnessing you realize you can't scale on Earth"
[X Link](https://x.com/techNmak/status/2020239453772763210) 2026-02-07T20:53Z 28.4K followers, [--] engagements
"Elon on the irony of AI company names: "Midjourney is not mid." "Stability AI is unstable." "OpenAI is closed." "Anthropic Misanthropic." Why did he name it X "It's a name you can't invert. Largely irony-proof. By design." He thinks simulation theory is real. And whoever's running us loves ironic outcomes. "The most ironic outcome is the most likely." https://twitter.com/i/web/status/2020239455987331567 https://twitter.com/i/web/status/2020239455987331567"
[X Link](https://x.com/techNmak/status/2020239455987331567) 2026-02-07T20:53Z 28.4K followers, [--] engagements
"If you're a Software Developer you should understand why is the Go-To Database for modern apps. π . π The founders faced challenges building large-scale web applications with existing databases. As the internet grew with more dynamic websites and apps the old database tools couldn't keep up. MongoDB was designed to fill this gap. It offered the flexibility scalability and ease of use that developers needed for the new web. MongoDB is primarily written in C++ but utilizes JavaScript for its shell and Python for some tools and drivers. π ' - At its core MongoDB = NoSQL document-oriented"
[X Link](https://x.com/techNmak/status/1946485628763615530) 2025-07-19T08:21Z 29.9K followers, [----] engagements
"- MongoDB was born out of necessity the founders needed a better database to handle modern web apps. - MongoDB doesnt support traditional SQL joins but it offers $lookup in the aggregation pipeline for basic join functionality. - MongoDB isnt just a database its a core part of how modern apps scale. π Free Newsletter - http://thecuriousmak.substack.com https://twitter.com/i/web/status/1946486227940803005 http://thecuriousmak.substack.com https://twitter.com/i/web/status/1946486227940803005"
[X Link](https://x.com/techNmak/status/1946486227940803005) 2025-07-19T08:24Z 29.9K followers, [---] engagements
"Why "Delete" doesn't actually Delete (the tombstone trap) In Log-Structured Merge (LSM) databases like Cassandra ScyllaDB or RocksDB files are immutable. Once written they cannot be modified. So how do you delete a record You write a new one. 1./ The Tombstone To delete User123 the database writes a new record with a special marker: Key: User123 Value: TOMBSTONE A Tombstone is effectively a note that says: "This key is dead as of 10:05 AM." [--]. /The Read Path When you query data the database reads both the old record and the new marker: User123: "Alice" (Timestamp: 10:00) User123: TOMBSTONE"
[X Link](https://x.com/techNmak/status/1991479627869741428) 2025-11-20T12:11Z 30.1K followers, [----] engagements
"Robotics Course Check here: https://huggingface.co/learn/robotics-course/unit0/1 https://huggingface.co/learn/robotics-course/unit0/1"
[X Link](https://x.com/techNmak/status/2000841917429694623) 2025-12-16T08:14Z 29.5K followers, [----] engagements
"smol Course Check here: https://huggingface.co/learn/smol-course/unit0/1 https://huggingface.co/learn/smol-course/unit0/1"
[X Link](https://x.com/techNmak/status/2000841919564538275) 2025-12-16T08:14Z 29.5K followers, [----] engagements
"Computer Vision Course Check here: https://huggingface.co/learn/computer-vision-course/unit0/welcome/welcome https://huggingface.co/learn/computer-vision-course/unit0/welcome/welcome"
[X Link](https://x.com/techNmak/status/2000841921678516406) 2025-12-16T08:14Z 29.5K followers, [----] engagements
Top posts by engagements in the last [--] hours
"10x isn't a talent anymore. It's a config file for Claude Code"
X Link 2026-02-03T15:46Z 30.2K followers, 251.4K engagements
"This isn't just a bookshelf its a $300k/year survival kit"
X Link 2026-02-08T16:03Z 30.2K followers, 72.1K engagements
"- by Jay Alammar and Maarten Grootendorst"
X Link 2026-02-14T19:08Z 30.2K followers, [---] engagements
"LLM Course Check here: https://huggingface.co/learn/llm-course/chapter1/1 https://huggingface.co/learn/llm-course/chapter1/1"
X Link 2025-12-16T08:14Z 29.5K followers, [----] engagements
"MCP Course Check here: https://huggingface.co/learn/mcp-course/unit0/introduction https://huggingface.co/learn/mcp-course/unit0/introduction"
X Link 2025-12-16T08:14Z 29.5K followers, [----] engagements
"AI Agents Course Check here: https://huggingface.co/learn/agents-course/unit0/introduction https://huggingface.co/learn/agents-course/unit0/introduction"
X Link 2025-12-16T08:14Z 29.5K followers, [----] engagements
"Deep RL Course Check here: https://huggingface.co/learn/deep-rl-course/unit0/introduction https://huggingface.co/learn/deep-rl-course/unit0/introduction"
X Link 2025-12-16T08:14Z 29.5K followers, [----] engagements
"It is dangerously easy to build a neural network today without actually understanding how it works. We live in an era of 'import torch'. You can train a model in three lines of code but the moment you need to debug a collapsing loss function or a vanishing gradient syntax won't save you. You need first principles. I recently went through this notebook collection by Simon J.D. Prince and it is the antidote to tutorial hell. Instead of just showing you the code it forces you to visualize the mechanics: 1./ The Math = It builds the intuition for shallow networks and regions before adding"
X Link 2026-01-29T06:27Z 29.8K followers, 51.8K engagements
"Imagine trying to teach someone how to swim just by letting them read books about water. That is how we have been training AI on physics using text descriptions. To really learn you need to get in the water. "The Well" is that water. Polymathic AI has released a massive 15TB open-source library of physics simulations. It allows AI models to experience physical phenomena directly. Instead of reading about a supernova the model processes the actual data of the explosion. Instead of reading about aerodynamics it analyzes the fluid flow. This moves us from Generative AI (making things up) to"
X Link 2026-01-31T13:43Z 29.8K followers, 91.5K engagements
"After what felt like forever the books Ive been waiting for finally arrived. Cant wait to dig in ππ"
X Link 2026-02-01T10:09Z 29.8K followers, 41.1K engagements
"GitHub is the new Harvard. The most starred AI repos that 99% of people still haven't explored. Bookmark this now. π§΅"
X Link 2026-02-01T20:29Z 29.9K followers, 56.3K engagements
"Microsoft : ML for Beginners https://github.com/microsoft/ML-For-Beginners https://github.com/microsoft/ML-For-Beginners"
X Link 2026-02-01T20:29Z 29.3K followers, [----] engagements
"Stable Diffusion https://github.com/CompVis/stable-diffusion https://github.com/CompVis/stable-diffusion"
X Link 2026-02-01T20:29Z 29.2K followers, [----] engagements
"Stop paying $$$ for LLM Bootcamps. π The official code for the O'Reilly book "Hands-On Large Language Models" is FREE on GitHub. It covers the entire lifecycle of an LLM application. Chapter 1: Introduction to Language Models Chapter 2: Tokens and Embeddings Chapter 3: Looking Inside Transformer LLMs Chapter 4: Text Classification Chapter 5: Text Clustering and Topic Modeling Chapter 6: Prompt Engineering Chapter 7: Advanced Text Generation Techniques and Tools Chapter 8: Semantic Search and Retrieval-Augmented Generation Chapter 9: Multimodal Large Language Models Chapter 10: Creating Text"
X Link 2026-02-03T05:31Z 30K followers, 25K engagements
"1 Unsloth Probably the fastest way to fine-tune LLMs today. β
Up to [--] faster fine-tuning β
70% less VRAM usage β
Works on Gemma Qwen LLaMA Mistral & more β
Runs on consumer GPUs (even Colab/Kaggle 3GB VRAM π€―) https://github.com/unslothai/unsloth https://github.com/unslothai/unsloth"
X Link 2026-02-04T18:56Z 30.1K followers, [----] engagements
"Agents are 10x harder to test than chatbots. And almost no one is doing it right. Think about the architecture: Chatbot: Input Output Done. Agent: Input Think Tool Think Tool Output. One hallucinated parameter or one bad tool choice in the middle of that chain = complete failure. If you are only checking the final output you're missing 90% of the failure surface. DeepEval has moved beyond "black-box" testing. It now evaluates the entire agent trajectory by analyzing the execution trace: 1./ Tool Correctness Did it pick the optimal tool for the sub-task 2./ Argument Correctness Did it pass"
X Link 2026-02-05T13:50Z 30K followers, [----] engagements
"Chatbot: You ask. It answers. RAG: You ask. It retrieves. It answers. RPA: You trigger. It executes a script. Agent: You give a goal. It figures out the rest. That's the difference. An agent has: Memory (learns from interactions) Planning (breaks down complex goals) Tool selection (chooses what to use not scripted) Feedback loops (adjusts based on results) Multi-agent coordination (delegates to specialists) Most "agents" in production are RAG pipelines with a for-loop. Real agentic AI has an orchestrator that thinks deciding which tools which sub-agents which approach and when to change"
X Link 2026-02-05T22:26Z 30K followers, 56.7K engagements
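The goal-plan-act-observe distinction in the post above is small when stripped to its skeleton. An illustrative sketch: the planner and tool names are hypothetical, and in a real agent `plan_step` would be an LLM call rather than a scripted function.

```python
# Skeleton of an agent loop: goal in, think -> act -> observe until done.
def run_agent(goal, plan_step, tools, max_steps=10):
    history = []                             # memory of (action, observation)
    for _ in range(max_steps):
        action = plan_step(goal, history)    # "think": choose the next action
        if action["tool"] == "finish":
            return action["result"], history
        observation = tools[action["tool"]](action["input"])  # act via a tool
        history.append((action, observation))                 # feedback loop
    raise RuntimeError("step budget exhausted")
```

The post's point maps directly onto this loop: a "RAG pipeline with a for-loop" hardcodes the sequence of actions, whereas an agent lets the planner pick the tool, re-plan from observations, and decide when to stop.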
"A founder messaged me last month. "Our agent costs are 4x what we budgeted." I asked him to trace a single workflow. [--] steps. The optimal path was [--]. The agent wasn't failing. It was succeeding expensively - calling the same APIs repeatedly looping through failed approaches backtracking through decisions it had already made. Task completion rate: 94% Efficiency rate: 22% DeepEval's Step Efficiency metric would have caught this before production. 1./ @ observe Decorator Wrap your agent workflow. Captures every LLM call tool invocation decision point. 2./ Trace Analysis Compares actual"
X Link 2026-02-06T13:22Z 29.5K followers, [----] engagements
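A step-efficiency check like the one described is easy to compute once you have a trace. The functions below are an illustrative reimplementation of the idea, not DeepEval's actual API, and the step counts in the test are hypothetical numbers chosen to land near the 22% efficiency from the story:

```python
def step_efficiency(trace, optimal_steps):
    # Efficiency = optimal path length / steps actually taken. An agent can
    # "succeed" at low efficiency: right answer, several times the cost.
    return optimal_steps / len(trace)

def wasted_calls(trace):
    # Count repeated identical (tool, args) calls: pure waste in a workflow.
    seen, wasted = set(), 0
    for call in trace:
        if call in seen:
            wasted += 1
        seen.add(call)
    return wasted
```

This is why tracing every tool invocation matters: task completion rate alone cannot distinguish a 4-step run from an 18-step run that backtracks through the same API calls.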
"I still cant believe this is free. Most bootcamps are charging $3000 to teach you outdated material. Meanwhile @huggingface is giving away the state-of-the-art curriculum for $0. This guide has you covered: 1./ NLP Teach machines to understand language. 2./ MCP Master orchestration for tool-using agents. 3./ LLM Unlock the power of GPT and diffusion models. 4./ RL Build decision-making agents for robotics and games. 5./ CV Create AI that sees and understands images. 6./ Audio Let machines listen and speak. Pick one. Build. Iterate. Expand. https://twitter.com/i/web/status/2019833223543681027"
X Link 2026-02-06T17:59Z 30.1K followers, 27.3K engagements
"LLM Course Check here: https://huggingface.co/learn/llm-course/chapter1/1 https://huggingface.co/learn/llm-course/chapter1/1"
X Link 2026-02-06T17:59Z 29.5K followers, [----] engagements
"MCP Course Check here: https://huggingface.co/learn/mcp-course/unit0/introduction https://huggingface.co/learn/mcp-course/unit0/introduction"
X Link 2026-02-06T17:59Z 29.5K followers, [----] engagements
"AI Agents Course Check here: https://huggingface.co/learn/agents-course/unit0/introduction https://huggingface.co/learn/agents-course/unit0/introduction"
X Link 2026-02-06T17:59Z 29.5K followers, [---] engagements
"Deep RL Course Check here: https://huggingface.co/learn/deep-rl-course/unit0/introduction https://huggingface.co/learn/deep-rl-course/unit0/introduction"
X Link 2026-02-06T17:59Z 29.5K followers, [---] engagements
"Robotics Course Check here: https://huggingface.co/learn/robotics-course/unit0/1 https://huggingface.co/learn/robotics-course/unit0/1"
X Link 2026-02-06T17:59Z 29.5K followers, [---] engagements
"smol Course Check here: https://huggingface.co/learn/smol-course/unit0/1 https://huggingface.co/learn/smol-course/unit0/1"
X Link 2026-02-06T17:59Z 29.5K followers, [---] engagements
"Computer Vision Course Check here: https://huggingface.co/learn/computer-vision-course/unit0/welcome/welcome https://huggingface.co/learn/computer-vision-course/unit0/welcome/welcome"
X Link 2026-02-06T17:59Z 29.5K followers, [---] engagements
"Learn AI for free with these repos. [--] GitHub repos every AI agent developer needs The ecosystem is overwhelming. Tools techniques and best practices are scattered across the web. Here's your shortcut. This list covers everything: 1./ Core Skills - AI-ML-Roadmap-from-scratch - Context Engineering A to https://t.co/XL8SK2l7To [--] GitHub repos every AI agent developer needs The ecosystem is overwhelming. Tools techniques and best practices are scattered across the web. Here's your shortcut. This list covers everything: 1./ Core Skills - AI-ML-Roadmap-from-scratch - Context Engineering A to"
X Link 2026-02-06T18:28Z 29.5K followers, [----] engagements
"If you care about open-source AI youll want this bookmarked. One of the smarter AI tools Ive seen lately π Chip (@chipro) built a tracker that: monitors 14000+ open-source AI repos from 145000+ contributors scans daily using [---] AI keywords surfaces projects gaining traction auto-categorizes everything shows where AI builders are globally https://twitter.com/i/web/status/2020045074323894462 https://twitter.com/i/web/status/2020045074323894462"
X Link 2026-02-07T08:00Z 29.8K followers, [----] engagements
"My cheat sheet on AI Agents ------ Everyone's building AI agents. Few understand how they actually work. Here's the breakdown - from architecture to deployment: What is an AI Agent An AI Agent isn't just a chatbot. It's a system that: Understands your goal Plans the steps to achieve it Takes actions using external tools Delivers results Core components: Language Model The "thinking brain" Tools The "hands" that interact with the real world Orchestration Layer The "decision-maker" that coordinates everything Language Models: Know the Difference LLMs = GPT-4o Gemini [---] DeepSeek-V3 = Complex"
X Link 2026-02-07T10:23Z 30.2K followers, 20.9K engagements
"Close Claude Code. Open Claude Code. It remembers everything. Claude-Mem. Persistent memory across sessions. Automatically captures tool observations Generates semantic summaries Context appears on restart No manual saves. No /memory commands. Just installed it. Restarted Claude Code. Previous session context was already there. 100% opensource. GitHub link in comments. https://twitter.com/i/web/status/2020415690784567746 https://twitter.com/i/web/status/2020415690784567746"
X Link 2026-02-08T08:33Z 29.5K followers, 13.7K engagements
"You don't need a $10000 bootcamp to learn AI. Microsoft just released a complete 12-week AI curriculum. For free. On GitHub. [--] lessons. Hands-on labs. Real projects. β‘The Reality of AI Education: Bootcamps charge $10000-$20000 for AI courses. Online platforms charge $500-$1000 per course. Universities charge $50000+ for AI degrees. Microsoft is giving it away for free. β‘What You Get: A structured 12-week curriculum designed by Microsoft Cloud Advocates: AI fundamentals Machine learning basics Neural networks and deep learning Computer vision Natural language processing TensorFlow and PyTorch"
X Link 2026-02-10T02:00Z 30.2K followers, [----] engagements
"A curated list of ML System Design case studies I recently came across a repo that compiles 300+ real-world battle-tested ML system case studies from [--] companies including Spotify Netflix Microsoft and more. And heres the key insight: Some of the tech is outdated. That doesnt matter. What does matter is how decisions were made. These case studies teach you: How teams identify bottlenecks before they explode How failures are detected early (and fixed without chaos) How business requirements get translated into actual system design This is the difference between: π "I trained a model" and π"
X Link 2026-02-10T05:57Z 29.9K followers, [----] engagements
"@orchidsapp This is huge. One place to build and deploy any app across stacks with freedom to use your own models and keys. Exactly what the AI builder ecosystem needs. Congrats on Orchids [---] π"
X Link 2026-02-10T16:03Z 29.9K followers, [---] engagements
"Been thinking about why AI app builders feel so limiting. It comes down to two architectural choices: 1/ App and stack constraints. Most builders are optimized for one output - websites. Maybe mobile. Try building a Chrome extension or a Slack bot. It just doesn't work. 2/ Cost. Every AI app builder routes your AI calls through their system. They add markup with no model picker and no transparency. Feels great that Orchids solves both. You can build anything and use your own subscriptions/API key. Genuinely excited to try this out. Introducing Orchids [---] - the first AI app builder to build"
X Link 2026-02-10T16:08Z 29.9K followers, [----] engagements
"@ghumare64 Yeah that's what excites me too. Feels like it opens the door to more thoughtful polished UX instead of one-size-fits-all layouts."
X Link 2026-02-10T16:17Z 29.9K followers, [--] engagements
"The real reason devs love Claude Code. Reflecting on what engineers love about Claude Code one thing that jumps out is its customizability: hooks plugins LSPs MCPs skills effort custom agents status lines output styles etc. Every engineer uses their tools differently. We built Claude Code from the ground up"
X Link 2026-02-12T14:57Z 29.9K followers, [---] engagements
"OpenClaw is bloated. Nanobot argues it doesn't have to be. Nanobot is a personal AI assistant that claims to fit the core agent loop into 4k lines of code mostly Python with a thin TypeScript bridge where it makes sense. What stood out to me skimming the repo: the agent logic isn't buried under layers of abstraction startup is basically instant because there's very little there the architecture is modular almost micro-kernel-ish instead of one big framework The point isn't more features. It's that you can actually read the code reason about it and change it in an afternoon. If you're researching"
X Link 2026-02-02T19:45Z 30.2K followers, 43.4K engagements
"There are [--] career paths in AI right now: The API Caller: Knows how to use an API. (Low leverage first to be automated $150k salary). The Architect: Knows how to build the API. (High leverage builds the tools $500k+ salary). Bootcamps train you to be an API Caller. This free 17-video Stanford course trains you to be an Architect. It's CS336: Language Modeling from Scratch. The syllabus is pure signal no noise: Data Collection & Curation (Lec 13-14) Building Transformers & MoE (Lec 3-4) Making it fast (Lec 5-8: GPUs Kernels Parallelism) Making it work (Lec 10: Inference) Making it"
X Link 2025-11-18T15:22Z 30.2K followers, 655.3K engagements
"If you're building AI agents in [----] start here. https://t.co/PqGEcSahx8"
X Link 2026-01-09T14:55Z 30.2K followers, 720.6K engagements
"Let's learn - LLM Generation Parameters These are the primary controls used to influence the output of a Large Language Model. 1./ Temperature Controls the randomness of token sampling by scaling the model's probability distribution for the next token. Low Temperature (e.g. 0.2): Makes the output more deterministic by strongly favoring higher-probability tokens. This is ideal for factual tasks such as summarization code generation and direct Q&A. High Temperature (e.g. 1.0): Flattens the probability distribution increasing the chance of selecting lower-probability tokens. This leads to more"
X Link 2026-02-04T11:46Z 30.2K followers, [----] engagements
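The temperature knob described above is just a divisor applied to the logits before the softmax. A minimal sketch in plain Python, using made-up logits for a toy 3-token vocabulary (not tied to any real model):

```python
import math

def token_distribution(logits, temperature):
    """Softmax over logits scaled by temperature.

    Dividing by a small temperature sharpens the distribution
    (near-deterministic); a high temperature flattens it (more diverse).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for a 3-token vocabulary.
logits = [2.0, 1.0, 0.5]
cold = token_distribution(logits, temperature=0.2)  # strongly favors token 0
hot = token_distribution(logits, temperature=1.0)   # spreads probability out
```

At temperature 0.2 the top token takes nearly all the probability mass; at 1.0 the lower-probability tokens keep a realistic chance of being sampled.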
"Fine-tuning LLMs doesn't have to be slow expensive or GPU-hungry anymore. Open-source tooling has leveled up HARD. You can now fine-tune powerful LLMs without enterprise hardware. Here are [--] must-know libraries making LLM fine-tuning faster & cheaper"
X Link 2026-02-04T18:56Z 30.2K followers, 36.2K engagements
"This GitHub README is better than most $999 AI courses. Chip Huyen's AI Engineering companion repo. Free chapter summaries. Study notes. Resources. Case studies. What you get for $0: [--] chapter summaries foundation models training evaluation prompting rag agents finetuning datasets inference architecture study notes detailed notes for each chapter ai engineering resources curated tools and frameworks prompt examples real prompts from production systems case studies how companies build ai ml theory fundamentals the math and concepts you need misalignment ai understanding ai"
X Link 2026-02-06T07:12Z 30.2K followers, 19.3K engagements
"Here's the paper: https://arxiv.org/pdf/2504.17033"
X Link 2026-02-11T16:34Z 30.2K followers, 17.4K engagements
"Follow @technmak for more such posts/insights. https://x.com/techNmak"
X Link 2026-02-13T05:25Z 30.2K followers, [----] engagements
"by Chip Huyen"
X Link 2026-02-14T19:08Z 30.2K followers, [---] engagements
"by Louis-François Bouchard and Louie Peters"
X Link 2026-02-14T19:08Z 30.2K followers, [--] engagements
"What is RAG What is Agentic RAG Retrieval-Augmented Generation (RAG) ---------------------------------------------- Retrieval-Augmented Generation (RAG) is an architecture that enhances a language model's outputs by grounding them in external knowledge sources at inference time. Instead of relying solely on parameters learned during training RAG systems dynamically retrieve relevant information and inject it into the model's context before generation. Canonical RAG workflow: A user submits a query. The query is embedded and matched against a pre-indexed corpus (commonly stored in a vector"
X Link 2026-02-09T05:41Z 30.2K followers, 32.1K engagements
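The canonical workflow above can be sketched end to end in a few lines. This toy version uses bag-of-words cosine overlap in place of a trained embedding model and vector database; the function names and the three-document corpus are purely illustrative:

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real pipeline uses a trained embedding model.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    # Match the embedded query against the pre-indexed corpus, best first.
    return sorted(corpus, key=lambda d: cosine(embed(query), embed(d)), reverse=True)[:k]

def build_prompt(query, corpus, k=1):
    # Inject the retrieved chunks into the model's context before generation.
    context = "\n".join(retrieve(query, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "RAG grounds model outputs in retrieved external documents.",
    "Tokenization splits text into subword units.",
    "Gradient descent minimizes a loss function.",
]
prompt = build_prompt("How does RAG ground model outputs?", corpus)
```

Swapping `embed` for a real embedding model and the list scan for a vector index gives the production shape of the same loop.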
"Google just killed the document extraction industry. LangExtract: Open-source. Free. Better than $50K enterprise tools. What it does: Extracts structured data from unstructured text Maps EVERY entity to its exact source location Handles 100+ page documents with high recall Generates interactive HTML for verification Works with Gemini Ollama local models What it replaces: Regex pattern matching Custom NER pipelines Expensive extraction APIs Manual data entry Define your task with a few examples. Point it at any document. Get structured verifiable results. No fine-tuning. No complex setup."
X Link 2026-02-09T14:27Z 30.2K followers, 735.7K engagements
"Here's the GitHub: https://github.com/google/langextract"
X Link 2026-02-09T14:27Z 30.2K followers, 40.6K engagements
"For [--] years computer scientists believed Dijkstra's algorithm was optimal for sparse graphs. The logic seemed airtight: Dijkstra sorts vertices by distance. Sorting has a lower bound of O(n log n). Therefore shortest paths can't be faster. [--] researchers proved the assumption wrong. The trick: combine Dijkstra's priority queue with Bellman-Ford's dynamic programming. Divide and conquer on vertex sets. Shrink the frontier. Result: O(m log^(2/3) n) First improvement for directed graphs since Fibonacci heap in [----]. Tsinghua. Stanford. Max Planck. [--] pages"
X Link 2026-02-11T16:34Z 30.2K followers, 236K engagements
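For reference, the classical baseline the result improves on is Dijkstra with a priority queue. A minimal sketch on a tiny hand-made graph (the new O(m log^(2/3) n) algorithm itself is far more involved and not shown here):

```python
import heapq

def dijkstra(graph, source):
    """Classical Dijkstra with a binary heap.

    `graph` maps node -> list of (neighbor, weight); returns shortest
    distances from `source` to every reachable node.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
shortest = dijkstra(g, "a")
```

The heap is exactly the "sorting by distance" the lower-bound argument assumed was unavoidable; the new result sidesteps it by only partially ordering the frontier.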
"@SquidCorp_ink To be fair VS Code is great at what it was designed for. I just don't think AI fits neatly into the "sidebar assistant" model. Running agents in parallel headless or inside CI feels way more natural once you try it"
X Link 2026-02-13T16:38Z 30.2K followers, [---] engagements
"@RitikaAgrawal08 Totally. The UI is nice but the isolated state is the real win. It maps cleanly to how we already think about branches/tasks. One idea one agent. No weird cross-talk. That's what makes parallel work actually viable"
X Link 2026-02-13T16:39Z 30.2K followers, [---] engagements
"' by Paul Iusztin and Maxime Labonne"
X Link 2026-02-14T19:08Z 30.2K followers, [---] engagements
"Follow @techNmak for more such insights :) Don't forget to enjoy your weekend. Slow down and take some rest. https://x.com/techNmak"
X Link 2026-02-14T19:08Z 30.2K followers, [---] engagements
"These are literally the kind of LLM interview questions most candidates wish they had seen earlier. A curated list of LLM interview questions - shared by Hao Hoang Want this doc Follow @techNmak and comment LLM - I'll send it over"
X Link 2025-12-19T16:46Z 30.2K followers, 420.7K engagements
"The foundation of data science. Bayes' Theorem Spam filters. Medical diagnosis. Any time you update probability with new info. OLS Cost Linear regression. Predicting house prices. Minimizing how wrong you are. Entropy Decision trees. Information gain. Measuring how mixed your data is. Normal Distribution A/B testing. Confidence intervals. Assumes most things cluster around the mean. F1-Score Imbalanced datasets. Fraud detection. When accuracy lies to you. Sigmoid Logistic regression. Neural network outputs. Turning anything into a probability. Know the formula. Know when to use it."
X Link 2026-02-11T21:55Z 30.2K followers, [----] engagements
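Each of the formulas listed above fits in a one-liner. A quick reference sketch with toy inputs (the spam-filter rates are invented for illustration):

```python
import math

def bayes(p_b_given_a, p_a, p_b):
    # Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
    return p_b_given_a * p_a / p_b

def entropy(probs):
    # Shannon entropy in bits: measures how mixed a distribution is.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def sigmoid(x):
    # Squashes any real number into (0, 1): anything becomes a probability.
    return 1 / (1 + math.exp(-x))

def f1(precision, recall):
    # Harmonic mean of precision and recall; honest where accuracy lies.
    return 2 * precision * recall / (precision + recall)

# Spam-filter style update with made-up rates:
# P(word|spam)=0.9, P(spam)=0.01, P(word)=0.05.
p_spam_given_word = bayes(p_b_given_a=0.9, p_a=0.01, p_b=0.05)  # ~0.18
```

Knowing these as code, not just symbols, is usually what the "know when to use it" part of an interview is probing.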
"Elliot's book - CUDA for Deep Learning https://www.manning.com/books/cuda-for-deep-learning"
X Link 2026-02-13T05:25Z 30.2K followers, [----] engagements
"Found a UI library that made me mass mass mass mass mass angry. Angry that this isn't how everything works. Oat: 6KB CSS + 2.2KB JS Zero dependencies No framework required No build step Semantic HTML only You write button. It looks good. You write dialog. It looks good. You write input. It looks good. No className="px-4 py-2 rounded-md bg-blue-500" No Button variant="primary" size="md" Just HTML. Accessible. Keyboard navigable. Dark mode included. Built by Kailash Nadh (CTO @ Zerodha) https://twitter.com/i/web/status/2022287404128973056"
X Link 2026-02-13T12:31Z 30.2K followers, 295.5K engagements
"@nicoeft @A2AxDev Totally fair. I don't think CLI replaces extensions for everyone. A lot of devs live in VS Code all day and want AI right there. My point is more that the process model unlocks workflows extensions can't especially around parallelism and CI. Different tools for different habits"
X Link 2026-02-13T16:40Z 30.2K followers, [---] engagements
"5 practical AI books worth reading - by Chip Huyen: Comprehensive guide for building AI applications with foundation models covering system design deployment and scaling. - ' by Paul Iusztin and Maxime Labonne: Practical manual for LLM development covering data engineering fine-tuning RAG pipelines and production deployment with AWS implementations and downloadable code. - by Louis-François Bouchard and Louie Peters: Focuses on deploying LLMs in production environments using prompt engineering fine-tuning and RAG techniques for scalable AI applications. - ( ) by Sebastian Raschka PhD"
X Link 2026-02-14T19:08Z 30.2K followers, 15K engagements
"$10 hardware. 10MB RAM. [--] second boot. That's a full AI assistant. Just found PicoClaw. While everyone's running AI on $599 Mac Minis with 1GB+ RAM these folks are running it on a $9.90 LicheeRV Nano. The comparison: OpenClaw: TypeScript 1GB RAM 500s startup $599 NanoBot: Python 100MB RAM 30s startup $50 PicoClaw: Go 10MB RAM 1s startup $10 99% less memory. 98% cheaper. 400x faster boot. Single binary. Runs on RISC-V ARM x86. The wildest part 95% of the code was written by an AI agent. They used AI to bootstrap the entire Go migration. GitHub in comments."
X Link 2026-02-15T08:52Z 30.2K followers, [----] engagements
""Programmers will automate themselves out of existence." That's what they said. And programmers laughed so hard they couldn't respond. Two years into the AI revolution here's what actually happened: What the doomers predicted: 1./ AI writes all the code 2./ Developers become obsolete 3./ Only managers and marketers survive 4./ Programming becomes a dead career What actually happened: 1./ AI writes code that needs debugging by developers 2./ Developers spend more time reviewing AI output than writing from scratch 3./ The skill gap between good and bad developers got WIDER not narrower 4./"
X Link 2026-01-27T14:57Z 30.2K followers, 127.4K engagements
"Web Scraping is dead. Web Agenting is here. Writing selectors (div .class span) breaks every time a site updates. Building custom bots for every new target is a waste of engineering hours. TinyFish turns the entire real-time web into a single API. Input: Natural Language ("Find availability for X"). Target: [--] or [---] URLs. Output: Structured JSON. This isn't a simulation. It visits the Real-Time Web. 1./ One API Many Sites - Same contract whether you hit [--] URL or [--]. You focus on the Goal (Business Logic). TinyFish handles the How (Navigation Clicks Inputs). 2./ Real Automation - It doesn't"
X Link 2026-01-30T16:58Z 30.2K followers, 122.2K engagements
"Unpopular opinion: We're celebrating vibe coding while ignoring its inevitable collapse. Saw this Reddit post: 30+ Python files Code is "super disorganized" Duplicate loops everywhere Claude keeps forgetting basic imports Asking to fix bugs "breaks everything" The developer's confession: "I have [--] knowledge about Python." This isn't an AI limitation problem. This is a technical debt problem created by someone who doesn't understand what they built. AI coding tools are incredible. I use them daily. But they're accelerators not replacements. They accelerate good developers into great ones. They"
X Link 2026-02-07T13:21Z 30.2K followers, [----] engagements
"You're wasting hours every week. Re-teaching your AI things it already learned. Rebuilding skills for each new tool. Working in isolation from your team's AI knowledge. SkillKit fixes all three: 1./ Memory AI remembers across sessions 2./ Primer Skills work across 30+ agents 3./ Mesh Team shares learnings automatically One CLI. Universal platform. 100% Open Source Stop re-doing work your AI already did. SkillKit just launched on Product Hunt. If you've ever been frustrated re-teaching your AI go upvote this. Links in comments"
X Link 2026-02-07T20:06Z 30.2K followers, 12.1K engagements
"I have one interview question I use to find real ML engineers: "Explain Backpropagation. No not the concept. The math. From scratch." [--] out of [--] candidates can't. They can use a library. They can't build one. The 1/10 who can? They've all built the foundation. This 26-video playlist is that foundation. For free. While everyone else is chasing the newest "AI agent" or prompt hack they're building on a foundation of sand. This free course from Professor Bryce is the foundation. It's a full university-level curriculum on the math that actually makes AI work. The syllabus is pure signal no noise:"
X Link 2026-02-08T02:00Z 30.2K followers, 67K engagements
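For the backprop question itself, the math fits in a dozen lines for a single sigmoid neuron with squared-error loss. A sketch (toy weights, one input), with the analytic gradient checked against a finite-difference estimate, which is exactly the sanity check a from-scratch implementation should pass:

```python
import math

def forward(w, b, x):
    # One sigmoid neuron: z = w*x + b, a = sigma(z).
    z = w * x + b
    a = 1 / (1 + math.exp(-z))
    return z, a

def loss(a, y):
    # Squared-error loss.
    return 0.5 * (a - y) ** 2

def backward(w, b, x, y):
    # Backprop is the chain rule: dL/dw = (a - y) * sigma'(z) * x.
    _, a = forward(w, b, x)
    dz = (a - y) * a * (1 - a)  # dL/da * da/dz, since sigma'(z) = a * (1 - a)
    return dz * x, dz           # (dL/dw, dL/db)

# Compare the analytic gradient against a central finite difference.
w, b, x, y = 0.7, -0.2, 1.5, 1.0
dw, db = backward(w, b, x, y)
eps = 1e-6
num_dw = (loss(forward(w + eps, b, x)[1], y)
          - loss(forward(w - eps, b, x)[1], y)) / (2 * eps)
```

A full network is the same chain rule applied layer by layer, with the `dz` of one layer feeding the gradient of the one before it.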
"These are literally the kind of LLM interview questions most candidates wish they had seen earlier. A curated list of [--] LLM interview questions - shared by Hao Hoang. What's covered: Fundamentals: Tokenization and why it matters Attention mechanisms in transformers Context windows and their tradeoffs Embeddings and initialization Positional encodings Fine-tuning & Efficiency: LoRA vs QLoRA PEFT to prevent catastrophic forgetting Model distillation Adaptive Softmax for large vocabularies Generation & Decoding: Beam search vs greedy decoding Temperature top-k top-p sampling Autoregressive vs"
X Link 2026-02-11T05:40Z 30.2K followers, 82.9K engagements
"Finally a lightweight VLM that beats the giants at OCR. (1.7B parameters SOTA on OmniDocBench) dots.ocr is a new multilingual document parser that proves you don't need massive models for perfect document understanding. Current SOTA models are often massive (72B+) or require expensive API calls (GPT-4o). dots.ocr changes the game. It is built on a compact 1.7B LLM foundation but outperforms much larger models like Qwen2-VL-72B and GPT-4o on key benchmarks. What makes it special Unified Architecture: Handles text tables formulas and layout detection in one pass. Top-Tier Precision: Achieves"
X Link 2026-02-12T07:41Z 30.2K followers, 16.7K engagements
"If you are preparing for your System Design Interview these resources will be very helpful for you. Read it. Bookmark it"
X Link 2026-02-12T19:23Z 30.2K followers, 42.1K engagements
"This free CUDA course is worth more than most CS degrees. [--] hours that separate library users from GPU engineers. I watched senior devs struggle with concepts taught in hour [--]. What makes it different: No hand-waving. No "just use this library." You build an MLP trainer FOUR times: PyTorch (the easy way) NumPy (getting harder) C (now we're cooking) CUDA (chef's kiss) Same model. Same dataset. Four implementations. By the end you understand WHY PyTorch is fast. The curriculum nobody else teaches: GPU architecture (not just "it's parallel") Writing kernels that don't suck Profiling at"
X Link 2026-02-13T05:25Z 30.2K followers, 62.1K engagements
"What Oat has: Accordion Alert Badge Button Card Dialog Dropdown Form elements Meter Progress Spinner Skeleton Sidebar Switch Table Tabs Tooltip Toast Grid What Oat doesn't have: Dependencies Build step Framework requirement npm install 8KB total. That's the whole library. https://twitter.com/i/web/status/2022287407685746701"
X Link 2026-02-13T12:31Z 30.2K followers, 33.9K engagements
"Check here: https://github.com/knadh/oat"
X Link 2026-02-13T12:31Z 30.2K followers, 31.8K engagements
"Unpopular opinion: VS Code is holding back AI coding. I've been spinning up multiple Cline instances across tmux panes and terminal tabs. Each agent has its own isolated state and can run a different task branch or idea in parallel. They just keep going while I focus on something else no context collisions no babysitting a single chat thread. You can pipe input chain output or run them headless in CI/CD. It fits into the workflow I already use in the terminal. Getting started literally took me [--] seconds: npm install -g cline AI is finally just another process in my workflow instead of a panel"
X Link 2026-02-13T16:07Z 30.2K followers, 42.7K engagements
"Bookmark this. You'll need it when your AI project grows past one file. config/ YAML configs prompt templates logging src/llm/ Separate clients for Claude GPT etc. src/prompt_engineering/ Templates few-shot chaining src/utils/ Cache rate limiting token counting data/ Prompts cache outputs embeddings examples/ basic_completion.py chat_session.py notebooks/ Testing analysis experimentation Modular. Scalable. Maintainable. https://twitter.com/i/web/status/2022426803298800077"
X Link 2026-02-13T21:44Z 30.2K followers, 14.9K engagements
"( ) by Sebastian Raschka"
X Link 2026-02-14T19:08Z 30.2K followers, [---] engagements
"This guy built GPT from scratch in pure C. No PyTorch. No TensorFlow. No libraries. Just raw C code. What he implemented: Custom random number generator (xorshift) Character-level tokenizer Multi-head self-attention RMS normalization Softmax from scratch Full backpropagation Adam optimizer The model: [--] embedding dimensions [--] attention heads [--] transformer layers [--] token context window This is how you actually understand transformers. Not by importing torch.nn.Transformer. By writing every matrix multiplication yourself. https://t.co/dPziIqNdQX"
X Link 2026-02-15T15:01Z 30.2K followers, 15.7K engagements
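The two pieces that trip most people up, softmax and attention, are short when written out by hand. A Python sketch of single-head scaled dot-product attention with toy 2-dimensional vectors (the C version described above does the same arithmetic, plus multi-head splitting and backprop):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(q, K, V):
    """Scaled dot-product attention for a single query vector.

    q: length-d list; K and V: lists of length-d rows. Every dot product
    is written out by hand rather than delegated to a library.
    """
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
    weights = softmax(scores)
    # Output is the softmax-weighted blend of the value rows.
    return [sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))]

# The query aligns with key 0, so value row 0 dominates the output.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 2.0], [3.0, 4.0]])
```

Because the output is a convex combination of the value rows, it always stays inside their range, which is a handy invariant to test a from-scratch implementation against.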
"Prompting isn't just asking the AI a question. It's a deliberate engineered input design process and a critical skill when working with Large Language Models (LLMs). Let's break down the prompting techniques.
[--]. Core Prompting Techniques Zero-shot - No examples provided. Just the task. One-shot - One example shown before the task. Few-shot - A handful of examples used to teach patterns. [--]. Reasoning-Enhancing Techniques Chain-of-Thought (CoT) - Encourage step-by-step reasoning. Self-Consistency - Sample multiple CoTs; choose the best. Tree-of-Thought (ToT) - Explore multiple reasoning paths"
X Link 2026-01-24T03:32Z 27.8K followers, 19.8K engagements
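The zero/one/few-shot distinction is purely about how many worked examples precede the task. A sketch of a prompt builder; the Input/Output format and the translation examples are illustrative, not any library's API:

```python
def make_prompt(task, examples=()):
    """Zero-shot when `examples` is empty, one-shot with one pair,
    few-shot with several. Each example is an (input, output) tuple."""
    parts = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    parts.append(f"Input: {task}\nOutput:")  # the model completes this line
    return "\n\n".join(parts)

zero_shot = make_prompt("Translate 'bonjour' to English")
few_shot = make_prompt(
    "Translate 'bonjour' to English",
    examples=[
        ("Translate 'gracias' to English", "thanks"),
        ("Translate 'danke' to English", "thanks"),
    ],
)
```

Chain-of-Thought is the same idea with the example outputs containing worked reasoning steps instead of bare answers.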
"We built MCP so we could build MCP"
X Link 2026-01-24T12:14Z 27.3K followers, [----] engagements
"Code is becoming a commodity. Engineering judgment is the new scarcity. Most developers use AI to type faster. The best engineers use AI to think deeper. AI is not a junior developer. It is an infinite stochastic reasoning engine with zero understanding of "why." It offers Velocity (speed). You provide Vector (direction). Velocity without Vector is just crashing faster. Here is the framework for high-leverage Human-AI Engineering: 1./ The Divergence/Convergence Protocol AI is a divergence engine (generating options). Humans are convergence engines (selecting the truth). The Trap = Accepting"
X Link 2026-01-24T18:59Z 27.7K followers, 14.5K engagements
"LLM observability is where API monitoring was in [----]. Everyone knows they need it. Nobody knows how to do it. The problem: We're using [----] tools for [----] problems. Here's what traditional APM gives you: Request/response logs Latency metrics Error rates Uptime monitoring Here's what you need for LLMs: Was the output accurate Did it hallucinate Did it use the context correctly Why did the agent make this decision Totally different questions. Traditional tools can't answer them. The gap: Built an AI agent last month. Works great in testing. Production: It's making decisions I can't explain."
X Link 2026-01-26T09:52Z 27.4K followers, [----] engagements
"You don't need a degree or a bootcamp to learn AI Microsoft already put the entire 12-week playbook on GitHub for free. You don't need a $10000 bootcamp to learn AI. Microsoft just released a complete 12-week AI curriculum. For free. On GitHub. [--] lessons. Hands-on labs. Real projects. https://t.co/ShBeTD6dwy"
X Link 2026-01-26T17:07Z 27.6K followers, 20.2K engagements
"Let's Understand GraphRAG Everyone thinks GraphRAG just means swapping your Vector DB for a Graph DB. It doesn't. GraphRAG (specifically the microsoft research implementation) is a fundamental shift in how data is indexed to solve the one problem Standard Vector RAG struggles with: Holistic Reasoning. 1./ the problem = Vector RAG is 'Myopic' - Standard Vector RAG retrieves chunks based on semantic similarity. Query = "What does this dataset say about Apple" Vector RAG = Finds the top [--] chunks containing "Apple". Success. Query = "What are the top [--] recurring themes in this entire dataset""
X Link 2026-01-27T05:22Z 27.8K followers, 14K engagements
"Recruiters be like: We want one engineer who can replace Google Cloud Netflix and NASA combined"
X Link 2026-01-28T11:45Z 27.7K followers, [----] engagements
"Build LLMs from Scratch. Found this gem by Vizuara Technologies a 43-lecture series that actually delivers on its promise: building Large Language Models from the ground up. What's inside: Transformer architecture GPT internals Tokenization (BPE) Attention mechanisms Complete Python implementations Perfect for ML engineers and developers who want to understand what's really happening under the hood of ChatGPT Claude and similar models. Playlist link in comments https://twitter.com/i/web/status/2015404407438090521"
X Link 2026-01-25T12:40Z 28.3K followers, 33K engagements
"AI Engineering Roadmap [----] AI Engineering is becoming less about the AI and more about the Engineering. I've been looking at how the requirements for AI roles are evolving for [----] and this roadmap captures the transition perfectly. We are moving towards an era of rigorous engineering. https://t.co/zJSxfuh8UK"
X Link 2026-01-25T15:45Z 28.1K followers, 44.7K engagements
"Clawdbot (now Moltbot) In case you don't know. 1./ Moltbot is a self-hosted AI assistant designed to run on your own machines (macOS Windows Linux) rather than a closed hosted service. It aims to be always-on customizable and integrated with your workflows. 2./ It connects with messaging platforms like WhatsApp Telegram Slack Discord Google Chat Signal iMessage and others letting you interact with it like a chat contact. 3./ Moltbot isnt just for answering questions it can perform actions on your behalf such as automating tasks running scripts handling Cron-like scheduled jobs browsing the"
X Link 2026-01-27T13:55Z 28.2K followers, 17.3K engagements
"The [----] guide for building modern UI in the agentic era. CopilotKit dropped the blueprint for how AI will finally break out of the chat box and generate the UI layer on demand. In case you need this 14-page guide on AG-UI A2UI and MCP Apps comment "Gen UI""
X Link 2026-01-29T18:47Z 28K followers, [----] engagements
"You can now run "Claude Code" without Claude. Let me explain. We typically view Claude Code as a proprietary product locked behind Anthropic's paywall. But architecturally it is just a highly polished Agentic CLI (the interface) that sends instructions to an LLM API (the intelligence). Until now that API URL was hardcoded to Anthropic's servers. Ollama v0.14 just changed the architecture. They have implemented full Anthropic Messages API Compatibility. This creates a "Drop-in Replacement" endpoint. You can now trick the claude CLI into believing your local machine is the Anthropic Cloud. Why"
X Link 2026-01-31T16:31Z 28.2K followers, 36.2K engagements
"Read here: https://ollama.com/blog/claude"
X Link 2026-01-31T16:31Z 28K followers, [----] engagements
"Here's the GitHub: https://github.com/HKUDS/nanobot"
X Link 2026-02-02T19:45Z 28.3K followers, [----] engagements
"2 LLaMA Factory Your all-in-one fine-tuning toolkit. - Supports 100+ models - CLI + WebUI (beginner friendly) - LoRA QLoRA full & frozen FT (28 bit) - Built-in datasets training monitors & exports https://github.com/hiyouga/LlamaFactory"
X Link 2026-02-04T18:56Z 28.3K followers, [----] engagements
"Teenager: "I cleaned my room." You: "Why is there water on the ceiling" Teenager: "You didn't say HOW to clean it." Agent: "Task completed." You: "Why did you call the payment API [--] times" Agent: "You didn't say how MANY times." The instructions were clear to you. The interpretation was creative. This is why task completion metrics lie. They tell you the room is clean. They don't tell you about the ceiling. DeepEval traces the full decision path: Every tool call Every reasoning step Every loop and backtrack You see exactly how the agent "cleaned the room." Then you can actually fix the"
X Link 2026-02-06T09:52Z 28.2K followers, [----] engagements
"Elon's boldest prediction: "In [--] months the most economically compelling place to put AI will be space." Not [--] years. Not [--] years. [--] months. His math: Solar panels are 5x more effective in space No day/night cycle no clouds no atmosphere No batteries needed No permits required "It's always sunny in space." The only place you can truly scale is space. Once you think in terms of what percentage of the Sun's power you're harnessing you realize you can't scale on Earth"
X Link 2026-02-07T20:53Z 28.4K followers, [--] engagements
"Elon on the irony of AI company names: "Midjourney is not mid." "Stability AI is unstable." "OpenAI is closed." "Anthropic Misanthropic." Why did he name it X "It's a name you can't invert. Largely irony-proof. By design." He thinks simulation theory is real. And whoever's running us loves ironic outcomes. "The most ironic outcome is the most likely." https://twitter.com/i/web/status/2020239455987331567"
X Link 2026-02-07T20:53Z 28.4K followers, [--] engagements
"If you're a Software Developer you should understand why MongoDB is the Go-To Database for modern apps. The founders faced challenges building large-scale web applications with existing databases. As the internet grew with more dynamic websites and apps the old database tools couldn't keep up. MongoDB was designed to fill this gap. It offered the flexibility scalability and ease of use that developers needed for the new web. MongoDB is primarily written in C++ but utilizes JavaScript for its shell and Python for some tools and drivers. At its core MongoDB = NoSQL document-oriented"
X Link 2025-07-19T08:21Z 29.9K followers, [----] engagements
"- MongoDB was born out of necessity the founders needed a better database to handle modern web apps. - MongoDB doesn't support traditional SQL joins but it offers $lookup in the aggregation pipeline for basic join functionality. - MongoDB isn't just a database it's a core part of how modern apps scale. Free Newsletter - http://thecuriousmak.substack.com https://twitter.com/i/web/status/1946486227940803005"
X Link 2025-07-19T08:24Z 29.9K followers, [---] engagements
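The $lookup stage mentioned above is essentially a left outer join that embeds matches as an array. A plain-Python sketch of the semantics (this models the behavior only, not MongoDB's implementation; the orders/users collections are invented for illustration):

```python
def lookup(local, foreign, local_field, foreign_field, as_field):
    """Left outer join in the style of MongoDB's $lookup stage:
    each local doc gains an array of matching foreign docs."""
    return [
        {**doc, as_field: [f for f in foreign
                           if f.get(foreign_field) == doc.get(local_field)]}
        for doc in local
    ]

orders = [{"_id": 1, "user_id": "u1"}, {"_id": 2, "user_id": "u9"}]
users = [{"user_id": "u1", "name": "Alice"}]
joined = lookup(orders, users, "user_id", "user_id", "user")
```

Note the left-outer behavior: an order with no matching user still appears, just with an empty array, which is exactly how $lookup handles unmatched documents.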
"Why "Delete" doesn't actually Delete (the tombstone trap) In Log-Structured Merge (LSM) databases like Cassandra ScyllaDB or RocksDB files are immutable. Once written they cannot be modified. So how do you delete a record You write a new one. 1./ The Tombstone To delete User123 the database writes a new record with a special marker: Key: User123 Value: TOMBSTONE A Tombstone is effectively a note that says: "This key is dead as of 10:05 AM." [--]./ The Read Path When you query data the database reads both the old record and the new marker: User123: "Alice" (Timestamp: 10:00) User123: TOMBSTONE"
X Link 2025-11-20T12:11Z 30.1K followers, [----] engagements
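The write and read paths above can be modeled in a few lines. A toy append-only store (illustrative only; real LSM engines like Cassandra or RocksDB use memtables, SSTables, and compaction rather than a single list):

```python
TOMBSTONE = object()  # sentinel marker: "this key is dead as of ts"

class TinyLSM:
    """Toy append-only store: puts and deletes both append records,
    and the read path resolves conflicts by timestamp."""

    def __init__(self):
        self.log = []  # append-only (key, value, timestamp) records

    def put(self, key, value, ts):
        self.log.append((key, value, ts))

    def delete(self, key, ts):
        # "Delete" just writes a tombstone; the old record stays on disk
        # until compaction eventually drops both.
        self.log.append((key, TOMBSTONE, ts))

    def get(self, key):
        records = [(ts, v) for k, v, ts in self.log if k == key]
        if not records:
            return None
        _, latest = max(records, key=lambda r: r[0])  # newest timestamp wins
        return None if latest is TOMBSTONE else latest

db = TinyLSM()
db.put("User123", "Alice", ts=1000)
db.delete("User123", ts=1005)
```

After the delete, a read returns nothing, yet the log still holds both the old value and the tombstone, which is the trap: "deleted" data keeps consuming space and read work until compaction.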
"Robotics Course Check here: https://huggingface.co/learn/robotics-course/unit0/1"
X Link 2025-12-16T08:14Z 29.5K followers, [----] engagements
"smol Course Check here: https://huggingface.co/learn/smol-course/unit0/1"
X Link 2025-12-16T08:14Z 29.5K followers, [----] engagements
"Computer Vision Course Check here: https://huggingface.co/learn/computer-vision-course/unit0/welcome/welcome"
X Link 2025-12-16T08:14Z 29.5K followers, [----] engagements