[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# Avi Chawla (@_avichawla)

Avi Chawla posts on X about llm, token, protocol, and inference the most. They currently have XXXXXX followers and XXX posts still getting attention, totaling XXXXXXX engagements in the last XX hours.

### Engagements: XXXXXXX [#](/creator/twitter::1175166450832687104/interactions)

- X Week: XXXXXXXXX (+148%)
- X Month: XXXXXXXXX (+93%)
- X Months: XXXXXXXXXX (+538%)
- X Year: XXXXXXXXXX (+53,625%)

### Mentions: XX [#](/creator/twitter::1175166450832687104/posts_active)

### Followers: XXXXXX [#](/creator/twitter::1175166450832687104/followers)

- X Week: XXXXXX (+5.30%)
- X Month: XXXXXX (+20%)
- X Months: XXXXXX (+253%)
- X Year: XXXXXX (+2,487%)

### CreatorRank: XXXXXX [#](/creator/twitter::1175166450832687104/influencer_rank)

### Social Influence [#](/creator/twitter::1175166450832687104/influence)

---

**Social category influence:** [technology brands](/list/technology-brands), [stocks](/list/stocks), [social networks](/list/social-networks)

**Social topic influence:** [llm](/topic/llm) #25, [token](/topic/token) #80, [protocol](/topic/protocol) #442, [inference](/topic/inference), [mcp server](/topic/mcp-server), [if you](/topic/if-you), [python](/topic/python), [ollama](/topic/ollama), [10x](/topic/10x), [ibm](/topic/ibm)

**Top assets mentioned:** [IBM (IBM)](/topic/ibm), [Alphabet Inc Class A (GOOGL)](/topic/$googl)

### Top Social Posts [#](/creator/twitter::1175166450832687104/posts)

---

Top posts by engagements in the last XX hours:

"Containerized versions of 450+ MCP servers in a single repo. - No manual setup: just pull the image. - Safe to run in isolated containers, unlike scripts. - Auto-updated daily.
Easiest and safest way to use MCP servers with Agents" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1933411537798271018) 2025-06-13 06:30:06 UTC 42.1K followers, 149.7K engagements

"1 Pre-training: This stage teaches the LLM the basics of language by training it on massive corpora to predict the next token. This way it absorbs grammar, world facts, etc. But it's not good at conversation, because when prompted it just continues the text. Check this 👇" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1947184672234221726) 2025-07-21 06:39:37 UTC 41.6K followers, 21.9K engagements

"Naive RAG vs. Agentic RAG clearly explained (with visuals):" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1943196006600053238) 2025-07-10 06:30:05 UTC 42.2K followers, 290.5K engagements

"2 Router pattern: - A human defines the paths/functions that exist in the flow. - The LLM makes basic decisions on which function or path it can take. Check this visual 👇" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1948994289733959797) 2025-07-26 06:30:23 UTC 42.2K followers, 4618 engagements

"4 stages of training LLMs from scratch clearly explained (with visuals):" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1947184607277019582) 2025-07-21 06:39:21 UTC 42.2K followers, 735.5K engagements

"0 Randomly initialized LLM: At this point the model knows nothing. You ask it "What is an LLM?" and get gibberish like "try peter hand and hello 448Sn". It hasn't seen any data yet and possesses just random weights. Check this 👇" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1947184651602456910) 2025-07-21 06:39:32 UTC 42.1K followers, 27.9K engagements

"3 vLLM: vLLM is a fast and easy-to-use library for LLM inference and serving. It provides state-of-the-art serving throughput. With a few lines of code you can locally run LLMs as an OpenAI-compatible server.
Check this out 👇" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1927251156239016373) 2025-05-27 06:30:56 UTC 42.1K followers, 6192 engagements

"That said, KV cache also takes a lot of memory. Llama3-70B has: - total layers = XX - hidden size = 8k - max output size = 4k Here: - Every token takes up XXX MB in KV cache. - 4k tokens will take up XXXX GB. More users, more memory. I'll cover KV optimization soon" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1949356861809213522) 2025-07-27 06:31:07 UTC 42.2K followers, 8108 engagements

"Let's build an MCP-powered financial analyst (100% local):" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1945008827071676738) 2025-07-15 06:33:35 UTC 42.1K followers, 330.8K engagements

"That's a wrap! If you found it insightful, reshare it with your network. Find me @_avichawla. Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1948272325373436117) 2025-07-24 06:41:34 UTC 42.2K followers, 19.4K engagements

"I have tested 100+ MCP servers in the last X months. Here are X must-use MCP servers for all developers (open-source):" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1926892606417444988) 2025-05-26 06:46:11 UTC 42.1K followers, 733.5K engagements

"KV caching in LLMs clearly explained (with visuals):" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1949356585677210078) 2025-07-27 06:30:01 UTC 42.2K followers, 389.9K engagements

"Building front-end Agentic apps just got 10x easier (open-source). If you're building apps where Agents are part of the interface, not just running in the background, the AG-UI protocol has become the standard. For context: - MCP connects agents to tools - A2A connects agents to other agents - AG-UI connects agents to users It defines a common interface between Agents and the UI layer.
AG-UI itself is Agent-framework-agnostic, and it lets you: - stream token-level updates - show tool progress in real time - share mutable state - and pause for human input The visual below summarizes the latest" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1947907050950037579) 2025-07-23 06:30:05 UTC 42.2K followers, 32.9K engagements

"KV caching is a technique used to speed up LLM inference. Before understanding the internal details, look at the inference speed difference in the video: - with KV caching: X seconds - without KV caching: XX seconds (5x slower) Let's dive in" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1949356658658005482) 2025-07-27 06:30:19 UTC 42.2K followers, 19.3K engagements

"After MCP, A2A & AG-UI, there's another Agent protocol. It's fully open-source and launched by IBM Research. Here's a complete breakdown (with code):" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1946095899261972952) 2025-07-18 06:33:13 UTC 42.1K followers, 249.3K engagements

"MCP & A2A (Agent2Agent) protocol clearly explained (with visuals):" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1939210472949195091) 2025-06-29 06:33:00 UTC 42.2K followers, 514.6K engagements

"@akshay_pachaar Good breakdown, Akshay. I liked Karpathy's point that "CE is the delicate art of filling the context window with just the right information"" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1949094470928015559) 2025-07-26 13:08:28 UTC 42.1K followers, 1298 engagements

"4 LlamaCPP: LlamaCPP enables LLM inference with minimal setup and state-of-the-art performance. Here's DeepSeek-R1 running on a Mac Studio 👇" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1927251175088251357) 2025-05-27 06:31:01 UTC 42.1K followers, 6796 engagements

"Today we are covering the X stages of building LLMs from scratch to make them applicable for real-world use cases.
We'll cover: - Pre-training - Instruction fine-tuning - Preference fine-tuning - Reasoning fine-tuning The visual summarizes these techniques. Let's dive in" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1947184632061173858) 2025-07-21 06:39:27 UTC 42.2K followers, 35.4K engagements

"uv in Python clearly explained (with code):" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1940659274981822722) 2025-07-03 06:30:01 UTC 42.2K followers, 513.8K engagements

"We'll create a research summary generator where: - Agent X drafts a general topic summary (built using CrewAI) - Agent X fact-checks & enhances it using web search (built using Smolagents). Start by installing some dependencies and a local LLM using Ollama. Check this 👇" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1946095957772546235) 2025-07-18 06:33:27 UTC 41.8K followers, 4729 engagements

"I have been training neural networks for X years now. Here are XX ways I actively use to optimize model training:" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1946820714423828701) 2025-07-20 06:33:23 UTC 42.2K followers, 356.8K engagements

"Let's compare Qwen X Coder & Sonnet X for code generation:" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1948272019159843025) 2025-07-24 06:40:21 UTC 42.2K followers, 573.1K engagements

"Here's how it works: - Build the Agents and host them on ACP servers. - The ACP server receives requests from the ACP Client and forwards them to the Agent. - The ACP Client itself can be an Agent that intelligently routes requests to the Agents (like the MCP Client does). Check this 👇" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1946095940794011753) 2025-07-18 06:33:23 UTC 41.8K followers, 5842 engagements

"To understand KV caching, we must know how LLMs output tokens. - The Transformer produces hidden states for all tokens. - Hidden states are projected to the vocab space. - Logits of the last token are used to generate the next token.
- Repeat for subsequent tokens. Check this 👇" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1949356680871067925) 2025-07-27 06:30:24 UTC 42.2K followers, 16.9K engagements

"After MCP, A2A & AG-UI, there's another Agent protocol (open-source). ACP (Agent Communication Protocol) is a standardized RESTful interface for Agents to discover and coordinate with other Agents, regardless of their framework (CrewAI, LangChain, etc.). Here's how it works: - Build your Agents and host them on ACP servers. - The ACP server will receive requests from the ACP Client and forward them to the Agent. - The ACP Client itself can be an Agent that intelligently routes requests to the Agents (just like the MCP Client does to MCP tools). So essentially, just like A2A, it lets Agents communicate with" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1940296901112529028) 2025-07-02 06:30:04 UTC 42.1K followers, 119.4K engagements

"5 Autonomous pattern: The most advanced pattern, wherein the LLM generates and executes new code independently, effectively acting as an independent AI developer. Here's a visual to understand this 👇" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1948994348651348086) 2025-07-26 06:30:37 UTC 42.2K followers, 2622 engagements

"4 stages of LLM training from scratch: - Pre-training - Instruction fine-tuning - Preference fine-tuning - Reasoning fine-tuning Read the explainer thread below to learn more 👇" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1947398757542248869) 2025-07-21 20:50:19 UTC 42.1K followers, 23K engagements

"Query 2: Build an MCP server that creates a new Notion page when someone drops a file into a specific Google Drive folder. Sonnet X vs.
Qwen X Coder: - Correctness: XXXX vs XXXX - Readability: XXXX vs XXXX - Best practices: XXXX vs XXXX Qwen3 Coder wins again. Check this 👇" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1948272294025125954) 2025-07-24 06:41:26 UTC 42.2K followers, 15K engagements

"Build production-grade LLM web apps in minutes (open-source). While data scientists and machine learning engineers are fond of using Jupyter to explore data & build models, an interactive app is better for those who don't care about the code and are interested in results. Taipy is an open-source Python AI & data web application builder, so there's no need to learn JavaScript, CSS, or HTML. You can think of Taipy as a more robust version of Streamlit, capable of building: - prototypes (like Streamlit) - robust, production-ready data apps. The latency difference in practical apps is quite" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1949719038298636744) 2025-07-28 06:30:17 UTC 42.2K followers, 13.5K engagements

"The only MCP server you'll ever need. MindsDB lets you query data from 200+ sources like Slack, Gmail, social platforms, and more, in both SQL and natural language. A federated query engine that comes with a built-in MCP server. XXX% open-source with 33k+ stars" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1944283926622875816) 2025-07-13 06:33:05 UTC 42.2K followers, 210.2K engagements

"Let's build an MCP server (100% locally):" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1936673005826195841) 2025-06-22 06:30:00 UTC 42.1K followers, 672.4K engagements

"3 Tool calling: - A human defines a set of tools the LLM can access to complete a task. - The LLM decides when to use them, and also the arguments for execution.
Check this visual 👇" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1948994309480742922) 2025-07-26 06:30:28 UTC 42.2K followers, 3529 engagements

"4 Multi-agent pattern: A manager agent coordinates multiple sub-agents and decides the next steps iteratively. - A human lays out the hierarchy between agents, their roles, tools, etc. - The LLM controls execution flow, deciding what to do next. See this visual 👇" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1948994329193922807) 2025-07-26 06:30:33 UTC 42.2K followers, 3029 engagements

"Thus, to generate a new token, we only need the hidden state of the most recent token. None of the other hidden states are required. Next, let's see how the last hidden state is computed within the transformer layer from the attention mechanism" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1949356701410640360) 2025-07-27 06:30:29 UTC 42.2K followers, 14.2K engagements

"The ultimate MCP illustrated guidebook (free). 75+ pages that cover: - The MCP fundamentals (explained visually). - XX hands-on projects for AI engineers (covered with code). XXX% hands-on" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1948631882335502368) 2025-07-25 06:30:19 UTC 42.2K followers, 26K engagements

"Finally, a browser automation framework for Agents that actually works in production (open-source). Typical browser automation tools like Selenium or Playwright require you to hard-code automation. These are brittle, since one change in the website can disrupt the full workflow. On the other hand, high-level Agents like OpenAI Operator can be unpredictable in production. Stagehand is an open-source framework that bridges the gap between: - brittle traditional automation like Playwright, Selenium, etc., and - unpredictable full-agent solutions like OpenAI Operator.
Key features: - Use AI when you" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1943920829785682182) 2025-07-12 06:30:16 UTC 42.1K followers, 102.1K engagements

"Fine-tune 100+ LLMs directly from a UI. LLaMA-Factory lets you train and fine-tune open-source LLMs and VLMs without writing any code. Supports 100+ models, multimodal fine-tuning, PPO, DPO, experiment tracking, and much more. XXX% open-source with 50k stars" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1926164620324069843) 2025-05-24 06:33:26 UTC 42.1K followers, 563.1K engagements

"Finally, a framework to connect any LLM to any MCP server (open-source). mcp-use lets you connect any LLM to any MCP server & build custom MCP Agents without using closed-source apps like Cursor/Claude. Compatible with Ollama, LangChain, etc. Build XXX% local MCP clients" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1947544742889460034) 2025-07-22 06:30:24 UTC 42.2K followers, 122.1K engagements

"Agentic AI systems don't just generate text; they can make decisions, call functions, and even run autonomous workflows. The visual explains X levels of AI agency, starting from simple responders to fully autonomous agents. Let's dive in to learn more" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1948994248260673575) 2025-07-26 06:30:14 UTC 42.2K followers, 7340 engagements

"Those were the X stages of training an LLM from scratch. - Start with a randomly initialized model. - Pre-train it on large-scale corpora. - Use instruction fine-tuning to make it follow commands. - Use preference & reasoning fine-tuning to sharpen responses.
Check this 👇" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1947184824760029316) 2025-07-21 06:40:13 UTC 41.6K followers, 7460 engagements

"5 levels of Agentic AI systems clearly explained (with visuals):" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1948994220079124786) 2025-07-26 06:30:07 UTC 42.2K followers, 145.5K engagements

"1 Basic responder: - A human guides the entire flow. - The LLM is just a generic responder that receives an input and produces an output. It has little control over the program flow. See this visual 👇" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1948994269697716652) 2025-07-26 06:30:19 UTC 42.2K followers, 5946 engagements

"10 GitHub repos that will set you up for a career in AI engineering (100% free):" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1938127223594356841) 2025-06-26 06:48:33 UTC 42.1K followers, 371.4K engagements

"How to sync GPUs in multi-GPU training clearly explained (with visuals):" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1943558379760423362) 2025-07-11 06:30:01 UTC 42.2K followers, 311.8K engagements

"That's a wrap! If you found it insightful, reshare it with your network. Find me @_avichawla. Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1949356873834217549) 2025-07-27 06:31:10 UTC 42.2K followers, 8109 engagements

"10 MCP AI Agents and RAG projects for AI Engineers (with code):" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1911306413932163338) 2025-04-13 06:32:14 UTC 42.1K followers, 657.2K engagements

"How LLMs work clearly explained (with visuals):" [@_avichawla](/creator/x/_avichawla) on [X](/post/tweet/1942472125484523605) 2025-07-08 06:33:38 UTC 42.2K followers, 733.9K engagements
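The KV-caching walkthrough quoted above (hidden states for all tokens, logits of the last token pick the next token) can be sketched as a minimal greedy-decoding step in pure Python. This is an illustrative toy, not a real transformer: the nested lists stand in for a logits tensor of shape (sequence length × vocabulary size).

```python
def next_token(logits_per_position, vocab):
    """Pick the next token greedily from the logits of the LAST position only."""
    last = logits_per_position[-1]
    # Argmax over the vocabulary: index of the highest logit.
    best = max(range(len(last)), key=lambda i: last[i])
    return vocab[best]

vocab = ["the", "cat", "sat"]
logits = [
    [0.1, 2.0, 0.3],  # position 0: unused when generating the next token
    [1.5, 0.2, 3.1],  # last position: the only row that matters
]
print(next_token(logits, vocab))  # sat
```

Note that the earlier rows are never read, which is exactly the observation the posts use to motivate caching.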
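Why caching speeds decoding up, per the posts above, comes down to how many token positions the model must process per step. A toy cost model (counting positions, not a real attention implementation) makes the asymmetry concrete:

```python
def decode_cost(prompt_len, new_tokens, use_cache):
    """Count token positions processed while generating `new_tokens` tokens."""
    cost = 0
    for step in range(new_tokens):
        seq_len = prompt_len + step + 1
        # Without a KV cache, every step re-encodes the whole sequence so far;
        # with one, only the single newest token is processed.
        cost += 1 if use_cache else seq_len
    return cost

print(decode_cost(100, 50, use_cache=False))  # 6275
print(decode_cost(100, 50, use_cache=True))   # 50
```

The gap grows with sequence length, which matches the "5x slower without caching" demo in the quoted post (the exact timings there are scrambled in guest mode).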
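The KV-cache memory post above can be reproduced as a back-of-the-envelope calculation. The figures in the post are scrambled in guest mode, so the numbers below are assumptions: standard Llama3-70B dimensions (80 layers, hidden size 8192), fp16 values (2 bytes), and classic multi-head attention. Llama3 actually uses grouped-query attention, which shrinks the cache by the ratio of query heads to KV heads, so treat this as an upper-bound sketch.

```python
def kv_cache_bytes_per_token(n_layers, hidden_size, bytes_per_value=2):
    # Per token, each layer stores one K and one V vector of length hidden_size.
    return 2 * n_layers * hidden_size * bytes_per_value

per_token = kv_cache_bytes_per_token(n_layers=80, hidden_size=8192)
print(per_token / 2**20)           # MiB per token -> 2.5
print(per_token * 4096 / 2**30)    # GiB for a 4k-token context -> 10.0
```

This is why "more users, more memory": the cache scales linearly with both context length and concurrent sequences.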
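The "tool calling" agency level described in the posts above (a human defines the tools; the LLM picks one and its arguments; the program executes it) reduces to a small dispatch loop. Everything here is a hypothetical sketch: `fake_model_decision` stands in for a real LLM's structured tool-call output, and the tool names are invented for illustration.

```python
# Human-defined tool registry: name -> callable.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

def add(a: int, b: int) -> int:
    return a + b

TOOLS = {"get_weather": get_weather, "add": add}

def fake_model_decision(prompt: str) -> dict:
    # Stand-in for the LLM: a real model would emit this structure
    # (tool name + arguments) as a tool-call message.
    if "weather" in prompt:
        return {"tool": "get_weather", "args": {"city": "Paris"}}
    return {"tool": "add", "args": {"a": 2, "b": 3}}

def run_agent(prompt: str):
    call = fake_model_decision(prompt)
    # The program, not the model, performs the actual execution.
    return TOOLS[call["tool"]](**call["args"])

print(run_agent("What's the weather?"))  # Sunny in Paris
print(run_agent("Add two numbers"))      # 5
```

The same skeleton underlies the router and multi-agent levels: only who decides the next step (human-defined paths vs. the model) changes.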