AI Engineer (@aiDotEngineer) on X, 27.9K followers
Created: 2025-07-03 20:53:08 UTC
Want to learn more about Context Engineering? Start here.
🆕 12-factor Agents: Patterns of reliable LLM applications
The next featured talk on our Agent Reliability track is the fan-favorite manifesto by @dexhorthy, who, among many other things, coined the term Context Engineering!
Even if LLMs continue to get exponentially more powerful, there will be core engineering techniques that make LLM-powered software more reliable, more scalable, and easier to maintain.
Factor 1: Natural Language to Tool Calls
Factor 2: Own your prompts
Factor 3: Own your context window
Factor 4: Tools are just structured outputs
Factor 5: Unify execution state and business state
Factor 6: Launch/Pause/Resume with simple APIs
Factor 7: Contact humans with tool calls
Factor 8: Own your control flow
Factor 9: Compact Errors into Context Window
Factor 10: Small, Focused Agents
Factor 11: Trigger from anywhere, meet users where they are
Factor 12: Make your agent a stateless reducer
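To make a few of these factors concrete, here is a minimal Python sketch (my illustration, not code from the talk or from 12-factor Agents) of Factor 12's stateless reducer, with tool calls modeled as plain structured outputs (Factor 4) and errors compacted into the context window (Factor 9). All names such as ToolCall, AgentState, and agent_step are hypothetical:

```python
from dataclasses import dataclass
from typing import Any

# All names below (ToolCall, AgentState, agent_step, the event shapes) are
# hypothetical and chosen only to illustrate the factors; they do not come
# from the talk or from any library.

@dataclass(frozen=True)
class ToolCall:
    """Factor 4: a tool call is just a structured output from the model."""
    name: str
    arguments: dict[str, Any]

@dataclass(frozen=True)
class AgentState:
    """Factors 3 and 5: context window and execution state owned in one place."""
    messages: tuple = ()
    done: bool = False

def agent_step(state: AgentState, event: dict[str, Any]) -> AgentState:
    """Factor 12: the agent as a stateless reducer, (state, event) -> new state."""
    if event.get("type") == "error":
        # Factor 9: compact the error into the context window instead of
        # appending a full traceback or crashing the loop.
        compacted = {"type": "error", "summary": str(event.get("detail", ""))[:200]}
        return AgentState(messages=state.messages + (compacted,), done=False)
    return AgentState(
        messages=state.messages + (event,),
        done=event.get("type") == "final_answer",
    )

if __name__ == "__main__":
    # Replaying the same events always yields the same state (no hidden globals),
    # so Launch/Pause/Resume (Factor 6) is just persisting and reloading `state`.
    state = AgentState()
    for event in (
        {"type": "user_message", "text": "Where is my order?"},
        {"type": "tool_call", "call": ToolCall("lookup_order", {"id": "123"})},
        {"type": "tool_result", "data": {"status": "shipped"}},
        {"type": "final_answer", "text": "Your order has shipped."},
    ):
        state = agent_step(state, event)
    print(len(state.messages), state.done)  # 4 True
```

Because the reducer never touches hidden state, owning the control flow (Factor 8) and pausing or resuming the agent reduces to serializing and reloading the state object.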
XXXXXXX engagements
Related Topics: llm, context engineering, coins ai
Post Link: https://x.com/aiDotEngineer/status/1940876485939564586