edgar @edgarpavlovsky on x 27.6K followers
Created: 2025-06-23 16:59:10 UTC
the chat interface is dead
(and why $dark lives in the future)
we're designing @darkresearchai's app experience & talking to users about how they research in crypto
everybody's thinking about faster horses. we need the Ford Model T.
chat will continue to be part of communicating with AI -- there's no good reason for it to go away completely -- but chat-only interfaces are rudimentary compared to what they can be.
take @karpathy's talk at YC last week: he uses @cursor_ai as a great example of the next evolution in interfaces. cursor has absolutely exploded in the developer world for a few key reasons (and all of these are built into @darkresearchai's scout ☺️):
1) Apps shouldn't just call LLMs
Calling an LLM (like you do in ChatGPT) is just one part of the experience. When you're researching, there's a ton of information you're organizing. You might want to keep running notes organized by topic, or you might decide to deep dive into different parts of your research at different times.
This applies to all deep research experiences -- reports aren't done once they're generated; that's just the first version. Research apps should account for this. I'm increasingly convinced this is the future of what all notetaking apps will look like.
Technically speaking: apps should understand the state of the entire user experience and intelligently integrate the right parts of it into context windows for LLM calls.
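To make that concrete, here's a quick sketch of what "integrate the right parts of app state into the context window" could look like. All of the names here (ResearchNote, AppState, buildContext, the character budget) are illustrative placeholders, not Scout's actual implementation:

```typescript
// Hypothetical sketch: assembling an LLM context window from the app's state.
interface ResearchNote {
  topic: string;
  content: string;
  updatedAt: number; // unix ms
}

interface AppState {
  activeTopic: string;
  notes: ResearchNote[];
  draftReport?: string;
}

// Pick only the parts of the user's workspace relevant to the current topic,
// most recent first, and stop before exceeding a rough size budget.
function buildContext(state: AppState, budgetChars = 8000): string {
  const relevant = state.notes
    .filter((n) => n.topic === state.activeTopic)
    .sort((a, b) => b.updatedAt - a.updatedAt);

  const parts: string[] = [];
  let used = 0;
  for (const note of relevant) {
    if (used + note.content.length > budgetChars) break;
    parts.push(`## ${note.topic}\n${note.content}`);
    used += note.content.length;
  }
  if (state.draftReport) parts.push(`## Current draft\n${state.draftReport}`);
  return parts.join("\n\n");
}
```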
2) Apps should handle model & tool orchestration
Different LLMs, embeddings, tool calls, memory -- all of this should be handled for the user behind the scenes. This is where multi-agent systems will thrive, and why we migrated Scout to a multi-agent system. Multi-agent systems allow orchestration and specialization roles to be split across different agents within the same process, creating an architecture that (a) can focus better on task execution and (b) is more extensible (see the sketch below).
Ultimately: do more, do it better, and don't ask the user to worry about the technical implementation details.
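Here's a rough sketch of that orchestration/specialization split. The specialist names and the callLLM helper are placeholders I'm using for illustration, not Scout's real agents or APIs:

```typescript
// Hypothetical sketch of an orchestrator delegating to specialist agents.
type Specialist = (task: string) => Promise<string>;

// Stand-in for any model call; a real system would hit an LLM API here.
async function callLLM(prompt: string): Promise<string> {
  return `LLM response to: ${prompt}`;
}

// Specialists each own one narrow job.
const specialists: Record<string, Specialist> = {
  dataEnrichment: (task) => callLLM(`Enrich with market data: ${task}`),
  summarization: (task) => callLLM(`Summarize for a research note: ${task}`),
  reportWriting: (task) => callLLM(`Draft a full report section: ${task}`),
};

// The orchestrator decides which specialists run and in what order,
// so the user never has to think about models, tools, or memory.
async function orchestrate(userRequest: string): Promise<string> {
  const plan = await callLLM(
    `Which specialists (dataEnrichment, summarization, reportWriting) ` +
      `should handle: ${userRequest}? Reply as a comma-separated list.`
  );
  const steps = plan
    .split(",")
    .map((s) => s.trim())
    .filter((s) => s in specialists);

  let working = userRequest;
  for (const step of steps) {
    working = await specialists[step](working);
  }
  return working;
}
```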
3) Application-specific GUIs are better
Chat is a fine interface, but chat is limited -- you never did your research in a chat window until chat became the only way to work with an LLM on research. You'll still talk to an LLM through chat, but your research is really a note-taking exercise -- we're designing around this at @darkresearchai.
4) The most important one: The Autonomy Slider
This is probably Cursor's most powerful feature: it lets users seamlessly choose between working on code manually and working on it with AI, with multiple levels of AI-powered coding along that spectrum.
Research is going to look the same way, with some different tweaks. You might:
(a) Want to write notes manually (the fastest way to jot something down)
(b) Leverage data enrichment to augment your notes (what if you just want to expand on a specific block of bullet points you jotted down earlier?)
(c) Have AI write entire reports for you
You need this flexibility, all in one place, without thinking about it. This is the experience Cursor gives developers, and it's the experience Scout will give researchers.
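A quick sketch of how an "autonomy slider" could map those three modes onto one input path. The enum and handler names are made up for illustration; they're not Scout's code:

```typescript
// Hypothetical autonomy slider: three levels of AI involvement for one input.
enum AutonomyLevel {
  Manual, // (a) user writes notes by hand
  Enrich, // (b) AI augments a selected block of notes
  FullReport, // (c) AI drafts an entire report
}

// Placeholder for a real model call.
async function generateWithAI(prompt: string): Promise<string> {
  return `AI draft for: ${prompt}`;
}

async function handleInput(level: AutonomyLevel, text: string): Promise<string> {
  switch (level) {
    case AutonomyLevel.Manual:
      return text; // store exactly what the user typed
    case AutonomyLevel.Enrich:
      return generateWithAI(`Expand these bullet points with supporting data:\n${text}`);
    case AutonomyLevel.FullReport:
      return generateWithAI(`Write a full research report on:\n${text}`);
    default:
      return text;
  }
}
```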
Scout's magic has really been coming together this last week as we've added a frontend to its experience and started to refine the visuals.
I'm super excited for you to start playing with it. Let me know what you want to see in the UX in the comments!
PS: The Cursor screenshot below is actual Scout backend code 👀 see any easter eggs that catch your eye?
Post Link: https://x.com/edgarpavlovsky/status/1937193727954723235