
![ycombinator Avatar](https://lunarcrush.com/gi/w:24/cr:twitter::113130846.png) Y Combinator [@ycombinator](/creator/twitter/ycombinator) on X · 1.5M followers
Created: 2025-05-30 14:00:42 UTC

At first, prompting seemed to be a temporary workaround for getting the most out of large language models. But over time, it's become critical to the way we interact with AI.

On the @LightconePod, Garry, Harj, Diana, and Jared break down what they've learned from working with hundreds of founders building with LLMs: why prompting still matters, where it breaks down, and how teams are making it more reliable in production.

They share real examples of prompts that failed, how companies are testing for quality, and what the best teams are doing to make LLM outputs useful and predictable.

0:58 - Parahelp’s prompt example
4:59 - Different types of prompts
6:51 - Metaprompting 
7:58 - Using examples
12:10 - Some tricks for longer prompts
14:18 - Findings on evals
17:25 - Every founder has become a forward-deployed engineer (FDE)
23:18 - Vertical AI agents are closing big deals with the FDE model 
26:13 - The personalities of the different LLMs
27:26 - Lessons from rubrics
29:47 - Kaizen and the art of communication

![](https://pbs.twimg.com/amplify_video_thumb/1928446683228364800/img/osLAm2YmtefMxUyY.jpg)


![Engagements Line Chart](https://lunarcrush.com/gi/w:600/p:tweet::1928451506904400236/c:line.svg)

**Related Topics**
[coins ai](/topic/coins-ai)
[y combinator](/topic/y-combinator)

[Post Link](https://x.com/ycombinator/status/1928451506904400236)
