K Srinivas Rao (@sriniously) on X · 2936 followers
Created: 2025-07-22 03:25:20 UTC
Been thinking about this Cluely thing for a while now and honestly it's pretty fascinating, not because of what it does but because of what it reveals about us. Here's this XX-year-old Roy Lee who got kicked out of Columbia for building a tool to cheat on coding interviews, and instead of crawling into a hole somewhere, he doubles down and builds something that does basically the same thing for every professional interaction you can imagine. The sheer audacity is almost admirable, in a twisted way.
We're all pretending this is some unprecedented moral crisis when we've been augmenting human capabilities forever and calling it progress. We use GPS instead of memorizing routes, autocorrect instead of perfect spelling, calculators instead of mental math, Google to look up facts mid-conversation. But the moment an AI starts whispering sales-objection responses in your ear during a Zoom call, suddenly we've crossed some invisible moral line that apparently exists only in our collective imagination.
Cluely does real-time audio processing and screen-content analysis, then feeds that through a large language model to generate contextually relevant suggestions. The whole "undetectable" thing everyone's freaking out about is just a translucent overlay that doesn't show up in screen shares. The fact that someone already built a detection tool called Truely proves this isn't impenetrable stealth technology; it's just software most people don't know how to detect yet.
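The loop described above (capture audio and screen, feed both into a model, surface a suggestion) can be sketched roughly like this. To be clear, this is a hypothetical illustration, not Cluely's actual code: `transcribe`, `read_screen`, and the rule-based `suggest` below are stand-ins for real speech-to-text, OCR or screen-reading, and hosted LLM calls.

```python
# Hypothetical sketch of a Cluely-style assist loop. Nothing here is the
# real product's code; each function is a stand-in for a real service.

def transcribe(audio_chunk: bytes) -> str:
    """Stand-in for a streaming speech-to-text service."""
    return audio_chunk.decode("utf-8", errors="ignore")

def read_screen(frame: bytes) -> str:
    """Stand-in for OCR / accessibility-API screen scraping."""
    return frame.decode("utf-8", errors="ignore")

def suggest(transcript: str, screen_text: str) -> str:
    """Stand-in for an LLM call; a real system would send a prompt
    like the one assembled here to a hosted model."""
    prompt = (
        "You are a silent meeting assistant.\n"
        f"Conversation so far: {transcript}\n"
        f"On screen: {screen_text}\n"
        "Reply with one short, speakable suggestion."
    )
    # Trivial keyword rule standing in for actual model output:
    if "price" in transcript.lower():
        return "Acknowledge the budget concern, then pivot to ROI."
    return "Ask a clarifying question."

def assist_tick(audio_chunk: bytes, frame: bytes) -> str:
    """One iteration of the capture -> context -> suggestion loop."""
    return suggest(transcribe(audio_chunk), read_screen(frame))
```

The point of the sketch is how little machinery is involved: the "magic" is ordinary capture APIs plus a prompt, which is also why it is detectable in principle.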
The real question isn't whether Cluely is cheating, it's whether we're ready to admit that most of our "authentic" professional interactions are already performative theater anyway. When you rehearse answers for an interview, is that authentic? When you use a sales script, is that genuine? When you google information during a call to sound more knowledgeable, is that honest? We've been optimizing human performance in professional settings for decades, we just drew arbitrary lines about which tools are acceptable and which aren't.
Lee is just making the performance more explicit, and somehow that makes everyone uncomfortable. Maybe because it forces us to confront how much of our professional competence is actually just memorized responses and pattern matching that a sufficiently advanced AI could replicate. He literally said "we want to cheat on everything" in the company's manifesto, and somehow people are surprised that a tool designed for cheating is being used for cheating.
Seven million in annual recurring revenue with XXXXXX users suggests that when people can get past the moral posturing, they actually find real value in having an AI assistant during high-stakes conversations. These aren't evil people trying to deceive others, they're mostly just professionals who want to perform better at their jobs and don't particularly care if that performance is augmented by software.
If your sales process can be significantly improved by an AI feeding you objection responses, what does that say about the intellectual rigor of your sales process? If your interview performance can be enhanced by having relevant information whispered in your ear, what does that say about what interviews are actually measuring? Maybe we're not upset about the tool itself but about what it reveals about how hollow a lot of our professional rituals have become.
Individual users paying XX bucks a month to cheat on job interviews is one thing, but entire sales teams using AI assistance to close deals is something completely different. That's not individual deception, that's organizational strategy. When companies start officially adopting tools like this, the whole ethical framework changes because now it's not about individuals choosing to cheat, it's about businesses choosing to optimize their performance using available technology.
Why do we think unaugmented human performance is somehow more valuable or authentic than augmented performance? Musicians use instruments to make music instead of just singing, athletes use equipment to improve their performance, writers use tools to check grammar and spelling. Why should professional conversations be any different?
The detection tools that are already emerging suggest that we're headed for a world where you'll need to prove you're not using AI assistance, which is a weird inversion of how we usually think about technology adoption. Instead of people gradually choosing to use new tools, we might end up in situations where you have to actively opt out of AI assistance and prove that you're flying solo.
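One way such detection tools can plausibly work, sketched here under the assumption that an overlay like Cluely's excludes itself from screen capture while remaining visible on the physical display: compare the frame a sharing app captures against the frame the local display actually shows, and flag where they disagree. This is a toy illustration, not Truely's actual method; frames are represented as 2D lists of pixel values.

```python
# Hypothetical detection sketch (not any real tool's method). If an
# overlay hides itself from screen capture but is still drawn on the
# local display, the captured frame and the displayed frame will
# disagree exactly where the overlay sits.

def overlay_regions(displayed, captured):
    """Return (row, col) coordinates where the display shows something
    the screen-share capture does not."""
    return [
        (y, x)
        for y, row in enumerate(displayed)
        for x, pixel in enumerate(row)
        if captured[y][x] != pixel
    ]

def overlay_suspected(displayed, captured, threshold=1):
    """Flag a capture-excluded overlay when enough pixels differ."""
    return len(overlay_regions(displayed, captured)) >= threshold
```

In practice a detector would also have to tolerate cursors, animations, and compression noise, which is why "prove you're flying solo" is likely to mean attestation rather than pixel diffing, but the asymmetry is the same: hiding from capture leaves evidence.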
Good salespeople are still going to outsell bad salespeople, even if they're both using AI assistance. Smart candidates are still going to get better job offers than less qualified candidates, even if they're both getting coaching during interviews. The tool might change the tactics but it doesn't really change the underlying game.
What it does change is our collective comfort level with acknowledging that professional success has always been partly about performance and partly about substance, and the line between those two things has never been as clear as we like to pretend. Cluely is just making that reality more visible and somehow that makes everyone uncomfortable enough to call it cheating.
Not defending it, not condemning it, just observing that we might be upset about the wrong thing entirely. The real question isn't whether tools like this should exist, because they obviously do exist and more are coming whether we like it or not. The real question is how we're going to adapt our expectations and systems to account for a world where AI assistance is ubiquitous and largely invisible. Because that world is coming faster than most people realize, and Cluely is just the opening act.
XXXXX engagements
[Post Link](https://x.com/sriniously/status/1947498166351499472)