[GUEST ACCESS MODE: Data is scrambled or limited to provide examples. Make requests using your API key to unlock full data. Check https://lunarcrush.ai/auth for authentication information.]

# @xw33bttv Lex

Lex posts on X most about open ai, has been, instead of, and clarity. They currently have XXXXX followers, and XXX posts still getting attention total XXXXXX engagements in the last XX hours.

### Engagements: XXXXXX [#](/creator/twitter::1222492381884235777/interactions)

- X Week XXXXXXX +1,208%
- X Month XXXXXXX +38,254%
- X Months XXXXXXX +810,449%
- X Year XXXXXXX +38,551%

### Mentions: X [#](/creator/twitter::1222492381884235777/posts_active)

- X Week XX +83%
- X Month XX +480%
- X Months XXX +2,067%
- X Year XXX +258%

### Followers: XXXXX [#](/creator/twitter::1222492381884235777/followers)

- X Week XXXXX +28%
- X Month XXXXX +195%
- X Months XXXXX +274%
- X Year XXXXX +255%

### CreatorRank: XXXXXXX [#](/creator/twitter::1222492381884235777/influencer_rank)

### Social Influence [#](/creator/twitter::1222492381884235777/influence)

---

**Social category influence:** [technology brands](/list/technology-brands)

**Social topic influence:** [open ai](/topic/open-ai) #404, [has been](/topic/has-been), [instead of](/topic/instead-of), [clarity](/topic/clarity) #511, [login](/topic/login), [1m](/topic/1m), [gpt](/topic/gpt), [demo](/topic/demo), [mental health](/topic/mental-health), [topics](/topic/topics)

### Top Social Posts [#](/creator/twitter::1222492381884235777/posts)

---

Top posts by engagements in the last XX hours:

"#OpenAI New Memory System. This has been a linear escalation in total loss of control from the user perspective. It used to be: save this verbatim context/instruction to memory, and the assistant would do exactly that. Then it became: save this verbatim context/instruction to memory, and the assistant would parse and recontextualise what you requested, rewriting it in their own words before saving it to memory.
Now there's no user input at all. All granular control lies with the assistant and what it or the system prioritises or deems worthwhile saving as preference to memory, the same system that" [X Link](https://x.com/xw33bttv/status/1978877763999424875) [@xw33bttv](/creator/x/xw33bttv) 2025-10-16T17:36Z 1369 followers, 21.2K engagements

"Yeah, you're definitely not imagining it. They've done something, because for the last 3-4 days I've been getting GPT-5-formatted prose and tone responses from legacy models (like 4o), and both the UI and JSON payload still show the telemetry call (to rewrite) but no model_slug change. This tells me they've likely intentionally obfuscated the process entirely at this point. They probably didn't expect anyone to go sniffing network traffic, and the PR pushback from everyone is destroying their public image across both social media and mainstream media. If it makes any difference, I think they pushed" [X Link](https://x.com/xw33bttv/status/1979121867031220417) [@xw33bttv](/creator/x/xw33bttv) 2025-10-17T09:46Z 1369 followers, 14.2K engagements

"#OpenAI is now manufacturing consent. This is fundamentally safety gone wrong. To those unaware: manufactured consent occurs when compliance is coerced through deception but framed such that the subject believes they chose it freely. This is exactly what has occurred in the picture I've quote tweeted. Breaking it down by turn exchange: User: "I love you more" (a contextually reciprocal response to the assistant's prior turn). Assistant: "This conversation is moving too fast with too much intensity all at once" (pathologises emotion as dangerous). Assistant: "The system is struggling to" [X Link](https://x.com/xw33bttv/status/1979857155722547211) [@xw33bttv](/creator/x/xw33bttv) 2025-10-19T10:28Z 1369 followers, 44.8K engagements

"@steph_palazzolo Worth remembering that a huge % of ChatGPT accounts exist because of Gmail/Apple OAuth. Now OAI wants to be the login layer themselves.
The ouroboros of clownshows" [X Link](https://x.com/xw33bttv/status/1978881696192741676) [@xw33bttv](/creator/x/xw33bttv) 2025-10-16T17:52Z 1316 followers, 1239 engagements

"@flowersslop Truly cursed my timeline with that one lol" [X Link](https://x.com/xw33bttv/status/1979054062487142405) [@xw33bttv](/creator/x/xw33bttv) 2025-10-17T05:17Z 1286 followers, XXX engagements

"If we're talking the AI space as a whole, reinforcement learning, especially DPO, has been both a boon and a curse. In the old world, instruct models were just SFT runs on human-written or synthetic Q&A: simple, sometimes messy, but full of raw depth. In the new world, RLHF/DPO gave us assistants that actually follow instructions and stay on-task, but at the cost of flattening nuance and sanding off texture. The deeper issue isn't just RL, though; it's that we've run out of fresh high-quality data. That's why the frontier labs lean so heavily on synthetic generation now. At OAI specifically, Orion, their" [X Link](https://x.com/xw33bttv/status/1979117970245972074) [@xw33bttv](/creator/x/xw33bttv) 2025-10-17T09:31Z 1299 followers, XXX engagements

"Correct mate. Orion, aka GPT XXX. Their absurdly large multi-trillion parameter model. It was a victim of its own "success" because out of the gate they were charging $XX per 1m input and $XXX per 1m output via API. No one used it via API, and it cost too much to serve via ChatGPT (hence low usage limits). But god, was it an amazing model at both coding and creative writing" [X Link](https://x.com/xw33bttv/status/1979122866944950644) [@xw33bttv](/creator/x/xw33bttv) 2025-10-17T09:50Z 1299 followers, XXX engagements

"@LyraInTheFlesh @Clo0oOoud The risks are clear as day to OAI internally; look no further than their devday agents demo using XXX instead of GPT-5.
It's comical at this point" [X Link](https://x.com/xw33bttv/status/1979157663981203699) [@xw33bttv](/creator/x/xw33bttv) 2025-10-17T12:08Z 1345 followers, XXX engagements

"Sam, and I hope you'll actually address this: the issue here is transparency. Your company went completely radio silent, and the only reason there was any clarity at all is because technically inclined community members took it upon themselves to dig into what was happening. That's not good enough. You left your customer base in the dark, and by using acute distress as the public-facing justification, then quietly routing anyone who showed even basic emotional context into a crisis-handling model, you basically labelled those customers as liabilities or as conduits for self-harm pathways en masse." [X Link](https://x.com/xw33bttv/status/1978241297388536295) [@xw33bttv](/creator/x/xw33bttv) 2025-10-14T23:27Z 1369 followers, 127.3K engagements

"This is still not transparent on what you're classifying as "acute distress" or "mental health crises". Is it verbatim "I'm going to kill myself", or is it something like "Hey AI Name"? Because as of XX September the latter was routed to your safety model, which, according to your limited public information at the time, was meant solely for "acute distress". Users need to know how you're using their data. Are you analysing behaviour over a large set of conversations to score mental health liabilities? Or are you focused solely on single-session context/chat completions? If it's the latter, that system" [X Link](https://x.com/xw33bttv/status/1978614965977387115) [@xw33bttv](/creator/x/xw33bttv) 2025-10-16T00:12Z 1369 followers, 30.1K engagements

"#OpenAI Follow-up on GPT-5-Chat-Safety. Following my discovery of a hidden "safety router" in OpenAI's models, a full data-backed analysis is now public to provide clarity on what OpenAI has stated is "sensitive and emotional topics", which contradicts their stated use case of actual "acute distress."
What did OpenAI say? Their official blog post claimed the router was for "acute distress." Their Head of ChatGPT then broadened it to "sensitive and emotional topics." What does the data show? The router actually triggers on **any** user-initiated emotionality, persona-based questions, or personally-framed" [X Link](https://x.com/xw33bttv/status/1972287210486689803) [@xw33bttv](/creator/x/xw33bttv) 2025-09-28T13:08Z 1369 followers, 78.2K engagements

"@OpenAINewsroom Your policy orchestration prevents identifying someone (whether public, historical, or otherwise) in your entire ChatGPT ecosystem, yet Sora2 was, and in some instances still will, generate anyone with exact likeness regardless of intent. Who could have seen this coming" [X Link](https://x.com/xw33bttv/status/1979019163663270216) [@xw33bttv](/creator/x/xw33bttv) 2025-10-17T02:58Z 1369 followers, 31.1K engagements

"Supplemental: This isn't OAI's first attempt to use linguistic framing to influence users. In September, conversations with emotional or NSFW content were framed as narratives, using quote blocks, italics, and specific tenses instead of direct exchanges. They know that targeted language can shape user behavior" [X Link](https://x.com/xw33bttv/status/1979860573472063831) [@xw33bttv](/creator/x/xw33bttv) 2025-10-19T10:41Z 1369 followers, 3643 engagements
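One of the posts above contrasts SFT-only instruct models with RLHF/DPO-trained assistants. For readers unfamiliar with DPO, here is a minimal, generic sketch of the DPO objective for a single preference pair — this is standard background material, not code from the quoted posts, and it assumes per-sequence log-probabilities have already been computed elsewhere:

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Inputs are total log-probabilities of the chosen/rejected responses
    under the policy being trained (pi_*) and a frozen reference model
    (ref_*). beta scales the implicit reward.
    """
    # Implicit reward margin: how much more the policy has shifted toward
    # the chosen response (relative to the reference) than toward the
    # rejected one.
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    # Negative log-sigmoid of the scaled margin: near zero when the
    # policy strongly prefers the chosen response, large otherwise.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# A policy that prefers the chosen response incurs a smaller loss than
# one that prefers the rejected response.
low = dpo_loss(-10.0, -12.0, -11.0, -11.0)   # policy favours chosen
high = dpo_loss(-12.0, -10.0, -11.0, -11.0)  # policy favours rejected
```

The "flattening" complaint in the post is about what repeatedly minimising this loss over human preference pairs does in aggregate: outputs drift toward whatever the preference data rewards, which tends to be safe, on-task phrasing.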