# @brian_jabarian Brian Jabarian

Brian Jabarian posts on X about ai, gain, for all, and at least the most. They currently have XXXXX followers and XX posts still getting attention, totaling X engagements in the last XX hours.

### Engagements: X [#](/creator/twitter::1475717721136451584/interactions)

- X Year: XXXXXXX (+2,242%)

### Mentions: X [#](/creator/twitter::1475717721136451584/posts_active)

### Followers: XXXXX [#](/creator/twitter::1475717721136451584/followers)

- X Year: XXXXX (+46%)

### CreatorRank: undefined [#](/creator/twitter::1475717721136451584/influencer_rank)

### Social Influence

**Social category influence:** [finance](/list/finance)

**Social topic influence:** [ai](/topic/ai), [gain](/topic/gain), [for all](/topic/for-all), [at least](/topic/at-least), [matter](/topic/matter), [inference](/topic/inference)

### Top Social Posts

Top posts by engagements in the last XX hours:
"@Teleperformance 7/ We study three information structures: (a) Firm ignores choice (b) Firm uses choice applicants dont anticipate it (c) Full equilibrium: both sides know choice will be used"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
"@Teleperformance 15/ (ii) Automation Even when AI interviewers match or exceed human performance optimal design = keep both performing the same task but differently. Welfare depends on how human work is redefined and the human-automation system is structured not on substitution alone"
X Link 2025-12-10T03:58Z 8966 followers, XXX engagements
"What happens when firms let applicants choose between human and AI interviewers In equilibrium that choice becomes a signal: high-ability benefit low-ability lose. Under some information structures uniform AI assignment beats technology autonomy. New paper w/ P. Reshidi"
X Link 2025-12-10T03:58Z 8968 followers, 51.7K engagements
"2/ Two questions & findings i) Should firms uniformly assign AI interviewers to candidates or give them the choice of AI vs a human Actually autonomy is not always good for all candidates ii) Does automation of interviews always imply human substitution for that task No"
X Link 2025-12-10T03:58Z 8966 followers, XXX engagements
"3/ Economic Setting Based on large-scale field experiment in partnership with PSG Global Solutions & @Teleperformance 70000 applicants for customer-service jobs are randomly assigned to: Human interviewer AI interviewer Choice: pick human vs AI"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
"@Teleperformance 4/ Structural model Applicants possess multiple skills; human and AI screening technologies differ in the precision with which they measure each skill. Choosing human vs AI reveals private information about expected performance"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
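The post above describes the model only qualitatively. As a rough intuition pump, and not the paper's specification, the sketch below simulates a two-skill population in which a hypothetical human screener measures skill 1 more precisely and an AI screener measures skill 2 more precisely, and each applicant picks the screener expected to score them higher. The skill distribution and noise levels are invented for illustration.

```python
# Toy illustration only -- not the paper's structural model. Skills are iid
# standard normal and the noise levels below are invented for the sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

skills = rng.normal(size=(n, 2))  # latent (skill 1, skill 2)

# Assumed measurement noise: the human screener is precise on skill 1,
# the AI screener is precise on skill 2.
noise_sd = {"human": np.array([0.3, 1.0]), "ai": np.array([1.0, 0.3])}

def expected_score(skills, screener):
    """Expected interview score if the firm precision-weights the two noisy
    measurements; with mean-zero noise this is just a precision-weighted
    average of the true skills."""
    prec = 1.0 / noise_sd[screener] ** 2
    return skills @ (prec / prec.sum())

# Comparative-advantage sorting: each applicant picks the screener under
# which they expect the higher score.
prefers_ai = expected_score(skills, "ai") > expected_score(skills, "human")

print(f"share choosing AI: {prefers_ai.mean():.2f}")
print("mean (skill 2 - skill 1) among AI choosers:",
      round((skills[prefers_ai, 1] - skills[prefers_ai, 0]).mean(), 2))
```

Under these assumptions the choice is informative by construction: AI choosers are, on average, relatively stronger on the skill the AI measures precisely, which is the kind of private information the post refers to.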
"@Teleperformance 5/ Key finding: AI Choice as Signal When firms use this signal surplus shifts to firms and high-ability workers and low-ability applicants lose. right to choose to interact with AI or not can have unequal welfare effects"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
"@Teleperformance 8/ (a) If the firm ignores choice: Allowing choice raises worker welfare But lowers firm surplus Workers sort into preferred tech but the firm leaves information unused"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
"@Teleperformance 9/ (b) If the firm uses choice but applicants dont anticipate it: Firms gain High-ability applicants gain Low-ability applicants lose relative to exogenous assignment"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
"@Teleperformance 11/ Estimation & Counterfactuals Using randomized human vs AI arms we estimate model primitives and simulate alternative screening systems"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
"@Teleperformance 13/ AI policy implications. In our setting at least two dimensions matter: (i) autonomy vs paternalism (ii) automation vs substitution"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
"@Teleperformance 14/ (i) Autonomy Giving candidates a "right to choose can have unequal welfare effects. workers may also need a "right not to be screened on that choice": on paper appealing hard in practice. It may sometimes be preferable to assign AI interviewers uniformly"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
"@Teleperformance 6/ Two mechanisms explain this: (i) Comparative-advantage sorting: applicants choose the screener they expect to favor them. (ii) Inference from choice: firms treat choice itself as an extra signal about type"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
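To make mechanism (ii) concrete, here is a small self-contained follow-up to the earlier toy sketch, again with invented numbers rather than anything estimated in the paper: once applicants sort by comparative advantage, the firm's belief about an applicant's skill mix shifts as soon as it conditions on which screener was chosen, before any interview score is observed.

```python
# Toy illustration of "inference from choice" -- invented numbers, not the paper.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
skills = rng.normal(size=(n, 2))  # latent (skill 1, skill 2)

# Comparative-advantage sorting from the earlier sketch reduces to
# "choose AI iff skill 2 > skill 1" when the noise structure is symmetric.
chose_ai = skills[:, 1] > skills[:, 0]

# The choice alone moves the firm's posterior mean away from the prior of zero.
print("E[(skill 1, skill 2) | chose AI]   :", skills[chose_ai].mean(axis=0).round(2))
print("E[(skill 1, skill 2) | chose human]:", skills[~chose_ai].mean(axis=0).round(2))
```

With iid standard-normal skills the conditional means come out near (-0.56, +0.56) for AI choosers and (+0.56, -0.56) for human choosers, so the choice by itself already carries information about type, which is what the firm can condition on.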
"@Teleperformance 10/ (c) In full equilibrium: Firms weakly gain High-ability applicants benefit Low-ability applicants again lose Whether choice helps or hurts depends on how its informational content is used"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
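Posts 8-10 state the welfare comparisons qualitatively. The crude Monte Carlo below is a toy illustration, not the paper's counterfactuals, and it conflates "which screener produced the scores" with "the informational content of the choice", but it shows the direction of the effect: at a fixed hiring rate, a firm that conditions its inference on the applicant's choice selects a higher-ability pool, and below-median-ability applicants are hired less often.

```python
# Crude Monte Carlo with invented parameters -- only meant to show the direction
# of the "firm uses choice" effect, not to reproduce the paper's counterfactuals.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
skills = rng.normal(size=(n, 2))
ability = skills.mean(axis=1)  # what the firm ultimately cares about

# Comparative-advantage sorting as in the earlier sketches.
chose_ai = skills[:, 1] > skills[:, 0]

# Interview scores from the chosen screener: human precise on skill 1,
# AI precise on skill 2 (made-up noise levels).
noise_sd = np.where(chose_ai[:, None], [1.0, 0.3], [0.3, 1.0])
scores = skills + rng.normal(size=(n, 2)) * noise_sd

def fit_predict(X, y):
    """OLS of y on X with an intercept; in-sample predictions. Using true
    ability as the target stands in for a firm that has already learned the
    score-to-performance mapping from past hires (an assumption of the toy)."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

# (a) Ignore choice: one pooled predictor of ability from the raw scores.
pred_ignore = fit_predict(scores, ability)

# (b) Use choice: fit separately within each choice group, so the choice
# (and the noise structure it implies) enters the firm's inference.
pred_use = np.empty(n)
for grp in (chose_ai, ~chose_ai):
    pred_use[grp] = fit_predict(scores[grp], ability[grp])

def report(pred, label):
    hired = pred >= np.quantile(pred, 0.7)  # hire the top 30% by prediction
    low = ability < np.median(ability)
    print(f"{label}: mean ability of hires = {ability[hired].mean():.3f}, "
          f"low-ability hire rate = {hired[low].mean():.3f}")

report(pred_ignore, "firm ignores choice")
report(pred_use,    "firm uses choice   ")
```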
"@Teleperformance 12/ Second key finding Early results: hybrid human+AI screening in which each component focuses on what it measures best can yield welfare gains relative to human-only or AI-only systems. More on this in the next version (e.g. costs/benefits of each alternative system)"
X Link 2025-12-10T03:58Z 8967 followers, XXX engagements
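A back-of-the-envelope way to see why a hybrid can dominate, using the same made-up noise levels as the earlier sketches rather than anything from the paper: if each component contributes the skill it measures precisely, the error in the estimate of overall ability shrinks relative to either single-screener system.

```python
# Toy arithmetic with invented noise levels: error sd of the estimate of mean
# ability when ability = (skill 1 + skill 2) / 2 and each skill is measured
# with independent noise of the given sd.
import numpy as np

def ability_error_sd(sd_skill1, sd_skill2):
    return 0.5 * np.sqrt(sd_skill1**2 + sd_skill2**2)

print("human-only interview :", round(ability_error_sd(0.3, 1.0), 3))
print("AI-only interview    :", round(ability_error_sd(1.0, 0.3), 3))
print("hybrid (best of each):", round(ability_error_sd(0.3, 0.3), 3))
```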
"@Teleperformance 16/ Paper (preliminary version): Feedback very welcome brian.jabarian@chicagobooth.edu #EconTwitter #EconJobMarket"
X Link 2025-12-10T03:58Z 8967 followers, XXX engagements