@brian_jabarian Brian Jabarian posts on X most about ai, gain, inference, and at least. They currently have XXXXX followers and XX posts still getting attention, totaling X engagements in the last XX hours.
Social category influence: finance XXXX%
Social topic influence: ai 50%, gain 12.5%, inference 6.25%, at least 6.25%, matter XXXX%
Top accounts mentioned or mentioned by: @teleperformance, @chicagoboothedu
Top posts by engagements in the last XX hours
"What happens when firms let applicants choose between human and AI interviewers In equilibrium that choice becomes a signal: high-ability benefit low-ability lose. Under some information structures uniform AI assignment beats technology autonomy. New paper w/ P. Reshidi"
X Link 2025-12-10T03:58Z 8968 followers, 51.7K engagements
"3/ Economic Setting Based on large-scale field experiment in partnership with PSG Global Solutions & @Teleperformance 70000 applicants for customer-service jobs are randomly assigned to: Human interviewer AI interviewer Choice: pick human vs AI"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
"@Teleperformance 4/ Structural model Applicants possess multiple skills; human and AI screening technologies differ in the precision with which they measure each skill. Choosing human vs AI reveals private information about expected performance"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
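A minimal way to write that setup down (my own shorthand; the thread does not give the functional form, so a linear-normal signal structure is assumed here): each applicant has a skill vector \(\theta\), and screening technology \(t \in \{H, A\}\) returns
\[
s_t \;=\; w_t^{\top}\theta + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, \sigma_t^2),
\]
where the human and AI technologies differ in the weights \(w_t\) and the noise \(\sigma_t\). An applicant who knows their own \(\theta\) picks the \(t\) that maximizes their expected hiring outcome, so the choice of \(t\) itself carries information about \(\theta\), which is the private-information leakage the post describes.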
"@Teleperformance 5/ Key finding: AI Choice as Signal When firms use this signal surplus shifts to firms and high-ability workers and low-ability applicants lose. right to choose to interact with AI or not can have unequal welfare effects"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
"@Teleperformance 6/ Two mechanisms explain this: (i) Comparative-advantage sorting: applicants choose the screener they expect to favor them. (ii) Inference from choice: firms treat choice itself as an extra signal about type"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
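A stylized Monte Carlo sketch of those two mechanisms (not the paper's structural model: it collapses the multi-skill story into a single ability index and lets two hypothetical screeners differ only in noise, with the AI screener assumed to be the more precise one):

```python
# Toy illustration of mechanisms (i) and (ii); all parameters are made up.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000
theta = rng.normal(size=n)          # ability, privately known by the applicant
cutoff = 0.5                        # interview score needed to be hired
sigma_ai, sigma_human = 0.5, 1.5    # assumed noise of each screener

# Mechanism (i): sorting. Each applicant picks the screener that maximizes
# their own hiring probability P(theta + noise > cutoff).
p_ai = norm.cdf((theta - cutoff) / sigma_ai)
p_human = norm.cdf((theta - cutoff) / sigma_human)
chose_ai = p_ai > p_human           # above-cutoff types prefer the precise screener

# Mechanism (ii): inference from choice. The choice itself reveals ability,
# before any interview score is observed.
print("share choosing AI         :", chose_ai.mean().round(3))
print("mean ability | chose AI   :", theta[chose_ai].mean().round(3))
print("mean ability | chose human:", theta[~chose_ai].mean().round(3))
```

In this toy setup the applicants who expect to clear the bar select the precise screener, so a firm that conditions on the choice learns something about ability before it sees any score.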
"@Teleperformance 7/ We study three information structures: (a) Firm ignores choice (b) Firm uses choice applicants dont anticipate it (c) Full equilibrium: both sides know choice will be used"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
"@Teleperformance 8/ (a) If the firm ignores choice: Allowing choice raises worker welfare But lowers firm surplus Workers sort into preferred tech but the firm leaves information unused"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
"@Teleperformance 9/ (b) If the firm uses choice but applicants dont anticipate it: Firms gain High-ability applicants gain Low-ability applicants lose relative to exogenous assignment"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
"@Teleperformance 10/ (c) In full equilibrium: Firms weakly gain High-ability applicants benefit Low-ability applicants again lose Whether choice helps or hurts depends on how its informational content is used"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
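Continuing the toy setup from the sketch above, a rough way to see the contrast between structures (a) and (b) (structure (c), where applicants anticipate the firm's inference, needs a fixed-point computation and is omitted; the 0.75 penalty below is an arbitrary illustration, not an estimate):

```python
# Toy comparison of information structures (a) and (b); parameters are made up.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 200_000
theta = rng.normal(size=n)
cutoff, sigma_ai, sigma_human = 0.5, 0.5, 1.5

# Applicants sort as before; the interview score comes from the chosen screener.
chose_ai = norm.cdf((theta - cutoff) / sigma_ai) > norm.cdf((theta - cutoff) / sigma_human)
sigma = np.where(chose_ai, sigma_ai, sigma_human)
score = theta + sigma * rng.normal(size=n)

hired_ignore = score > cutoff                       # (a) firm ignores the choice
# (b) firm also uses the choice: a crude version is a tougher bar for the group
# whose choice signals lower expected ability (those who picked the noisy screener).
hired_use = np.where(chose_ai, score > cutoff, score > cutoff + 0.75)

high, low = theta > cutoff, theta <= cutoff
for name, hired in [("(a) ignore choice", hired_ignore), ("(b) use choice", hired_use)]:
    print(name,
          "| hire rate, high-ability:", hired[high].mean().round(3),
          "| hire rate, low-ability :", hired[low].mean().round(3))
```

In this crude version, using the choice screens out more low-ability applicants while leaving the high-ability group essentially untouched, which is the qualitative direction posts 8-10 describe.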
"@Teleperformance 11/ Estimation & Counterfactuals Using randomized human vs AI arms we estimate model primitives and simulate alternative screening systems"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
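A hedged sketch of the identification idea behind the randomized arms (a toy moment comparison, not the paper's structural estimator; the noise levels and the performance outcome below are invented): within each arm, regressing a later performance measure on the interview score shows how much a noisier screener attenuates the relationship.

```python
# Toy illustration: randomized assignment lets us compare, arm by arm, how well
# each screener's score predicts later performance; noisier screening attenuates
# the slope and lowers R^2, which helps pin down each technology's precision.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
theta = rng.normal(size=n)                       # true ability
performance = theta + 0.5 * rng.normal(size=n)   # on-the-job outcome
arm = rng.integers(0, 2, size=n)                 # randomized: 0 = human arm, 1 = AI arm
noise = np.where(arm == 1, 0.5, 1.5)             # assumed precision difference
score = theta + noise * rng.normal(size=n)       # interview score in the assigned arm

for a, label in [(0, "human arm"), (1, "AI arm")]:
    s, y = score[arm == a], performance[arm == a]
    slope = np.polyfit(s, y, 1)[0]
    r2 = np.corrcoef(s, y)[0, 1] ** 2
    print(label, "| slope:", slope.round(3), "| R^2:", r2.round(3))
```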
"@Teleperformance 12/ Second key finding Early results: hybrid human+AI screening in which each component focuses on what it measures best can yield welfare gains relative to human-only or AI-only systems. More on this in the next version (e.g. costs/benefits of each alternative system)"
X Link 2025-12-10T03:58Z 8967 followers, XXX engagements
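One standard way to see why a hybrid could beat either component alone (a textbook Bayesian argument, not a formula from the paper): if the human and AI components deliver conditionally independent noisy reads \(s_H\) and \(s_A\) on the skill each measures best, the precision-weighted combination
\[
\hat{\theta} \;=\; \frac{\tau_H s_H + \tau_A s_A}{\tau_H + \tau_A},
\qquad \tau_t = \frac{1}{\operatorname{Var}(s_t \mid \theta)},
\]
has conditional variance \(1/(\tau_H + \tau_A)\), strictly smaller than either \(1/\tau_H\) or \(1/\tau_A\), so the hybrid screens more accurately than either technology on its own, abstracting from the costs the next version is said to address.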
"@Teleperformance 13/ AI policy implications. In our setting at least two dimensions matter: (i) autonomy vs paternalism (ii) automation vs substitution"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
"@Teleperformance 14/ (i) Autonomy Giving candidates a "right to choose can have unequal welfare effects. workers may also need a "right not to be screened on that choice": on paper appealing hard in practice. It may sometimes be preferable to assign AI interviewers uniformly"
X Link 2025-12-10T03:58Z 8968 followers, XXX engagements
"@Teleperformance 15/ (ii) Automation Even when AI interviewers match or exceed human performance optimal design = keep both performing the same task but differently. Welfare depends on how human work is redefined and the human-automation system is structured not on substitution alone"
X Link 2025-12-10T03:58Z 8966 followers, XXX engagements
"@Teleperformance 16/ Paper (preliminary version): Feedback very welcome brian.jabarian@chicagobooth.edu #EconTwitter #EconJobMarket"
X Link 2025-12-10T03:58Z 8966 followers, XXX engagements