Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also secretly damage your professional reputation. On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers. “Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs,” write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke’s Fuqua School of Business.
The Duke team conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users. Their findings, presented in a paper titled “Evidence of a social evaluation penalty for using AI,” reveal a consistent pattern of bias against those who receive help from AI. What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn’t limited to specific groups.
I have a coworker who didn’t learn English until his mid-20s, and it’s his third language. He’s very hard to understand and is functionally illiterate in English, which is unfortunate because most of our job is done through email or chat. Sometimes people will send him an email or chat message with a request, and he will respond with “Call you” and then immediately call them. They hate it because they have a hard time understanding him, and they never get anything in writing from him.
I suggested that he start using a company-provided LLM to take what he wants to write and have it rewrite it for him (or he can write it in one of the other languages he knows better and have it translated). He’s started doing this, and his performance at work has completely turned around. He’s a shining example of how an LLM can be properly used.
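The workflow above can be sketched in a few lines. Everything here is an assumption for illustration: the endpoint URL, the model name, and the prompt are hypothetical placeholders, not the commenter’s actual setup. The sketch assumes the in-house LLM exposes an OpenAI-compatible chat-completions API, which is a common convention for self-hosted models.

```python
# Minimal sketch: send a rough draft to an in-house LLM for rewriting.
# The URL and model name below are hypothetical placeholders.
import json
import urllib.request

INTERNAL_LLM_URL = "http://llm.internal.example/v1/chat/completions"  # assumed endpoint

def build_rewrite_request(draft: str, target_language: str = "English") -> dict:
    """Build a chat-completion payload asking the model to polish a draft."""
    return {
        "model": "company-llm",  # placeholder model name
        "messages": [
            {
                "role": "system",
                "content": (
                    f"Rewrite the user's text in clear, professional {target_language}. "
                    "Preserve the meaning; only fix grammar, spelling, and phrasing."
                ),
            },
            {"role": "user", "content": draft},
        ],
        "temperature": 0.2,  # low temperature keeps the rewrite conservative
    }

def rewrite(draft: str) -> str:
    """POST the payload to the internal endpoint and return the rewritten text."""
    payload = json.dumps(build_rewrite_request(draft)).encode()
    req = urllib.request.Request(
        INTERNAL_LLM_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the model runs on company infrastructure, the draft never leaves the internal network, which is the whole point of the arrangement.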
Then there are the VPs in the company who send out emails that have obviously been written entirely by an LLM. And they brag about asking an LLM for ideas on how to handle certain situations, or for the direction the department needs to head in. They have outsourced their brains and think it was a brilliant move. They are the ones who deserve scorn.
How is that first example better than traditional Google Translate?
Because with Google Translate he would be sending privileged company information to Google. With our LLM it all stays in-house. And he does typically write his replies in his broken English and the LLM fixes it to make it more readable, which helps him improve his written English skills.