27 Jun Survey: Most Employees Are Uneasy with AI Monitoring Them
A Pew Research Center survey finds the public largely wary of employers using artificial intelligence (AI) tools to monitor and evaluate workers. Only small shares favor common AI surveillance practices like tracking movements or computer activities.
Chief among respondents' concerns is the lack of human nuance in evaluating applicants. As one respondent argued, AI “can’t factor in unquantifiable intangibles” that reveal whether someone will be a good colleague. Others stressed that AI could perpetuate rather than prevent biases.
If AI monitoring proliferated, over 80% predict employees would feel excessively watched. Most disapprove of AI-gathered data guiding promotions or terminations. Women and racial/ethnic minorities are often more uncertain about AI’s impacts.
About 74% call racial/ethnic bias in evaluations a problem. Of that group, 46% feel greater use of AI could help, versus just 13% who say it would hurt; 40% expect no difference. Minorities are more optimistic than other respondents.
Global mobility practices for AI
Best practices include auditing for biases, assigning oversight roles, and communicating AI’s presence clearly to candidates. Though some spot potential in ethical AI recruiting, most Americans remain skeptical absent proper precautions.
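As an illustration of what the recommended bias audit might involve, here is a minimal sketch of the conventional “four-fifths rule” check, which compares selection rates across demographic groups. The data and group names are purely illustrative, not drawn from the survey:

```python
# Minimal sketch of a "four-fifths rule" adverse-impact check on hiring
# outcomes. Groups and decisions below are illustrative placeholders.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 hire decisions.
    Returns each group's selection rate."""
    return {group: sum(decisions) / len(decisions)
            for group, decisions in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group's selection rate to the highest's.
    Values below 0.8 are a conventional red flag (the four-fifths rule)."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],  # 7 of 10 selected (70%)
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 1],  # 4 of 10 selected (40%)
}
ratio = adverse_impact_ratio(outcomes)
print(f"adverse impact ratio: {ratio:.2f}")  # 0.40 / 0.70 = 0.57 -> flag
```

A real audit would of course go further, but even this simple comparison, run before rollout and periodically after, is the kind of check the best practices above call for.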
Companies must hear these concerns rather than dismiss them as technophobia. Justifiable anxieties exist around AI perceived as inflexible and inhuman. But updating practices with care may gradually build acceptance. Should HR and global mobility professionals embrace AI to focus more on the human aspects of their roles?
Maintaining human primacy in all talent decisions is key. AI should complement, not control, the process. Tools that flag strong resumes provide value, freeing up recruiters for richer engagement with highlighted candidates. If AI handles tedious tasks like screening, people can focus on what computers currently cannot: genuine rapport and culture fit.
Synthesizing the best of human insight
The goal is synthesizing the best of human insight and AI efficiency, merging care and capability. AI may someday simulate human strengths like empathy. But for now, only humans provide the nuance needed for crucial calls. Rather than hand over the reins, the ideal is AI thoughtfully assisting people who lead the way.
Widespread AI adoption is inevitable, but implementation matters hugely. These tools should aim to uplift workers, not oppress them. Honing AI to bring out the best in humans meets this aim. With prudence, tomorrow’s recruiters can thrive with compassionate AI as their aide.
In sum, most Americans are hesitant about AI monitoring tools, foresee more downsides than upsides, and oppose AI guiding decisions about careers. Concerns center on loss of privacy, humanity and agency.
AI scrutinizing everything?
Across industries, AI systems scrutinize everything from truckers’ driving to call center workers’ speech patterns. Sensors, cameras and algorithms record and evaluate once-invisible daily actions, squeezing out inefficiencies.
Proponents argue that such extensive monitoring motivates staff by flagging unproductive workers. But studies reveal damaging consequences like soaring stress, risky behaviors and eroded loyalty.
Another survey by staffing firm Robert Half found 68% of professionals worry AI surveillance will increase pressure, anxiety and burnout. Of those monitored currently, one-third believe it’s made work more grueling.
When every logged minute and tracked mouse click feeds punitive AI assessments, workers overexert themselves to avoid unfavorable data. The resulting mental strain, research shows, takes a vicious toll on performance.
Staff may also take dangerous shortcuts chasing AI productivity targets. Truck drivers race to meet route quotas, raising accident risks. Call center staff hurriedly terminate calls to improve AI talk time metrics.
The Robert Half study found half of workers would act unethically, such as lying to algorithmic managers. Dehumanized employees also grow detached, with turnover spiking at firms deploying intrusive AI monitoring.
Is responsible AI integration possible?
AI certainly holds potential to enhance workflows if thoughtfully implemented. But many current systems equate humanity with inefficiency, designing surveillance that strips workplace dignity.
Responsible AI integration is said to be possible with care. Some firms now train algorithms on employee surveys to ensure accurate performance measurement aligned with human realities.
Others give staff opportunities to correct unfair AI assessments before they impact reviews or status. One tech giant even abandoned a productivity-management AI system after employee complaints of relentless pressure.
The road ahead entails collaboration, not imposition. As research shows, when workers participate in AI implementation rather than simply succumbing to it, they work more enthusiastically alongside “colleague algorithms.”
Workplaces should also train managers to audit algorithms for bias before rollout and continually post-launch. And transparency around AI’s presence and purpose builds trust.
Finally, leaders must listen when issues inevitably arise, ready to recalibrate systems to empower employees. AI should unlock potential, not provoke peril.
The path forward entails ethical design and transparent use of AI systems, integrating employee feedback to avoid overreach. If people believe AI improves rather than impedes their work, acceptance could grow. But for now caution prevails. (Dennis Clemente)