Here’s a scary thought: Your job security could be in the hands of AI. 

That’s according to a new study from career site Resumebuilder.com, which finds that more managers are relying on tools like ChatGPT to make hiring and firing decisions.

Managers across the U.S. are increasingly outsourcing personnel-related matters to a range of AI tools, despite not being well-versed in how to use the technology, according to the survey of more than 1,300 people in manager-level positions across different organizations.

The survey found that while one-third of people in charge of employees’ career trajectories have no formal training in using AI tools, 65% use them to make work-related decisions. Even more managers appear to be leaning heavily on AI when deciding whom to hire, fire or promote, according to the survey. Ninety-four percent of managers said they turn to AI tools when tasked with determining who should be promoted, earn a raise or even be laid off.

Managers’ growing reliance on AI tools for personnel decisions raises ethical questions, given that such tasks are often viewed as falling under the purview of human resources departments. But companies are quickly integrating AI into day-to-day operations and urging workers to use it.

“The guidance managers are getting from their CEOs over and over again, is that this technology is coming, and you better start using it,” Axios Business reporter Erica Pandey told CBS News. “And a lot of what managers are doing are these critical decisions of hiring and firing, and raises and promotions. So it makes sense that they’re starting to wade into the use there.”

To be sure, there are risks associated with using generative AI to determine who climbs the corporate ladder and who loses their job, especially if those using the technology don’t understand it well. 

“AI is only as good as the data you feed it,” Pandey said. “A lot of folks don’t know how much data you need to give it. And beyond that … this is a very sensitive decision; it involves someone’s life and livelihood. These are decisions that still need human input — at least a human checking the work.”

In other words, problems arise when AI increasingly drives staffing decisions with little input from human managers.

“The fact that AI could be in some cases making these decisions start to finish — you think about a manager just asking ChatGPT, ‘Hey, who should I lay off? How many people should I lay off?’ That, I think is really scary,” Pandey said. 

Companies could also find themselves exposed to discrimination lawsuits. 

“Report after report has told us that AI is biased. It’s as biased as the person using it. So you could see a lot of hairy legal territory for companies,” Pandey said. 

AI could also struggle to make sound personnel decisions when a worker’s success is measured qualitatively rather than quantitatively.

“If there aren’t hard numbers there, it’s very subjective,” Pandey said. “It very much needs human deliberation. Probably the deliberation of much more than one human, also.”
