68% of Leaders Say Employee AI Use Should Require Company Permission: Weekly Stat


Generative AI technology’s impact on corporate finance, particularly the everyday duties of CFOs, is still up in the air. As the hype launches an entire industry and consulting agencies build events around it, one thing is certain: plenty of executives are curious about the impact tools like ChatGPT will have on their business.

But many companies have yet to draw up guidelines around generative AI usage at work, so business leaders may aim to oversee the technology’s use closely.

In July 2023, Tech.co asked 86 business leaders and decision-makers about using AI tools at work. A majority (68%) of those surveyed said AI usage should require company permission rather than be left to employees’ discretion.

The Hot Potato of Responsibility

CFOs and other leaders concerned about AI have a solid argument. The technology is still prone to errors, especially the free version of ChatGPT, whose training data only extends to 2021. If business leaders grant permission and the AI tool then makes an error, nearly a third (32%) of respondents believe the employee would be at fault.

Another third (33%) of respondents said the responsibility should be split between the employee and the manager. Just over a quarter (26%) said the AI tool, the employee, and the manager would all be equally responsible.

While experts argue that finance employees will need to shift from doers to operators as AI’s role in business technology grows, few address the still-murky questions of where responsibility lies when an AI-induced error occurs, or how an AI tool could be held accountable by those enforcing the rules.

Where AI Use is Acceptable

Much as Regional Transportation Authority CFO Kevin Bueso has found AI valuable in his workflow, Tech.co’s data shows many leaders consider AI acceptable in internal communications. More than eight in 10 (82%) said using AI to help write a response to employees is ethical; fewer than one in 10 (9%) called the practice unethical.

Transparency, which many CFOs treat as a foundation of leadership, is not something all leaders believe applies to this use of AI. Nearly a fifth (19%) of respondents said there was no need to disclose to other employees when they used AI in their communications.

Roadmap to Implementation

While the hype around AI remains prevalent, its widespread implementation will be a much slower process. There are still many open questions about the technology’s limits, the need for regulation, and even the possibility of premature deterioration. CFOs and other business leaders will have plenty of time to strategize on where to deploy it.

AI-inspired offerings will come in droves, and they should be scrutinized with the same rigor used to screen other technologies. As some of the largest asset managers take steps to legitimize blockchain-inspired securities, these two subsets of technology may compete, blend, or both.
