98% of Large Tech Execs Have Paused Internal GenAI Initiatives: Report


Generative AI’s impact on business, especially in finance, is only beginning to take hold. While boards have recently said that using GenAI within finance isn’t a top priority, creating an environment in which it can be implemented down the line appears to be. Risks around cybersecurity, employee morale, cost, and copyright and legal exposure have made many decision makers balk at their organization’s AI initiatives, largely because of a lack of guidance and structure around implementation.

New data from Wakefield Research and PagerDuty, an operations management platform, shows that some of the largest companies in the U.S. have halted their AI initiatives in order to first design an infrastructure for the technology. According to the data, nearly all (98%) of the 100 Fortune 1000 technology executives surveyed said their company has put GenAI on pause in order to establish guidelines and policies around it.

Risk Factors

CFOs continue to take on more responsibility in a variety of areas, particularly cybersecurity, and GenAI implementation presents a new kind of risk. While concern about GenAI security was unanimous (100% of respondents reported security concerns), the specific areas of concern varied.

Copyright and legal exposure risks were the top concern (51%), largely because of the way GenAI models source their data: from user inputs and publicly available data sets. With GenAI developers now running short of data sets on which to build their models, companies looking to keep their data secure may be best served by allocating time and resources to those efforts before feeding their organization’s data into a GenAI tool or large language model (LLM).

Concerns about guidelines themselves are also prevalent, even among organizations that already have some form of internal GenAI guidelines. Nearly three in 10 (29%) said they have guidelines in place, and many of those are still pausing efforts to refine their approach. Guidelines don’t guarantee adoption, however: only a slim majority (51%) said they would actually implement GenAI tools once proper guidelines were in place.

Whose Responsibility Is Responsibility?

Nearly two-thirds (64%) of organizations are already using some sort of GenAI product in multiple parts of the business, and another 42% are holding internal discussions about experimentation or pilot programs. However, these advances in implementation come with what the survey’s authors called a “blame game” over who bears responsibility for the good and bad outcomes of GenAI tools.

Nearly half (49%) said there are too many unanswered questions or concerns around the technology. Budget restrictions that result in poor implementation were another major concern tied to outcomes and accountability.

Inaccuracies also worry nearly two in five (38%) of the tech leaders who could be held responsible. Executives expect more issues with GenAI to stem from poor data quality and algorithms than from human error (69% versus 31%, respectively).

Business and Labor Impacts

Impacts on the business involve both labor and bottom-line concerns. Just under half (48%) said they are concerned about GenAI impacting their employees’ original thinking, while 42% reported concern about bad customer experiences. Just under four in 10 (37%) said they worry about overall revenue loss after implementation.

Leaders continue to view this technology as a supplement to labor, not a replacement. Nearly four in 10 (38%) executives said they believe GenAI will replace their own job before any of their colleagues’ positions. In the long term, however, expectations differ: 85% said they expect at least some jobs in their organization to be affected within the next decade.

Recent data from CFO’s email newsletter, The Daily Balance, also found a mixed response from leaders about how these new technologies will affect their workforces. CFO data found that 51% of CFOs said they have already used AI to replace labor done by humans or plan to do so within the next 12 months.


The PagerDuty survey was conducted by Wakefield Research between February 8 and February 19, 2024, using an email invitation and an online survey. It polled 100 Fortune 1000 executives at the VP level or above with roles in information/computer technology, digital technology, artificial intelligence, customer experience, or privacy; all respondents reported to their organization’s CIO. Findings were published on March 27, 2024.
