It’s time to regulate AI like cars and drugs, top Microsoft exec says


With Congress in a flurry of hearings on AI this week, Microsoft Vice Chair and President Brad Smith is among the tech leaders expected at a closed-door session with Senate Majority Leader Chuck Schumer on Wednesday, along with Elon Musk of Tesla, Mark Zuckerberg of Meta, Sam Altman of OpenAI and Satya Nadella, also from Microsoft.

“A licensing regime is fundamentally about ensuring a certain baseline of safety, of capability,” Smith said. “We have to prove that we can drive before we get a license. If we drive recklessly, we can lose it. You can apply those same concepts, especially to AI uses that will implicate safety.”

Smith testified Tuesday in the Senate in support of a regulatory framework proposed by Sens. Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.) that would create a licensing entity for sophisticated or potentially dangerous AI models. The framework also calls for companies to be held accountable when their AI models “breach privacy, violate civil rights, or otherwise cause cognizable harms.”

The Microsoft executive told POLITICO Tech that only risky AI systems should require licenses, and that obtaining a license should not be overly complicated, time consuming or expensive. But he expects Microsoft and OpenAI, in which Microsoft is a major investor, to be subject to the licensing rules, and the company is “prepared to practice what we preach.”

He told POLITICO Tech that the framework is a “sensible approach” that Congress should take as a “first step” and then “keep adding to it quickly.”

“We’re an industry that has been slow to come to the realization that law and regulation can make a market better, if it’s pursued in a practical and balanced way,” Smith said. “But today, when I talk to people in the tech sector, they are thinking, I’ll just say, with maybe a little more experience and hopefully even a little more wisdom than we had a decade ago.”

Smith acknowledged Congress was unlikely to pass any major AI legislation this year, but said he was optimistic that regulation would come eventually — perhaps next year or “beyond.”

Smith contends other products that pose a possible safety risk have long been regulated — motor vehicles, prescription drugs and food, to name a few — and he believes more tech executives have accepted the idea that artificial intelligence will need to adhere to similar levels of oversight.

In the case of AI regulation, Smith made the case for industry to play a large role in writing the rules.

Earlier this year, Microsoft was one of seven companies that signed voluntary safety guidelines released by the White House. Eight other tech companies signed onto the pledge Tuesday, including Adobe, IBM and Palantir. Smith suggested that the process offers a model for other government entities to approach AI regulation.

The White House gave Microsoft, Google, Anthropic and OpenAI a month following a meeting in May to suggest their own guidelines, Smith said. Administration officials then reviewed their pitches and told them where they needed to do more. (Commerce Secretary Gina Raimondo asked for tougher testing requirements, for instance, Smith said on POLITICO Tech.) The final guidelines were signed in July.

“Use the industry to offer an initial view of what is possible so that it is practical. But don’t take our first word as the last word,” Smith said. “I think it’s right that people in government push us to go farther. That’s what happened at the White House. I think we’ll see something similar in Congress and in other capitals around the world.”

Annie Rees contributed to this report.

To hear the full interview with Smith and other tech leaders, subscribe to POLITICO Tech on Apple, Spotify, Google or wherever you get your podcasts.


