What to expect from the coming year in AI


I also had plenty of time to reflect on the past year. There are so many more of you reading The Algorithm than when we first started this newsletter, and for that I am eternally grateful. Thank you for joining me on this wild AI ride. Here’s a cheerleading pug as a little present! 

So what can we expect in 2024? All signs point to there being immense pressure on AI companies to show that generative AI can make money and that Silicon Valley can produce the “killer app” for AI. Big Tech, generative AI’s biggest cheerleaders, is betting big on customized chatbots, which will allow anyone to become a generative-AI app engineer, with no coding skills needed. Things are already moving fast: OpenAI is reportedly set to launch its GPT app store as early as this week. We’ll also see cool new developments in AI-generated video, a whole lot more AI-powered election misinformation, and robots that multitask. My colleague Will Douglas Heaven and I shared our four predictions for AI in 2024 last week—read the full story here

This year will also be another huge year for AI regulation around the world. In 2023 the first sweeping AI law was agreed upon in the European Union, Senate hearings and executive orders unfolded in the US, and China introduced specific rules for things like recommender algorithms. If last year lawmakers agreed on a vision, 2024 will be the year policies start to morph into concrete action. Together with my colleagues Tate Ryan-Mosley and Zeyi Yang, I’ve written a piece that walks you through what to expect in AI regulation in the coming year. Read it here

But even as the generative-AI revolution unfolds at a breakneck pace, there are still some big unresolved questions that urgently need answering, writes Will. He highlights problems around bias, copyright, and the high cost of building AI, among other issues. Read more here

My addition to the list would be generative models’ huge security vulnerabilities. Large language models, the AI tech that powers applications such as ChatGPT, are really easy to hack. For example, AI assistants or chatbots that can browse the internet are very susceptible to an attack called indirect prompt injection, which allows outsiders to control the bot by sneaking in invisible prompts that make the bots behave in the way the attacker wants them to. This could make them powerful tools for phishing and scamming, as I wrote back in April. Researchers have also successfully managed to poison AI data sets with corrupt data, which can break AI models for good. (Of course, it’s not always a malicious actor trying to do this. Using a new tool called Nightshade, artists can add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.) 

Despite these vulnerabilities, tech companies are in a race to roll out AI-powered products, such as assistants or chatbots that can browse the web. It’s fairly easy for hackers to manipulate AI systems by poisoning them with dodgy data, so it’s only a matter of time until we see an AI system being hacked in this way. That’s why I was pleased to see NIST, the US technology standards agency, raise awareness about these problems and offer mitigation techniques in a new guidance published at the end of last week. Unfortunately, there is currently no reliable fix for these security problems, and much more research is needed to understand them better.

AI’s role in our societies and lives will only grow bigger as tech companies integrate it into the software we all depend on daily, despite these flaws. As regulation catches up, keeping an open, critical mind when it comes to AI is more important than ever.

Deeper Learning

How machine learning might unlock earthquake prediction



Source link

greg@ainewsbeat.com

Learn More →