IBM rolls out new generative AI features and models

Fighting for relevance in the growing — and ultra-competitive — AI space, IBM this week introduced new generative AI models and capabilities across its recently launched Watsonx data science platform.

The new models, called the Granite series models, appear to be standard large language models (LLMs) along the lines of OpenAI’s GPT-4 and ChatGPT, capable of summarizing, analyzing and generating text. IBM provided very little in the way of details about Granite, making it impossible to compare the models to rival LLMs — including IBM’s own. But the company claims that it’ll reveal the data used to train the Granite series models, as well as the steps used to filter and process that data, ahead of the models’ availability in Q3 2023.

We’ll hold the company to that.

In the meantime, Tarun Chopra, IBM’s VP of product management for data and AI, filled in some of the blanks via an email interview:

“These new IBM Granite series models have been developed using … curated, enterprise-quality data rather than publicly scraped data,” Chopra said. “The series has subsets that are specialized within different domains. For example, we have a model that’s trained on finance data, and this allows AI builders to use a much smaller model that can be as performative as a larger general model. They can also support most enterprise NLP tasks, such as summarization, content generation and insight extraction.”

Elsewhere, in Watsonx.ai — the component of Watsonx that lets customers test, deploy and monitor models post-deployment — IBM is rolling out Tuning Studio, a tool that allows users to tailor generative AI models to their data.

Using Tuning Studio, IBM Watsonx customers can fine-tune models to new tasks with as few as 100 to 1,000 examples. Once users specify a task and provide labeled examples in the required data format, they can deploy the model via an API from the IBM Cloud.
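IBM hasn’t published the exact data format Tuning Studio expects, so the field names below are an assumption, not IBM’s documented schema. But “labeled examples in the required data format” for this kind of tuning typically means a small set of input/output pairs serialized one-per-line (JSON Lines), along these lines:

```python
import io
import json

# Hypothetical labeled examples for a sentiment task; the "input"/"output"
# field names are illustrative, not IBM's documented schema.
examples = [
    {"input": "The quarterly report beat revenue expectations.", "output": "positive"},
    {"input": "Shares fell sharply after the earnings miss.", "output": "negative"},
]

# Serialize as JSON Lines: one labeled pair per line. Tuning Studio
# reportedly works with as few as 100 to 1,000 such examples.
buf = io.StringIO()
for ex in examples:
    buf.write(json.dumps(ex) + "\n")
jsonl = buf.getvalue()

# Sanity-check that the file round-trips cleanly.
loaded = [json.loads(line) for line in jsonl.splitlines()]
print(len(loaded))  # 2
```

The appeal of the approach is the low example count: rather than retraining a model, tuning nudges a general model toward a narrow task with a few hundred labeled pairs.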

Also set to debut soon in Watsonx.ai is a synthetic data generator for tabular data — the collections of rows and columns found in relational databases. IBM claims in a press release that, by generating synthetic data from custom data schemas and internal data sets, companies can use the generator to extract insights for AI model training and fine-tuning with “reduced risk.”

It’s not clear what’s meant by “reduced risk,” exactly, given the pitfalls of training AI with synthetic data. (We’ve asked for clarification.) But make of that what you will.
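IBM hasn’t said how its generator works. A common baseline for synthetic tabular data — and a rough sketch of the idea, not IBM’s implementation — is to fit simple per-column distributions to an internal dataset and sample new rows from them:

```python
import random
import statistics

# A toy internal dataset matching a made-up customer schema; column
# names and values are invented for illustration.
real_rows = [
    {"age": 34, "region": "EU", "balance": 1200.0},
    {"age": 41, "region": "US", "balance": 980.5},
    {"age": 29, "region": "EU", "balance": 450.0},
    {"age": 52, "region": "APAC", "balance": 3100.0},
]

def synthesize(rows, n, seed=0):
    """Sample n synthetic rows: numeric columns from a fitted normal
    distribution, categorical columns from observed value frequencies."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        row = {}
        for col in rows[0]:
            values = [r[col] for r in rows]
            if isinstance(values[0], (int, float)):
                mu = statistics.mean(values)
                sigma = statistics.pstdev(values) or 1.0
                row[col] = round(rng.gauss(mu, sigma), 1)
            else:
                # Choosing from the raw list weights by observed frequency.
                row[col] = rng.choice(values)
        out.append(row)
    return out

synthetic = synthesize(real_rows, n=100)
print(len(synthetic))  # 100
```

Note that sampling each column independently, as this sketch does, discards cross-column correlations — one concrete reason synthetic data can mislead model training, and why the “reduced risk” claim deserves scrutiny.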

IBM is also launching new generative AI capabilities in Watsonx.data, the company’s data store that allows users to access data while applying query engines, governance, automation and integrations with existing databases and tools. Starting in Q4 2023 as part of a tech preview, customers will be able to “discover, augment, visualize and refine” data for AI through a self-service, chatbot-like tool.

IBM, once again, was light on the specifics. But I’m picturing an experience akin to ChatGPT, albeit data visualization- and transformation-focused.

Here’s what Chopra had to say:

“The generative AI capabilities becoming available in Watsonx.data later this month will allow users to simplify and accelerate the way they interact with their data … To help illustrate how this experience might play out, say a user is looking for specific data. By using the AI chat assistant interface in Watsonx.data, the conversational model can generate a text response alongside API calls and parameters to support that user’s request. It’s also possible to import external data using the same interface, and have the AI model perform semantic data enrichment.”

Around the same time — Q4 2023 — Watsonx.data will gain a vector database capability to support retrieval-augmented generation (RAG), IBM says. RAG is an AI framework for improving the quality of LLM-generated responses by grounding the model on external knowledge sources — useful, obviously, for IBM’s enterprise clientele.
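In outline, RAG works like this: embed your documents, retrieve the passages closest to the query, and prepend them to the prompt so the model answers from that grounding. A toy sketch, with bag-of-words count vectors standing in for a real embedding model and a plain list standing in for the vector database:

```python
import math
from collections import Counter

docs = [
    "Watsonx.data is IBM's data store with query engines and governance.",
    "Granite models are trained on curated enterprise data.",
    "Tuning Studio adapts models with a few hundred labeled examples.",
]

def embed(text):
    # Stand-in for a real embedding model: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Stand-in for the vector database: (document, vector) pairs.
index = [(d, embed(d)) for d in docs]

def retrieve(query, k=1):
    """Return the k documents whose vectors are closest to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda dv: cosine(q, dv[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

def build_prompt(query):
    """Ground the LLM by prepending retrieved context to the question."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What data are Granite models trained on?"))
```

The vector database is what makes this viable at enterprise scale: instead of a linear scan over a Python list, it performs approximate nearest-neighbor search over millions of embeddings.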

In other big news, IBM is embarking on the technical preview for Watsonx.governance, a toolkit that — in the company’s rather vague words — provides mechanisms to protect customer privacy, detect model bias and drift, and help organizations meet ethics standards. And starting next week, IBM will launch Intelligent Remediation, which the company says will leverage generative AI models to assist IT teams with summarizing incidents and suggesting workflows to help implement solutions.

“As demonstrated by the ongoing evolution of the watsonx platform within just a few months since launch, we’re here to support clients through the entire AI lifecycle,” IBM SVP of products Dinesh Nirmal said in a press release. “As a transformation partner, IBM is collaborating with clients to help them scale AI in a secure, trustworthy way — from helping to institute foundational elements of their data strategies to tuning models for their specific business use cases to helping them govern models beyond that.”

Certainly, IBM is under pressure to prove that it can make a dent in the crowded AI field.

In the company’s second fiscal quarter, IBM reported revenue that missed analyst expectations as the company suffered from a bigger-than-expected slowdown in its infrastructure business segment. Revenue contracted to $15.48 billion, down 0.4% year-over-year, just below the analyst consensus for Q2 sales of $15.58 billion.

During the earnings call, IBM’s CEO, Arvind Krishna, repeatedly emphasized the importance of AI to IBM’s future growth — and asserted that businesses are signing up at a healthy pace to use IBM’s hybrid cloud and AI tech, including Watsonx. Over 150 corporate customers were using Watsonx as of July, when it began rolling out, Krishna said — including Samsung and Citi.

“We continue to respond to the needs of our clients who seek trusted, enterprise AI solutions, and we are particularly excited about the response to the recently launched Watsonx AI platform. Finally, we remain confident in our revenue and free cash flow growth expectations for the full year,” Krishna said during the earnings call.
