SINGAPORE - IBM is focused on deploying artificial intelligence services for businesses and government agencies even as competitors eye wider markets that include consumers, officials of the tech giant said.
During its annual Think conference in Singapore, IBM unveiled several new services, including watsonx.ai, a platform with generative AI capabilities. These can be tapped by companies and government agencies that may have various concerns over AI, the company said.
While rivals like Google and Microsoft are rolling out AI for enterprises and consumers alike with their Duet and Copilot services, IBM is staying focused on providing its services to enterprises, company officials said.
“We don’t mine the internet like ChatGPT does,” said Parul Mishra, Product Management Lead of IBM’s Digital Labor solution.
Mishra said that unlike the more popular generative AI chatbots on the internet, IBM's AI services are focused on its clients and on using the technology to boost their businesses.
“It’s curated. It’s built on our platforms so we feel confident about the data, and the customer can feel confident about the data. And that data is typically the customer’s data,” Mishra continued.
IBM said that like any technology going through rapid development, AI can be hazardous, especially in business settings.
“AI can be just plain wrong, hallucinating or producing toxic results,” IBM said in a statement.
AI MINUS HALLUCINATIONS, BIASES
There are big differences between generative AI meant for the general public and AI that is geared toward enterprises and governments, experts said during the conference.
Generative AI in businesses and governments can’t afford to have hallucinations and biases, among other things, AI experts said.
“Businesses need data accuracy, data security and data veracity, which means truthfulness of data,” said Raju Chellum, Chief Editor of the AI Ethics & Governance Body of Knowledge of the Singapore Computer Society.
Generative AI models like ChatGPT have sometimes been observed to “hallucinate”, or confidently assert information they simply made up.
Hallucinations can happen when AI is still being “trained”, according to Mishra.
“While you're training it, you want it to hallucinate because you want to catch that early on and then retrain it for it to not have those deviations,” Mishra said.
“But in practice, no, we don't allow that,” she added.
“We go through a rigorous process of cleansing, filtering and classification of data,” said Sriram Raghavan, Vice President at IBM Research for AI.
Another concern about generative AI is that it can develop biases because of the information used to train it.
The company said fairness needs to be built into an AI system and biases should be detected during data acquisition, and during the building, deploying and monitoring of AI systems.
“Incorrect or biased actions based on faulty data or assumptions can result in lawsuits and customer, stakeholder, stockholder and employee mistrust,” IBM said.
The company cited a recent study that found three out of four (75 percent) CEOs believe the organization with the most advanced generative AI wins. The same study said 43 percent of CEOs report their enterprises are already using generative AI to inform strategic decisions.