AI poses 'extinction' risk, say experts
Joseph Boyle,
Agence France-Presse
Published May 30, 2023 09:01 PM PHT

PARIS - Global leaders should be working to reduce "the risk of extinction" from artificial intelligence technology, a group of industry chiefs and experts warned on Tuesday.
A one-line statement signed by dozens of specialists, including Sam Altman, whose firm OpenAI created the ChatGPT bot, said tackling the risks from AI should be "a global priority alongside other societal-scale risks such as pandemics and nuclear war".
ChatGPT burst into the spotlight late last year, demonstrating an ability to generate essays, poems and conversations from the briefest of prompts.
The program's wild success sparked a gold rush with billions of dollars of investment into the field, but critics and insiders have raised the alarm.
Common worries include the possibility that chatbots could flood the web with disinformation, that biased algorithms will churn out racist material, or that AI-powered automation could lay waste to entire industries.
The latest statement, housed on the website of US-based non-profit Center for AI Safety, gave no detail of the potential existential threat posed by AI.
The center said the "succinct statement" was meant to open up a discussion on the dangers of the technology.
Several of the signatories, including Geoffrey Hinton, who created some of the technology underlying AI systems and is known as one of the godfathers of the industry, have made similar warnings in the past.
Their biggest worry has been the rise of so-called artificial general intelligence (AGI) -- a loosely defined concept for a moment when machines become capable of performing wide-ranging functions and can develop their own programming.
The fear is that humans would no longer have control over superintelligent machines, which experts have warned could have disastrous consequences for the species and the planet.
Dozens of academics and specialists from companies including Google and Microsoft -- both leaders in the AI field -- signed the statement.
It comes two months after Tesla boss Elon Musk and hundreds of others issued an open letter calling for a pause in the development of such technology until it could be shown to be safe.
However, Musk's letter sparked widespread criticism that dire warnings of societal collapse were hugely exaggerated and often reflected the talking points of AI boosters.
US academic Emily Bender, who co-wrote an influential paper criticizing AI, said the March letter, signed by hundreds of notable figures, was "dripping with AI hype".
Bender and other critics have slammed AI firms for refusing to publish the sources of their data or reveal how it is processed -- the so-called "black box" problem.
Among the criticisms is that the algorithms could be trained on racist, sexist or politically biased material.
Altman, who is currently touring the world in a bid to help shape the global conversation around AI, has hinted several times at the global threat posed by the technology his firm is developing.
"If something goes wrong with AI, no gas mask is going to help you," he told a small group of journalists in Paris last Friday.
"If something goes wrong with AI, no gas mask is going to help you," he told a small group of journalists in Paris last Friday.
But he defended his firm's refusal to publish the source data, saying critics really just wanted to know if the models were biased.
"How it does on a racial bias test is what matters there," he said, adding that the latest model was "surprisingly non-biased".
"How it does on a racial bias test is what matters there," he said, adding that the latest model was "surprisingly non-biased".
RELATED VIDEO
ADVERTISEMENT
ADVERTISEMENT