A new novel written like Ernest Hemingway? Fake photos of Donald Trump being arrested? Or a new Gerhard Richter painting?
All are now possible with AI systems such as ChatGPT, Stable Diffusion or Aiva, which write text, create desired images or compose music — all in a matter of minutes, and often so startlingly polished that they leave the world astonished. And worried.
"With AI, creative services that previously could only be provided by highly qualified specialists can be mass-produced," warns Robert Exner, founder of the content creation agency Fundwort in Hanover. "AI systems thus undermine the value of human-creative thought and work," says Exner, who sees his livelihood threatened — and wants to defend it.
"AI but fair": Under this slogan, 15 organizations from the German creative industry have published a policy paper on the subject of artificial intelligence (AI).
In it, the associations from the fields of copywriting, editing, journalism, graphics, illustration, photography and art are calling for the protection of their works against unauthorized use. Copyright law urgently needs to be strengthened, says the paper co-initiated by Exner, so that creatives can continue to reap the rewards of their work.
AI needs training material
In fact, algorithm-based AI systems cannot produce text, images, or music without suitable training material. "In order to provide learning systems with the necessary data, developers use our works without being asked, without consent, and without compensation," Exner told DW. "This self-service mentality at our expense is unacceptable!"
That's how philosopher Vincent Müller sees it, in principle anyway. Müller conducts research at the University of Erlangen-Nuremberg in the still-developing discipline of "Philosophy and Ethics of Artificial Intelligence."
"Of course, this is data that is subject to copyright," Müller says. It is true that AI systems would not simply reproduce the data. Rather, they would learn something from existing material that they then use when creating something. But who then owns the copyright on this? "If you make something new out of things you got for free that has economic value then that's a social problem," says Müller.
Lack of AI regulations
The biggest problem is probably the lack of rules. The German Cultural Council, the umbrella organization of German cultural associations, recently called for new regulation. Now the creative industries are also calling for the protection of intellectual property in the digital realm, effective laws on copyright, and data protection.
"We expect politicians to stand up for the roughly 1.8 million people employed in Germany's cultural and creative industries," says Hanover-based Robert Exner.
Even if it is not yet clear to everyone, artificial intelligence has long since entered our everyday lives. Publishers use AI to check manuscripts for their bestseller potential. News editors use AI writing programs. AI translates languages or speech into text. Insurance companies calculate damage risks with AI, and internet sites target their visitors with the right advertising thanks to AI.
"What matters is who benefits from AI and whether this benefit is more negative or positive for society as a whole," explains AI ethicist Vincent Müller. In other words, whether the rights of all those involved are safeguarded.
In Müller's estimation, the use of artificial intelligence will foreseeably lead to a cultural upheaval. "The cultural upheaval will be that more and more decisions will be made by automated systems."
It may not be problematic when it comes to the automated dispatch of a parking ticket, he says. "But there will be more and more decisions like that. And we're going to have to think about what decisions we want to leave to machine systems and where we want to use machine systems to help us."
Just how problematic automated decisions can be was demonstrated in 2022 by the "childcare benefit affair" in the Netherlands. To create risk profiles of individuals applying for childcare benefits, the Dutch Tax and Customs Administration used algorithms in which "foreign-sounding names" and "dual nationality" were treated as indicators of potential fraud. As a result, thousands of low- and middle-income families were subjected to scrutiny, falsely accused of fraud and asked to pay back benefits they had legally obtained. These algorithms, which led to racial profiling, plunged thousands into dire financial straits.
The lesson, says Müller: "We need a social debate about what we want to do and what we don't want to do."
EU defines initial legal framework
Practically everyone agrees that it will hardly work without laws. Vincent Müller points to an initiative by the EU Commission to regulate automated decision-making systems. The Brussels proposal contains a list of "high-risk" applications that would then require approval.
The real-time use of biometric systems to identify people in public spaces, for example, is to be limited to a few exceptions, such as to combat terrorism. Social credit systems, such as the one already being tested in China to enforce good behavior, should be banned outright.
But will this alone engender more trust in AIs whose advance worries an increasing number of people? "AI is changing the psychological relationship between humans and machines," says philosopher and AI researcher Müller. "Normally, we think of the machine as an object of limited autonomy that is ultimately controlled by humans after all." That changes, he says, when the machine is given greater autonomy. "Because then the possibilities to intervene also change."
Who really understands their car?
Vincent Müller has observed that the fear of losing control is compounded by something else: Many people see the AI-controlled machine as an inscrutable black box. Of course, this is also the case with many other technologies — hardly anyone today knows, for example, the inner workings of a car. "But if a computer decides you can't get a loan, that's completely inscrutable to begin with."
Concerns about the risks of artificial intelligence are also worrying developers and investors. In a dramatic appeal, renowned experts in the AI and tech industries, including Tesla CEO Elon Musk, recently called for a six-month pause in AI development. The time must be used to create a set of rules for this fairly new technology, according to an open letter from the non-profit Future of Life Institute.
"Powerful AI systems should not be developed until we are confident that their impact is positive and their risks are manageable." Besides Musk, more than 1,000 people signed the manifesto, including Emad Mostaque, head of AI firm Stability AI, Apple founder Steve Wozniak and several developers from Google's AI subsidiary DeepMind.
Programs become a black box
However, these technologies are now so advanced that even the developers can no longer fully understand or effectively control their programs, the appeal says. As a result, information channels could be flooded with propaganda and untruths, and fulfilling jobs could be rationalized away. For this reason, all developers working on next-generation artificial intelligence programs should stop their work in a publicly verifiable manner. If this does not happen immediately, states should impose a moratorium, they demand.
The call for rules brings together AI experts and Germany's creative professionals. The latter are demanding protection and remuneration because they fear digital exploitation. Whether this can be stopped at all remains to be seen. Until then, ChatGPT and others will give us plenty of reason for astonishment.
This article was originally written in German.