From copyright battles to fears of deepfakes derailing elections, here's what to watch out for in the world of AI.

Artificial intelligence has gone mainstream.


Long the stuff of science fiction and blue-sky research, AI technologies like the ChatGPT and Bard chatbots have become everyday tools used by millions of people. And yet, experts say, we've only seen a glimpse of what's to come.


"AI has reached its iPhone moment," said Léa Steinacker, chief innovation officer at startup ada Learning and author of a forthcoming book on artificial intelligence, referring to the introduction of Apple's smartphone in 2007, which popularized mobile internet access on phones.

Similarly, "applications like ChatGPT and others have brought AI tools to end users," Steinacker told DW. "And that will affect society as a whole."

Will deepfakes help derail elections?

So-called "generative" AI programs now allow anyone to create convincing texts and images from scratch in a matter of seconds. This has made it easier and cheaper than ever to produce "deepfake" content, in which people appear to say or do things they never did.

As major elections approach in 2024, from the US presidential race to the European Parliament elections, experts say we could see a surge in deepfakes aimed at swaying public opinion or inciting unrest ahead of the vote.

"Trust in the EU electoral process will critically depend on our capacity to rely on cybersecure infrastructures and on the integrity and availability of information," warned Juhan Lepassaar, executive director of the EU's cybersecurity agency, when his office released a threat report in mid-October.

How much impact deepfakes have will also largely depend on the efforts of social media companies to combat them. Several platforms, such as Google's YouTube and Meta's Facebook and Instagram, have implemented policies to flag AI-generated content, and the coming year will be the first major test of whether they work.

Who owns AI-generated content?

To develop "generative" AI tools, companies train the underlying models by feeding them vast amounts of texts or images sourced from the internet. So far, they have utilized these resources without obtaining explicit consent from the original creators — writers, illustrators, or photographers.

But rights holders are fighting back against what they see as violations of their copyrights.

Recently, the New York Times announced that it was suing OpenAI and Microsoft, the companies behind ChatGPT, accusing them of using millions of the newspaper's articles without permission. San Francisco-based OpenAI is also being sued by a group of prominent American novelists, including John Grisham and Jonathan Franzen, for using their works.

Several other lawsuits are pending. For example, the photo agency Getty Images is suing the AI company Stability AI, which is behind the Stable Diffusion image creation system, for using its photos to train the model.

The first rulings in these cases could come in 2024 — and they could set precedents for how existing copyright laws and practices need to be updated for the age of AI.

Who holds the power over AI?

As AI technology becomes more sophisticated, it is becoming harder and more expensive for companies to develop and train the underlying models. Digital rights activists warn that this development is concentrating more and more cutting-edge expertise in the hands of a few powerful companies.

"This concentration of power in terms of infrastructure, computing power and data in the hands of a few tech companies illustrates a long-standing problem in the tech space," Fanny Hidvegi, Brussels-based director of European policy and advocacy at the nonprofit Access Now, told DW.

As the technology becomes an indispensable part of people's lives, a few private companies will influence how AI will reshape society, she warned.

How to enforce AI laws?

Against this backdrop, experts agree that — just as cars need to be equipped with seat belts — artificial intelligence technology needs to be governed by rules.

In December 2023, after years of negotiations, the EU agreed on its AI Act, the world's first comprehensive set of specific laws for artificial intelligence.

Now, all eyes will be on regulators in Brussels to see if they will walk the walk and enforce the new rules. It's fair to expect heated discussions about whether and how the rules need to be adjusted.

"The devil is in the details," said Léa Steinacker, "and in the EU, as in the US, we can expect drawn-out debates over the actual practicalities of these new laws."

Edited by Rina Goldenberg

(The above story first appeared on LatestLY on Dec 31, 2023 06:10 PM IST.)