
Will Artificial Intelligence Destroy States? NYT Report


Although the European Union has moved ahead of most other countries in regulating artificial intelligence, it is unable, like many others, including the United States, to keep up with the speed of this technology, and that could have serious repercussions. From The New York Times article.

When European Union leaders unveiled a 125-page bill to regulate artificial intelligence in April 2021, they hailed it as a global model for managing the technology, the NYT writes.

EU lawmakers had gathered input from thousands of AI experts over three years, at a time when the topic had not yet been addressed in other countries. The result was a “historic” and “future-proof” policy, said Margrethe Vestager, the head of digital policy for the 27-nation bloc.

THE ARRIVAL OF CHATGPT CAUGHT THE EU (AND OTHERS) OFF GUARD

Then came ChatGPT. The human-like chatbot, which went viral last year by generating its own responses to prompts, stunned EU policymakers. The type of AI that powered ChatGPT was not mentioned in the draft law and was not the focus of policy discussions. Lawmakers and their aides were bombarded with calls and messages asking them to fill the gap, while tech executives warned that overly aggressive regulation could put Europe at an economic disadvantage.

Even today, European legislators are still debating how to respond, putting the law at risk. “We will always lag behind the speed of technology,” said Svenja Hahn, a member of the European Parliament who helped draft the AI law.

LEGISLATORS ARE ALREADY LOSING TO AI

Lawmakers and regulators in Brussels, Washington and elsewhere are losing the battle to regulate AI and racing to catch up, as fears grow that this powerful technology could automate jobs, increase the spread of disinformation and eventually develop its own kind of intelligence. Nations have moved quickly to address the potential dangers of AI, but European officials have been caught off guard by the technology's evolution, while US lawmakers openly admit that they barely understand how it works.

The result has been a patchwork of responses. President Biden issued an executive order in October on the national-security effects of AI, while lawmakers debate what measures, if any, to take. Japan is drawing up non-binding guidelines for the technology, while China has imposed restrictions on some types of AI. Britain has said existing laws are adequate to regulate the technology. Saudi Arabia and the United Arab Emirates are pouring public money into research.

WHY IT'S IMPOSSIBLE TO KEEP UP

Underpinning the fragmented actions is a fundamental misalignment. AI systems are advancing so quickly and unpredictably that lawmakers and regulators cannot keep up. This gap has been exacerbated by an AI knowledge deficit in governments, labyrinthine bureaucracies, and fears that too many rules could inadvertently limit the benefits of the technology.

Even in Europe, perhaps the most aggressive technology regulator in the world, AI has perplexed politicians.

THE EU AI ACT IS ALREADY PREHISTORIC

The European Union has pushed ahead with its new law, the AI Act, despite disputes over how to handle the makers of the latest artificial intelligence systems. The final agreement, expected on Wednesday, could restrict some risky uses of the technology and create transparency requirements about how the underlying systems work. But even if it is approved, it is not expected to take effect for 18 months, a lifetime in AI development, and it is unclear how it will be enforced.

“The jury is still out on whether or not this technology can be regulated,” said Andrea Renda, a senior researcher at the Center for European Policy Studies, a think tank in Brussels. “There is a risk that this text will end up being prehistoric.”

AI COMPANIES RUN FREE

The absence of rules has left a void. Google, Meta, Microsoft and OpenAI, which makes ChatGPT, have been left to police themselves as they race to create and profit from advanced artificial intelligence systems. Many companies, which prefer non-binding codes of conduct that let them speed up development, are lobbying to soften proposed regulations and to pit governments against one another.

Without united action soon, some officials warn, governments could fall further behind AI makers and their breakthroughs. “No one, not even the creators of these systems, knows what they will be able to do,” said Matt Clifford, an adviser to British Prime Minister Rishi Sunak, who chaired a 28-country AI safety summit last month. “The urgency comes from the question of whether governments are equipped to address and mitigate risks.”

THE EU'S FIRST (EMPTY) INITIATIVES

In mid-2018, 52 academics, computer scientists and lawyers met at the Crowne Plaza hotel in Brussels to discuss artificial intelligence. European Union officials had selected them to advise on this technology, which was attracting attention for powering driverless cars and facial recognition systems.

The group debated whether there were already enough European rules to protect against the technology and considered potential ethical guidelines, said Nathalie Smuha, a legal scholar in Belgium who coordinated the group.

But as they discussed the possible effects of AI, including the threat that facial recognition technology poses to people's privacy, they realized that “there were all these legal gaps, and what happens if people don't follow those guidelines?” she said.

In 2019, the group released a 52-page report with 33 recommendations, including greater oversight of AI tools that could harm individuals and society.

The report made the rounds in the EU's small policy-making circles. Ursula von der Leyen, the president of the European Commission, made the topic a priority of her digital agenda. A group of 10 people was tasked with developing the group's ideas and drafting a law. Another committee in the European Parliament, the EU's co-legislative branch, held nearly 50 hearings and meetings to examine AI's effects on cybersecurity, agriculture, diplomacy and energy.

THE PROPOSALS…

In 2020, European policymakers decided that the best approach was to focus on how AI is used rather than on the underlying technology. AI is not inherently good or bad, they said; what matters is how it is applied.

So when the AI bill was introduced in 2021, it focused on “high-risk” uses of the technology, including law enforcement, school admissions and hiring. It largely avoided regulating the AI models that power those uses, unless they were listed as dangerous.

Under the proposal, organizations offering risky AI tools would have to meet certain requirements to ensure that the systems are safe before they are deployed. AI software that creates manipulated videos and “deepfake” images would have to disclose that people are seeing AI-generated content. Other uses, such as live facial recognition software, were banned or restricted. Violators could be fined 6% of their global turnover.

…AND ITS FLAWS

Some experts have warned that the draft law does not sufficiently take future developments in AI into account. “They sent me a draft and I sent them back 20 pages of comments,” said Stuart Russell, a computer science professor at the University of California, Berkeley, who has advised the European Commission. “Anything not on their list of high-risk applications would not be considered, and the list excluded ChatGPT and most artificial intelligence systems.”

EU leaders were undeterred. “Europe may not have been a leader in the last wave of digitalisation, but it has everything it takes to lead the next one,” Vestager said as she presented the policy at a press conference in Brussels.

NOW EVERYTHING IS UP FOR DEBATE AGAIN…

Nineteen months later, ChatGPT arrived. The European Council had just agreed to regulate general-purpose AI models, but the new chatbot reignited the debate. It revealed a “blind spot” in the bloc's policymaking on the technology, said Dragos Tudorache, a member of the European Parliament who had argued before ChatGPT's release that the new models needed to be covered by the law.

These general-purpose AI systems not only power chatbots, but can learn to perform many tasks by analyzing data collected from the Internet and other sources.

…AND NOT EVERYONE AGREES

EU officials were divided on how to respond. Some were wary of adding too many new rules, especially as Europe has struggled to grow its own tech companies. Others wanted stricter limits. “We want to be careful not to underestimate, but also not to over-regulate things that are not yet clear,” said Tudorache, who is also one of the lead negotiators of the AI Act.

In October, the governments of France, Germany and Italy, the European Union's three largest economies, said they opposed strict regulation of general-purpose AI models, fearing it would hinder their homegrown tech start-ups. Other members of the European Parliament said the law would be useless without rules addressing those models. Divisions over the use of facial recognition technology also persisted.

Politicians were still working on compromises as negotiations over the law's language entered their final stage this week. A spokesperson for the European Commission said the AI law was “flexible to future developments and conducive to innovation”.

(Excerpt from the foreign press review edited by eprcomunicazione)


This is a machine translation from Italian of a post published on Start Magazine at the URL https://www.startmag.it/innovazione/lintelligenza-artificiale-distruggera-gli-stati-report-nyt/ on Sat, 09 Dec 2023 06:35:09 +0000.