From China to Brazil: This is how AI is regulated worldwide

Artificial intelligence has quickly moved from computer science textbooks into the mainstream, spawning delights such as reproductions of celebrity voices and chatbots ready to hold intricate conversations.

But the technology, which consists of machines trained to perform intelligent tasks, also threatens to disrupt societal norms, entire industries, and the fortunes of tech companies. It has great potential to transform everything from diagnosing patients to predicting weather patterns, but it could also put millions of people out of work or even surpass human intelligence, some experts say.

In a survey published last week, the Pew Research Center found that a majority of Americans – 52 percent – said they were more concerned than excited about the growing use of artificial intelligence, citing worries about privacy and maintaining human control over the new technologies.

The proliferation of generative AI models like ChatGPT, Bard, and Bing, all open to the public, has brought artificial intelligence to the fore this year. Now governments from China to Brazil to Israel are also trying to figure out how to harness the transformative power of AI, while curbing its worst excesses and establishing rules for its use in everyday life.

Some countries, including Israel and Japan, have responded to the rapid growth by clarifying existing data, privacy and copyright protections – paving the way for the use of copyrighted content to train AI. Others, like the United Arab Emirates, have issued vague and sweeping proclamations on AI strategy or created working groups on AI best practices and published draft legislation for public scrutiny and deliberation.

Still others have adopted a wait-and-see attitude, even as industry leaders, including OpenAI, creator of the viral chatbot ChatGPT, have been pushing for international cooperation on regulation and inspection. In a statement in May, the company’s CEO and two of his co-founders warned of the “possibility of existential risk” associated with superintelligence, a hypothetical entity whose intelligence would surpass human cognitive capacity.

“Stopping it would require something like a global surveillance regime, and even that is not guaranteed to work,” the statement said.

However, there are few concrete laws worldwide that specifically target AI regulation. Here are some ways lawmakers in different countries are trying to answer the questions surrounding its use.

Brazil has presented a draft AI law, the culmination of three years of proposed (and stalled) legislation on the subject. The document, released late last year as part of a 900-page Senate committee report on AI, carefully describes the rights of users who interact with AI systems and provides guidelines for categorizing different types of AI based on the risk they pose to society.

Because the law focuses on users’ rights, AI vendors are required to provide users with information about their AI products. Users have the right to know that they are interacting with an AI, as well as a right to an explanation of how an AI made a particular decision or recommendation. Users can also contest AI decisions or request human intervention, particularly when an AI decision is likely to have a significant impact on them, such as with systems involved in self-driving cars, hiring, credit scoring, or biometric identification.

AI developers must also conduct risk assessments before bringing an AI product to market. The highest risk rating applies to all AI systems that employ “subliminal” techniques or exploit users in a way that harms their health or safety; these are strictly forbidden. The draft AI law also outlines potential “high-risk” AI implementations, including AI used in healthcare, biometric identification, and credit assessment, among others. Risk assessments for “high-risk” AI products are to be published in a government database.

All AI developers are liable for damages caused by their AI systems. However, there is an even higher standard of liability for developers of high-risk products.

China has published draft regulations for generative AI and is seeking public input on the new rules. Unlike most other countries, however, China’s draft states that generative AI must reflect “core socialist values.”

In their current version, the draft regulations state that developers are “responsible” for the output generated by their AI, according to a translation of the document by Stanford University’s DigiChina Project. There are also restrictions on sourcing training data: developers are legally liable if their training data infringes on the intellectual property of others. The regulations also stipulate that AI services must be designed to generate only “true and accurate” content.

These proposed rules build on existing laws covering deepfakes, recommendation algorithms, and data security, giving China a head start over countries drafting new laws from the ground up. The country’s internet regulator also announced restrictions on facial recognition technology in August.

China has set dramatic goals for its tech and AI industries: in the Next Generation Artificial Intelligence Development Plan, an ambitious 2017 Chinese government document, the authors write that by 2030, China’s “AI theories, technologies and applications” should “reach world-leading levels.”

In June, the European Parliament voted to approve the so-called AI Act. Similar to Brazil’s draft law, the AI Act categorizes AI into three risk tiers: unacceptable, high, and limited.

AI systems considered a “threat” to society are deemed unacceptable. (The European Parliament offers as an example “voice-activated toys that encourage dangerous behavior in children.”) Such systems are banned under the AI Act. High-risk AI must be approved by European authorities before launch, and again throughout the product’s lifecycle. These include AI products related to law enforcement, border management, and employment, among others.

AI systems that present limited risk must be labeled as such so users can make informed decisions about their interactions with the AI. Otherwise, these products largely escape regulatory scrutiny.

The law has yet to be approved by the European Council, although parliamentary lawmakers hope the process will be completed later this year.

In 2022, Israel’s Ministry of Innovation, Science and Technology published a draft policy on AI regulation. The document’s authors describe it as a “moral and business-oriented compass for any company, organization or government agency involved in the field of artificial intelligence,” and emphasize its focus on “responsible innovation.”

Israel’s draft policy states that the development and use of AI should “respect the rule of law, fundamental rights and public interests and, in particular, [maintain] human dignity and privacy.” Elsewhere, it vaguely states that “reasonable measures must be taken in accordance with accepted professional concepts” to ensure AI products are safe to use.

More broadly, the draft directive encourages self-regulation and a “soft” approach to government intervention in AI development. Rather than proposing one-size-fits-all, industry-wide legislation, the document encourages sector-specific regulators to consider highly tailored interventions where necessary, and calls on the government to strive for compatibility with global AI best practices.

In March, Italy briefly banned ChatGPT, citing concerns about how and how much user data is being collected by the chatbot.

Since then, Italy has committed about $33 million to support workers at risk of being left behind by the digital transition, including but not limited to AI. About a third of that sum will go toward training workers whose jobs may become redundant because of automation. The remaining funds will go to digital-skills training for people who are unemployed or economically inactive, in the hope of helping them enter the labor market.

Japan, like Israel, takes a “soft-law” approach to AI regulation: the country has no regulations governing specific types of AI use. Instead, Japan has opted to wait and see how AI evolves, aiming not to stifle innovation.

So far, AI developers in Japan have relied on related laws – such as data protection – as guidelines. For example, in 2018 Japanese lawmakers revised the country’s copyright law, allowing the use of copyrighted content for data analysis. Since then, lawmakers have clarified that the revision also applies to AI training data, clearing the way for AI companies to train their algorithms on other companies’ intellectual property. (Israel has adopted this same approach.)

Not every country puts regulation at the forefront of its approach to AI.

In the United Arab Emirates’ National Strategy for Artificial Intelligence, for example, only a few paragraphs address regulation. In brief, an Artificial Intelligence and Blockchain Council will “review national approaches to issues such as data management, ethics and cybersecurity,” and will observe and integrate global best practices on AI.

The remainder of the 46-page document is dedicated to boosting AI development in the UAE by attracting AI talent and integrating the technology into key sectors such as energy, tourism and healthcare. This strategy, the document’s summary said, is in line with the UAE’s efforts to become “the best country in the world by 2071”.
