AI regulations – a global round-up

Artificial Intelligence is advancing at an exponential rate, rapidly changing the world around us across all sectors. Cognizant of the associated risks, countries around the world have started drafting and implementing regulatory frameworks to ensure
that AI systems are safe, secure, and trustworthy. Leading technology firms, governments, and policymakers have also called for a standardized global framework for developing advanced AI systems in a trustworthy manner. This article discusses the
emerging regulations across geographies, the challenges ahead, and how they could shape the investment sector, among other areas.

Our perspective on the AI regulations

  Some common themes emerging across the AI regulations are requirements for fairness, accountability, and transparency in the use of AI. Policymakers and regulators are calling on providers to offer responsible AI that protects individuals' fundamental
rights while fostering a healthy environment for innovation. Although the U.S. and U.K. have come up with draft regulations, the EU is far ahead, having finalized its regulation in December 2023; it could come into force by 2025. While
the EU has adopted a centralized, broad, and prescriptive risk-based approach, the U.K. has adopted a more decentralized model, relying on existing regulators to draw up sector-specific rules, as the UK government wants to avoid further
confusion by creating a new cross-sector regulator. The UK plans to adopt a more outcome-based risk approach rather than assigning default risk levels to the underlying technologies.

  The US has also taken a decentralized approach, with several federal agencies issuing their own principles for their respective sectors (e.g., the FTC and CFPB). The SEC is also likely to issue principles for financial recommendation algorithms
in the best interests of investors. While the EU standards are broad and stringent, the US agencies are lagging behind. The EU has also focused on e-commerce, social media, and online platforms, providing broad sector coverage, while the US is yet
to legislate on these issues.

Regulations around the world – A snapshot

A quick snapshot of the current state of regulations across the globe shows that the majority of countries are still in the process of framing regulations, or have them in draft form, with the exception of Europe and China. Some countries, such as
Mexico, have no draft regulations so far and no plans to put one in place in the near future. Below is a summary of the proposed regulations for the key geographies.


The United States has put forth multiple guidelines for trustworthy AI, such as the AI Bill of Rights released in October 2022 and the US Administration's Executive Order on the safe, secure, and trustworthy use of Artificial Intelligence. The US administration
recently issued an executive order with emphasis on the initiatives below.

  • AI development and research: The order directs federal agencies to invest in AI research and development, and to make their AI research more accessible to the public.
  • AI use in government: The order sets standards for the use of AI in government, including requirements for transparency, accountability, and fairness.
  • AI workforce development: The order directs federal agencies to develop and implement programs to train and upskill the American workforce in AI.
  • AI international cooperation: The order establishes a new National AI Initiative Office to coordinate international cooperation on AI.

 The order also includes a number of specific initiatives, such as:

  • Creating an AI Bill of Rights: The order directs the National Institute of Standards and Technology to develop an AI Bill of Rights, which will outline the rights and protections of individuals in the context of AI.
  • Establishing a Center of Excellence in AI Cybersecurity: The order establishes a new Center of Excellence in AI Cybersecurity to develop and promote best practices for securing AI systems.
  • Launching a National AI Research Cloud: The order launches a new National AI Research Cloud, which will provide researchers with access to powerful computing resources to develop and test AI algorithms.

  The AI Bill of Rights is a set of guidelines for the responsible design and use of artificial intelligence, created by the White House Office of Science and Technology Policy (OSTP) amid an ongoing global push to establish more regulations to govern AI.
Officially called the Blueprint for an AI Bill of Rights, the document, published in October 2022, is the result of a collaboration between the OSTP, academics, human rights groups, the general public, and even large companies like Microsoft and Google.


 The Artificial Intelligence and Data Act (AIDA), introduced by the Canadian government alongside the Consumer Privacy Protection Act (CPPA) as part of Bill C-27, the Digital Charter Implementation Act, 2022, is Canada's first attempt at
regulating artificial intelligence (AI). The provinces of Quebec and Ontario are also looking at developing frameworks for building trustworthy AI. These are likely to come into force by 2024.


The EU Artificial Intelligence Act was first published by the European Commission in April 2021 and adopted by the Council of the European Union in December 2022. The EU approved the Artificial Intelligence Act in December 2023, and it is likely to come into force from
2025. Some of the key takeaways from the draft regulations are discussed below.

The current regulations apply to any AI systems or applications used in the EU, regardless of whether the firm operates inside or outside the EU. The regulation adopts a risk-based approach, classifying AI use by risk level
(unacceptable, high, limited, and minimal or no risk) and imposing audit, documentation, and process requirements on AI system developers and deployers. Companies developing or deploying AI systems will therefore need to document and review use cases to identify
the appropriate risk classification.
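The four-tier classification and the use-case review it implies can be sketched as a simple lookup. The tier names come from the Act itself; the example use cases, their tier assignments, and the obligation summaries are purely illustrative assumptions, not an official mapping:

```python
from enum import Enum

class RiskTier(Enum):
    # Tier names are from the EU AI Act; the obligation summaries are
    # rough paraphrases for illustration only.
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, documentation, human oversight"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Hypothetical mapping of internal use cases to tiers, as a firm might
# record it during a use-case review.
use_case_tiers = {
    "social-scoring engine": RiskTier.UNACCEPTABLE,
    "cv-screening model": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in use_case_tiers.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```

The point of such a review is that obligations attach to the *use case*, not the underlying model: the same model could sit in different tiers depending on how it is deployed.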

  • The AI Act prohibits “unacceptable risk” systems, including certain uses of biometric and facial recognition systems in public places. Systems are considered high-risk if they pose a “significant risk” to an individual’s health, safety,
    or fundamental rights.
  • High-risk AI systems would be subject to pre-deployment conformity assessments and to requirements for compliance documentation, traceability of results, transparency, human oversight, accuracy, and security.
  • Stricter transparency obligations are proposed for generative AI, a subcategory of foundation models, requiring that providers of such systems inform users when content is AI-generated.
  • Parliament’s proposal increases the potential penalties for violating the AI Act. Engaging in a prohibited practice would be subject to penalties of up to €40 million or 7% of a company’s annual global revenue, whichever is higher, up from €30 million
    or 6% of global annual revenue.
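The "whichever is higher" penalty mechanic above is simple arithmetic; a minimal sketch, where the €40 million / 7% figures are the prohibited-practice tier from Parliament's proposal and the function name and example revenue are illustrative:

```python
def aia_penalty(tier_fixed_eur: float, tier_pct: float,
                annual_global_revenue_eur: float) -> float:
    """Penalty is the HIGHER of a fixed amount and a share of revenue."""
    return max(tier_fixed_eur, tier_pct * annual_global_revenue_eur)

# Prohibited-practice tier: EUR 40M or 7% of annual global revenue.
# For a hypothetical firm with EUR 2bn revenue, 7% (EUR 140M) exceeds
# the EUR 40M floor, so the percentage governs.
penalty = aia_penalty(40_000_000, 0.07, 2_000_000_000)
print(f"EUR {penalty:,.0f}")
```

For smaller firms the fixed amount dominates: at €100 million of revenue, 7% is only €7 million, so the €40 million floor applies instead.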


  The UK Government launched its white paper in March 2023, which defines five guiding principles for AI – safety, transparency, fairness, accountability, and contestability. The approach is based on an agile, pro-innovation model that leverages the capabilities
and skills of existing regulators, as opposed to a new AI-specific, cross-sector regulator.


China’s top three regulations on AI are the 2021 regulation on recommendation algorithms, the 2022 rules for deep synthesis (synthetically generated content), and the 2023 draft rules on generative AI. These regulations target recommendation algorithms
for disseminating content, synthetically generated images and video, and generative AI systems like OpenAI’s ChatGPT. The rules create new requirements for how algorithms are built and deployed, as well as for what information AI developers must disclose to
the government and the public. The draft generative AI regulation requires providers to submit a filing to the existing algorithm registry. It also includes several new requirements on training data and generated content that may prove extremely difficult
for providers to meet.

Implementation Challenges

 The conflicting rules across geographies pose a huge challenge for global AI providers, who must comply with each jurisdiction's regulations even as their systems are deployed across many geographies. In addition, AI guidelines are not being implemented
as rapidly as new AI models emerge, which may render the legislation obsolete. With the exception of the EU, enforcing such frameworks remains a challenge, as most other geographies have made adoption voluntary.

 There is no clear global consensus on the definition of AI systems. For example, the EU AI Act requires providers to disclose any copyrighted material used in developing their solutions, and rights holders can opt their copyrighted data out of
training datasets, making the EU a less desirable choice for AI vendors. Other geographies have no specific regulations on copyrighted material used in training datasets, leaving it to individual providers to exercise prudence
regarding copyrighted material.

Given the cross-disciplinary impact of the regulations, experts from law, ethics, IT, finance, and other domains must reach consensus on common provisions and a framework.

Regulations designed to address specific AI risks may have unintended consequences, hindering AI innovation, research, or beneficial products. Striking a balance between fostering innovation and safeguarding against risks is very hard
for governments, as overly strict rules could also restrict AI startups and funding.

Next Steps

The EU AI Act has already been approved by the parliament and is likely to become the global standard for AI regulation by 2025, while the US and UK have voluntary guidelines and frameworks with no centralized view across sectors and are lagging
behind the EU in terms of enforcement dates. Each of the regulations (US, EU, and China) reflects its own societal structure and national priorities. As a result, this creates a more complex regulatory environment for businesses operating across
geographies. Transparency, explainability, and risk categorization (at the organizational, use case, and model/data level) are key to complying with emerging regulatory frameworks.

 In the financial services space, AI/ML technologies are commonly used in investment research, robo-advisors, risk evaluation, AML checks, and more. These AI- and machine-learning-enabled services allow financial institutions to offer tailored
and diverse products to their customers in a cost-efficient manner. The growing adoption of generative AI also brings rising concern about financial risks. Although the EU has stringent measures and penalties in place for investor protection, the SEC
and FCA are taking a cautious approach to devising their guidelines.

Firms' preparedness to adopt the regulations despite these challenges is a crucial step towards global compliance. Readiness to comply should begin with a consolidated AI asset catalogue listing the various cross-geography
regulations, the impacted users and products, the risk categorization of the various AI applications used internally, and so on. It is imperative that firms build a well-defined AI workbench comprising a holistic framework with components such as Data Management,
Innovation, Governance, and Compliance with policies embedded, thus demonstrating the trustworthiness of the AI systems involved.
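The AI asset catalogue described above can be sketched as a small record type plus a compliance query. This is a hedged illustration of the idea, not a prescribed schema: the field names, the example entries, and the regulation labels are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class AIAsset:
    name: str
    jurisdictions: list[str]            # geographies where the system is deployed
    applicable_regulations: list[str]   # e.g. "EU AI Act", "UK white paper principles"
    impacted_products: list[str]
    risk_category: str                  # e.g. "high", "limited", "minimal"

# Hypothetical catalogue entries for a financial-services firm.
catalogue = [
    AIAsset("robo-advisor", ["EU", "UK"], ["EU AI Act"], ["retail advisory"], "high"),
    AIAsset("aml-screening", ["US"], ["FTC guidance"], ["client onboarding"], "high"),
    AIAsset("doc-summarizer", ["EU"], ["EU AI Act"], ["internal research"], "minimal"),
]

# Compliance review: list every high-risk asset deployed in the EU,
# i.e. the ones facing the heaviest documentation obligations first.
eu_high_risk = [a.name for a in catalogue
                if a.risk_category == "high" and "EU" in a.jurisdictions]
print(eu_high_risk)  # ['robo-advisor']
```

Keeping jurisdiction, regulation, and risk category together per asset is what lets a firm answer cross-geography questions like this with a single query rather than a manual review.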

It remains to be seen whether there will be industry-wide collaboration among the various policymakers, major technology firms, and other stakeholders to draft a standardized global AI regulation. The G7 leaders have also called for discussions on generative
AI, with the aim of establishing global standards and regulations for the responsible use of the technology.



This post originally appeared on TechToday.