AI Regulations

Undoubtedly, AI will continue to revolutionize society and business in the coming decades. However, it remains uncertain whether the world’s countries can agree on how the technology should be deployed for the greatest societal benefit. As stronger forms of AI emerge across a wider range of use cases, securing AI alignment at the international level could be one of the most significant challenges of the 21st century. Nearly every country is making efforts to regulate AI, or at least to ensure that its legal and social systems keep pace with the technology, but there is no consensus yet on how this fast-changing field should be governed.

EU AI Act

In April 2021, the European Commission proposed the first EU regulatory framework for AI. Under the proposed framework, AI systems used in different applications are analysed and classified according to the risk they pose to users, with higher risk levels attracting stricter regulation. Once approved, these will be the world’s first comprehensive rules on AI. Recent speculation suggests that the EU Parliament is considering adding artificial general intelligence (AGI) to the category of “high-risk” AI systems. AGI may default to the transparency category, with its ultimate uses in products potentially classified as high risk, i.e., the determination would be based on the final use rather than on the model itself.

Other Regulations of AI

US & Canada

  • The Copyright Office clarified its practices for examining and registering works that contain material generated by the use of AI technology.

  • New York City schools have banned ChatGPT, which generates human-like essays, amid fears that students could use it to cheat. According to the city’s education department, the tool will be forbidden across all devices and networks in New York’s public schools. Education departments in other jurisdictions are taking similar steps.

  • GenAI, like ChatGPT, is an innovative and massively disruptive technology. U.S. regulators are pushing for additional laws and provisions to ensure consumer, economic and international safety.

  • Canada released a companion document to its Artificial Intelligence and Data Act (AIDA), which states businesses would be held accountable for the creation and enforcement of appropriate internal governance processes and policies to achieve compliance with the AIDA.

UK

Italy

The Italian privacy regulator ordered a ban on ChatGPT over alleged privacy violations. The national data protection authority will immediately block OpenAI from processing the data of Italian users.

Singapore

The government of Singapore has released its “AI Verify” toolkit, which provides companies with a technical tool to verify whether their systems comply with “internationally accepted AI ethics principles.”

India

The government has published the National Strategy for AI with the objective of developing an ecosystem for the research and adoption of AI. The technologies related to GenAI are still evolving; currently, there is no specific regulation for GenAI.

China

Users are prohibited from using AI to engage in activities that endanger national security, damage the public interest or are otherwise illegal. Providers of GenAI are required to verify users via mobile phone numbers, IDs or other forms of documentation. Service providers must audit AI-generated content and user prompts, either manually or through technical means.

Emergence of GenAI and approaches to AI governance

As beneficial as GenAI has the potential to be, its growth does raise legal, moral and ethical questions. Some of the biggest concerns are:

Copyright: With GenAI producing unlimited amounts of content, especially art, the internet could soon be filled with works that are indistinguishable from human originals. This also raises the prospect of GenAI displacing humans across many creative workforces, such as freelancers and commercial artists in publishing, entertainment and advertising. This is already a concern in practice, and organizations should be aware of the long-term effects of using these platforms for code and application development.

Unreliable Content: As GenAI models are trained on large datasets (articles, books, websites), there is a substantial chance that the information they ingest is biased, which makes it hard to filter for credible content. On top of this, democratized access to these models makes it easy to create deepfakes, reinforce machine learning bias and spread misleading content across platforms.

Scams: The internet is filled with scammers trying to steal users’ data and money, and GenAI gives such actors a powerful new tool to cause damage or, at the very least, to circulate spam online.

Control on Biases in Each Jurisdiction to Train GenAI

GenAI algorithms need large amounts of training data to perform their tasks with high accuracy. However, it is challenging for generative techniques to produce entirely new content; they can only recombine what they have learned in new and different ways. Because the underlying algorithms are hard to control, GenAI models are not always stable, and they can produce unexpected outcomes.

Generative models create new data instances that resemble the training data, so the key becomes controlling the data instances used for training. Activities and use of the model must serve a specific purpose (in the absence of a granularly defined purpose, it’s not possible to determine which outcomes were intended vs. unintended) to enhance control of the biases.

The bounds of what is intended are dictated in part by the jurisdiction of operation, since what is and is not desired as a societal outcome varies across regulations. Creating synthetic data to replace identifiable data can proactively reduce privacy and confidentiality risks, for example in application testing and in the training of other proprietary models.
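As a minimal sketch of this idea, the snippet below replaces identifiable fields in a record with synthetic stand-ins before the data is reused for testing or training. The field names and the `synthesize` helper are illustrative assumptions, not part of any specific regulation or toolkit.

```python
import random
import uuid

# Illustrative field names; a real deployment would derive these from a
# data classification policy for the relevant jurisdiction.
IDENTIFIABLE_FIELDS = {"name", "email", "phone"}

def synthesize(record, rng=None):
    """Return a copy of `record` with identifiable fields replaced by
    synthetic stand-ins, leaving non-identifying fields intact."""
    rng = rng or random.Random()
    out = dict(record)
    for field in IDENTIFIABLE_FIELDS & record.keys():
        out[field] = f"synthetic-{uuid.UUID(int=rng.getrandbits(128))}"
    return out

user = {"name": "Mario Rossi", "email": "mario@example.com", "plan": "pro"}
safe = synthesize(user, rng=random.Random(42))
```

Non-identifying attributes (here, `plan`) survive unchanged, so the synthetic record remains useful for application testing without exposing the original identifiable data.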

Metatopology with Digitally Distinct National on Edge

The major cloud providers (AWS, Microsoft and Google) all focus on greater reach, greater geographic dispersion and seamless work-from-anywhere experiences. Complete isolation by jurisdiction would therefore not be simple, as these platforms were not designed for locally partitioned use. As regulations fragment the market, eventual technological decoupling becomes likely, and there is a growing need for composable or modular architectures and edge operations in each jurisdiction of operation.

Metatopology is a blend of several techniques, including edge computing, edge analytics and privacy-enhancing computation. Given the regulatory timelines above, organizations should plan to adopt privacy-enhancing computation techniques for business intelligence and analytics within the next 12 months. The focal point of this metatopology is a concept called the digitally distinct national (DDN): an individual with whom an enterprise must engage, whose private and/or professional computing environment emerges from nationally deterministic network effects, and whose personally descriptive data is regulated by national laws.
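As one illustration of privacy-enhancing computation at the edge, the sketch below adds Laplace noise (the mechanism used in differential privacy) to an aggregate before it leaves the local environment. The `epsilon` and `sensitivity` parameters are assumed values, and this is a teaching sketch rather than a production-grade implementation.

```python
import math
import random

def laplace_noise(scale, rng):
    # Sample Laplace(0, scale) via an inverse-CDF transform of a uniform draw.
    u = rng.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_sum(values, epsilon=1.0, sensitivity=1.0, rng=None):
    """Return the sum of `values` with noise calibrated so that only a
    privacy-protected statistic leaves the edge environment."""
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    return sum(values) + laplace_noise(scale, rng)

# Example: 100 edge records, each contributing at most 1 to the sum.
noisy_total = private_sum([1] * 100, epsilon=1.0, sensitivity=1.0,
                          rng=random.Random(0))
```

Only the noisy aggregate, not the individual records, would cross the jurisdiction boundary; a stronger `epsilon` guarantee (smaller value) means more noise and less precision.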

Rather than building the edge computing model with devices as the endpoint, this metatopology makes people the focal point of the edge operation. An edge topology that accommodates a DDN allows multinationals to address the challenges of employees and customers who are subject to the laws and technologies of their home country even when they do not reside there. Think of the edge operations as self-contained sandboxes comprising the certified and compliant applications of a specific country. Each jurisdiction will have its own unique sandbox, as regulations and restrictions vary from one nation to another. The sandbox allows digitally distinct nationals to stay connected to their originating work country while remaining compliant with the laws and regulations of the jurisdiction in which they operate. A DDN is thus locally compliant but globally consistent and connected.
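To make the sandbox idea concrete, here is a small hypothetical sketch of routing a DDN to the sandbox certified for their home jurisdiction, wherever they happen to connect from. The jurisdiction codes, application names and data regions are invented for illustration.

```python
# Hypothetical per-jurisdiction sandboxes: each entry lists the certified
# applications and the data region where that jurisdiction's data must stay.
SANDBOXES = {
    "EU": {"apps": ["crm-eu", "analytics-eu"], "data_region": "eu-west"},
    "SG": {"apps": ["crm-sg", "analytics-sg"], "data_region": "ap-southeast"},
}

def resolve_sandbox(home_jurisdiction, default="EU"):
    """Route a DDN to the sandbox of their originating work country,
    independent of where they are physically connecting from."""
    return SANDBOXES.get(home_jurisdiction, SANDBOXES[default])

# A Singapore-origin employee connecting from Berlin still lands in the
# Singapore-certified sandbox.
sandbox = resolve_sandbox("SG")
```

The lookup key is the DDN's originating jurisdiction, not their current network location, which is what keeps the user "locally compliant but globally consistent and connected."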
