
Ethical AI 101: Establishing Governance Mechanisms

This post is the fourth entry in our series on Ethical AI principles.

In the previous post, we discussed the 5th principle of the Ethical AI concept: understanding the issues related to bias in AI, its causes, and the biggest challenges in fixing it. We also looked at how deep learning models and AI systems can be made free of bias.

In this post, we will discuss the 8th principle of Ethical AI, which calls for “establishing global governance mechanisms.” While Principle 8 covers only global AI governance mechanisms, we will also look at the principle from a local, organizational perspective.

We will discuss the seven key requirements for trustworthy AI systems and how international standards can be used as a tool for AI governance. But before we do that, we will define AI governance, outline the challenge it poses, and look at ways to overcome it. Let’s begin.

What Is AI Governance?

AI governance is a guiding and legal framework for governing AI. It ensures that machine learning (ML) technologies are researched and developed with the aim of helping humans adopt AI and ML systems in a fair way, free from bias.

In addition, AI governance aims to bridge the gap that currently exists between ethics and accountability in technological advancement, both to prevent potential misuse of AI and to address matters related to ‘the right to information.’

The need for AI governance has increased with the implementation of AI systems across many sectors, like healthcare, education, transportation, public safety, and economics. AI governance focuses mainly on three aspects relating to AI: autonomy, data quality, and justice.

AI governance aims to find answers to questions relating to AI safety, identify relevant institutional and legal structures, clarify the role of ethical and moral intuitions with regard to AI, and establish personal data access and control.

The overall purpose of AI governance is to establish how much impact algorithms can have on every aspect of our lives and to identify the people responsible for monitoring it.

Understanding the AI Governance Challenge

There are many benefits associated with the development and use of artificial intelligence (AI) systems. These systems have the potential to improve labor productivity and transform businesses in ways that enhance quality of life. Perhaps this is why AI is being adopted across so many sectors today.

According to a recent report by McKinsey, the contribution of artificial intelligence (AI) to the global GDP could be as high as $13 trillion by 2030. This is the reason the competition for AI supremacy among countries is heating up, with China and the United States leading the way.

While many developed nations have created policies and frameworks for AI investment and development, the governance mechanisms and models needed to maximize the potential of AI and manage its risks are still missing.

While most governments do not consider AI governance a matter worthy of their time, there is growing concern that AI technologies could lead to a host of problems, chiefly ethical ones, if they are not implemented responsibly and the data within them is not managed properly.

The good news is that some governments and international organizations have finally started to realize the seriousness of the matter. They have come up with ethical principles that outline how AI technologies should be developed and used, the purpose being to minimize the risks brought about by a lack of AI governance.

How to Govern AI

Properly governing artificial intelligence (AI) systems is easier said than done. However, this does not mean that effective AI governance is beyond the realm of possibility. Mark Latonero, a professor at the University of Southern California, provides some recommendations on how to govern AI in his research report, “Governing Artificial Intelligence: Upholding Human Rights & Dignity.”

In his report, Latonero makes several recommendations to stakeholders regarding AI governance. One of these is that companies should communicate effectively with local civil society groups, paying special attention to regions with poor human rights records. Latonero believes that this will ultimately lead to Human Rights Impact Assessments at every stage of an AI system’s development.

For governments, a good way to start would be to acknowledge their human rights obligations towards citizens. Once this happens, there is a greater chance of governments giving priority to the protection of their citizens’ fundamental rights in policies that relate to AI.

In addition to the above, governments around the world should cooperate with each other to ensure that AI is developed and used in a way that does not compromise human rights. The UN, with its 193 member states, is the perfect platform for this.

However, since human rights principles are not technical specifications, considerable work will be needed from all stakeholders to achieve these goals. This means that engineers, lawyers, social scientists, and computer scientists will also have to come together to ensure that human rights are a key consideration in the design, development, and use of AI systems.

In addition to the above-mentioned experts, an effort will be needed from academia as well. Researchers will need to explore the value of human rights law, humanitarian law, and ethics, their limitations, and the interactions between them as they relate to AI technology.

The job of academia is to research and analyze the different trade-offs with regard to human rights in specific circumstances related to AI. The impact of AI on the ground, on the other hand, needs to be explored by social scientists.

Singapore is the first country to have taken all of these recommendations into account and come up with an AI governance framework. The framework translates the relevant ethical principles into practices that can be implemented when deploying artificial intelligence (AI) systems in an organization, providing an operational model for putting those principles into action. Singapore’s AI governance framework has two main objectives:

  • Create an Assessment Guide that helps organizations determine if they have developed AI systems in line with the relevant ethical and governance practices and measures
  • Promote the adoption of AI by increasing the trust and confidence of consumers in providing their personal data for AI

The AI governance framework should be a model for other governments as well as private entities to ensure that AI systems are developed and used as per the relevant ethical principles and governance mechanisms.

7 Key Requirements for Trustworthy AI

Ensuring trustworthy AI should be the basis of any AI governance mechanism. What is trustworthy AI? It is an AI system that is:

  • Robust
  • Ethical
  • Lawful

It isn’t enough for an AI system to be trustworthy from a technical perspective. It also needs to be trustworthy from a social and environmental perspective. Another key requirement for trustworthy AI is that an AI system should respect ethical values and principles, making them the cornerstone for developing and using AI. Last but not least, AI systems must be developed and used in a way that does not violate any applicable laws or regulations.

These are some of the basic qualities that make up a trustworthy AI. However, the key requirements for trustworthy AI are different from these basic qualities. These requirements are explained below.

1. Human Agency and Oversight

AI systems must empower human beings; this is a major quality that makes AI trustworthy. AI should allow human users to make informed decisions while fostering their fundamental rights. It is also important to ensure that appropriate oversight mechanisms are in place, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
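To make these oversight approaches concrete, here is a minimal human-in-the-loop sketch in Python. The toy_model function and the 0.85 confidence threshold are purely illustrative assumptions; the point is that the system acts autonomously only when it is confident, and defers everything else to a human reviewer who has the final say.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def decide(features, model_predict, review_queue, threshold=0.85):
    """Let the model decide only when it is confident enough; otherwise
    defer to a human reviewer (a human-in-the-loop oversight mechanism)."""
    label, confidence = model_predict(features)
    if confidence >= threshold:
        return Decision(label, confidence, decided_by="model")
    # Low-confidence cases are queued for a person, who makes the final call.
    review_queue.append((features, label, confidence))
    return Decision("pending_human_review", confidence, decided_by="human")

# Usage with a hypothetical stand-in model:
def toy_model(features):
    score = sum(features) / len(features)
    return ("approve" if score > 0.5 else "reject", abs(score - 0.5) * 2)

queue = []
print(decide([0.95, 0.9, 0.98], toy_model, queue))  # confident: model decides
print(decide([0.5, 0.55, 0.45], toy_model, queue))  # uncertain: human review
```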

2. Technical Robustness and Safety

Another requirement for trustworthy AI is that AI systems must be secure and resilient. Safety in AI means having a backup plan in case something goes wrong. Additionally, AI systems must be accurate, reliable, and reproducible. This helps prevent or minimize unintentional harm, which in turn makes the system trustworthy.
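As a sketch of what a "backup plan" can look like in code, the snippet below degrades gracefully to a simple, well-understood rule when the model fails, and fixes a random seed so results are reproducible. The broken_model and conservative_rule functions are hypothetical placeholders.

```python
import random

def predict_with_fallback(features, model_predict, fallback_rule):
    """Try the model first; if it fails or returns nothing, fall back
    to a simple, well-understood rule (the 'backup plan')."""
    try:
        prediction = model_predict(features)
        if prediction is None:  # treat missing output as a failure
            raise ValueError("model returned no prediction")
        return prediction, "model"
    except Exception:
        # Any model failure degrades gracefully to the conservative rule.
        return fallback_rule(features), "fallback"

def run_reproducibly(seed, experiment):
    """Fix the random seed so the same inputs yield the same outputs."""
    random.seed(seed)
    return experiment()

# Usage: a deliberately failing model exercises the fallback path.
def broken_model(features):
    raise RuntimeError("model unavailable")

def conservative_rule(features):
    return "manual_review"  # safe default when the model cannot be trusted

print(predict_with_fallback([1, 2, 3], broken_model, conservative_rule))
print(run_reproducibly(42, lambda: random.random()))  # same seed, same value
```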

3. Privacy and Data Governance

Respect for data protection and privacy is one of the key qualities of trustworthy AI. A trustworthy AI system also has adequate data governance mechanisms in place; these mechanisms must account for the quality and integrity of the data and ensure legitimate access to it.
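Here is a minimal sketch of what such mechanisms might look like in practice, assuming hypothetical field names, a fixed example salt, and an in-memory access log: direct identifiers are replaced with salted hashes, and every access is checked against pre-approved purposes and recorded.

```python
import hashlib
from datetime import datetime, timezone

ACCESS_LOG = []  # in practice, durable append-only storage

def pseudonymize(record, direct_identifiers, salt):
    """Replace direct identifiers with salted hashes so records can be
    analyzed without exposing who they belong to."""
    cleaned = dict(record)
    for field in direct_identifiers:
        if field in cleaned:
            digest = hashlib.sha256((salt + str(cleaned[field])).encode()).hexdigest()
            cleaned[field] = digest[:12]
    return cleaned

def access_dataset(user, purpose, records, allowed_purposes):
    """Grant access only for legitimate, pre-approved purposes, and log it."""
    ACCESS_LOG.append((datetime.now(timezone.utc).isoformat(), user, purpose))
    if purpose not in allowed_purposes:
        raise PermissionError(f"purpose '{purpose}' is not approved")
    return [pseudonymize(r, ["name", "email"], salt="example-salt") for r in records]

# Usage with made-up data:
data = [{"name": "Ada", "email": "ada@example.com", "score": 0.9}]
print(access_dataset("analyst_1", "model_training", data, {"model_training"}))
```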

4. Transparency

Transparency of AI systems, data, and business models is needed to ensure trustworthy AI. This can be achieved with mechanisms designed to ensure traceability. Another important step is explaining AI systems and their decisions in language the user or stakeholder can easily understand, along with informing them about the system’s capabilities and limitations.
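One way to support traceability is to record every decision alongside its inputs, the model version, and a plain-language explanation for the user. The sketch below assumes a hypothetical credit-scoring model and log format.

```python
import json
from datetime import datetime, timezone

def log_decision(log, model_version, inputs, output, explanation):
    """Record everything needed to trace a decision back to the model,
    data, and reasoning that produced it."""
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "explanation": explanation,  # plain-language reason for the user
    })

# Usage with hypothetical values:
decision_log = []
log_decision(
    decision_log,
    model_version="credit-scorer-1.4.2",
    inputs={"income": 52000, "late_payments": 3},
    output="declined",
    explanation="Declined mainly because of 3 late payments in 12 months.",
)
print(json.dumps(decision_log[-1], indent=2))
```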

5. Diversity, Non-Discrimination, and Fairness

AI systems must be free of unfair bias, which can have many negative implications for society, such as aggravating discrimination and excluding vulnerable groups.

Not only should AI systems foster diversity by being accessible to all, but they should also involve all key stakeholders throughout their entire lifecycle.
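A simple, pre-deployment bias check can help here. The sketch below, using made-up data, computes the rate of positive outcomes per group and reports the gap between the best- and worst-treated group, a rough version of a demographic parity check.

```python
from collections import defaultdict

def demographic_parity_gap(outcomes):
    """Compute the gap between the highest and lowest rate of positive
    outcomes across groups; a large gap is a warning sign of bias."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in outcomes:
        totals[group] += 1
        positives[group] += int(outcome)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Usage: (group, got_positive_outcome) pairs from hypothetical model output
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap, rates = demographic_parity_gap(decisions)
print(rates)             # A: ~0.67, B: ~0.33
print(f"gap={gap:.2f}")  # 0.33: large enough to investigate before deployment
```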

6. Social and Environmental Wellbeing

Like the first requirement, this one puts people first: AI systems should be developed for the benefit of all people, not just a select few. "All people" here includes not only those present today but our future generations as well.

Therefore, AI systems must be developed to be sustainable and environmentally friendly. In addition to humans, other living beings in the environment must be considered to understand the true social and environmental impact of AI development and use.

7. Accountability

The seventh and final requirement for trustworthy AI is accountability. To ensure the accountability of AI, governments and organizations need to put in place mechanisms dedicated to this purpose. This accountability is not limited to the AI system itself; it extends to the outcomes the system produces. Auditability plays a key role in these accountability processes because it allows data, algorithms, and design processes to be assessed.
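To illustrate auditability, here is a minimal sketch of a tamper-evident audit trail: each entry's hash covers the previous entry, so altering any past event breaks verification. The event fields are illustrative assumptions.

```python
import hashlib
import json

def append_audit_event(trail, event):
    """Append an event whose hash covers the previous entry, making any
    later tampering with the trail detectable."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    trail.append({"event": event, "prev_hash": prev_hash, "hash": entry_hash})

def verify_trail(trail):
    """Recompute every hash in order; returns False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in trail:
        payload = json.dumps(entry["event"], sort_keys=True)
        if hashlib.sha256((prev_hash + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

# Usage with hypothetical events:
trail = []
append_audit_event(trail, {"action": "model_deployed", "version": "1.4.2"})
append_audit_event(trail, {"action": "threshold_changed", "to": 0.85})
print(verify_trail(trail))              # True
trail[0]["event"]["version"] = "9.9.9"  # simulate tampering
print(verify_trail(trail))              # False
```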

By fulfilling the above-mentioned requirements, governments and organizations can ensure a trustworthy AI.

Using International Standards as a Tool for AI Governance

For many decades, the governing bodies responsible for the implementation and adoption of international standards have played a key role in getting private entities to adopt and abide by governance standards.

Some examples of these governing bodies are the IEEE, IEC, and ISO. These bodies do not impose standards unilaterally; instead, they develop them through a consensus process that involves consulting governments, businesses, academia, and other key stakeholders.

In the past, international governance standards have ensured worldwide interoperability. Today, international AI governance standards are encouraging governments to get involved. An example of this is the US Executive Order on AI, which prioritizes the development of AI governance standards; the US government is also developing an engagement strategy for these standards.

Unlike in the past, China has also agreed to take part in AI governance efforts. In 2018, the Chinese government published a standards strategy that prioritized the development of international standards. In the same year, ISO and IEC’s Joint Technical Committee for IT created a standards subcommittee for artificial intelligence (SC 42).

The leadership of this subcommittee was contested by both the US and China, ultimately leading to a compromise: a US-based Huawei employee chairs SC 42 and the committee’s secretariat is based in the US, while Beijing hosted the committee’s first-ever meeting. Standards development is currently underway at SC 42. At the same time, the IEEE is developing a series of standards to address algorithmic bias, transparency, and fail-safe design.

Final Word

Establishing governance mechanisms is needed not just to ensure a trustworthy AI system; it is also needed to get accurate, reliable outcomes that are free from all kinds of bias.

In this post, we discussed what AI governance is, the AI governance challenge, how to govern AI, the seven key requirements for trustworthy AI, and how international standards can be used as an AI governance tool.

This article wraps up our Ethical AI 101 series. Please don’t hesitate to contact us if you have any questions on this topic or need help with delivering successful AI projects that adhere to the 10 principles of ethical AI.

Get in touch to learn how our AI-powered solutions can solve your business problem.