
The 6 key principles of ethical AI development

Rapid development in AI raises significant questions for law and society. Addressing them means ensuring that AI designed for enterprises abides by a mutually beneficial set of ethical rules.

We are moving deeper into the digital age of business, where process automation using Artificial Intelligence (AI) has found widespread application across industries. Although AI has been prominent in enterprises since the turn of the 21st century, rapid technological advancement has made it even more sophisticated, able to perform a variety of complex business tasks with high accuracy and efficiency.

According to a study conducted by SEMrush, 86% of CEOs said that AI was a mainstream technology in their organization in 2021. During the pandemic in particular, AI technologies such as chatbots handled 85% of customer service interactions, helping thousands of businesses survive a period of market unpredictability. It is quite evident, then, that the use of AI in enterprises will keep growing in the next decade.

AI and ethics

Despite the general excitement and intrigue surrounding the technology, rapid developments in AI raise significant questions with regard to law and society. The law answers some of these questions but, given the speed of technological progress, cannot anticipate every breakthrough. Compliance with the law is therefore mandatory, but it does not adequately address every concern regarding technology and society.

Although the literal meaning of ethics is self-evident, a common understanding of “ethics” is needed when it comes to technology. According to the Alan Turing Institute report “Understanding Artificial Intelligence Ethics and Safety,” “AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.”

Establishing an ethical framework for AI starts with explaining the risks and opportunities involved in the design and use of the technology. The core objective of ethics in AI is not to limit its scope or underuse the technology, but to create an ecosystem where opportunities are maximized and risks are minimized.

With that being said, here are the core factors that an ethical AI framework should consider:

  • The potential impact of the project on stakeholders and communities.
  • Anticipating the discriminatory effects of AI projects on individuals and social groups to assess if the project is fair and non-discriminatory.
  • Identifying and minimizing biases in the dataset that may influence the model’s output. Developers should try to perform this check at every stage of the design process.
  • Enhancing public trust in the project through best-effort guarantees of transparency, accuracy, clarity, reliability, safety, and security, and explaining the project comprehensively to stakeholders and communities, including its decision-making process and redressal mechanisms, to the extent possible.

Now, let’s look at 6 key principles that every AI system should abide by to deliver unbiased, efficient, and safe performance every time.

The 6 key principles of ethical AI development

  1. Transparency

Complex algorithmic systems are often considered unpredictable because you can see the inputs and outputs but not the process behind the results. Transparency is a design choice that opens up this backend functionality, increases accountability, and enhances public trust in AI decision-making.

Transparency provides insight into the functioning of a system and is the first step towards making AI explainable, redressable, appealable, and accountable. There are multiple levels of transparency.

Example

A loan applicant complains to the bank about the rejection of her loan application to start a business. The bank then states that the AI-enabled system has considered her loan application and rejected it. The applicant responds that she does not know how AI works, has never defaulted on her bills and that it is her first time applying for a loan.

In this scenario, a transparent AI design allows the applicant to challenge the automated decision based on pre-defined parameters. For example, in the case above, the AI might have based its decision on attributes like age, gender, or marital status.

Transparency should allow the bank and the applicant to revisit the attributes considered by the AI system.

To sum it up

  • Developers should establish an easy-to-explain model for the decision-making process.
  • AI must have a definite list of attributes, along with the weightage each carries in decision-making (see the sketch after this list).
  • AI should have a comprehensive map of intervention points to make necessary changes more quickly and precisely.
  • A proper redressal mechanism for users and beneficiaries is non-negotiable.
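To make the first two points concrete, here is a minimal sketch of an explainable loan model. The attribute names, toy data, and linear-contribution method are illustrative assumptions, not a prescribed implementation.

```python
# A minimal sketch of an explainable loan-decision model.
# Attribute names and toy data are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["years_employed", "income_lakhs", "existing_debt_lakhs"]

# Toy training data: one row per past applicant, columns follow FEATURES.
X = np.array([[8.0, 6.0, 0.5],
              [1.0, 2.5, 2.0],
              [5.0, 4.5, 0.8],
              [2.0, 3.0, 1.5]])
y = np.array([1, 0, 1, 0])  # 1 = loan granted, 0 = rejected

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain(applicant: np.ndarray) -> dict:
    """Report the decision plus each attribute's weighted contribution,
    so the bank and the applicant can revisit what the system considered."""
    contributions = dict(zip(FEATURES, model.coef_[0] * applicant))
    decision = "granted" if model.predict([applicant])[0] == 1 else "rejected"
    return {"decision": decision, "contributions": contributions}

print(explain(np.array([2.0, 2.8, 1.8])))
```

A linear model is the simplest “easy-to-explain” choice; more complex models would need a post-hoc explainer, but the principle of exposing attributes and their weights stays the same.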

  2. Accountability

AI is non-human; therefore, it is impossible to hold AI accountable for its decisions within the current legal framework. However, developers can identify intervention points at the stage of algorithm design, AI system training, and data collection. Accountability in AI decision-making means it is possible to trace AI outcomes/decisions to an individual or organization or to a step in the AI design process.

The purpose of an accountable AI design is to provide users and beneficiaries with a redressal mechanism and to solve problems internally as they arise. Because there is no legal clarity on holding AI accountable, accountability as an ethical AI principle is the obligation to enable redressal and to address design failures internally.

Example

The bank tells the loan applicant that their AI-enabled system considered her loan application and rejected it according to pre-set metrics. However, when the applicant wants to appeal the decision, the banker is unable to guide her to the appropriate authority.

In such a scenario, accountable design and processes would help users seek an explanation for why they were denied a loan and apply for a review of the decision. Accountability would also help bankers maintain customer relations by guiding customers to the appropriate redressal point.

To sum it up

Enhancing accountability requires a strategy that draws a hierarchy of responsibility depending on the harm or failure caused by the AI system. To ensure accountability, developers should consider developing the following protocols (a sketch follows the list):

  • A dedicated diagnostic tool to verify that the data was collected legally.
  • A map of the system and individuals responsible for each function.
  • Redressal mechanisms for beneficiaries of the system.
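A minimal sketch of what such protocols could look like in code follows; the stage names, owner addresses, and record fields are hypothetical assumptions.

```python
# A minimal sketch of an accountability trail; stage names and owner
# addresses are hypothetical. A real system would use durable storage.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Maps one automated decision back to the people and steps behind it."""
    applicant_id: str
    decision: str
    model_version: str
    # Each pipeline stage paired with the team accountable for it.
    owners: dict = field(default_factory=lambda: {
        "data_collection": "data-governance@bank.example",
        "model_training": "ml-platform@bank.example",
        "deployment": "credit-risk@bank.example",
    })
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

AUDIT_LOG: list = []

def record_decision(applicant_id: str, decision: str, model_version: str) -> None:
    AUDIT_LOG.append(DecisionRecord(applicant_id, decision, model_version))

def escalation_point(record: DecisionRecord, stage: str) -> str:
    """Tell a customer-facing banker whom to route an appeal to."""
    return record.owners[stage]

record_decision("APPL-1042", "rejected", "loan-model-v3.1")
print(escalation_point(AUDIT_LOG[-1], "model_training"))
```

With a record like this, the banker in the example above could immediately route the applicant’s appeal to the team responsible for the relevant stage.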

  3. Mitigating bias

In the context of automated decision-making by AI-enabled systems, “bias” is an outcome that is structurally less favorable to individuals from a certain group without a rationale justifying the distinction. Without checks and balances, bias amplifies within an AI-enabled system and leads to discriminatory outcomes for a group, even when the developer or designer never intended it.

Bias can stem from using attributes such as gender, race, or caste as training data without checks, and in some cases, without tracking how the AI system is processing data. Bias also significantly diminishes the accuracy of an AI system.

On top of that, there is no legal framework to mitigate such harm. It is therefore important for developers to voluntarily trace, treat, and mitigate bias ethically to prevent unintended outcomes.

Example

The applicant says that a male friend of hers with the same credit rating and similar economic background has been granted a loan. The bank tells her that their AI-enabled system considered and rejected her loan application. Upon further investigation, it is revealed that in the bank’s 100 years of existence, the percentage of loans granted to unmarried women is less than 1%.

The first step in addressing this situation is to trace the source of the bias. In most instances, bias in AI is an amplification of biased training data or biased design. Once the source has been identified, multiple practices can mitigate the bias. Being cognizant of the structural biases that exist in society makes it easier to identify the reason behind biased outcomes.

To sum it up

  • Regular audits must be conducted for data and labels to ensure diversity in the data and attributes used.
  • Awareness exercises should be conducted on historical and contextual biases with discussions on technical protocols for bias mitigation.
  • Establishing technical methods to ensure fairness, such as re-weighting, relabeling, and data transformation, in order to eliminate any unanticipated correlations (a re-weighting sketch follows this list).
  • Conducting timely outcome measurements and comparing results across identities to ensure equally accurate outcomes.
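As one illustration of the re-weighting bullet above, here is a minimal sketch in the spirit of Kamiran and Calders’ “reweighing” method: samples from group/label combinations that are under-represented relative to statistical independence receive larger training weights. The column names and toy data are hypothetical.

```python
# A minimal re-weighting sketch: weight = P(group) * P(label) / P(group, label),
# so over-represented combinations are down-weighted and vice versa.
import pandas as pd

df = pd.DataFrame({
    "group":   ["unmarried_woman"] * 2 + ["other"] * 8,   # hypothetical data
    "granted": [0, 0, 1, 1, 1, 0, 1, 1, 0, 1],
})

def weight(row):
    p_group = (df["group"] == row["group"]).mean()
    p_label = (df["granted"] == row["granted"]).mean()
    p_joint = ((df["group"] == row["group"]) &
               (df["granted"] == row["granted"])).mean()
    # Expected joint probability under independence, divided by observed.
    return (p_group * p_label) / p_joint

df["train_weight"] = df.apply(weight, axis=1)
print(df.groupby(["group", "granted"])["train_weight"].first())
```

These weights would then be passed as sample_weight when fitting the model, so the historical under-representation of one group no longer dominates training.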

  4. Fairness

Continuing from the previous point, fairness is one of the most relevant and difficult elements of ethical AI development. It is a normative concept with numerous definitions: Arvind Narayanan’s 2018 tutorial at the ACM Conference on Fairness, Accountability, and Transparency (now ACM FAccT) presented 21 of them. Broadly, these definitions affirm that developers must actively include checks and balances during algorithmic design to ensure that there is no individual or group discrimination in the outcomes of the AI process.

Example

Further investigation in the applicant’s loan appeal denial reveals that the AI-enabled system denies loans to all unmarried women. The start-up that supplied the AI technology is sure that this is the optimal outcome, and the bank should accept the decision suggested by the AI.

However, it is logically incorrect to call an outcome accurate when it ignores the reality that the bank has not previously granted loans to unmarried women. Here, there is a need to reconcile the accuracy rate of the AI-enabled system with social realities.

The rule of thumb is that an AI-enabled system should not produce disproportionately accurate or inaccurate results for one group compared to other groups.

To sum it up

A complicated issue with many nuances and definitions, fairness cannot be separated from the context in which AI is applied. Therefore, it is difficult to have overarching principles for determining fairness across a wide range of AI applications.

Thus, ensuring fairness requires the combination of the following practices:

  • A map of anti-discrimination safeguards in law, and of the system functions that should be mindful of those safeguards.
  • A policy on fairness in AI outcomes and a mandate for the same range of accuracy across demographic groups (a sample check is sketched after this list).
  • Fairness checks at the envisioning, pre-mortem, and product-greenlighting stages, or at the pre-processing, in-processing, and post-processing stages.
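A minimal sketch of the per-group accuracy check mentioned above; the labels, predictions, and 0.1 gap threshold are illustrative assumptions.

```python
# A minimal fairness check: accuracy must fall in the same range for
# every demographic group. Toy data; the 0.1 threshold is illustrative.
import numpy as np

groups = np.array(["A", "A", "A", "B", "B", "B", "B", "B"])
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])   # actual outcomes
y_pred = np.array([0, 0, 1, 1, 0, 1, 0, 1])   # model predictions

def per_group_accuracy(groups, y_true, y_pred):
    return {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}

acc = per_group_accuracy(groups, y_true, y_pred)
gap = max(acc.values()) - min(acc.values())
print(acc, f"accuracy gap = {gap:.2f}")
if gap > 0.1:  # illustrative policy threshold
    print("Fairness check failed: investigate the data and model.")
```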

  5. Security

Safety and security while using an AI system are non-negotiable for preserving public trust in AI. As technology advances, so do the threats to security; the response to such threats should also grow accordingly.

The need for secure networks cannot be overstated: more than 50% of organizations in India suffered a cybersecurity breach in 2020, including large organizations like BigBasket and Flipkart. This makes it especially important for start-ups to be mindful of security, as it can affect their valuation, trustworthiness, and general robustness.

Example

The loan applicant’s form stated that the purpose of the loan was to start a fertilizer business. The next day, she starts receiving calls and email advertisements for purchasing wholesale fertilizers, rubber gloves, etc. When she approaches the bank, the bank tells her that it does not share personal information with third parties. On further investigation, it is revealed that someone has breached the AI-enabled system and stolen data, including sensitive data.

When AI is deployed in settings of significant public interest, as in this scenario, ensuring secure networks also takes on a geopolitical dimension. Security as an ethical imperative mitigates, to some extent, the damage caused by such incidents.

To sum it up

An ideal AI system should be robust, secure, and safe throughout its entire lifecycle. To maintain top-notch security for AI infrastructure, developers should consider the following factors (a traceability sketch follows the list):

  • Ensure traceability, including in relation to datasets, processes, and decisions made during the AI system lifecycle.
  • Analyze the AI-enabled system’s outcomes and responses to ensure contextual and state-of-the-art AI.
  • Based on the context, their roles, and their ability to act, developers should apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis.
  • Be mindful of risks related to AI-enabled systems, including privacy, digital security, safety, and bias.
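As a sketch of the traceability bullet, the snippet below fingerprints a training dataset and ties each decision to it; the file path, version strings, and record fields are hypothetical.

```python
# A minimal traceability sketch: hash the training data and link every
# decision to that hash, so tampering or drift is later detectable.
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(path: str) -> str:
    """SHA-256 of a file, read in chunks so large datasets fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def trace_entry(dataset_path: str, model_version: str, decision_id: str) -> str:
    """One line of an append-only trace linking a decision to its inputs."""
    return json.dumps({
        "decision_id": decision_id,
        "model_version": model_version,
        "dataset_sha256": fingerprint(dataset_path),
        "at": datetime.now(timezone.utc).isoformat(),
    })

# Usage, assuming a training file exists at this hypothetical path:
# print(trace_entry("data/loans_2021.csv", "loan-model-v3.1", "DEC-88231"))
```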

  6. Privacy

Privacy in AI refers to individual autonomy regarding sharing information about oneself with others and the public at large. In India, the right to privacy is a fundamental right under Article 21 of the Indian Constitution. The Supreme Court recognized the right to privacy as a part of the fundamental right to life and liberty in 2017.

Privacy interacts with AI when large amounts of customer and vendor data are fed into algorithms to generate insights without the knowledge of the data principals. Privacy is also violated at the data collection stage if appropriate consent is not obtained before collecting and aggregating data for model training.

As you can see, privacy in AI is quite nuanced.

Example

After stating in her loan application that the purpose of the loan is to start a fertilizer business, the following day, the applicant starts receiving calls and email advertisements for purchasing wholesale fertilizers, rubber gloves, etc. On further investigation, it is revealed that the AI-enabled system that determines whether or not loans should be granted is selling personal information to third parties.

At its core, privacy is the right to be left alone. In this scenario, that means a loan applicant should be left alone and not be disturbed by advertisements she has not subscribed to.

To sum it up

From a legal and ethical perspective, privacy and data protection are crucial for any AI business. Additionally, sensitive customer information such as financial, health, genetic, or children’s data often requires better protection compared to other personal data.

AI developers should be mindful of the seriousness and nuances of data privacy by considering the following factors (a consent-tracking sketch follows the list):

  • Introducing methods that reduce the need for training data and minimize data use.
  • Establishing methods that uphold data protection without reducing the basic dataset.
  • Designing measures to track consent for the data that is used and redressal or review mechanisms to provide data autonomy to data principals.
  • Adhering to data protection regulations and best implementation practices.
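A minimal sketch of the consent-tracking and data-minimization points follows; the field names are hypothetical, and a production system would persist and audit consent changes.

```python
# A minimal data-minimization and consent sketch: only consented records
# enter training, stripped to the fields the model actually needs.
MODEL_FIELDS = {"credit_score", "income", "existing_debt"}  # hypothetical

records = [
    {"id": "APPL-1042", "consent_model": True,
     "credit_score": 640, "income": 30_000, "existing_debt": 12_000,
     "loan_purpose": "fertilizer business"},  # sensitive; not needed for scoring
]

def training_view(records):
    """Yield only consented records, reduced to the minimum field set."""
    for r in records:
        if r.get("consent_model"):
            yield {k: v for k, v in r.items() if k in MODEL_FIELDS}

def revoke(records, applicant_id):
    """Honour a data principal's withdrawal of consent."""
    for r in records:
        if r["id"] == applicant_id:
            r["consent_model"] = False

print(list(training_view(records)))  # loan_purpose never leaves the source
revoke(records, "APPL-1042")
print(list(training_view(records)))  # empty after consent is withdrawn
```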