Big Issues: Problems with Accountability in AI Results and Process

Summary: To promote justice and due process, the use of Artificial Intelligence (AI) in law should be maintained or regulated to: (1) allow “fair” access for all; (2) establish at least one AI algorithm trained to be objective, neutral, and factually correct that adjudicative bodies, individuals, and society can use as a standard or reference model; (3) contain “guard rails” that limit or define the inputs and information that AI may use or consider, especially in a legal matter; (4) respect the individual’s privacy rights with selective opt-out options; and (5) be accountable for the basis of its responses.

Accountability and Explainable AI Are Not Yet Standard

Can AI be held accountable for the basis of its answer? (ChatGPT, 4-13-23)

AI algorithms are designed to make decisions based on data and programming, but they are not capable of independent thought or intent. Therefore, AI cannot be held accountable in the same way that a human can be held accountable for their actions.

However, the creators and developers of the AI system can be held accountable for the basis of its answer. If the AI system is designed or programmed in a way that leads to biased or discriminatory decisions, the responsibility for those decisions ultimately lies with the humans who created the system.

In some cases, it may be possible to trace the basis of an AI decision back to a specific algorithm or dataset, and to hold the creators or operators of that algorithm or dataset accountable for any errors or biases.

It is important for AI systems to be transparent in their decision-making processes and for the developers to be held accountable for any biases or errors that may be present. This can help to ensure that AI is used in a fair and ethical manner and can promote public trust in AI technology.

Tell me how AI traces the basis of a decision

AI algorithms typically make decisions based on complex mathematical models that are designed to identify patterns and correlations in data. These models are often based on large datasets that have been labeled and categorized by humans.

When an AI algorithm makes a decision, it typically generates a score or probability for each potential outcome. The basis of this decision can often be traced back to the specific inputs and data used by the algorithm.

In some cases, it may be possible to analyze the output of an AI algorithm to understand how it arrived at a particular decision. This can involve using techniques such as sensitivity analysis, which involves systematically varying the input data to see how it affects the output of the algorithm.

Another approach is to use techniques such as “explainable AI” (XAI) to make AI decision-making more transparent and understandable. XAI involves designing AI systems that can provide explanations for their decisions, allowing users to understand the reasoning behind the output of the algorithm.

Overall, tracing the basis of an AI decision requires a combination of technical expertise and careful analysis of the input data and algorithms used by the system. While this can be a complex process, it is essential for ensuring that AI is used in a fair and ethical manner.
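To illustrate the “sensitivity analysis” idea described above, here is a minimal sketch in Python. The toy_score function and its inputs are invented for illustration; it simply stands in for any black-box model whose inner workings we cannot inspect directly.

```python
# Minimal sketch of sensitivity analysis on a hypothetical model.
# toy_score() stands in for any black-box model; it is NOT a real system.

def toy_score(income, debt, age):
    """Hypothetical 'black box' that returns an approval score between 0 and 1."""
    raw = 0.00002 * income - 0.00005 * debt + 0.002 * age
    return max(0.0, min(1.0, raw))

baseline = {"income": 50_000, "debt": 10_000, "age": 40}
base_out = toy_score(**baseline)

# Vary each input by +10% while holding the others fixed,
# and record how much the output moves.
for name, value in baseline.items():
    perturbed = dict(baseline, **{name: value * 1.1})
    delta = toy_score(**perturbed) - base_out
    print(f"{name:>6}: +10% input -> output changes by {delta:+.4f}")
```

In this rough sense, the input whose small change moves the output the most is the input the decision most depends on.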

Please tell me more about explainable AI

Explainable AI (XAI) is a field of research focused on developing AI systems that can provide clear and understandable explanations for their decisions and actions. The goal of XAI is to increase transparency and accountability in AI systems, enabling users to understand the reasoning behind the output of an algorithm.

One of the challenges of traditional “black box” AI systems is that it can be difficult for users to understand how the system arrived at a particular decision. This is particularly problematic in domains where the consequences of a decision are significant, such as healthcare, finance, and legal decision-making.

XAI approaches aim to address this issue by designing AI systems that are more transparent and interpretable. This can involve using techniques such as decision trees, rule-based systems, and Bayesian networks to create more interpretable models.

XAI approaches can also involve using visualizations, such as heat maps or graphs, to help users understand how an AI system is making decisions. For example, in a medical diagnosis system, a heat map might be used to show which parts of an image are most important in making a diagnosis.

Overall, XAI is an important area of research for ensuring that AI systems are transparent, interpretable, and accountable. By making AI more explainable, we can increase public trust in AI technology and ensure that it is being used in a fair and ethical manner.
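As a concrete, if oversimplified, illustration of the rule-based XAI idea mentioned above, the sketch below shows a decision function that returns its reasons along with its result. The bail-related rules and thresholds are invented for illustration only and do not reflect any real system.

```python
# Minimal sketch of a rule-based, "explainable" decision:
# the system returns not only a decision but also the rules that produced it.
# The rules and thresholds here are invented purely for illustration.

def decide_bail(prior_felonies, failed_to_appear, is_violent_charge):
    reasons = []
    if is_violent_charge:
        reasons.append("charge is classified as violent")
    if prior_felonies >= 2:
        reasons.append(f"{prior_felonies} prior felonies (threshold: 2)")
    if failed_to_appear:
        reasons.append("prior failure to appear")

    decision = "detain" if reasons else "release"
    return decision, reasons

decision, reasons = decide_bail(prior_felonies=3, failed_to_appear=False, is_violent_charge=False)
print(decision)             # detain
print("because:", reasons)  # ['3 prior felonies (threshold: 2)']
```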

Accountability

Issue: Currently we do not know how AI reaches a decision.

Unlike a standard computer program, large language models (LLMs) are not programmed to follow a sequence of logical steps that a human has coded.

Simplified Proposed Explanation of Why AI Results Are Not Easily Explainable

LLMs are fed massive amounts of text (e.g., much of the internet). LLMs look at how frequently specific words (“United States of …”) are written near other specific words (“America…”). In this example, “United” is often used adjacent to “States” and within four words of “America.” Less frequently, “United States” is also used within four words of “Tara” (https://en.wikipedia.org/wiki/United_States_of_Tara).

In this simple example, a user asks the LLM to complete the phrase “United States of ___?”

The model looks at the partial phrase or prompt that includes the words “United States of” and then compares it to the statistics it has learned from its training text.

Because “United States of America” appears far more frequently than “United States of Tara” in the training text, the LLM is more likely to use “America” as its answer to complete “United States of…?” This choice is then reinforced, or trained, as correct.
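The frequency intuition in this example can be shown in a few lines of Python. This is only a toy sketch: the miniature “corpus” below is invented, and real LLMs learn weighted statistical patterns with neural networks rather than literal lookup counts, but the basic idea that the more frequent continuation wins is the same.

```python
from collections import Counter

# A tiny stand-in for "all of the internet" (invented text for illustration).
corpus = (
    "the united states of america declared independence . "
    "she lives in the united states of america . "
    "united states of tara is a television series ."
).split()

prompt = ["united", "states", "of"]

# Count which word follows the prompt each time the prompt appears in the corpus.
continuations = Counter(
    corpus[i + len(prompt)]
    for i in range(len(corpus) - len(prompt))
    if corpus[i:i + len(prompt)] == prompt
)

print(continuations)                       # Counter({'america': 2, 'tara': 1})
print(continuations.most_common(1)[0][0])  # 'america' -- the more frequent completion wins
```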

These LLMs do not process or think in serial logical steps; their output does not come from a logic tree like a standard computer program.

LLMs are fast in part because they can analyze the frequencies of nearby words “all at once.”

Mathematically, one can determine which words appear near each other whether the text is scanned backwards, forwards, or starting in the middle and going both ways at once (as long as you keep track of which way you were going and where you started).

You do not need to “read” or “understand” the text (or follow the mental process of the writer). Instead, you digest the frequency of words across a whole page, a whole book, or a whole archive at a time. The “train of thought” does not matter to the AI.

AI runs multiple processes in parallel. Multiple computers can analyze different parts of a book and then combine their statistics.
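A short sketch, again with an invented snippet of text, shows why the scanning order and the split across machines do not matter: counting adjacent word pairs over two halves of the text and adding the counts gives exactly the same statistics as one front-to-back pass.

```python
from collections import Counter

def pair_counts(words):
    """Count adjacent word pairs -- the order of scanning does not change the totals."""
    return Counter(zip(words, words[1:]))

text = "united states of america united states of tara united states of america".split()

# One reader, front to back:
whole = pair_counts(text)

# Two "computers" each take half (with one word of overlap so no pair is lost),
# then their statistics are simply added together.
mid = len(text) // 2
merged = pair_counts(text[:mid + 1]) + pair_counts(text[mid:])

print(whole == merged)  # True: the split-and-merge totals match the single pass
```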

AI Has a Different, Independent “Logic”

Because AI does not follow a logic that humans recognize, we cannot reverse engineer the AI’s reasoning to see where it erred.

Importantly, AI likely has a different “logic” or thought process than humans do. So when I say the AI makes an “error,” I mean that the AI’s result is not aligned with our human values about what we want the answer to be. (It may be correct under the AI’s own logic.)

Accountable to Human Values

Even if its process differs, AI has to be accountable to and consistent with human logic, because humans need to know how a result was reached in order to judge whether it is correct.