Summary: To promote justice and due process, the use of Artificial Intelligence (AI) in law should be maintained or regulated to: (1) allow “fair” access for all; (2) establish at least one AI model trained to be objective, neutral, and factually correct, which adjudicative bodies, individuals, and society can use as a standard or reference; (3) contain “guard rails” that limit or define the inputs and information that AI may use or consider, especially in a legal matter; (4) respect the individual’s privacy rights with selective opt-in options; and (5) be accountable for the basis of its responses.


AI will revolutionize law, the practice of law, and how citizens expect the law to function in society.

In particular, AI will further challenge notions of fairness, impartiality, neutrality, and equal justice before the law.

AIjure Addresses the Need for (1) Fair Access, (2) Objective Neutrality, (3) Guard Rails, (4) Accountability, and (5) Privacy in AI

AI will likely reshape the balance of power, further concentrating power and resources in the hands of those who control the technology. The field is currently dominated by large corporations that aggressively price access.

On the one hand, AI could be a great boon to the field of law, greatly reducing costs, increasing speed, and yielding fairer results.

On the other hand, since AI is a very capital-intensive private business, its use will likely further aggravate inequities, inefficiencies, and unfairness in the legal system, putting further stress on an already fragile process.

1. The Need for Fair Access to AI

The assumption is that AI will become so powerful and ubiquitous that people will need access to a neutral, functioning AI in their daily lives.

The goal of “Fair Access” is to ensure that everyone in society can use this essential resource.

Fair access should be treated as a fundamental condition of AI, built into the AI process from the beginning.

Obviously, “fair” is a relative term, and “access” is broad and undefined. Both will likely be the subject of ongoing negotiation.

AI Regulated as a Utility?

Although AI is a new force in the world, there are several existing paradigms for regulating an essential service or entity.

For example, as AI becomes ubiquitous, it could be made accessible to everyone as a kind of regulated utility, much as electricity and telephone service are.

2. Objective and Neutral Reference Model

At least one basic, structural form of AI should be trained to be neutral, factually correct, objective, and as unbiased as possible. (This recognizes that information is inherently biased.)

There is an urgent need for AI to be trained as a reference model, a neutral, objective information source.

This is especially important because AI is essentially a collection of human decisions and creations concentrated into a powerful central resource, of which there are likely to be only a few (and perhaps only one dominant AI).

An objective, neutral AI is also necessary to cross-check and fact-check AI responses that may be biased by commercial, political, or other interests.
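
As a rough illustration of such cross-checking, the sketch below queries a deployed AI system and the neutral reference model with the same question and flags any disagreement for human review. The names `query_model` and `cross_check`, and the canned answers, are hypothetical placeholders, not an existing API.

```python
# Illustrative sketch only: query_model and cross_check are hypothetical
# placeholders, not an existing API; the canned answers are invented.

def query_model(model: str, question: str) -> str:
    """Stand-in for a call to an AI system; returns its answer text."""
    canned = {
        "commercial": "Statute X imposes a two-year limitations period.",
        "reference": "Statute X imposes a three-year limitations period.",
    }
    return canned[model]

def cross_check(question: str) -> dict:
    """Ask a deployed model and the neutral reference model; flag disagreement."""
    answer = query_model("commercial", question)
    reference = query_model("reference", question)
    # A real system would compare factual claims, not raw strings.
    return {"answer": answer, "reference": reference, "agrees": answer == reference}

result = cross_check("What is the limitations period under Statute X?")
if not result["agrees"]:
    print("Flag for human review:", result)
```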

This objective source could be maintained by a consortium of universities, the federal government, or another independent entity. Because AI is so expensive to build and is trained on public materials, an objective, neutral source of information must be free from influence, advocacy, or lobbying.

Of course, corporations and other advocates should remain free to train and refine their own versions of AI for their particular tasks.

3. Guard Rails – Human Values

AI should also be limited so that it acts consistently with human values, and it should institute a hierarchy of positive human values or outcomes as the boundaries of its application. In a legal matter, guard rails should also limit or define the inputs and information that AI may use or consider, as in the sketch below.
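
As a minimal sketch of an input-side guard rail, assuming a case record represented as a simple dictionary, the code below strips fields the AI is not permitted to consider. The field names and the blocked list are hypothetical examples.

```python
# Illustrative sketch only: the field names and the blocked list are
# hypothetical examples of inputs an AI might be barred from considering.

BLOCKED_FIELDS = {"race", "religion", "political_affiliation"}

def apply_guard_rails(case_record: dict) -> dict:
    """Strip fields the AI is not permitted to consider in a legal matter."""
    return {k: v for k, v in case_record.items() if k not in BLOCKED_FIELDS}

record = {
    "facts": "Contract dispute over delivery dates.",
    "jurisdiction": "State of X",
    "race": "should never reach the model",
}
print(apply_guard_rails(record))  # only 'facts' and 'jurisdiction' pass through
```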

4. Accountability

AI must be accountable for its decision process, recording the logical steps it takes and the basis of its responses so that they can be reviewed.
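
As a rough sketch of what such accountability could look like in practice, the code below records each reasoning step together with its cited basis in an audit trail. The step structure and the example citations are hypothetical.

```python
# Illustrative sketch only: the steps and citations are hypothetical;
# the point is that each step and its basis are recorded for later review.

import json
from datetime import datetime, timezone

audit_log = []

def record_step(step, basis):
    """Append one reasoning step and its cited basis to the audit trail."""
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "basis": basis,
    })

record_step("Identified governing statute", "Statute X, section 2")
record_step("Applied limitations period", "Section 2(b): three-year period")
print(json.dumps(audit_log, indent=2))
```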

5. Privacy and Selective Opt-In

The data and datasets used in AI should respect the individual’s privacy, with a selective opt-in that permits the AI to use private information only for specific purposes.
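
As a minimal sketch of selective opt-in, assuming each record carries the set of purposes its owner has consented to, the code below admits private data only for a purpose the individual has explicitly opted into. The record layout and the purpose labels are hypothetical.

```python
# Illustrative sketch only: the record layout and purpose labels are
# hypothetical. Private data is used only for purposes the individual
# has explicitly opted into.

def consented_records(records, purpose):
    """Keep only records whose owner opted in for this specific purpose."""
    return [r for r in records if purpose in r.get("opt_in_purposes", set())]

records = [
    {"id": 1, "data": "private text", "opt_in_purposes": {"legal_research"}},
    {"id": 2, "data": "private text", "opt_in_purposes": set()},  # no consent
]
print(consented_records(records, "legal_research"))  # only record 1 is usable
```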