Objective Neutral Standard – Reference Model
Summary: To promote justice and due process, the use of Artificial Intelligence (AI) in law should be maintained or regulated to: (1) allow "fair" access for all; (2) establish at least one AI algorithm trained to be objective, neutral, and factually correct, which adjudicative bodies, individuals, and society can consult as a standard or reference model; (3) contain "guard rails" that limit or define the inputs and information that AI may use or consider, especially in a legal matter; (4) respect individuals' privacy rights, with selective opt-out options; and (5) be accountable for the basis of its responses.
Objective Neutral Reference Model – Standard
Some basic structural form of AI should be trained to be neutral, factually correct, objective, and unbiased to the extent possible. (This recognizes that information is inherently biased.)
There is an urgent need for an AI trained to serve as a reference model: a neutral, objective source of information.
This is especially important because AI is essentially a collection of human decisions and creations concentrated into a powerful central resource, of which there are likely to be only a few (and perhaps only one dominant AI).
An objective, neutral AI is also necessary to cross-check and fact-check AI responses that may be biased by commercial, political, or other interests.
This objective source could be maintained by a consortium of universities, the Federal government, or another independent entity. Because AI is so expensive to build and is trained largely on public materials, an objective, neutral source of information must be free from influence, advocacy, or lobbying.
Of course, corporations and other advocates should remain free to train and refine their own versions of AI for their particular tasks.