Guard Rails Guide AI to Sound Decisions That Maximize Human Values

Summary: To promote justice and due process, the use of Artificial Intelligence (AI) in law should be maintained or regulated to: (1) allow “fair” access for all; (2) establish at least one AI algorithm trained to be objective, neutral, and factually correct to inform and allow adjudicative bodies, individuals, and society to use as a standard or reference model; (3) contain “guard rails” that limit or define the inputs and information that AI may use or consider especially in a legal matter; (4) respect the individual’s privacy rights with selective opt-out options; and (5) be accountable for the basis of its responses.

AI must be programmed with “guard rails” that prevent it from acting in a manner that violates basic human values.
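One way to picture such a guard rail is as a rule layer that screens what the AI may consider before processing, and what it may do before acting. The sketch below is purely illustrative, assuming hypothetical category tags and action names; it is not a real legal-AI API.

```python
# Hypothetical guard-rail layer: screen inputs before they reach the model,
# and screen proposed actions before they reach the user. All names are
# illustrative assumptions, not an actual system.

DISALLOWED_INPUTS = {"sealed_record", "privileged_communication"}
DISALLOWED_ACTIONS = {"issue_ruling_without_human_review"}

def guard_input(evidence_tags):
    """Return only the inputs the AI is permitted to consider."""
    return [tag for tag in evidence_tags if tag not in DISALLOWED_INPUTS]

def guard_output(proposed_action):
    """Block any proposed action that violates a predefined rule."""
    if proposed_action in DISALLOWED_ACTIONS:
        return "blocked: requires human review"
    return proposed_action

allowed = guard_input(["public_filing", "sealed_record", "expert_report"])
decision = guard_output("issue_ruling_without_human_review")
```

The point of the design is that the limits live outside the model's statistical machinery: they are explicit, human-written rules that can be audited, which the model itself cannot override.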

An AI decision is “sound” when it comports with human values. One may of course ask which human values, but that is a political question.

By “sound,” I include adherence to indisputable human values, such as that the AI will not kill us all.

If the process or means matters as much as the end (and “due process” is as important as the final “decision”), then without visibility into that process we cannot know whether the decision is sound.

Paperclip Maximizer

AI is a “black box” that we cannot open to see how it works inside. We cannot track its process to determine if, or where, it diverged from the “logic” underlying the result or answer that we would have preferred.

At this stage, we cannot test the AI “conjuring” process. We do not know if each of the steps in the decision aligns with our values.
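The opacity described above can be made concrete with a toy example. Even with full access to a model's internals, the intermediate state is just unlabeled numbers; nothing in it corresponds to a checkable reasoning step. The weights below are random, purely for illustration.

```python
import random

# A tiny one-hidden-layer network. We can inspect every number in it,
# yet the hidden activations carry no human-readable "reasons."
random.seed(0)
weights_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
weights_out = [random.uniform(-1, 1) for _ in range(4)]

def forward(inputs):
    """Compute hidden activations (ReLU) and a final score."""
    hidden = [max(0.0, sum(w * x for w, x in zip(row, inputs)))
              for row in weights_hidden]
    output = sum(w * h for w, h in zip(weights_out, hidden))
    return hidden, output

hidden, score = forward([1.0, 0.5, -0.3])
# `hidden` is a list of unlabeled floats -- nothing in it explains
# *why* the model produced `score`.
```

At the scale of real systems, with billions of such numbers, this is why we cannot verify that each internal step aligns with our values.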

The Paperclip Maximizer, in its various forms, is a famous parable showing how humans’ best intentions can go astray when AI is involved.

AI Constrained to Maximizing Human Values

We take the acceptance of basic human values for granted, when perhaps we should recognize that AI does not accept them. AI is an altogether different kind of entity.

AI is a mathematical, statistical model. There is no reason to believe that AI will recognize or accept our basic human values.
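To see what “statistical model” means here, consider the simplest possible language model: a bigram predictor that outputs whichever word most often followed the current one in its training text. The corpus and words below are made up for illustration.

```python
from collections import Counter, defaultdict

# A toy bigram model: count which word follows which, then predict the
# most frequent successor. The model encodes frequencies and nothing
# else -- "values" appear nowhere in the mathematics.

corpus = "the court held that the court denied the motion".split()

following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict(word):
    """Return the statistically most likely next word, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None
```

Modern systems are vastly larger, but the principle is the same: the output reflects patterns in the training data, not an understanding of, or commitment to, the values behind those patterns.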