Nikolay N. Krylov1,
Yevgeniya L. Panova1
Aftandil V. Alekberzade1
1FSAEI HE I.M. Sechenov First MSMU MOH Russia (Sechenov University)
2 Bolshaya Pirogovskaya St., building 4, Moscow 119991, Russia
To provide a unified approach to the problems faced by creators of artificial intelligence (AI) algorithms for moral decision-making, researchers have proposed both various speculative thought experiments and analyses of the consequences of real events, generally accepted actions, and stereotyped decisions. As a rule, these are models of critical situations requiring immediate decisions, designed to test the range of problems arising in the practical use of AI in administration and security. Various moral dilemmas, both artificially constructed and based on real events, have been proposed as models for decision-making algorithms. Decision-making requires defining the boundaries of the legitimacy of decisions made by AI. The authors analyse the logic of the choice between life and death in the eighth declamation of pseudo-Quintilian, as well as in the Survival Lottery (a thought experiment concerning organs for transplantation), the Terrorist Ultimatum, the trolley problem, and the Moral Machine experiment. Life constantly forces us to make choices across a wide range of everyday tasks: physicians' clinical experiments, medical triage of the wounded on the battlefield, the treatment of patients in prolonged coma or with orphan (rare) diseases, and other problems on which people's fates and lives depend. The authors are convinced that, at present, there is no universal morality that could serve as the basis for creating AI, including AI for driving vehicles. Any attempt to create a universal morality for AI must address the central question: are all human lives of equal value?
Keywords: history of medicine, universal morality, artificial intelligence, ‘trolley problem’, ‘moral machine’, real moral dilemmas