Deadly Automatic Systems: Ethical and Legal Problems
In: Journal of Politics and Law (JPL), Vol. 12, No. 4, p. 50
ISSN: 1913-9055
Artificial intelligence, neural networks, speech and behavior recognition systems, drones, autonomous robotic systems: all of these and many other technologies are widely used by the military to create a new type of lethal weapon programmed to decide independently on the use of military force. According to experts, the production of such weapons will amount to a revolution in military affairs comparable to the one brought about by the creation of nuclear weapons.
The adoption of fully autonomous combat systems raises a number of ethical and legal issues, the foremost of which is the destruction of enemy personnel by a robot acting without a human command. This article focuses on the legal aspects of creating autonomous combat systems, their legal status, and the prospects for an international instrument prohibiting lethal robotic technologies.
As a result of the study, the authors conclude that there is no direct legal restriction on the use of fully autonomous combat systems; however, the use of such weapons contradicts the doctrinal norms of international law. The authors also believe that a comprehensive ban on the development, use, and distribution of robotic technologies is hardly achievable in the foreseeable future. The most plausible scenario for resolving the problem at the international level is a ban limited to the use of this type of military equipment directly in the combat operations of an armed conflict. At the same time, the authors consider it necessary to delineate the acceptable areas of application of robotic technologies: medical and logistical support of military operations, military construction, mine-clearing robots, and similar humanistically justified uses.