AI-powered systems with an unacceptable risk are completely prohibited
because their use violates the universal values recognized by the EU. In
particular, the use of systems that affect a person's consciousness
against their will is not allowed, namely, manipulative techniques
that target vulnerable groups of the population: children, seniors, persons
with mental disorders, etc.
While the previous category does not cause any controversy, the next one is
rather ambiguous. High-risk AI systems fall under a whole set of regulatory
requirements and are allowed on the European market only if they fully comply
with them. The criteria for assigning a specific AI-driven system to this
category are its functional characteristics and purposes. Within this group,
such systems are divided into:
a. AI systems intended to be used as a safety component of products subject
to prior third-party conformity assessment.
b. AI systems whose exploitation can affect the state of human rights
and whose list is indicated in a separate annex (for example, use
in law enforcement, the administration of justice, or the field of
democracy).
Such requirements represent a system of continuous risk management:
monitoring, identifying, and assessing risks with due regard to the
available technical capabilities, and thoroughly testing these systems during
development and before commissioning, based on the intended purpose of a
specific AI-powered system. Particular attention should be paid to data
processing, i.e., information should be up-to-date, representative, correct,
and complete.
Finally, the third category includes low-risk AI systems that do not require
any specific regulation. However, it is noted that responsible actors may
voluntarily adhere to codes of ethics when creating, developing, and using
such systems.
The use of AI systems raises legal issues at the level of national legislation
in European countries. These issues concern, inter alia, human rights,
confidentiality, fairness, algorithmic transparency, and accountability
(Wachter et al., 2021). Many states emphasize the need to assess the
existing legal framework and enact new legislation to provide favorable
legal conditions for the successful implementation and operation of AI-
driven systems.
For example, Belgium adopted a Royal Decree on tests with automated
vehicles in March 2018 (Belgisch Staatsblad, 2018). In 2017, the Danish
parliament adopted a similar act amending the road traffic law to
allow tests of unmanned vehicles. In addition, Denmark has amended the
Danish Financial Statements Act, which stipulates that the largest companies
adhering to data ethics policies must provide compliance information, while