Information Technology Law (IT Law) is a research area concerned with Ethics and Law for Information Technologies. The area has recently gained increasing interest due to the rapid and pervasive deployment of Artificial Intelligence (AI) technologies in industrial products that we encounter in our everyday lives.

The first central concern is the demand for clear legal and ethical guidelines to channel this development. The research area is receiving worldwide attention not only because of the disruptive impact it may have on labour markets, but also because of the complex socio-ethical and legal questions it poses, in combination with its manifold and complex technological challenges.

The second central concern is the relationship between humans and AI technology: there is a danger of fostering negative attitudes in society towards the development and deployment of modern artificial-intelligence technology (e.g., through often poorly informed, biased media reports), while at the same time there is an education gap regarding the actual state of the art and the foreseeable future of intelligent autonomous systems. There is no reason an AI cannot take ethical actions or make moral decisions, but a pure artifact, with no human component, should never be held to be the moral agent or the legally responsible party: justice and responsibility are concepts by which we humans organize our own societies, and they rely on human interests, drives, and capacities to steer behavior in ways appropriate to maintaining those societies.

Law and Ethics for Science and Technology aims at developing and studying means to make the use of AI technologies in critical areas safe, sustainable and responsible. Ethics-by-Design concerns the methods, algorithms and tools to endow autonomous agents with the capability to reason about the ethical aspects of their decisions, as well as the methods, tools and formalisms to guarantee that an agent's behavior remains within given ethical and moral bounds.
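
As a minimal illustration of the second strand, one can imagine a guard that filters an agent's candidate actions against explicit constraints before any action is executed. The Python sketch below is hypothetical: the class name EthicsGuard, the example actions and the harm scores are invented for illustration, not taken from any existing system.

    # Minimal sketch of a rule-based "ethics guard": hard constraints first,
    # then a crude least-harm tie-break among the remaining actions.
    # All names and numbers here are invented for illustration.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Action:
        name: str
        expected_harm: float  # estimated harm to humans, in [0, 1]

    class EthicsGuard:
        def __init__(self, forbidden: set[str], harm_threshold: float):
            self.forbidden = forbidden          # deontic "never do this" rules
            self.harm_threshold = harm_threshold

        def permissible(self, action: Action) -> bool:
            return (action.name not in self.forbidden
                    and action.expected_harm <= self.harm_threshold)

        def choose(self, candidates: list[Action]) -> Action | None:
            allowed = [a for a in candidates if self.permissible(a)]
            # Among permissible actions, prefer the least harmful one.
            return min(allowed, key=lambda a: a.expected_harm, default=None)

    guard = EthicsGuard(forbidden={"deceive_user"}, harm_threshold=0.3)
    options = [Action("deceive_user", 0.0), Action("warn_user", 0.1)]
    print(guard.choose(options))  # Action(name='warn_user', expected_harm=0.1)

Note that such a guard only keeps behavior within bounds that humans have stated explicitly; it does not make the artifact a moral agent, in line with the point made above.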

Two archetypal (toy) examples widely used in the Law & Ethics for AI scientific literature are:

    • HAL 9000, the sentient computer of 2001: A Space Odyssey;
    • I, Robot and Asimov's Three Laws of Robotics.

At present, no autonomous intelligent system exists that is able to reason about ethical issues and consequently take ethical decisions. How, and to what extent, can agents understand and reason about the social reality in which they inter-operate, and about the other intelligences (other AI systems, humans, animals) with which they co-exist?

Consider, as an illustration, the often-cited scenario of a critical traffic situation in which an autonomous car has to choose between (a) killing a child running after their ball, or (b) crashing into a parked car, possibly injuring its occupants. In the case of a human driver, their moral conscience would tell them which of the two options to choose, even if this implies economic damage or injury.

How can autonomous systems be made to react “morally” in such situations? And to what extent is that possible at all? Law and Ethics for Science and Technology addresses the identified knowledge and competence gap at the intersection of Artificial Intelligence, Law and Ethics that currently impedes the development of morally critical decision-making and enforcement mechanisms, and it fosters interdisciplinary approaches to join, in a holistic way, the different competences needed to this end.

Philosophy and Law must help to systematically derive the rules that guide such morally critical decision making. Computer Science must then translate these rules into computational form and develop and deploy means to reason about them: to ensure their consistency, assess their implications, make their application traceable, and enable explanations of how an AI system arrived at its critical decisions (in our example, the system would, e.g., decide between the life of the child and the health of the car occupants). It must also reflect on what kinds of decisions an AI system can make (e.g., because they are enforceable) and which it cannot. Clarity and transparency are fundamental for increasing societal trust in AI technology, by raising our confidence in the machines we entrust our lives to. In addition, Law must define the legal frameworks, codes of conduct and regulations needed to prevent violations or immoral decisions driven by selfish interests.
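
To make the consistency requirement concrete, the toy Python sketch below encodes rules as condition-obligation pairs over propositional facts and flags situations in which one rule obliges an action that another forbids, using the car dilemma above. The rule names and the encoding are hypothetical assumptions for illustration, not an established formalism.

    # Toy sketch: rules as (name, condition, action, obliged?) tuples; a
    # consistency check flags situations where one rule obliges an action
    # that another rule forbids. Rule contents are invented for illustration.
    from typing import Callable

    Situation = dict[str, bool]
    Rule = tuple[str, Callable[[Situation], bool], str, bool]

    RULES: list[Rule] = [
        ("protect_pedestrian", lambda s: s["child_on_road"], "swerve", True),
        ("protect_occupants", lambda s: s["car_parked_right"], "swerve", False),
    ]

    def conflicts(rules: list[Rule], s: Situation) -> list[tuple[str, str, str]]:
        # Rules whose condition holds in situation s.
        active = [r for r in rules if r[1](s)]
        # Pairs that oblige and forbid the same action at the same time.
        return [(a[0], b[0], a[2])
                for a in active for b in active
                if a[2] == b[2] and a[3] and not b[3]]

    dilemma = {"child_on_road": True, "car_parked_right": True}
    print(conflicts(RULES, dilemma))
    # [('protect_pedestrian', 'protect_occupants', 'swerve')]

Detecting such a clash is exactly the point where the purely computational check ends and the philosophical and legal work of prioritising or refining the rules begins.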

Some of the most popular use cases in Law and Ethics for Science and Technology are:

    • Autonomous systems (e.g., robots): the most pressing application area, calling for the development of legal and ethical reasoning in machines.
    • Decision Support Systems (DSS): AI technology is utilised as a core component of DSSs and in algorithm-based decision making, a practice that is spreading across various areas of regulation, including banking and financial regulation and criminal law (e.g., the detection of market manipulation).
    • Healthcare: Establishing autonomous systems in healthcare, especially for the permanent personal assistance of individuals, inherently carries the danger of “dehumanization” through the lack of social contact. Human contact is essential in care: replacing humans with robots and AI technology can dehumanize care, but it can also improve it through more reliable assistance and supervision of patients.
    • Fintech: Complex regulatory frameworks have been defined to prevent developments that might disrupt the smooth and fair functioning of our financial systems. However, the reporting needed for transparency, compliance and explanation is extensive and laborious when done by humans. At the same time, risk assessment can increasingly be supported by machine learning, which raises ethical and legal issues of its own.
    • Data protection and the social Web: Naïve usage and processing of information on the social Web, and the Internet in general, is fraught with risks. For example, predictions learned from available data may be significantly biased against minorities, and their unreflective use may even amplify discrimination and inequality (a toy bias check is sketched after this list). Social bots may try to influence public opinion, our e-personalities may be attacked in social networks, our e-identities may be stolen and misused, etc.
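
To make the bias risk in the last item concrete, one standard (if crude) diagnostic is the demographic parity gap: the difference in positive-prediction rates between groups. The Python sketch below uses invented data for illustration; real audits use much larger samples and several complementary metrics.

    # Toy bias check: demographic parity gap, i.e., the difference in
    # positive-prediction rates between two groups. Data is invented.
    def positive_rate(preds: list[int], groups: list[str], group: str) -> float:
        sel = [p for p, g in zip(preds, groups) if g == group]
        return sum(sel) / len(sel)

    def parity_gap(preds: list[int], groups: list[str], a: str, b: str) -> float:
        return abs(positive_rate(preds, groups, a) - positive_rate(preds, groups, b))

    preds  = [1, 0, 1, 1, 0, 0, 1, 0]                  # e.g., loan approvals
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # protected attribute
    print(f"parity gap: {parity_gap(preds, groups, 'A', 'B'):.2f}")  # 0.50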