The ethics of AI sits at the crossroads of ethics and an emerging technological revolution. The participation of AI and automation in society is expected to increase significantly, and with it the scope, intensity, and significance of the morally laden effects these systems produce or are otherwise connected to. As ever more of our surroundings are automated and managed by algorithmic systems, it pays to direct a significant part of our attention to the moral and ethical effects that the widespread introduction of AI and automation is expected to cause, and to work out morally sound ways to manage them.

The ethical considerations under review are numerous and span a broad spectrum: personhood and legal personhood, ‘Being’, agency and autonomy, complexity and moral uncertainty, moral respect, moral inertia, anthropocentrism / universalism, (moral) bias, rights, values, virtues, vices, accountability and responsibility, opacity and transparency, utility, trust, fairness, impartiality and justice, and many more.

Whenever the use of AI and automation raises questions about morally problematic effects, there is a need to model, predict, and manage those effects toward a positive outcome. Research currently under way at the University of Luxembourg, in collaboration with CIRSFID at the University of Bologna, is focused on exploring the main ethical implications of the widespread introduction of AI and automation into human societies, and on creating a framework tool for modeling moral scenarios that include artificial entities.

Main contributor: Andrej Dameski