Ethical AI Principles: A Brief Summary of Frameworks for Diverse Stakeholders
As Artificial Intelligence (AI) technologies become more powerful and more widely used, it is important to consider AI's impact on societal well-being, responsibility and accountability for AI's consequences, and best practices for AI developers and users. There are currently no universally accepted answers to these questions of AI ethics, but many AI experts, policymakers, and organizations have been debating and publishing their own ethical guidelines and principles. These debates are necessary in the global arena as AI applications increasingly become part of our lives and societies. Equally important, we need to reflect on how these questions apply to the present state of nascent AI in our own local contexts.
In this blog post, I present nine principles of AI ethics derived from guidelines published by a variety of sources. Some of these sources explicitly list AI ethics principles, while others describe features or practices for ethical AI based on similar ideas. In the following sections, I contextualize these principles based on the mission, values, and target audience of the organizations that produced them.
AI ETHICS PRINCIPLES
1. Beneficial AI systems for all
2. AI systems that value human dignity and rights
3. AI systems that respect morality and social norms
4. Building safe, secure, robust and responsible AI
5. AI literacy and awareness for diverse stakeholders
6. Taking responsibility for AI's actions and consequences
7. AI that respects confidentiality and privacy
8. An environment of trust, cooperation, fairness and accountability among stakeholders
9. Open AI research
Principles from Institutions of General AI Ethics
I looked at principles from five organizations that provide guidelines to other institutions, practitioners, and the public: the AI Ethics conference principles[i], the Future of Life Institute[ii], the Partnership on AI Tenets[iii], the Holberton Turing Oath[iv], and the Force11 FAIR Data Principles[v]. Of the five, the Force11 FAIR Data Principles[v] prioritize data-related issues, asserting that data should be findable, accessible, interoperable, and reusable. The remaining four address the broad topics of AI ethics research and the design, development, and use of ethically intelligent systems, and broadly agree with the nine principles above. However, the AI Ethics conference principles[i] and the Future of Life Institute[ii] emphasize AI's present impact and ethical development rather than assumptions about its future. The AI Ethics conference principles[i] also treat the impact on and preferences of users as an important aspect of ethical AI. The Holberton Turing Oath[iv] and the Partnership on AI Tenets[iii] stress AI's importance to the economy and consider labor and job disruption important ethical issues. Finally, all of the above organizations list avoiding AI misuse, maintaining human control with auditability, and AI's legal and ethical status as key considerations for AI ethics.
Principles from Robotics Organizations
Like the AI-focused organizations, robotics organizations such as EPSRC and HAL have provided ethical frameworks for the design, creation, and use of robotic agents. The EPSRC Principles of Robotics[vi] agree with the above guidelines and additionally stress building trust and dispelling harmful myths about robots. The Humanoid Agent Builders League (HAL)[vii] lists seven of the above principles but does not address AI confidentiality and privacy; it does, however, prioritize biological life over artificial humanoids in the event of a threat from AI.
Principles from AI Companies
Technology companies such as Google, IBM, and Microsoft, which lead the development and deployment of AI systems globally, have also developed their own principles to guide their developers and designers during product development and research. Examining these principles helps users and consumers evaluate the ethics of their products. Among the nine ethical principles above, Google has not explicitly stated who is responsible for the consequences of its AI systems, while IBM has not explicitly addressed AI's role in confidentiality and privacy.
Principles of International Technical Organizations
International technical organizations have often provided ethical guidelines for engineered systems, and autonomous systems are no exception. The IEEE Ethics of Intelligent Systems effort[xi] emphasizes designing and developing autonomous systems according to the diversity of existing cultural norms, increasing AI literacy among the public, and deploying AI systems only when they have a well-defined purpose and criteria for safe and effective operation. In addition to the above principles, UNI's Future World of Work Top 10 Principles for Ethical AI[xii] requires that records of decision-making processes be kept for added transparency; it also treats ecological impact and future labor replacement as ethical issues for AI development. The World Economic Forum[xiii] lists diversity, fairness, transparency, and data protection as key ethical principles, and advocates that governments invest in skills and training for young adults to mitigate the disruption AI causes in the job market.