Event: Discussion of AI Ethics with Students in Nepal
- Post by: Aatiz Ghimire
- February 21, 2022
NAAMII organized an hour-long Focus Group Discussion for students on 19 January 2022 as part of our ongoing project to survey the state of “AI Ethics in Nepal”. The main goal of the event was to gain a deeper understanding of the issues in AI ethics as seen by students, which cannot be captured by surveys alone. The session had six student participants. Most of them were pursuing Computer and Electronics Engineering at the undergraduate level and had done AI projects on their own.
The discussion was moderated by the NAAMII AI Ethics team and structured around a set of pre-defined, broad questions. The topics presented in the Focus Group Discussion were AI ethics, ethical concerns and risks in AI, examples of ethical AI, the biggest problems with AI in Nepal, AI courses in the students’ curriculum and what the curriculum lacks, and the need for new or revised AI-related laws now or in the future.
During the discussion, students suggested that human ethics should govern AI ethics, which should be adaptive, flexible, and based on the scenario in which the AI system operates. Ethical concerns they expressed included data privacy, the liability of AI products, bias in AI, and unemployment caused by AI systems. Students identified the lack of local datasets, unethical data collection, unstructured courses, limited access to technology, a shortage of AI experts, the lack of an AI community, weak academia-industry ties, and limited government focus on AI as the biggest problems with AI in Nepal.
Students pointed out that AI courses in the universities of Nepal are not practical and should be restructured around solid mathematical foundations and practical, relevant field applications. They also suggested that introducing ethics from day one of an AI course could help in developing ethical and responsible AI. On policy, students suggested that AI-related law should cover the misuse of data, data security and data collection, accountability of foreign technology companies operating in Nepal, inclusion of AI engineers in government, and predefined yet flexible liability for damages caused by AI.
Overall, the event helped contextualize the initial survey results through a more focused discussion. The student participants raised and discussed issues of fairness and bias, accountability, robustness, safety, and security. However, as a solution they often proposed a technical fix or an outright ban, without considering the socio-political nuances or the difficulty of defining what such a ban would look like in practice. Our discussions showed that the students were aware of many issues with AI and of the gaps in their curriculum, but lacked exposure to the social nuances underlying many of these issues. From our perspective, enthusiasm for AI among university students has risen sharply in recent years, but many engineering students are lagging behind due to the limited number of experts, outdated AI syllabi, and limited consideration of social or ethical aspects in their projects.
However, our insights from this discussion are limited because all the participants came from undergraduate engineering institutions. Among the six participants, five were male. All the participants came from eastern Nepal and upper-caste backgrounds. In the future, similar events will need to be held with a more demographically diverse participant group, as well as with students from different academic backgrounds, such as non-engineering disciplines and graduate and post-graduate institutions. It would also be interesting to compare the discussions among the students with similar discussions held by NAAMII among AI professionals and policymakers, and to identify shared concerns and conflicting incentives between different stakeholders.
This project is generously funded by a grant from the UNESCO Asia and Pacific Bureau.