Understand fairness, transparency, and accountability in AI systems. Use tools like SHAP and LIME to detect bias and build explainable models. Learn to align AI development with ethical standards and regulations.
Ethical AI & AI Governance explores the responsible development, deployment, and regulation of artificial intelligence systems. Learners begin by understanding the societal impact of AI, both its transformative benefits and its potential risks. The course introduces key ethical principles such as fairness, accountability, transparency, and explainability (FATE). Topics include algorithmic bias, discriminatory datasets, and the impact of AI on employment and social structures. Learners examine case studies on controversial applications of AI in surveillance, credit scoring, hiring, and policing.

AI governance frameworks from organizations such as the OECD, the EU (AI Act), UNESCO, and IEEE are examined, along with corporate policies from companies like Google, Microsoft, and IBM. Students learn responsible AI practices including bias audits, ethical design reviews, explainability tools such as SHAP and LIME, and documentation practices such as model cards. Regulatory topics cover data privacy laws such as GDPR and CCPA, AI risk classification, and the role of AI ethics boards.

The course also emphasizes stakeholder involvement, interdisciplinary collaboration, and participatory design to ensure AI systems align with human values. By the end, learners will understand how to assess, govern, and communicate AI risks effectively, preparing them for roles in compliance, policy, data ethics, and responsible AI product development.
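To make the idea of a bias audit concrete, here is a minimal sketch of two common fairness checks, the demographic parity gap and the disparate impact ratio, applied to hypothetical model predictions grouped by a protected attribute. The data, group names, and the "four-fifths" threshold are illustrative assumptions, not material from the course.

```python
# Minimal bias-audit sketch: fairness metrics on hypothetical
# hiring-model predictions (1 = positive outcome). All data below
# is made up for illustration.

def selection_rate(predictions):
    """Fraction of individuals who received the positive outcome."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_a, preds_b):
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(preds_a) - selection_rate(preds_b))

def disparate_impact_ratio(preds_disadvantaged, preds_advantaged):
    """Ratio of selection rates; the 'four-fifths rule' commonly
    flags ratios below 0.8 for further review."""
    return selection_rate(preds_disadvantaged) / selection_rate(preds_advantaged)

# Hypothetical predictions for two demographic groups
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # selection rate 0.7
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # selection rate 0.3

gap = demographic_parity_gap(group_a, group_b)
ratio = disparate_impact_ratio(group_b, group_a)

print(f"Demographic parity gap: {gap:.2f}")
print(f"Disparate impact ratio: {ratio:.2f}")
print(f"Fails four-fifths rule: {ratio < 0.8}")
```

In practice a bias audit would compute such metrics across many groups and outcome definitions; libraries like Fairlearn and AIF360 automate this, but the underlying comparisons are as simple as the ones above.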