Abstract
Artificial Intelligence (AI) is increasingly used to inform, automate, or support public policy decisions. While the potential benefits, such as enhanced efficiency, consistency, and data-driven decision-making, are widely touted, ethical concerns around fairness, accountability, transparency, privacy, algorithmic bias, and public trust are growing. This paper synthesizes findings from 25 recent research studies on these ethical dimensions to identify major ethical dilemmas, propose a framework for responsible AI in public policy, and suggest policy and governance mechanisms that could mitigate risks. We use a mixed-methods approach combining a systematic literature review, expert interviews, and comparative case studies. The results
highlight that transparency and accountability are often under-implemented, that ethical
governance tends to be reactive rather than proactive, and that citizen involvement is limited. We propose a multi-stakeholder governance model with clearer roles, periodic audits, and legally binding frameworks. We conclude with directions for building ethical AI policy regimes,
especially in contexts with weaker governance infrastructures.
Overview
Artificial intelligence is one of the most significant technologies of the 21st century.
It is increasingly incorporated into public governance and policymaking worldwide. Governments use AI to automate administrative processes, manage healthcare systems, identify crime-prone areas, predict traffic patterns, and distribute welfare funds. These applications aim to make policies more effective, data-driven, and responsive to society. But efficiency is only one dimension of public policy; equality, justice, fairness, and the protection of citizens' rights matter just as much. Because public policy decisions directly affect the lives of millions of people, the ethical application of AI is crucial: AI's design, data, and outputs are not value-neutral.