Bridging the Gap: The Case for an “Incompletely Theorized Agreement” on AI Policy

Recent progress in artificial intelligence (AI) raises a wide array of ethical and societal concerns. Accordingly, an appropriate policy approach is urgently needed. While there has been a wave of scholarship in this field, the research community at times appears divided between those who emphasize 'near-term' concerns and those who focus on 'long-term' concerns and corresponding policy measures. In this paper, we examine this alleged 'gap', with a view to understanding the practical space for inter-community collaboration on AI policy. We propose drawing on the principle of an 'incompletely theorized agreement' to bridge some underlying disagreements, in the name of important cooperation on addressing AI's urgent challenges. We argue that on certain issue areas, scholars working from near-term and long-term perspectives can converge and cooperate on selected mutually beneficial AI policy projects, while maintaining their distinct perspectives.

Focus: AI Ethics/Policy
Source: AI and Ethics
Readability: Expert
Type: Website Article
Open Source: No
Keywords: Artificial intelligence, AI, Artificial intelligence policy, Long term, Short term, Artificial intelligence ethics, Cooperation models, Incompletely theorized agreement, Overlapping consensus
Learn Tags: Ethics Solution
Summary: A paper that proposes using the principle of an "incompletely theorized agreement" to bridge underlying disagreements within the AI research community in order to address AI's urgent challenges.