Clueless AI: Should AI Models Report to Us When They Are Clueless?

The right to AI explainability has emerged as a consensus in the research community and in policy-making. However, a key component of explainability has been missing: extrapolation, which describes the extent to which AI models can be clueless when they encounter unfamiliar samples (i.e., samples outside the convex hull of their training sets, as we explain below). We report that AI models extrapolate outside their range of familiar data frequently and without notifying users and stakeholders. Knowing whether a model has extrapolated or not is a fundamental insight that should be included in explaining AI models, in favor of transparency and accountability. Instead of dwelling on the negatives, we offer ways to clear the roadblocks in promoting AI transparency. Our commentary is accompanied by practical clauses that would be useful to include in AI regulations such as the National AI Initiative Act in the US and the AI Act by the European Commission.
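
The notion of extrapolation above is geometric: a query is "unfamiliar" when it falls outside the convex hull of the training set. As a minimal sketch of how such a check could be implemented (not the paper's own code), the snippet below tests hull membership by solving a linear-programming feasibility problem with SciPy: a point x lies in the hull exactly when some nonnegative weights that sum to one combine the training points into x. The function name in_convex_hull and the toy data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, training_points):
    """Return True if query point x lies in the convex hull of
    training_points (shape: n_samples x n_features).

    Feasibility test: does there exist lambda >= 0 with
    sum(lambda) == 1 and training_points.T @ lambda == x?
    """
    n = training_points.shape[0]
    # Equality constraints: the weighted combination must equal x
    # in every feature, and the weights must sum to one.
    A_eq = np.vstack([training_points.T, np.ones((1, n))])
    b_eq = np.concatenate([x, [1.0]])
    # Pure feasibility problem, so the objective is all zeros.
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success

# Toy example: [0.5, 0.5] is inside the unit square (interpolation),
# while [2.0, 2.0] is outside it (extrapolation).
train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(in_convex_hull(np.array([0.5, 0.5]), train))  # True
print(in_convex_hull(np.array([2.0, 2.0]), train))  # False
```

In high-dimensional feature spaces, an LP feasibility test of this kind is the practical route: explicitly constructing the hull's facets (e.g., with scipy.spatial.ConvexHull) becomes intractable beyond modest dimensionality, whereas the LP scales with the number of training points.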

Focus: AI Ethics/Policy
Source: MAIEI
Readability: Expert
Type: Website Resource
Open Source: No
Keywords: automated systems, AI, AI regulations, extrapolation, transparency
Learn Tags: Design/Methods, Ethics, Fairness, AI and Machine Learning, Trust, Research Centre
Summary: This new paper discusses the importance of including extrapolation in the right to AI explainability and transparency discourse. Extrapolation, a term that refers to the extent to which AI models are clueless when they come across unfamiliar samples, happens far more often than users and stakeholders would expect.