Experts urge inclusive panels to assess AI trustworthiness, warning that relying solely on elites risks entrenching bias and power imbalances in AI evaluation.
Experts alone are insufficient to determine the trustworthiness of AI systems. While Vinay Chaudhri's proposal of a 'Sunstein test'—using expert interviews to assess an AI model's depth of understanding—offers a valuable method for evaluating technical competence, it risks concentrating authority in the hands of a narrow group of elites. This approach may unintentionally entrench existing power imbalances, as highlighted by Cathy O'Neil in her critique of how AI systems often reflect the interests and biases of their creators. True trustworthiness in AI must go beyond technical accuracy and consider broader societal impacts, fairness, and inclusivity.

To address this, we need to move beyond relying solely on expert judgment. Panels composed of diverse, representative peers—individuals from varied cultural, educational, and professional backgrounds—should be integral to assessing AI systems. Such panels can provide grounded, real-world perspectives on how AI tools function in practice, how they affect different communities, and whether they align with public values. This participatory model ensures that the evaluation of AI is not limited to a technocratic elite but includes the lived experiences of those who interact with or are affected by these systems.

Incorporating peer panels also helps counteract the risk of AI systems amplifying inequality. When only experts evaluate AI, there is a danger that systems optimized for efficiency or precision—often defined through a narrow technical lens—may overlook ethical trade-offs, accessibility issues, or unintended consequences for marginalized groups. By involving a broader cross-section of society, we can better identify biases, ensure transparency, and promote accountability.

Moreover, public trust in AI depends on perceived legitimacy. When people see that AI systems are evaluated not just by experts but by people like themselves, they are more likely to view the technology as fair and trustworthy.
This is especially critical as AI becomes embedded in high-stakes domains such as healthcare, education, and criminal justice. In sum, while expert evaluation remains important, it should be complemented by inclusive, peer-led assessment. Only through such a pluralistic approach can we ensure that AI systems are not only technically sound but also ethically responsible, socially just, and truly trustworthy.
