Domain-Specific Evaluation Strategies for AI in Journalism
News organizations today rely on AI tools to increase efficiency and productivity across a range of tasks in news production and distribution. These tools serve stakeholders such as reporters, editors, and readers. However, practitioners also express reservations about adopting AI technologies in the newsroom, owing to the technical and ethical challenges involved in evaluating AI technology and its return on investment. This hesitation stems in part from the lack of domain-specific strategies for evaluating AI models and applications. In this paper, we consider different aspects of AI evaluation (model outputs, interaction, and ethics) that can benefit from domain-specific tailoring, and we suggest examples of how journalistic considerations can lead to specialized metrics or strategies. In doing so, we lay out a potential framework to guide AI evaluation in journalism, analogous to frameworks seen in other disciplines (e.g., law, healthcare). We also consider directions for future work, as well as how our approach might generalize to other domains.
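To make the idea of a specialized, journalism-oriented metric concrete, the following is a minimal sketch, not drawn from the paper itself: a check of whether direct quotes in an AI-generated summary appear verbatim in the source article. The function name `quote_fidelity` and the regex-based heuristic are illustrative assumptions, not a proposed standard.

```python
# Illustrative sketch only: one possible journalism-specific evaluation check,
# verifying that every direct quote in an AI-generated summary also appears
# verbatim in the source article (a proxy for quote attribution fidelity).
import re


def quote_fidelity(summary: str, source: str) -> float:
    """Return the fraction of quoted passages in `summary` found verbatim in `source`.

    Returns 1.0 when the summary contains no direct quotes.
    """
    quotes = re.findall(r'"([^"]+)"', summary)
    if not quotes:
        return 1.0
    supported = sum(1 for quote in quotes if quote in source)
    return supported / len(quotes)


if __name__ == "__main__":
    article = 'The mayor said, "We will fund the program next year."'
    ai_summary = 'The mayor promised that "We will fund the program next year."'
    # Prints 1.0 because the summary's quote is sourced word for word.
    print(quote_fidelity(ai_summary, article))
```

A metric like this would sit alongside, not replace, broader evaluations of interaction quality and ethics; it simply shows how an editorial norm (quotes must be verbatim and sourced) can be turned into an automated check.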