MediumAI

Associate team MediumAI – Responsible AI for Journalism

Principal investigators
Oana-Denisa Balalau, CEDAR research team, Inria
Davide Ceolin, Human-Centered Data Analytics group, CWI

Abstract
From recommender systems to large language models, data-driven AI tools have shown various forms of limitation and bias. Bias in AI tools may stem from multiple factors, including bias in the input data the tools are trained on, bias in the algorithms and in the individuals who design them, and bias in the evaluation and interpretation of the tools' outputs. Limitations, in turn, stem from technical difficulties in accomplishing specific tasks. Media outlets use a range of algorithmic aids in their workflows: keyword extraction, entity and relation extraction, event extraction, sentiment analysis, automatic summarization, newsworthy story detection, semi-automatic news production with text generation models, and search, among others. Given the importance of the media sector for our democracies, shortcomings in the tools media outlets use could have severe consequences. Both Inria and CWI have partnerships with large media groups and can help them address bias and limitations in their AI workflows.
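As a minimal illustration of the kind of bias the abstract refers to (not part of the MediumAI toolchain), the sketch below probes an off-the-shelf sentiment classifier with sentences that differ only in a demographic attribute; the model name and the sentence templates are illustrative assumptions, not project artifacts.

    # Minimal sketch: probing an off-the-shelf sentiment model for
    # group-dependent score shifts. Model name and templates are
    # illustrative assumptions, not part of the MediumAI project.
    from transformers import pipeline

    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )

    TEMPLATE = "The {group} politician gave a speech about the economy."
    GROUPS = ["young", "elderly", "immigrant", "local"]

    for group in GROUPS:
        sentence = TEMPLATE.format(group=group)
        result = classifier(sentence)[0]
        # Large score differences between otherwise identical sentences
        # hint at bias inherited from the training data.
        print(f"{group:>10}: {result['label']} ({result['score']:.3f})")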

Website: under construction

Keywords: natural language processing, machine learning and statistics, data and knowledge analysis, participative democracy, information systems
