Gods and Robots

In this episode of the podcast we shake things up! Neil is on the guest side of the table with his partner, Rabbi Laura Janner-Klausner, to discuss their upcoming project, Gods and Robots. Katherine is joined on the host side by friend of the show Professor Michael Littman.

arXiv Whitepapers

Different Bias Under Different Criteria: Assessing Bias in LLMs with a Fact-Based Approach
Large language models (LLMs) often reflect real-world biases, leading to efforts to mitigate these effects and make the models unbiased. Achieving this goal requires defining clear criteria for an unbiased state, with any deviation from these criteria considered biased. Some studies define an...

A Shared Standard for Valid Measurement of Generative AI Systems' Capabilities, Risks, and Impacts
The valid measurement of generative AI (GenAI) systems' capabilities, risks, and impacts forms the bedrock of our ability to evaluate these systems. We introduce a shared standard for valid measurement that helps place many of the disparate-seeming evaluation practices in use today on a common...

Gaps Between Research and Practice When Measuring Representational Harms Caused by LLM-Based Systems
To facilitate the measurement of representational harms caused by large language model (LLM)-based systems, the NLP research community has produced and made publicly available numerous measurement instruments, including tools, datasets, metrics, benchmarks, annotation instructions, and other...

News Articles

Eat a rock a day, put glue on your pizza: how Google's AI is losing touch with reality
AI's excessive water consumption threatens to drown out its environmental contributions
Understanding AI outputs: study shows pro-western cultural bias in the way AI decisions are explained
How to spot fake online reviews (with a little help from AI)
Supermarket facial recognition failure: why automated systems must put the human factor first
From shrimp Jesus to fake self-portraits, AI-generated images have become the latest form of social media spam
Building fairness into AI is crucial – and hard to get right
Beware businesses claiming to use trailblazing technology. They might just be 'AI washing' to snare investors
Generative AI could leave users holding the bag for copyright violations
Something felt 'off' – how AI messed with our human research, and what we learned