Gods and Robots

In this episode of the podcast we shake things up! Neil is on the guest side of the table with his partner Rabbi Laura Janner-Klausner to discuss their upcoming project, Gods and Robots. Katherine is joined on the host side by friend of the show Professor Michael Littman.

arXiv Whitepapers

Contrasting Attitudes Towards Current and Future AI Applications for Computerised Interpretation of ECG: A Clinical Stakeholder Interview Study
Objectives: To investigate clinicians' attitudes towards current automated interpretation of ECG and novel AI technologies and their perception of computer-assisted interpretation. Materials and Methods: We conducted a series of interviews with clinicians in the UK. Our study: (i) explores the...

Language model developers should report train-test overlap
Language models are extensively evaluated, but correctly interpreting evaluation results requires knowledge of train-test overlap, which refers to the extent to which the language model is trained on the very data it is being tested on. The public currently lacks adequate information about train-test...

When a language model is optimized for reasoning, does it still show embers of autoregression? An analysis of OpenAI o1
In "Embers of Autoregression" (McCoy et al., 2023), we showed that several large language models (LLMs) have some important limitations that are attributable to their origins in next-word prediction. Here we investigate whether these issues persist with o1, a new system from OpenAI that differs from...

News Articles

Eat a rock a day, put glue on your pizza: how Google’s AI is losing touch with reality
AI’s excessive water consumption threatens to drown out its environmental contributions
Understanding AI outputs: study shows pro-western cultural bias in the way AI decisions are explained
How to spot fake online reviews (with a little help from AI)
Supermarket facial recognition failure: why automated systems must put the human factor first
From shrimp Jesus to fake self-portraits, AI-generated images have become the latest form of social media spam
Building fairness into AI is crucial – and hard to get right
Beware businesses claiming to use trailblazing technology. They might just be ‘AI washing’ to snare investors
Generative AI could leave users holding the bag for copyright violations
Something felt ‘off’ – how AI messed with our human research, and what we learned