Gods and Robots
In this episode of the podcast we shake things up! Neil is on the guest side of the table with his partner, Rabbi Laura Janner-Klausner, to discuss their upcoming project, Gods and Robots. Katherine is joined on the host side by friend of the show Professor Michael Littman.

arXiv Whitepapers

AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages
In recent years, multilingual pre-trained language models have gained prominence due to their remarkable performance on numerous downstream Natural Language Processing (NLP) tasks. However, pre-training these large multilingual language models requires a lot of training data, which is not available...

Probing for Incremental Parse States in Autoregressive Language Models
Next-word predictions from autoregressive neural language models show remarkable sensitivity to syntax. This work evaluates the extent to which this behavior arises as a result of a learned ability to maintain implicit representations of incremental syntactic structures. We extend work in syntactic...

Does the explanation satisfy your needs?: A unified view of properties of explanations
Interpretability provides a means for humans to verify aspects of machine learning (ML) models and empowers human+ML teaming in situations where the task cannot be fully automated. Different contexts require explanations with different properties. For example, the kind of explanation required to...

News Articles
New leadership for MIT-IBM Watson AI Lab
Artificial intelligence suggests recipes based on food photos
Lincoln Laboratory enters licensing agreement to produce its localizing ground-penetrating radar
Bringing neural networks to cellphones
Miniaturizing the brain of a drone