Human Heuristics for AI-Generated Language Are Flawed

Authors
Maurice Jakesch, Jeffrey Hancock, Mor Naaman

Human communication is increasingly intermixed with language generated by AI. Across chat, email, and social media, AI systems produce smart replies, autocompletes, and translations. AI-generated language is often not identified as such but poses as human language, raising concerns about novel forms of deception and manipulation. Here, we study how humans discern whether one of the most personal and consequential forms of language, a self-presentation, was generated by AI. In six experiments, participants (N = 4,600) tried to detect self-presentations generated by state-of-the-art language models. Across professional, hospitality, and dating settings, we find that humans are unable to detect AI-generated self-presentations. Our findings show that human judgments of AI-generated language are handicapped by intuitive but flawed heuristics, such as associating first-person pronouns, spontaneous wording, or family topics with humanity. We demonstrate that these heuristics make human judgment of generated language predictable and manipulable, allowing AI systems to produce language perceived as more human than human. We discuss solutions, such as AI accents, to reduce the deceptive potential of generated language, limiting the subversion of human intuition.
