In both entertainment and professional applications, conventionally produced stereo or multi-channel audio content is frequently delivered over headphones or earbuds. Use cases for object-based binaural audio rendering include recently developed immersive multi-channel audio distribution formats, as well as the accelerating deployment of virtual and augmented reality applications and head-mounted displays. End users' appreciation of these listening experiences may be compromised by an unnatural perception of the localization of frontal audio objects, which are commonly heard near or inside the listener's head even when their specified position is distant. In this demonstration, examples are presented that illustrate the differences between audio rendered with traditional stereo panning, binaural processing, and a recently proposed externalization processing method.
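For reference only, the sketch below (in Python) contrasts the first two rendering approaches named above, constant-power stereo panning and HRIR-convolution binaural rendering, for a single audio object. The test signal and head-related impulse responses are placeholders rather than measured data, and the sketch does not represent the externalization processing method presented in the demonstration.

    # Illustrative sketch: stereo panning vs. binaural rendering of one object.
    import numpy as np

    fs = 48_000
    t = np.arange(fs) / fs
    mono_object = 0.5 * np.sin(2 * np.pi * 440.0 * t)   # 1 s test tone as the "object"

    # Traditional stereo panning: constant-power gains derived from a pan angle.
    pan_angle = np.deg2rad(30.0)                         # object panned toward the right
    left_gain, right_gain = np.cos(pan_angle), np.sin(pan_angle)
    stereo = np.stack([left_gain * mono_object,
                       right_gain * mono_object], axis=0)

    # Binaural processing: convolve the object with left/right HRIRs.
    # Placeholder HRIRs (a crude interaural delay and level difference);
    # a real renderer would use measured HRTFs for the target direction.
    hrir_left = np.zeros(256);  hrir_left[0] = 1.0
    hrir_right = np.zeros(256); hrir_right[8] = 0.7
    binaural = np.stack([np.convolve(mono_object, hrir_left),
                         np.convolve(mono_object, hrir_right)], axis=0)

    print(stereo.shape, binaural.shape)

In the stereo case, only the relative channel gains encode the object's position, which over headphones tends to collapse the image toward or inside the head; the binaural case additionally encodes interaural time and level cues through the HRIR convolution.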