
A Deep Dive Into Neural Synchrony Evaluation for Audio-visual Translation

We present a comprehensive analysis of the neural audio-visual synchrony evaluation tool SyncNet. We assess how well SyncNet scores agree with human perception and whether they can serve as a reliable metric for evaluating audio-visual lip synchrony in generation tasks that lack a ground-truth reference audio-video pair. We further investigate which underlying elements of the audio and video critically affect synchrony, using interpretable explanations of SyncNet predictions, and analyse its susceptibility to adversarial noise. SyncNet has been used in numerous papers on visually grounded text-to-speech, for scenarios such as dubbing. We focus on this scenario, which features many local asynchronies, a setting SyncNet was not designed for.
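For context on the metric under study: as described in the SyncNet literature, the model embeds short audio and video windows into a shared space and scores synchrony from embedding distances over a range of temporal offsets, with the confidence commonly taken as the median distance minus the minimum distance. Below is a minimal sketch of that scoring step, not the authors' implementation; the function name, parameters, and the assumption that per-frame embeddings from a pretrained two-stream network are already available are all illustrative.

    import numpy as np

    def syncnet_confidence(audio_emb: np.ndarray,
                           video_emb: np.ndarray,
                           max_offset: int = 15) -> tuple[int, float]:
        """Score audio-video synchrony from per-frame embeddings.

        audio_emb, video_emb: (num_frames, dim) arrays produced by a
        (hypothetical) pretrained two-stream network, aligned 1:1 in time.
        Returns the estimated offset in frames and a SyncNet-style
        confidence: median distance minus minimum distance over offsets.
        """
        n = min(len(audio_emb), len(video_emb))
        offsets = range(-max_offset, max_offset + 1)
        dists = []
        for off in offsets:
            # Shift the audio stream by `off` frames and compare the overlap.
            a_start, v_start = max(0, off), max(0, -off)
            length = n - abs(off)
            a = audio_emb[a_start:a_start + length]
            v = video_emb[v_start:v_start + length]
            dists.append(np.mean(np.linalg.norm(a - v, axis=1)))
        dists = np.asarray(dists)
        best = int(np.argmin(dists))
        confidence = float(np.median(dists) - dists[best])
        return list(offsets)[best], confidence

A low confidence means no offset stands out, i.e. no clearly synchronous alignment was found; this global-offset formulation is also why the paper's dubbing scenario, with many local asynchronies, is a hard case for the metric.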

Metadata
Author:Shravan Nayak, Christian Schuler, Debjoy Saha, Timo Baumann
DOI:https://doi.org/10.1145/3536221.3556621
ISBN:978-1-4503-9390-4
Parent Title (English):ICMI '22, Proceedings of the 2022 International Conference on Multimodal Interaction: November 7-11, 2022, Bengaluru, India
Publisher:Association for Computing Machinery
Place of publication:New York
Editor:Raj Tumuluri, Nicu Sebe, Gopal Pingali
Document Type:conference proceeding (article)
Language:English
Year of first Publication:2022
Release Date:2022/11/09
Tag:audio-visual synchrony; dubbing; speech-lip synchrony
First Page:642
Last Page:647
Institutes:Fakultät Informatik und Mathematik
Review status:peer-reviewed
Research focus:Information and Communication
Open Access route:Diamond Open Access - OA publication without publication fees (sponsored)
Licence (German):No licence - German copyright law applies: § 53 UrhG