Thursday, May 28, 2020

Transcription-Enriched Joint Embeddings for Spoken Descriptions of Images and Videos


Benet Oriol, Jordi Luque, Ferran Diego, Xavier Giro-i-Nieto
Telefonica Research / Universitat Politecnica de Catalunya (UPC)
CVPR 2020 Workshop on Egocentric Perception, Interaction and Computing

In this work, we propose an effective approach for training unique embedding representations by combining three simultaneous modalities: images, spoken narratives and their textual transcriptions. The proposed methodology departs from a baseline system that spawns an embedding space trained with only spoken narratives and image cues. Our experiments on the EPIC-Kitchens and Places Audio Caption datasets show that introducing the human-generated textual transcriptions of the spoken narratives helps the training procedure, yielding better embedding representations. The triad of speech, image and words allows for a better estimate of the point embedding and shows improved performance on tasks such as image and speech retrieval, even when the third modality, text, is not present at test time.
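To make the idea of a tri-modal joint embedding concrete, below is a minimal PyTorch-style sketch. It is an illustrative assumption, not the paper's actual architecture or loss: the encoder dimensions, the linear projection heads, and the in-batch margin ranking loss applied to each modality pair are all hypothetical choices made here for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TriModalEmbedding(nn.Module):
    """Projects precomputed image, speech and text features into a shared space.

    All dimensions below are illustrative placeholders, not the paper's values.
    """

    def __init__(self, img_dim=2048, speech_dim=1024, text_dim=300, embed_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.speech_proj = nn.Linear(speech_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)

    def forward(self, img_feat, speech_feat, text_feat):
        # L2-normalize each projection so cosine similarity is a dot product.
        z_img = F.normalize(self.img_proj(img_feat), dim=-1)
        z_speech = F.normalize(self.speech_proj(speech_feat), dim=-1)
        z_text = F.normalize(self.text_proj(text_feat), dim=-1)
        return z_img, z_speech, z_text


def pairwise_ranking_loss(anchor, positive, margin=0.1):
    """Hinge ranking loss over in-batch negatives for one modality pair."""
    sim = anchor @ positive.t()                # (B, B) similarity matrix
    pos = sim.diag().unsqueeze(1)              # matching pairs sit on the diagonal
    loss = (margin + sim - pos).clamp(min=0)   # penalize negatives above margin
    loss.fill_diagonal_(0)                     # ignore the positive pairs themselves
    return loss.mean()


def joint_loss(z_img, z_speech, z_text, margin=0.1):
    # Sum the ranking terms over the three modality pairs so that text
    # supervision shapes the space shared by image and speech.
    return (pairwise_ranking_loss(z_img, z_speech, margin)
            + pairwise_ranking_loss(z_img, z_text, margin)
            + pairwise_ranking_loss(z_speech, z_text, margin))
```

At retrieval time only the image and speech branches would be used: ranking speech embeddings by dot product against a query image embedding (or vice versa), which is why the text modality can be dropped without changing the retrieval pipeline.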
