
Distinguishing Homophenes Using Multi-Head Visual-Audio Memory for Lip Reading

Kim, Minsu; Yeo, Jeong Hun; Ro, Yong Man
Abstract

Recognizing speech from silent lip movement, which is called lip reading, is a challenging task due to 1) the inherent information insufficiency of lip movement to fully represent the speech, and 2) the existence of homophenes that have similar lip movements but different pronunciations. In this paper, we aim to alleviate these two challenges in lip reading by proposing a Multi-head Visual-audio Memory (MVM). Firstly, MVM is trained with audio-visual datasets and remembers audio representations by modelling the inter-relationships of paired audio-visual representations. At the inference stage, visual input alone can extract the saved audio representations from the memory by examining the learned inter-relationships. Therefore, the lip reading model can complement the insufficient visual information with the extracted audio representations. Secondly, MVM is composed of multi-head key memories for saving visual features and one value memory for saving audio knowledge, a design intended to distinguish homophenes. With the multi-head key memories, MVM extracts possible candidate audio features from the memory, allowing the lip reading model to consider which pronunciations the input lip movement could represent. This can also be viewed as an explicit implementation of the one-to-many viseme-to-phoneme mapping. Moreover, MVM is employed at multiple temporal levels so that context is taken into account when retrieving from the memory and distinguishing homophenes. Extensive experimental results verify the effectiveness of the proposed method in lip reading and in distinguishing homophenes.
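To make the memory mechanism concrete, below is a minimal PyTorch sketch of the multi-head key/value addressing the abstract describes: several key memories hold visual features, a single value memory holds audio knowledge, and a visual query reads one candidate audio representation per head. All names and hyperparameters (MVMSketch, mem_slots, n_heads, the scaled-softmax addressing) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MVMSketch(nn.Module):
    def __init__(self, dim=512, mem_slots=128, n_heads=4):
        super().__init__()
        # Multi-head key memories store visual features; each head can
        # address the memory differently, so one lip movement yields
        # several candidate readouts (one-to-many viseme-to-phoneme).
        self.keys = nn.Parameter(torch.randn(n_heads, mem_slots, dim))
        # A single value memory stores the audio knowledge shared by all heads.
        self.values = nn.Parameter(torch.randn(mem_slots, dim))

    def forward(self, visual_feat):  # visual_feat: (batch, dim)
        # Match the visual query against each head's key memory.
        # scores: (n_heads, batch, mem_slots)
        scores = torch.einsum('bd,hmd->hbm', visual_feat, self.keys)
        weights = F.softmax(scores / visual_feat.size(-1) ** 0.5, dim=-1)
        # Read candidate audio representations from the shared value memory.
        # candidates: (n_heads, batch, dim)
        return torch.einsum('hbm,md->hbd', weights, self.values)

# Usage: at inference, visual features alone retrieve audio candidates.
mvm = MVMSketch()
lip_feat = torch.randn(2, 512)    # features from some visual front-end
audio_candidates = mvm(lip_feat)  # (4, 2, 512): one candidate per head

Sharing one value memory across heads keeps the stored audio knowledge consistent while letting the per-head key memories disagree about which slots a given lip movement should address, which is what lets homophenes map to distinct pronunciation candidates.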