Conference paper, 2024

Neuromorphic Lip-reading with signed spiking gated recurrent units

Abstract

Automatic Lip-Reading (ALR) requires the recognition of spoken words based on a visual recording of the speaker’s lips, without access to the sound. ALR with neuromorphic event-based vision sensors, instead of traditional frame-based cameras, is particularly promising for edge applications due to their high temporal resolution, low power consumption and robustness. Neuromorphic models, such as Spiking Neural Networks (SNNs), encode information using events and are naturally compatible with such data. The sparse and event-based nature of both the sensor data and SNN activations can be leveraged in an end-to-end neuromorphic hardware pipeline for low-power and low-latency edge applications. However, the accuracy of SNNs is often largely degraded compared to state-of-the-art non-spiking Artificial Neural Networks (ANNs). In this work, a new SNN model, the Signed Spiking Gated Recurrent Unit (SpikGRU2+), is proposed and used as a task head for event-based ALR. The SNN architecture is as accurate as its ANN equivalent, and outperforms the state-of-the-art on the DVS-Lip dataset. Notably, the accuracy is improved by 25% (respectively 4%) compared to the previous state-of-the-art SNN (respectively ANN). In addition, the SNN spike sparsity can be optimized to further reduce the number of operations up to 22x compared to the ANN while maintaining a high accuracy. This work opens up new perspectives for the use of SNNs for accurate and low-power end-to-end neuromorphic gesture recognition. Code is available.
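To make the idea of a signed spiking recurrent unit concrete, the PyTorch sketch below shows one way a GRU-style cell can emit ternary spikes (-1, 0, +1) by thresholding a membrane potential and training through a surrogate gradient. The class names (SignedSpike, SignedSpikingGRUCell), the gating layout, the rectangular surrogate, and the soft reset are illustrative assumptions made here for clarity; this is a minimal sketch of the general mechanism, not the authors' exact SpikGRU2+ formulation from the paper.

import torch
import torch.nn as nn

class SignedSpike(torch.autograd.Function):
    """Ternary spike function {-1, 0, +1} with a rectangular surrogate gradient."""
    @staticmethod
    def forward(ctx, v, threshold):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v > threshold).float() - (v < -threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pass gradients only near the positive and negative firing thresholds.
        surrogate = ((v.abs() - ctx.threshold).abs() < 0.5).float()
        return grad_output * surrogate, None


class SignedSpikingGRUCell(nn.Module):
    """Illustrative GRU-like cell whose output at each time step is signed spikes."""
    def __init__(self, input_size, hidden_size, threshold=1.0):
        super().__init__()
        self.threshold = threshold
        self.gates = nn.Linear(input_size + hidden_size, 2 * hidden_size)
        self.candidate = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x, state):
        v, s = state  # membrane potential and previous signed spikes
        z, r = torch.sigmoid(self.gates(torch.cat([x, s], dim=-1))).chunk(2, dim=-1)
        c = torch.tanh(self.candidate(torch.cat([x, r * s], dim=-1)))
        v = z * v + (1 - z) * c               # gated leaky integration, GRU-style
        out = SignedSpike.apply(v, self.threshold)
        v = v - out * self.threshold          # soft reset after emitting a spike
        return out, (v, out)


# Example usage on one time step of (hypothetical) event-frame features.
cell = SignedSpikingGRUCell(input_size=64, hidden_size=128)
x_t = torch.randn(8, 64)
state = (torch.zeros(8, 128), torch.zeros(8, 128))
spikes, state = cell(x_t, state)              # spikes take values in {-1, 0, +1}

Because most entries of the output are zero at any given step, downstream layers only need to process the nonzero signed spikes, which is the kind of sparsity the abstract refers to when comparing operation counts against the ANN.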

Dates and versions

cea-04613773, version 1 (17-06-2024)

Identifiers

  • HAL Id: cea-04613773, version 1

Cite

Manon Dampfhoffer, Thomas Mesquida. Neuromorphic Lip-reading with signed spiking gated recurrent units. CVPR 2024 - IEEE / CVF Computer Vision and Pattern Recognition Conference, Jun 2024, Seattle, United States. pp.2141-2151. ⟨cea-04613773⟩