Conference paper, Year: 2024

Centered Kernel Alignment for efficient Vision Transformer quantization

Abstract

The rapidly evolving field of computer vision has witnessed a paradigm shift with the introduction of Transformer-based architectures, particularly Vision Transformers (ViTs). As these models grow in complexity, ensuring their efficient deployment on resource-limited devices becomes crucial. This paper addresses the model compression problem with an emphasis on quantization, and highlights a notable gap in current methodologies: the lack of consideration for outliers during the quantization process. We propose a distillation-guided quantization approach for ViTs, leveraging the Centered Kernel Alignment (CKA) similarity score. Empirical experiments are carried out on the DeiT architecture using the ImageNet dataset, with our CKA approach demonstrating promising results in retaining model intricacies during compression.
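For context, the CKA score referenced in the abstract is the standard one from the representation-similarity literature: given activation matrices X and Y collected from two layers on the same inputs, linear CKA is tr(K_c L_c) / (||K_c||_F ||L_c||_F), where K_c and L_c are the centered Gram matrices of XX^T and YY^T. The sketch below is a minimal NumPy illustration of that score, not the authors' implementation; how the score is folded into the distillation loss is described in the paper and not reproduced here, and the teacher/student usage at the bottom is an illustrative assumption.

```python
import numpy as np

def _center_gram(gram):
    """Double-center a Gram matrix; this is the 'centered' in CKA."""
    n = gram.shape[0]
    centering = np.eye(n) - np.ones((n, n)) / n
    return centering @ gram @ centering

def linear_cka(x, y):
    """Linear CKA between activation matrices of shape (n_samples, n_features).

    Returns a score in [0, 1]; higher means more similar representations.
    CKA is invariant to orthogonal transforms and isotropic scaling.
    """
    k = _center_gram(x @ x.T)
    m = _center_gram(y @ y.T)
    hsic = (k * m).sum()  # tr(K @ M) for symmetric K, M: an unnormalized HSIC
    return hsic / (np.linalg.norm(k) * np.linalg.norm(m))

# Hypothetical usage: compare a full-precision teacher layer's activations
# with those of its quantized student counterpart on the same batch.
rng = np.random.default_rng(0)
teacher = rng.standard_normal((64, 384))                   # e.g. one DeiT-S block output
student = teacher + 0.1 * rng.standard_normal((64, 384))   # quantization-noise stand-in
print(linear_cka(teacher, student))  # close to 1.0 for a faithful student
```

Because CKA is invariant to orthogonal transforms and isotropic scaling, a score comparison of this kind can indicate when quantization has genuinely changed what a layer computes rather than merely rescaled its outputs.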

Dates and versions

cea-04706854, version 1 (23-09-2024)

Identifiers

  • HAL Id: cea-04706854, version 1

Cite

Jose Lucas de Melo Costa, Cyril Moineau, Thibault Allenet, Inna Kucher. Centered Kernel Alignment for efficient Vision Transformer quantization. 6th Workshop on Accelerated Machine Learning (AccML), HiPEAC 2024, Jan 2024, Munich, Germany. 6th_AccML_paper_17. ⟨cea-04706854⟩