Journal article · Lecture Notes in Computer Science · 2023

Fault injection and safe-error attack for extraction of embedded neural network models

Abstract

Model extraction is emerging as a critical security threat, with attack vectors exploiting both algorithmic and implementation-based approaches. The main goal of an attacker is to steal as much information as possible about a protected victim model in order to mimic it with a substitute model, even with limited access to similar training data. Recently, physical attacks such as fault injection have shown worrying efficiency against the integrity and confidentiality of embedded models. We focus on embedded deep neural network models on 32-bit microcontrollers, a widespread family of hardware platforms in IoT, and the use of a standard fault injection strategy, the Safe Error Attack (SEA), to perform a model extraction attack with an adversary having limited access to training data. Since the attack strongly depends on the input queries, we propose a black-box approach to craft a successful attack set. For a classical convolutional neural network, we successfully recover at least 90% of the most significant bits with about 1,500 crafted inputs. This information enables the efficient training of a substitute model, with only 8% of the training dataset, that reaches high fidelity and a near-identical accuracy level to the victim model.

Dates and versions

cea-04607995, version 1 (11-06-2024)

Identifiers

HAL Id: cea-04607995
DOI: 10.1007/978-3-031-54129-2_38

Cite

Kévin Hector, Mathieu Dumont, Pierre-Alain Moellic, Jean-Max Dutertre. Fault injection and safe-error attack for extraction of embedded neural network models. Lecture Notes in Computer Science, 2023, Computer Security. ESORICS 2023 International Workshops CPS4CIP, ADIoT, SecAssure, WASP, TAURIN, PriST-AI, and SECAI, The Hague, The Netherlands, September 25–29, 2023, Revised Selected Papers, Part II, 14399, pp.644-664. ⟨10.1007/978-3-031-54129-2_38⟩. ⟨cea-04607995⟩