1 B. Biggio, "Wild patterns : Ten years after the rise of adversarial machine learning" 84 : 317-331, 2018
2 S. Freitas, "Unmask: Adversarial detection and defense through robust feature al ignment" 1081-1088, 2020
3 H. Hirano, "Universal adversarial attacks on d eep neural networks for medical imag e classification" 21 (21): 1-13, 2021
4 L. Munoz-Gonzalez, "Towards poi soning of deep learning algorithms wi th back-gradient optimization" 27-38, 2017
5 A. Madry, "Towards deep learning models resistant to adversari al attacks"
6 N. Carlini, "Towards Evaluating the Robustness of Neural Networks" 39-57, 2017
7 N. Akhtar, "Threat of ad versarial attacks on deep learning in computer vision : A survey" 6 : 14410-14430, 2018
8 X. Chen, "Targeted backdoor attacks on deep learning systems using data pois oning"
9 F. Tramèr, "Stealingm achine learning models via prediction APIs" 601-618, 2016
10 R. Timofte, "Sparse r epresentation based projections" 61-61, 2011
1 B. Biggio, "Wild patterns : Ten years after the rise of adversarial machine learning" 84 : 317-331, 2018
2 S. Freitas, "Unmask: Adversarial detection and defense through robust feature al ignment" 1081-1088, 2020
3 H. Hirano, "Universal adversarial attacks on d eep neural networks for medical imag e classification" 21 (21): 1-13, 2021
4 L. Munoz-Gonzalez, "Towards poi soning of deep learning algorithms wi th back-gradient optimization" 27-38, 2017
5 A. Madry, "Towards deep learning models resistant to adversari al attacks"
6 N. Carlini, "Towards Evaluating the Robustness of Neural Networks" 39-57, 2017
7 N. Akhtar, "Threat of ad versarial attacks on deep learning in computer vision : A survey" 6 : 14410-14430, 2018
8 X. Chen, "Targeted backdoor attacks on deep learning systems using data pois oning"
9 F. Tramèr, "Stealingm achine learning models via prediction APIs" 601-618, 2016
10 R. Timofte, "Sparse r epresentation based projections" 61-61, 2011
11 K. Eykholt, "Robust Physical-World Attacks o n Deep Learning Visual Classificatio n" 1625-1634, 2018
12 M. Fredrikson, "Model inversion attacks that ex ploit confidence information and basic countermeasures" 1322-1333, 2015
13 D. Meng, "Magnet: a t wo-pronged defense against adversari al examples" 135-147, 2017
14 M. Xue, "Machine learning securit y : Threats, countermeasures, and eva luations" 8 : 74720-74742, 2020
15 C. Szegedy, "Intriguing properties of neural netsworks"
16 C. Yang, "Generative poisoning attack methoda gainst neural networks"
17 W. Xu, "Feature squeezing : Detecting adversarial exa mples in deep neural networks"
18 I. Goodfellow, "Explaining and Harnessing Adver sarial Examples" 2015
19 S. Moosavi-Dezfooli, "Deepfool: a simple and accu rate method to fool deep neural netwo rks" 2574-2582, 2016
20 Y. . Dong, "Boosting adve rsarial attacks with momentum" 9185-9193, 2018
21 T. Gu, "Badnets: Identifying vulnerabilities i n the machine learning model supply chain"
22 A. Kurakin, "Artificial intelligence s afety and security" Chapman and Hall /CRC 99-112, 2018
23 G. Ryu, "Advers arial attacks by attaching noise mark ers on the face against deep face reco gnition" 60 : 1-11, 2021
24 G. Ryu, "A Research Tre nds in Artificial Intelligence Security Attacks and Countermeasures" 30 (30): 93-99, 2020