A Survey on Machine Learning Adversarial Attacks

Flávio Luis de Mello

Abstract


It is becoming notorious that several types of adversaries, depending on their threat model, leverage vulnerabilities to compromise a machine learning system. Therefore, it is important to make machine learning algorithms and systems robust against these adversaries. However, there are only a few strong countermeasures that can be used across all types of attack scenarios to design a robust artificial intelligence system. This paper is a structured and comprehensive overview of the research on attacks against machine learning systems, and it tries to draw the attention of developers and software houses to the security issues concerning machine learning.

Keywords


adversarial attack; machine learning; poisoning; privacy attack; trojaning; backdooring; evasion; reprogramming; countermeasures


References


Polyakov, Alexander. "How to Attack Machine Learning (Evasion, Poisoning, Inference, Trojans, Backdoors)", Towards Data Science, 2019. Accessed January 12, 2020.

NIST. "A Taxonomy and Terminology of Adversarial Machine Learning", National Institute of Standards and Technology Interagency/Internal Report 8269, Eds: Elham Tabassi, Kevin J. Burns, Michael Hadjimichael, Andres D. Molina-Markham, Julian T. Sexton; October 2019.

Biggio, B.; Roli, F. "Wild patterns: Ten years after the rise of adversarial machine learning", Pattern Recognition, vol. 84, pp. 317-331, 2018. doi: 10.1016/j.patcog.2018.07.023

Akhtar, N.; Mian, A. "Threat of adversarial attacks on deep learning in computer vision: A survey", IEEE Access, vol. 6, pp. 14410-14430, 2018.

Chakraborty, A.; Alam, M.; Dey, V.; Chattopadhyay, A.; Mukhopadhyay, D. "Adversarial Attacks and Defences: A Survey", 2018.

Liu, Q.; Li, P.; Zhao, W.; Cai, W.; Yu, S.; Leung, V. C. M. "A survey on security threats and defensive techniques of machine learning: A data driven view", IEEE Access, vol. 6, pp. 12103-12117, 2018. doi: 10.1109/ACCESS.2018.2805680

Papernot, N.; McDaniel, P.; Sinha, A.; Wellman, M. P. "SoK: Security and privacy in machine learning", In: 2018 IEEE European Symposium on Security and Privacy (EuroS&P), London, 2018. doi: 10.1109/EuroSP.2018.00035

Pitropakis, Nikolaos; Panaousis, Emmanouil; Giannetsos, Thanassis; Anastasiadis, Eleftherios; Loukas, George. "A taxonomy and survey of attacks against machine learning", Computer Science Review, vol. 34, November 2019. doi: 10.1016/j.cosrev.2019.100199

Szegedy, Christian; Zaremba, Wojciech; Sutskever, Ilya; Bruna, Joan; Erhan, Dumitru; Goodfellow, Ian; Fergus, Rob. "Intriguing properties of neural networks", arXiv, 2014.

Goodfellow, Ian; Shlens, Jonathon; Szegedy, Christian. "Explaining and Harnessing Adversarial Examples", 2014. arXiv:1412.6572

Jo, Jason; Bengio, Yoshua. "Measuring the Tendency of CNNs to Learn Surface Statistical Regularities", 2017. arXiv:1711.11561

Nelson, Blaine; Barreno, Marco; Chi, Fuching Jack; Joseph, Anthony D.; Rubinstein, Benjamin I. P.; Saini, Udam; Sutton, Charles; Tygar, J. D.; Xia, Kai. "Exploiting Machine Learning to Subvert Your Spam Filter", In: Proceedings of First USENIX Workshop on Large Scale Exploits and Emergent Threats, April 2008.

Liu, Yingqi; Ma, Shiqing; Aafer, Yousra; Lee, Wen-Chuan; Zhai, Juan; Wang, Weihang; Zhang, Xiangyu. "Trojaning Attack on Neural Networks", In: Network and Distributed System Security Symposium, 2018. doi: 10.14722/ndss.2018.23300

Lin, Yen-Chen; Hong, Zhang-Wei; Liao, Yuan-Hong; Shih, Meng-Li; Liu, Ming-Yu; Sun, Min. "Tactics of adversarial attack on deep reinforcement learning agents", 2017. arXiv:1703.06748

Elsayed, Gamaleldin F.; Goodfellow, Ian; Sohl-Dickstein, Jascha. "Adversarial Reprogramming of Neural Networks", 2018. arXiv:1806.11146

Shokri, R.; Stronati, M.; Song, C.; Shmatikov, V. "Membership Inference Attacks Against Machine Learning Models", IEEE Symposium on Security and Privacy, 2017. doi: 10.1109/SP.2017.41

Fredrikson, Matthew; Jha, Somesh; Ristenpart, Thomas. "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures", In: CCS '15: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1322-1333, October 2015. doi: 10.1145/2810103.2813677

Tramèr, Florian; Zhang, Fan; Juels, Ari; Reiter, Michael K.; Ristenpart, Thomas. "Stealing Machine Learning Models via Prediction APIs", 25th USENIX Security Symposium, 2016.

Dillow, Clay. "Anti-Surveillance Hoodie And Scarf Prevent Drones From Tracking You", In: Popular Science, Technology, 2013. Accessed December 31, 2019.

CV Dazzle. "Camouflage from face detection", 2010. Accessed December 31, 2019.

Xu, Kaidi; Zhang, Gaoyuan; Liu, Sijia; Fan, Quanfu; Sun, Mengshu; Chen, Hongge; Chen, Pin-Yu; Wang, Yanzhi; Lin, Xue. "Adversarial T-shirt! Evading Person Detectors in A Physical World", 2019. arXiv:1910.11099

Thys, Simen; Van Ranst, Wiebe; Goedemé, Toon. "Fooling automated surveillance cameras: adversarial patches to attack person detection", IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2019. arXiv:1904.08653

Yamada, Takayuki; Gohshi, Seiichi; Echizen, Isao. "Privacy Visor: wearable device for privacy protection based on differences in sensory perception between humans and devices", IWSEC - International Workshop on Security, 2012.

Sharif, Mahmood; Bhagavatula, Sruti; Bauer, Lujo; Reiter, Michael K. "Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition", ACM Conference on Computer and Communications Security, 2016.

Thalen, Mikael. "Is this wearable face projector being used by Hong Kong protesters?", The Daily Dot, 2019. Accessed January 2, 2020.

Eykholt, Kevin; Evtimov, Ivan; Fernandes, Earlence; Li, Bo; Rahmati, Amir; Xiao, Chaowei; Prakash, Atul; Kohno, Tadayoshi; Song, Dawn. "Robust Physical-World Attacks on Deep Learning Visual Classification", IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018. doi: 10.1109/CVPR.2018.00175

Mogelmose, A.; Trivedi, M.; Moeslund, T. "Vision-based traffic sign detection and analysis for intelligent driver assistance systems: Perspectives and survey", Trans. Intell. Transport. Syst. 2012, 3, 1484–1497. doi: 10.1109/TITS.2012.2209421

Stallkamp, J.; Schlipsing, M.; Salmen, J.; Igel, C. "Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition", Neural Networks. 2012, 32, 323–332. doi: 10.1016/j.neunet.2012.02.016

Denil, Misha; Shakibi, Babak; Dinh, Laurent. "Predicting parameters in deep learning", In: Advances in Neural Information Processing Systems, pp. 2148-2156, 2013.

Wang, Beilun; Gao, Ji; Qi, Yanjun. "A theoretical framework for robustness of (deep) classifiers under adversarial noise", 2016. arXiv:1612.00334

Metzen, J. H.; Genewein, T.; Fischer, V.; Bischoff, B. "On detecting adversarial perturbations", Proceedings of 5th International Conference on Learning Representations (ICLR), 2017.

Feinman, R.; Curtin, R. R.; Shintre, S.; Gardner, A. B. "Detecting adversarial samples from artifacts", 2017. arXiv:1703.00410

Grosse, K.; Manoharan, P.; Papernot, N.; Backes, M.; McDaniel, P. "On the (statistical) detection of adversarial examples", 2017. arXiv:1702.06280

Hendrycks, D.; Gimpel, K. "Early methods for detecting adversarial images", ICLR Workshop, 2017.

Huang, R.; Xu, B.; Schuurmans, D.; Szepesvári, C. "Learning with a strong adversary", 2015. arXiv:1511.03034

Tramèr, F.; Kurakin, A.; Papernot, N.; Goodfellow, I.; Boneh, D.; McDaniel, P. "Ensemble adversarial training: Attacks and defenses", 2017. arXiv:1705.07204

Hosseini, H.; Chen, Y.; Kannan, S.; Zhang, B.; Poovendran, R. "Blocking transferability of adversarial examples in black-box learning systems", 2017. arXiv:1703.04318

Biggio, B.; Nelson, B.; Laskov, P. "Support vector machines under adversarial label noise", In: Proceedings of the Asian Conference on Machine Learning, Taoyuan, Taiwan, 13-15 November 2011; pp. 97-112.

Lyu, C.; Huang, K.; Liang, H.N. "A unified gradient regularization family for adversarial examples", In: Proceedings of the 2015 IEEE International Conference on Data Mining (ICDM), Atlantic City, pp. 301-309, 2015.

Zhao, Q.; Griffin, L.D. "Suppressing the unusual: Towards robust cnns using symmetric activation functions", 2016. arXiv:1603.05145

Rozsa, A.; Gunther, M.; Boult, T.E. "Towards robust deep neural networks with BANG", 2016. arXiv:1612.00138

Xu, W.; Evans, D.; Qi, Y. "Feature squeezing: Detecting adversarial examples in deep neural networks", 2017. arXiv:1704.01155

Gu, S.; Rigazio, L. "Towards deep neural network architectures robust to adversarial examples", 2014. arXiv:1412.5068

Samangouei, P.; Kabkab, M.; Chellappa, R. "Defense-GAN: Protecting classifiers against adversarial attacks using generative models", 2018. arXiv:1805.06605

Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. "Generative adversarial nets", In: Advances in Neural Information Processing Systems, Proceedings of the Annual Conference on Neural Information Processing Systems, 2014.




DOI: https://doi.org/10.17648/jisc.v7i1.76



This work is licensed under a Creative Commons Attribution 4.0 International License.