DEEP LEARNING IN METHODS OF PROTECTION AGAINST ATTACKS

Abstract

In recent years, machine learning algorithms, and deep learning algorithms in particular, have been widely used in many fields, including cybersecurity. However, machine learning systems are vulnerable to adversarial attacks, which limits the use of machine learning, especially in non-stationary environments with hostile actors, such as cybersecurity, where real adversaries (for example, malware developers) exist. With the rapid development of artificial intelligence (AI) and deep learning (DL) methods, it is important to ensure the safety and reliability of the deployed algorithms. The vulnerability of deep learning algorithms to adversarial examples has recently become widely recognized: carefully fabricated samples can cause various misbehaviors of deep learning models while appearing harmless to human observers. The successful execution of adversarial attacks in real-world physical scenarios further demonstrates their practicality. As a result, adversarial attack and defense methods are attracting increasing attention from the security and machine learning communities and have become a hot research topic in recent years, not only in Russia but also in other countries. Sberbank, Yandex, T1 Group, Atlas Medical Center and many others are developing competitive solutions, including for the international market. Unfortunately, among the ten largest IT companies, Big Data and, in particular, protection against attacks is represented only by T1 Group, but the market growth potential is huge. This paper presents the theoretical foundations, algorithms and applications of adversarial attack methods. It then describes a number of research papers on defense methods, covering a wide range of work in this area. The article explores and summarizes adversarial attacks and defenses that represent the most up-to-date research in this field and meet the latest requirements for information security.
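To illustrate the kind of adversarial example discussed above, the sketch below shows the fast gradient sign method (FGSM) of Goodfellow et al. (arXiv:1412.6572), one of the attacks surveyed in the paper. It assumes a differentiable PyTorch image classifier; the `model` argument, the [0, 1] pixel range and the `epsilon` value are illustrative assumptions, not details taken from the article.

```python
# Minimal FGSM sketch (Goodfellow et al., arXiv:1412.6572).
# Assumptions: `model` maps a batch of images in [0, 1] to class logits,
# `y` holds the true labels; both are hypothetical for this example.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example with one signed-gradient step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # loss the attacker wants to raise
    loss.backward()                          # gradient of loss w.r.t. the input
    # Shift every pixel by +/- epsilon in the direction that increases the
    # loss; the perturbation is small enough to look harmless to a human.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in the valid range
```

Against an undefended classifier, such a perturbation is typically imperceptible yet changes the predicted class, which is precisely the behavior the defenses reviewed in the paper aim to prevent.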

Authors



Published:

2023-06-07

Issue:

Section:

SECTION III. INFORMATION PROCESSING ALGORITHMS

Keywords:

Adversarial machine learning, deep neural network, adversarial attack, information security, cybersecurity