SECURITY IN ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING SYSTEMS: ANALYSIS OF ADVERSARIAL ATTACKS AND DEFENSE STRATEGIES

Authors

  • Igor Luigi Fracarolli, Taquaritinga College of Technology (Fatec), Taquaritinga, São Paulo, Brazil
  • Robson Eduardo Galloppi, Taquaritinga College of Technology (Fatec), Taquaritinga, São Paulo, Brazil

DOI:

https://doi.org/10.31510/infa.v22i1.2240

Keywords:

Artificial Intelligence, Machine Learning, Security, Adversarial attacks, Defense strategies

Abstract

Artificial Intelligence (AI) and Machine Learning (ML) play a fundamental role in digital transformation and are widely used in sectors such as healthcare, finance, and transportation. However, these technologies present significant vulnerabilities, especially to adversarial attacks, which manipulate input data to compromise the effectiveness and security of models. In addition, the use of biased data to train algorithms can perpetuate prejudice and inequality, raising ethical and social concerns. This work aims to analyze the impacts of adversarial attacks on AI and ML systems and to propose defense strategies that ensure the reliability and security of these technologies. At the same time, it explores the ethical challenges related to the use of biased data, suggesting guidelines that promote greater equity and responsibility. The research adopts a bibliographic methodology based on the analysis of scientific articles, technical reports, and case studies, an approach that makes it possible to identify vulnerabilities, exemplify adversarial attacks, and evaluate existing solutions to mitigate their effects. The work draws on authors who explore the use of artificial intelligence in several areas, such as natural language processing (VASWANI et al., 2017; DEVLIN et al., 2019), speech recognition (RAVANELLI et al., 2020), agriculture (SILVA et al., 2018), and autonomous vehicles (FENG et al., 2021), as well as vulnerability to adversarial attacks (HOSPEDALES et al., 2020); the contributions of Transformers and deep learning (OTTER et al., 2020) also stand out. The results indicate that measures such as the use of robust algorithms, the diversification of training data, and the implementation of continuous adversarial testing are effective in protecting AI and ML systems. The need for regulations that ensure transparency and ethics in the development of these technologies is also highlighted. It is concluded that, to ensure security and fairness, it is essential to combine technical strategies with ethical guidelines, promoting the responsible adoption of AI and ML.
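As an illustration of the kind of input manipulation and defense the abstract refers to, the sketch below shows the well-known Fast Gradient Sign Method (FGSM) perturbation together with a basic adversarial-training step in PyTorch. It is a minimal, hypothetical example, assuming a toy model, random placeholder data, and an arbitrary epsilon; it is not code from the article itself.

# Illustrative sketch only: FGSM adversarial perturbation and one
# adversarial-training step. The model, data, and epsilon are placeholder
# assumptions, not taken from the article.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon, loss_fn=nn.CrossEntropyLoss()):
    """Craft an FGSM adversarial example: x_adv = x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()  # gradients w.r.t. the input x_adv
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.detach()

# Toy setup: 20-dimensional inputs, 2 classes, random stand-in data.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(64, 20)          # stand-in for real input features
y = torch.randint(0, 2, (64,))   # stand-in labels

# One adversarial-training step: generate perturbed inputs on the fly and
# train on both clean and perturbed batches.
x_adv = fgsm_perturb(model, x, y, epsilon=0.1)
opt.zero_grad()
loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
loss.backward()
opt.step()
print(f"combined clean+adversarial loss: {loss.item():.4f}")

Adversarial training of this kind is one concrete realization of the "continuous adversarial testing" the abstract mentions: perturbed examples are generated during training and folded back into the optimization so the model becomes harder to fool.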


References

ALBUQUERQUE, Bárbara Beatriz Fernandes. A revolução na prática clínica: O impacto da Inteligência Artificial (IA) nas aplicações radiológicas e diagnóstico médico. 2023. Trabalho de Conclusão de Curso. Universidade Federal do Rio Grande do Norte.

COLODETTI, Pedro Vinicius Baptista. Matemática aplicada à inteligência artificial: a base fundamental do machine learning. 2022.

CUSTÓDIO, Elaine Cristina Pereira. Aplicações de inteligência artificial em entidades fechadas de previdência complementar. 2021.

DEVLIN, J. et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805, 2019.

FENG, S. et al. Intelligent driving intelligence test for autonomous vehicles with naturalistic and adversarial environment. Nat Commun, v.12, p.748, 2021. DOI: https://doi.org/10.1038/s41467-021-21007-8

GIL, A. C. Como elaborar projetos de pesquisa. São Paulo: Atlas, 2019.

HOSPEDALES, T. et al. Meta-Learning in Neural Networks: A Survey. arXiv:2004.05439, 11 Apr. 2020.

LECUN, Y. et al. Deep learning. Nature, v.521, p.436-444, 2015. DOI: https://doi.org/10.1038/nature14539

MARCONI, Marina de Andrade; LAKATOS, Eva Maria. Fundamentos de metodologia científica. 8. ed. São Paulo: Atlas, 2019.

OTTER, D. W. et al. A survey of the usages of deep learning for natural language processing. IEEE Transactions on Neural Networks and Learning Systems, v.32, n.2, p.604-624, 2020. DOI: https://doi.org/10.1109/TNNLS.2020.2979670

RANGEL NETO, Digenaldo de Brito. Detecção de ataques DDoS na camada de aplicação: um esquema com aprendizado de máquina e Big Data. 2025. Dissertação de Mestrado. DOI: https://doi.org/10.5753/wgrs.2025.8757

RAVANELLI, M. et al. Multi-task self-supervised learning for robust speech recognition. In: ICASSP 2020 - IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Barcelona, 2020. p.6989-6993. DOI: https://doi.org/10.1109/ICASSP40776.2020.9053569

SILVA, Giovanni Cimolin da et al. Detecção e contagem de plantas utilizando técnicas de inteligência artificial e machine learning. 2018.

STRELKOVA, O. Three types of artificial intelligence. 2017.

TORFI, A. et al. Natural language processing advancements by deep learning: A survey. arXiv:2003.01200, 2020.

VASWANI, A. et al. Attention is all you need. In: Advances in Neural Information Processing Systems (NeurIPS), 2017.

ZHANG, Y. et al. DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation. In: ACL 2020, System Demonstrations, p.1-10, 2020. DOI: https://doi.org/10.18653/v1/2020.acl-demos.30

Published

2025-10-24

Issue

Section

Tecnologia em Informática

How to Cite

FRACAROLLI, Igor Luigi; GALLOPPI, Robson Eduardo. SECURITY IN ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING SYSTEMS: ANALYSIS OF ADVERSARIAL ATTACKS AND DEFENSE STRATEGIES. Revista Interface Tecnológica, Taquaritinga, SP, v. 22, n. 1, p. 158–169, 2025. DOI: 10.31510/infa.v22i1.2240. Available at: https://revista.fatectq.edu.br/interfacetecnologica/article/view/2240. Accessed: 5 Dec. 2025.