Adversarial Machine Learning in Wireless Communications - Basics and two examples
Publish date: 2023-01-12
Report number: FOI-R--5427--SE
Pages: 30
Written in: English
Keywords:
- AI
- machine learning
- AML
- wireless communications
Abstract
Machine learning (ML) techniques have been introduced in wireless communications standards and systems, and the use of ML in communications is expected to increase. However, many challenges remain in making ML robust and reliable. ML algorithms are known to be vulnerable to adversarial attacks. An adversarial attack is a crafted set of input data to an ML algorithm, designed to cause the algorithm to produce erroneous output. An adversarial attack can be directed towards ML algorithms used by a communication system, but can also be exploited by a communication system, for example to avoid signal classification. The term adversarial ML (AML) refers to adversarial attacks and the means to mitigate such attacks. The goal of this report is to present a brief overview of the different aspects of AML in wireless communications, as well as to build further knowledge and provide more detail about two selected examples of AML in communications.

There is a large number of standard adversarial attacks and numerous countermeasures that are effective against some of these attacks. However, almost all countermeasures are effective only against certain attacks and fail to defend against some strong and unseen attacks. Designing defences that are robust against all adversarial attacks remains an open problem.

An example of exploiting adversarial attacks to avoid detection or classification is studied in this work. The adversarial attack, aimed at a modulation classifier, is performed at the transmitter by adding a small perturbation to the modulated communication signal before transmission. Intentionally adding a perturbation to one's own signal in this way is one example of low probability of intercept/low probability of detection (LPI/LPD) communication. The attack not only degrades the classification accuracy, but also increases the bit error rate (BER) at the intended communication receiver. By training a de-adversarial network on the attack, the bit error rate is decreased. This gives the transmitter the opportunity to add an attack to the signal to fool the modulation classifier while still being able to successfully transmit information to its intended receiver. Future extensions of this study include modelling the channel between the transmitter and the receivers, evaluating additional neural network architectures, and studying different types of adversarial attacks. Another important aspect is how the frequency spectrum is affected, and which techniques can achieve the desired spectral properties.

A study of an end-to-end learning system shows that robustness towards adversarial attacks may not always be achieved as desired. In particular, it is shown in this work that conventional BPSK modulation with a Hamming (7,4) code and soft-decision decoding outperforms the studied generative adversarial network (GAN) model. An earlier conclusion in the literature, that the GAN model is better than conventional schemes, therefore does not hold in general, and the development of more robust and better-performing ML models needs to continue.
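To make the transmitter-side attack in the first example concrete, the minimal Python sketch below crafts a perturbation with the fast gradient sign method (FGSM), one standard way of generating such attacks. The report does not specify the implementation; the PyTorch classifier clf, the [2, N] I/Q tensor layout and the value of eps are illustrative assumptions.

    import torch

    def fgsm_perturb(clf, iq, true_label, eps=0.01):
        """Craft an FGSM-style adversarial perturbation for a modulated signal.

        clf:        modulation classifier mapping I/Q samples to class logits
        iq:         tensor of shape [2, N] holding the I and Q components
        true_label: index of the correct modulation class (known to the
                    transmitter, since it chose the modulation itself)
        eps:        perturbation magnitude, kept small relative to signal power
        """
        iq = iq.clone().detach().requires_grad_(True)
        logits = clf(iq.unsqueeze(0))              # [1, num_classes]
        loss = torch.nn.functional.cross_entropy(
            logits, torch.tensor([true_label]))
        loss.backward()
        perturbation = eps * iq.grad.sign()        # step that increases the loss
        return (iq + perturbation).detach()        # transmit this instead of iq

Because the perturbation is added before transmission, it also reaches the intended receiver, which is why the BER increases unless it is compensated for.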
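The de-adversarial network mentioned above is modelled in the following sketch as a small denoising network trained, on examples of the attack, to map perturbed I/Q samples back to the clean modulated signal; the convolutional architecture and all names are assumptions for illustration, not taken from the report.

    import torch.nn as nn

    class DeAdversarialNet(nn.Module):
        """Estimates the clean signal from a perturbed one, so that the
        demodulator sees approximately unperturbed samples and the BER drops."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(2, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(32, 2, kernel_size=5, padding=2))

        def forward(self, x):        # x: [batch, 2, N] perturbed I/Q samples
            return self.net(x)       # estimate of the clean I/Q signal

    # Training pairs (perturbed, clean) are obtained by running the attack on
    # known transmissions; an MSE loss such as
    #   nn.functional.mse_loss(model(perturbed), clean)
    # drives the network to strip the perturbation.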
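For the second example, the conventional baseline that outperforms the GAN model, BPSK with a Hamming (7,4) code and soft-decision decoding, can be sketched as follows. The generator matrix is one standard systematic choice and the noise level is illustrative; soft-decision decoding is done here as maximum-likelihood correlation against all 16 codewords.

    import numpy as np

    G = np.array([[1, 0, 0, 0, 1, 1, 0],      # Hamming (7,4) generator matrix
                  [0, 1, 0, 0, 0, 1, 1],      # in systematic form [I | P]
                  [0, 0, 1, 0, 1, 1, 1],
                  [0, 0, 0, 1, 1, 0, 1]])

    msgs = np.array([[(i >> k) & 1 for k in range(4)] for i in range(16)])
    codebook = msgs @ G % 2                    # all 16 codewords
    bpsk_book = 1 - 2.0 * codebook             # BPSK mapping: 0 -> +1, 1 -> -1

    def transmit(bits, noise_std=0.5, rng=np.random.default_rng(0)):
        """Encode 4 bits, BPSK-modulate and pass through an AWGN channel."""
        return 1 - 2.0 * (bits @ G % 2) + rng.normal(0, noise_std, size=7)

    def soft_decode(r):
        """Soft-decision ML decoding: all codewords have equal energy, so the
        nearest codeword is the one with maximum correlation with r."""
        return msgs[np.argmax(bpsk_book @ r)]

    msg = np.array([1, 0, 1, 1])
    print(soft_decode(transmit(msg)))          # typically recovers [1 0 1 1]

Using the received real values directly, rather than hard bit decisions, is what makes this a strong conventional reference for end-to-end learned systems.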