Award Abstract # 2114220
Collaborative Research: SaTC: CORE: Small: Securing IoT and Edge Devices under Audio Adversarial Attacks

NSF Org: CNS, Division Of Computer and Network Systems
Recipient: RUTGERS, THE STATE UNIVERSITY
Initial Amendment Date: August 25, 2021
Latest Amendment Date: August 25, 2021
Award Number: 2114220
Award Instrument: Standard Grant
Program Manager: Karen Karavanic
  kkaravan@nsf.gov
  (703) 292-2594
  CNS, Division Of Computer and Network Systems
  CSE, Direct For Computer & Info Scie & Enginr
Start Date: October 1, 2021
End Date: September 30, 2024 (Estimated)
Total Intended Award Amount: $330,000.00
Total Awarded Amount to Date: $330,000.00
Funds Obligated to Date: FY 2021 = $330,000.00
History of Investigator:
  • Yingying Chen (Principal Investigator)
    yingche@scarletmail.rutgers.edu
  • Bo Yuan (Co-Principal Investigator)
Recipient Sponsored Research Office: Rutgers University New Brunswick
3 RUTGERS PLZ
NEW BRUNSWICK
NJ  US  08901-8559
(848)932-0150
Sponsor Congressional District: 12
Primary Place of Performance: Rutgers University New Brunswick
  NJ  US  08854-3925
Primary Place of Performance Congressional District: 06
Unique Entity Identifier (UEI): M1LVPE5GLSD9
Parent UEI:
NSF Program(s): Secure & Trustworthy Cyberspace
Primary Program Source: 01002122DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 025Z, 7923, 9102
Program Element Code(s): 806000
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Powered by the advancement of artificial intelligence (AI) techniques, next-generation voice-controllable IoT and edge systems have substantially facilitated people's daily lives. Such systems include voice assistants and voice-authenticated mobile banking, among many others. However, the underlying machine learning approaches used in these systems are inherently vulnerable to audio adversarial attacks, in which an adversary can mislead the machine learning models by injecting imperceptible perturbations into the original audio input. Given the widespread adoption of voice-controllable IoT and edge systems in privacy-critical and safety-critical applications, e.g., personal banking and autonomous driving, an in-depth understanding and investigation of the severity and consequences of audio-based adversarial attacks, as well as the corresponding defense solutions, is highly demanded. This project will perform a comprehensive study and analysis of the vulnerability and robustness of voice-controllable IoT and edge systems against audio-domain adversarial attacks from both temporal and spatial perspectives. The research outcomes of this project will form a solid foundation for building trustworthy voice-controllable IoT and edge systems. The developed defense techniques will improve the security of many intelligent audio systems, such as automatic speech recognition (ASR), keyword spotting, and speaker recognition. This project will involve underrepresented students, undergraduate and graduate students, and K-12 students through a variety of engaging programs.

The objective of this project is to demonstrate the feasibility of audio adversarial attacks in the physical world, determine their severity and consequences, and further develop defense strategies for practical environments to build attack-resilient voice-controllable Internet-of-Things (IoT) devices and edge systems. To study the possibility and severity of audio adversarial attacks in a practical, time-constrained setting, the project will develop low-cost, audio-agnostic, synchronization-free attack launching schemes, including an audio-specific fast adversarial perturbation generator and a universal adversarial perturbation generator. To investigate how adversarial perturbations survive various propagation factors in realistic environments, the project will analyze the audio distortions caused by over-the-air propagation using an advanced room impulse response simulator and physical environment measurements. The project will also develop several defense techniques, including a defensive denoiser, model enhancement, and microphone-array-based liveness detection. The presented techniques will help secure voice-controllable IoT and edge devices under audio adversarial attacks. The project will also contribute to a new computing paradigm in audio-based adversarial machine learning, in both theoretical foundations and safety-critical audio-oriented emerging applications.
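To make the ideas above concrete, the following is a minimal Python/PyTorch sketch (not the project's actual perturbation generators or room impulse response simulator) of two of the described steps: crafting a waveform-domain adversarial perturbation with projected gradient descent (PGD) against a stand-in keyword-spotting model, and approximating over-the-air propagation by convolving the perturbed audio with a synthetic room impulse response. The TinyKeywordSpotter network, the exponential-decay impulse response, and all hyperparameters are illustrative assumptions.

# Minimal sketch: waveform-domain PGD perturbation plus a rough over-the-air
# channel model. The model weights and the RIR are stand-ins for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

SAMPLE_RATE = 16000

class TinyKeywordSpotter(nn.Module):
    """Stand-in 1-D CNN classifier over raw audio (10 hypothetical keywords)."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=80, stride=16), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, samples)
        return self.net(x)

def pgd_audio_attack(model, waveform, target_label, eps=0.005, alpha=0.001, steps=40):
    """Targeted PGD in the waveform domain, keeping |delta| <= eps per sample."""
    delta = torch.zeros_like(waveform, requires_grad=True)
    target = torch.tensor([target_label])
    for _ in range(steps):
        logits = model(waveform + delta)
        loss = F.cross_entropy(logits, target)   # drive prediction toward target
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend on the targeted loss
            delta.clamp_(-eps, eps)              # imperceptibility budget
        delta.grad.zero_()
    return (waveform + delta).detach()

def simulate_over_the_air(audio, rir):
    """Rough channel model: convolve the played audio with a room impulse response."""
    audio = audio.view(1, 1, -1)
    kernel = rir.flip(0).view(1, 1, -1)          # conv1d is correlation; flip for convolution
    return F.conv1d(audio, kernel, padding=rir.numel() - 1)[..., :audio.shape[-1]]

if __name__ == "__main__":
    model = TinyKeywordSpotter().eval()
    clean = torch.randn(1, 1, SAMPLE_RATE)       # placeholder 1-second utterance
    adversarial = pgd_audio_attack(model, clean, target_label=3)

    # Synthetic exponentially decaying RIR as a stand-in for a measured response.
    t = torch.arange(0, 2000, dtype=torch.float32)
    rir = torch.exp(-t / 300.0) * torch.randn(2000) * 0.05
    rir[0] = 1.0                                  # direct path
    received = simulate_over_the_air(adversarial, rir)
    print("prediction after propagation:", model(received).argmax(dim=1).item())

In this setting an attack is only meaningful if the perturbation remains effective after channel distortion, which is why the sketch classifies the convolved rather than the raw adversarial waveform; the project's robust perturbation generators and its defenses (defensive denoising, model enhancement, and microphone-array-based liveness detection) target exactly this physical-world gap.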

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Phan, Huy and Xie, Yi and Liu, Jian and Chen, Yingying and Yuan, Bo. "Invisible and Efficient Backdoor Attacks for Compressed Deep Neural Networks." IEEE International Conference on Acoustics, Speech and Signal Processing, 2022. https://doi.org/10.1109/ICASSP43922.2022.9747582
Li, Zhuohang and Shi, Cong and Zhang, Tianfang and Xie, Yi and Liu, Jian and Yuan, Bo and Chen, Yingying. "Robust Detection of Machine-induced Audio Attacks in Intelligent Audio Systems with Microphone Array." ACM SIGSAC Conference on Computer and Communications Security, 2021. https://doi.org/10.1145/3460120.3484755
Zhao, T. and Tang, Z. and Zhang, T. and Phan, H. and Wang, Y. and Shi, C. and Yuan, B. and Chen, Y. "Stealthy Backdoor Attack on RF Signal Classification." International Conference on Computing Communication and Networking Technologies, 2023.
Xiao, J. and Zhang, C. and Gong, Y. and Yin, M. and Sui, Y. and Xiang, L. and Tao, D. and Yuan, B. "HALOC: Hardware-Aware Automatic Low-Rank Compression for Compact Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence, 2023.
Phan, H. and Shi, C. and Xie, Y. and Zhang, T. and Li, Z. and Zhao, T. and Liu, J. and Wang, Y. and Chen, Y. and Yuan, B. "RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN." European Conference on Computer Vision, 2022.
Xie, Y. and Jiang, R. and Guo, X. and Wang, Y. and Cheng, J. and Chen, Y. "Universal Targeted Adversarial Attacks Against mmWave-based Human Activity Recognition." IEEE International Conference on Computer Communication and the Internet, 2023.
