Detailed Information


Void: A fast and light voice liveness detection system

Authors
Ahmed, Muhammad Ejaz; Kwak, Il-Youp; Huh, Jun Ho; Kim, Iljoo; Oh, Taekkyung; Kim, Hyoungshick
Issue Date
2020
Publisher
USENIX ASSOC
Citation
PROCEEDINGS OF THE 29TH USENIX SECURITY SYMPOSIUM, pp 2685 - 2702
Pages
18
Journal Title
PROCEEDINGS OF THE 29TH USENIX SECURITY SYMPOSIUM
Start Page
2685
End Page
2702
URI
https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/63476
Abstract
Due to the open nature of voice assistants' input channels, adversaries can easily record people's voice commands and replay them to spoof voice assistants. To mitigate such spoofing attacks, we present a highly efficient voice liveness detection solution called "Void." Void detects voice spoofing attacks using the differences in spectral power between live-human voices and voices replayed through speakers. In contrast to existing approaches that use multiple deep learning models and thousands of features, Void uses a single classification model with just 97 features. We used two datasets to evaluate its performance: (1) 255,173 voice samples generated with 120 participants, 15 playback devices, and 12 recording devices, and (2) 18,030 publicly available voice samples generated with 42 participants, 26 playback devices, and 25 recording devices. Void achieves equal error rates of 0.3% and 11.6% in detecting voice replay attacks on the two datasets, respectively. Compared to a state-of-the-art deep learning-based solution that achieves a 7.4% error rate on the public dataset, Void uses 153 times less memory and is about 8 times faster in detection. When combined with a Gaussian Mixture Model that uses Mel-frequency cepstral coefficients (MFCC) as classification features - MFCC is already extracted and used as the main feature in speech recognition services - Void achieves an 8.7% error rate on the public dataset. Moreover, Void is resilient against hidden voice command, inaudible voice command, voice synthesis, and equalization manipulation attacks, as well as replay attacks combined with live-human voices, achieving about 99.7%, 100%, 90.2%, 86.3%, and 98.2% detection rates for those attacks, respectively.
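To illustrate the core idea behind Void - that audio replayed through a loudspeaker distributes spectral power differently from a live-human voice - the sketch below computes a single low-frequency power-ratio feature from a waveform. This is a minimal, hypothetical illustration using only NumPy; the function name, the 1 kHz cutoff, and the single-feature design are assumptions for exposition and are not Void's actual 97-feature pipeline or classifier.

```python
import numpy as np

def low_freq_power_ratio(signal, sr, cutoff_hz=1000.0):
    """Fraction of total spectral power that lies below cutoff_hz.

    A toy stand-in for one of Void's spectral-power features:
    replayed audio often concentrates power in low frequencies
    differently than live speech does.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2       # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)  # bin frequencies in Hz
    total = spectrum.sum()
    if total == 0:
        return 0.0
    return spectrum[freqs < cutoff_hz].sum() / total

# Synthetic check: a pure 200 Hz tone has nearly all power below 1 kHz.
sr = 16000
t = np.arange(sr) / sr
ratio = low_freq_power_ratio(np.sin(2 * np.pi * 200 * t), sr)
print(round(ratio, 3))
```

In the paper, features of this kind (summarizing how power is distributed across frequency bands) feed a single lightweight classifier, which is what keeps Void's memory footprint and detection latency small.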
Appears in
Collections
College of Business & Economics > Department of Applied Statistics > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kwak, Il-Youp
Graduate School (Department of Statistical Data Science)
