Detailed Information

Cited 0 times in Web of Science; cited 24 times in Scopus

Void: A fast and light voice liveness detection system

Authors
Ahmed, M.E.; Kwak, I.-Y.; Huh, J.H.; Kim, I.; Oh, T.; Kim, H.
Issue Date
2020
Publisher
USENIX Association
Citation
Proceedings of the 29th USENIX Security Symposium, pp.2685 - 2702
Indexed
SCOPUS
Journal Title
Proceedings of the 29th USENIX Security Symposium
Start Page
2685
End Page
2702
URI
https://scholarworks.bwise.kr/skku/handle/2021.sw.skku/6861
Abstract
Due to the open nature of voice assistants' input channels, adversaries can easily record people's voice commands and replay them to spoof voice assistants. To mitigate such spoofing attacks, we present a highly efficient voice liveness detection solution called "Void." Void detects voice spoofing attacks using the differences in spectral power between live-human voices and voices replayed through speakers. In contrast to existing approaches that use multiple deep learning models and thousands of features, Void uses a single classification model with just 97 features. We used two datasets to evaluate its performance: (1) 255,173 voice samples generated with 120 participants, 15 playback devices and 12 recording devices, and (2) 18,030 publicly available voice samples generated with 42 participants, 26 playback devices and 25 recording devices. Void achieves equal error rates of 0.3% and 11.6% in detecting voice replay attacks on the two datasets, respectively. Compared to a state-of-the-art deep learning-based solution that achieves a 7.4% error rate on the public dataset, Void uses 153 times less memory and is about 8 times faster in detection. When combined with a Gaussian Mixture Model that uses Mel-frequency cepstral coefficients (MFCC) as classification features - MFCC is already extracted and used as the main feature in speech recognition services - Void achieves an 8.7% error rate on the public dataset. Moreover, Void is resilient against hidden voice command, inaudible voice command, voice synthesis, and equalization manipulation attacks, as well as replay attacks combined with live-human voices, achieving about 99.7%, 100%, 90.2%, 86.3%, and 98.2% detection rates for those attacks, respectively. © 2020 by The USENIX Association. All Rights Reserved.
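The abstract's core idea - summarizing per-frequency spectral power, which differs between live speech and speaker-replayed speech, into a small fixed-size feature vector - can be sketched roughly as follows. This is an illustrative approximation only, not the paper's actual 97-feature pipeline: the function name, frame parameters, and cumulative-power summarization are our own assumptions.

```python
import numpy as np

def spectral_power_features(signal, n_fft=512, hop=256, n_feats=48):
    """Toy sketch of Void-style features: accumulate per-frequency spectral
    power over short-time frames, then summarize the normalized cumulative
    power curve into a small fixed-size vector (hypothetical parameters)."""
    window = np.hanning(n_fft)
    # average power spectrum across overlapping frames
    power = np.zeros(n_fft // 2 + 1)
    frames = range(0, len(signal) - n_fft + 1, hop)
    for i in frames:
        spec = np.fft.rfft(signal[i:i + n_fft] * window)
        power += np.abs(spec) ** 2
    power /= max(len(list(frames)), 1)
    # normalized cumulative power over frequency: replayed audio tends to
    # distribute power differently across low/high frequency bands
    cum = np.cumsum(power)
    cum /= cum[-1]
    # sample the cumulative curve at n_feats evenly spaced frequency bins
    idx = np.linspace(0, len(cum) - 1, n_feats).astype(int)
    return cum[idx]

# usage: a synthetic 1-second 440 Hz tone at 16 kHz
sr = 16000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 440 * t)
feats = spectral_power_features(sig)
print(feats.shape)  # (48,)
```

In a full system, such a vector would be fed to a single lightweight classifier (e.g., a linear SVM) trained on live versus replayed recordings, which is what makes this family of approaches far cheaper than multi-model deep learning detectors.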
Appears in Collections
Computing and Informatics > Computer Science and Engineering > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

KIM, HYOUNG SHICK
Computing and Informatics (Computer Science and Engineering)
