Multimodal Displays for Takeover Requests
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yang, Ji Hyun | - |
dc.contributor.author | Lee, Seul Chan | - |
dc.contributor.author | Nadri, Chihab | - |
dc.contributor.author | Kim, Jaewon | - |
dc.contributor.author | Shin, Jaekon | - |
dc.contributor.author | Jeon, Myounghoon | - |
dc.date.accessioned | 2024-04-23T04:02:41Z | - |
dc.date.available | 2024-04-23T04:02:41Z | - |
dc.date.issued | 2022-01 | - |
dc.identifier.issn | 1860-949X | - |
dc.identifier.issn | 1860-9503 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/118817 | - |
dc.description.abstract | In an automated vehicle, drivers or occupants may receive a takeover request (TOR) from the automated system, which would require them to respond in a safe and timely manner to take control of the vehicle. Because drivers are likely to engage in non-driving activities in an automated vehicle, it can be challenging for them to respond safely to a TOR with appropriate situation awareness. In particular, an improper or delayed reaction may have dangerous consequences. This chapter discusses multimodal display methods for TORs in automated vehicles. It also presents theoretical foundations of multimodal display research, designs of commercially available multimodal displays, and a discussion of selected research studies in the field of automotive multimodal displays. Challenges and considerations related to TORs from a multimodal display research perspective are described. We hope that this chapter will lead to the design of safer user interfaces for automated vehicles and the development of clear safety guidelines. © 2022, Springer Nature Switzerland AG. | - |
dc.format.extent | 28 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Springer Verlag | - |
dc.title | Multimodal Displays for Takeover Requests | - |
dc.type | Article | - |
dc.publisher.location | Germany | - |
dc.identifier.doi | 10.1007/978-3-030-77726-5_15 | - |
dc.identifier.scopusid | 2-s2.0-85122405708 | - |
dc.identifier.bibliographicCitation | Studies in Computational Intelligence, v.980, pp 397 - 424 | - |
dc.citation.title | Studies in Computational Intelligence | - |
dc.citation.volume | 980 | - |
dc.citation.startPage | 397 | - |
dc.citation.endPage | 424 | - |
dc.type.docType | Periodical journal (other) | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.subject.keywordAuthor | Auditory | - |
dc.subject.keywordAuthor | Automated vehicle | - |
dc.subject.keywordAuthor | Haptic | - |
dc.subject.keywordAuthor | Multimodality | - |
dc.subject.keywordAuthor | Visual | - |
dc.identifier.url | https://link.springer.com/chapter/10.1007/978-3-030-77726-5_15?utm_source=getftr&utm_medium=getftr&utm_campaign=getftr_pilot | - |