Detailed Information


Presentation interface based on gesture and voice recognition

Full metadata record
dc.contributor.author: Kim, J.
dc.contributor.author: Kim, S.
dc.contributor.author: Hong, K.
dc.contributor.author: Jean, D.
dc.contributor.author: Jung, K.
dc.date.available: 2019-04-10T10:17:04Z
dc.date.created: 2018-04-17
dc.date.issued: 2014-05
dc.identifier.isbn: 9783642548994
dc.identifier.issn: 1876-1100
dc.identifier.uri: http://scholarworks.bwise.kr/ssu/handle/2018.sw.ssu/32697
dc.description.abstract: In this paper, we introduce a Kinect-based interface that recognizes gestures and voice, developed to control presentations such as speeches or lectures. The interface receives the coordinates of the body from the Kinect camera and recognizes hand gestures and hand positions. These data are used to create a hook between the user's hand and a presentation application such as Microsoft PowerPoint. The interface recognizes grip and push gestures from the presenter, and each recognized gesture generates a signal to the presentation application, such as a shortcut to change slides or to invoke additional tools. It is also possible to start and end the presentation by voice using our voice recognition tool. We also present tools that go beyond changing slides, giving the presenter more options such as a memo tool for directly highlighting parts of a slide and an eraser. The paper describes the methodology and presents the results of our test sessions. The interface effectively improves the presenter's presentation capability, and we believe such an interface can be commercialized for presentations and other uses. © Springer-Verlag Berlin Heidelberg 2014.
dc.language: English
dc.language.iso: en
dc.publisher: Springer Verlag
dc.relation.isPartOf: Lecture Notes in Electrical Engineering
dc.title: Presentation interface based on gesture and voice recognition
dc.type: Conference
dc.identifier.doi: 10.1007/978-3-642-54900-7_11
dc.type.rims: CONF
dc.identifier.bibliographicCitation: FTRA 8th International Conference on Multimedia and Ubiquitous Engineering, MUE 2014, v.308, pp.75 - 81
dc.description.journalClass: 2
dc.identifier.scopusid: 2-s2.0-84924426388
dc.citation.conferenceDate: 2014-05-28
dc.citation.conferencePlace: GE
dc.citation.endPage: 81
dc.citation.startPage: 75
dc.citation.title: FTRA 8th International Conference on Multimedia and Ubiquitous Engineering, MUE 2014
dc.citation.volume: 308
dc.contributor.affiliatedAuthor: Hong, K.
dc.contributor.affiliatedAuthor: Jean, D.
dc.contributor.affiliatedAuthor: Jung, K.
dc.type.docType: Conference Paper
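
The abstract above describes a pipeline in which Kinect data are classified into grip/push gestures or voice commands, and each recognized event is forwarded to the presentation application as a keyboard shortcut (changing slides, starting or ending the show, or invoking memo and eraser tools). The sketch below illustrates such an event-to-shortcut dispatch; it is not code from the paper, and the event names, the gesture-to-shortcut mapping, and the send_shortcut() helper are assumptions made for illustration only.

```python
# Minimal sketch (assumed, not the authors' implementation): forward recognized
# gesture/voice events to a presentation application as keyboard shortcuts.

from dataclasses import dataclass
from typing import Callable, Dict, Tuple


@dataclass(frozen=True)
class RecognitionEvent:
    kind: str  # "gesture" or "voice"
    name: str  # e.g. "push", "grip", "start presentation"


def send_shortcut(key: str) -> None:
    # Placeholder: a real system would inject this keystroke into the
    # presentation application (e.g. Microsoft PowerPoint).
    print(f"send shortcut: {key}")


# Hypothetical mapping; the paper only states that grip/push gestures and voice
# commands trigger slide changes, start/end of the show, and extra tools.
ACTIONS: Dict[Tuple[str, str], Callable[[], None]] = {
    ("gesture", "push"): lambda: send_shortcut("PAGE_DOWN"),       # next slide
    ("gesture", "grip"): lambda: send_shortcut("PAGE_UP"),         # previous slide
    ("voice", "start presentation"): lambda: send_shortcut("F5"),  # start show
    ("voice", "end presentation"): lambda: send_shortcut("ESC"),   # end show
}


def dispatch(event: RecognitionEvent) -> None:
    """Forward one recognized gesture or voice command to the presentation app."""
    action = ACTIONS.get((event.kind, event.name))
    if action is not None:
        action()


if __name__ == "__main__":
    dispatch(RecognitionEvent("voice", "start presentation"))
    dispatch(RecognitionEvent("gesture", "push"))
```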
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Information Technology > Global School of Media > 2. Conference Papers



Related Researcher

Jung, Kee chul
College of Information Technology (Global School of Media)
