Detailed Information


Some perceptual aspects of native and non-native speech

Full metadata record
dc.contributor.author: 조태홍 (Cho, Tae hong)
dc.date.accessioned: 2021-08-03T21:33:18Z
dc.date.available: 2021-08-03T21:33:18Z
dc.date.created: 2021-06-30
dc.date.issued: 2009-08-13
dc.identifier.uri: https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/61210
dc.description.abstract: The first half of the talk discusses how listeners of two unrelated languages, Korean and Dutch, process phonologically viable and nonviable consonants spoken in Dutch and American English. To Korean listeners, "released" final (coda) stops are nonviable because word-final stops in Korean are never released in words spoken in isolation, whereas to Dutch listeners, "unreleased" word-final stops are nonviable because word-final stops in Dutch are generally released in words spoken in isolation. Two phoneme monitoring experiments showed a phonological effect on both Dutch and English stimuli: Korean listeners detected the unreleased stops more rapidly, whereas Dutch listeners detected the released stops more rapidly and/or more accurately. The Korean listeners, however, detected released stops more accurately than unreleased stops, but only in the non-native language they were familiar with, i.e., English. The results suggest that, in non-native speech perception, phonological legitimacy in the native language can be more important than the richness of phonetic information, though familiarity with phonetic detail in the non-native language can also improve listening performance. The second half of the talk deals with cross-linguistic differences in the use of prosodic (intonational and/or durational) cues in speech segmentation. It discusses an Artificial Language Learning study on the influence of L1 prosody on the segmentation of a novel artificial language, and on the learning and generalization of non-native prosody. The study showed an L1 bias in word segmentation: Dutch and Korean listeners were better at segmenting words from the artificial language when it was presented with cues that conformed to their L1 prosody than when it was presented with non-native prosodic cues. The results also suggested that listeners could use non-native prosodic cues after longer exposure to the artificial language, which implies implicit learning of prosodic cues.
dc.publisher: Department of Linguistics, University of Oregon
dc.title: Some perceptual aspects of native and non-native speech
dc.type: Conference
dc.contributor.affiliatedAuthor: 조태홍 (Cho, Tae hong)
dc.identifier.bibliographicCitation: Hanyang-Oregon International Symposium on Linguistics 2009
dc.relation.isPartOf: Hanyang-Oregon International Symposium on Linguistics 2009
dc.citation.title: Hanyang-Oregon International Symposium on Linguistics 2009
dc.citation.conferencePlace: University of Oregon, USA
dc.type.rims: CONF
dc.description.journalClass: 1
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Humanities (Seoul) > Department of English Language & Literature (Seoul) > 2. Conference Papers


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Cho, Tae hong
COLLEGE OF HUMANITIES (DEPARTMENT OF ENGLISH LANGUAGE & LITERATURE)
