How does the presence of background noise affect the cognitive processes underlying spoken-word recognition? And how do these effects differ in native and non-native language listeners? We addressed these questions using artificial neural-network modelling. We trained a deep auto-encoder architecture on binary phonological and semantic representations of 121 English and Dutch translation equivalents. We also varied exposure to the two languages to generate ‘native English’ and ‘non-native English’ trained networks. These networks captured key effects in the performance (accuracy rates and the number of erroneous responses per word stimulus) of English and Dutch listeners in an offline English spoken-word identification experiment (Scharenborg et al., 2017), which considered clean and noisy listening conditions and three intensities of speech-shaped noise, applied word-initially or word-finally. Our simulations suggested that the effects of noise on native and non-native listening are comparable and can be accounted for within the same cognitive architecture for spoken-word recognition.
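The mapping described above — from binary phonological forms to binary semantic representations, with performance degraded by input noise — can be sketched very roughly as follows. This is a minimal illustrative stand-in, not the paper's actual deep auto-encoder: the layer sizes, the random binary "lexicon", the single hidden layer, the bit-flip noise model, and the training regime are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder sizes; the paper's actual representations are not specified here.
N_WORDS, PHON_DIM, HID_DIM, SEM_DIM = 121, 40, 30, 40

# Random binary vectors stand in for the real phonological and semantic codings.
phon = rng.integers(0, 2, size=(N_WORDS, PHON_DIM)).astype(float)
sem = rng.integers(0, 2, size=(N_WORDS, SEM_DIM)).astype(float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One-hidden-layer network: phonology -> hidden -> semantics.
W1 = rng.normal(0, 0.1, (PHON_DIM, HID_DIM))
W2 = rng.normal(0, 0.1, (HID_DIM, SEM_DIM))

def forward(x):
    h = sigmoid(x @ W1)
    return h, sigmoid(h @ W2)

def train(epochs=2000, lr=0.5):
    """Plain batch gradient descent on squared error."""
    global W1, W2
    for _ in range(epochs):
        h, out = forward(phon)
        err = out - sem                       # gradient of MSE w.r.t. output (up to a constant)
        d_out = err * out * (1 - out)         # backprop through output sigmoid
        d_hid = (d_out @ W2.T) * h * (1 - h)  # backprop through hidden sigmoid
        W2 -= lr * (h.T @ d_out) / N_WORDS
        W1 -= lr * (phon.T @ d_hid) / N_WORDS

def corrupt(x, p, rng):
    """Flip each input bit with probability p: a crude stand-in for added noise."""
    mask = rng.random(x.shape) < p
    return np.where(mask, 1 - x, x)

loss0 = np.mean((forward(phon)[1] - sem) ** 2)   # loss before training
train()
final_loss = np.mean((forward(phon)[1] - sem) ** 2)

# Word-identification accuracy on clean vs. bit-flipped ("noisy") input.
_, clean_out = forward(phon)
_, noisy_out = forward(corrupt(phon, 0.2, rng))
clean_acc = np.mean((clean_out.round() == sem).all(axis=1))
noisy_acc = np.mean((noisy_out.round() == sem).all(axis=1))
```

Comparing `clean_acc` and `noisy_acc` (and varying where and how strongly noise is applied, or the mix of training languages) is the general style of simulation the abstract describes, though the actual study used a deep auto-encoder and empirically grounded representations.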
Title of host publication: Not Known
Publication status: Published - 28 Jul 2018
Event: The 40th Annual Conference of the Cognitive Science Society, Madison, United States
Duration: 25 Jul 2018 → 28 Jul 2018