Abstract
Oral communication often takes place in noisy environments, which challenge spoken-word recognition. Previous research has suggested that the presence of background noise increases the number of candidate words competing with the target word for recognition, and that this enlarged competitor set affects the time course and accuracy of spoken-word recognition. In this study, we used computational modeling to further investigate the temporal dynamics of competition processes in the presence of background noise, and how these vary in listeners with different language proficiency (i.e., native and non-native). We developed ListenIN (Listen-In-Noise), a neural-network model based on an autoencoder architecture, which learns to map phonological forms onto meanings in two languages and simulates native and non-native spoken-word comprehension. We also examined the model’s activation states during online spoken-word recognition. These analyses demonstrated that the presence of background noise increases the number of competitor words engaged in phonological competition, and that this happens in similar ways both intra- and interlinguistically and in both native and non-native listening. Taken together, our results support accounts positing a “many-additional-competitors scenario” for the effects of noise on spoken-word recognition.
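To make the general modeling approach concrete, below is a minimal sketch (in PyTorch) of the kind of setup the abstract describes: an encoder-decoder network that maps a phonological input, presented phoneme by phoneme and degraded by noise, onto a semantic representation, with competitor activation read out at each time step. All names, layer sizes, the Gaussian noise model, and the cosine-similarity activation measure here are illustrative assumptions; this is not the authors’ ListenIN implementation, whose architecture, training regime, and noise manipulation are specified in the paper itself.

```python
# Minimal sketch (assumptions labeled below), not the authors' ListenIN code:
# a network that maps noisy phonological input onto meaning, presented
# incrementally so competitor activation can be tracked over time.
import torch
import torch.nn as nn
import torch.nn.functional as F

PHON_DIM = 25    # assumed size of one phoneme's feature vector
SEM_DIM = 100    # assumed size of a word's semantic vector
HID_DIM = 128    # assumed hidden-layer size

class ListenINSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Recurrent encoder accumulates phonological input phoneme by phoneme.
        self.encoder = nn.GRU(PHON_DIM, HID_DIM, batch_first=True)
        # Decoder maps the current hidden state onto a semantic vector.
        self.decoder = nn.Linear(HID_DIM, SEM_DIM)

    def forward(self, phonemes, noise_sd=0.0):
        # Background noise is simulated here as Gaussian noise on the
        # input features (an assumption for illustration).
        noisy = phonemes + noise_sd * torch.randn_like(phonemes)
        states, _ = self.encoder(noisy)   # hidden state after each phoneme
        return self.decoder(states)       # semantic output at each time step

def competitor_activations(model, phonemes, lexicon_semantics, noise_sd):
    # Competition measure (assumed): cosine similarity between the model's
    # current semantic output and every word's meaning, after each phoneme.
    with torch.no_grad():
        outputs = model(phonemes.unsqueeze(0), noise_sd).squeeze(0)  # (T, SEM_DIM)
    return F.cosine_similarity(
        outputs.unsqueeze(1),             # (T, 1, SEM_DIM)
        lexicon_semantics.unsqueeze(0),   # (1, n_words, SEM_DIM)
        dim=-1,
    )                                     # (T, n_words)

# Usage with random placeholder data: a 6-phoneme input against a
# 50-word lexicon yields a (6, 50) grid of per-phoneme activations.
model = ListenINSketch()
word = torch.randn(6, PHON_DIM)
lexicon = torch.randn(50, SEM_DIM)
act = competitor_activations(model, word, lexicon, noise_sd=0.5)
```

Under this sketch, the paper’s key comparison corresponds to running the same input at different `noise_sd` values and counting how many lexicon entries reach high activation: more noise should leave more competitors strongly activated for longer.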
| Field | Value |
| --- | --- |
| Original language | English |
| Journal | Cognitive Science |
| Volume | 46 |
| Issue number | 2 |
| Early online date | 21 Feb 2022 |
| DOIs | |
| Publication status | E-pub ahead of print - 21 Feb 2022 |
Keywords
- Artificial Intelligence
- Cognitive Neuroscience
- Experimental and Cognitive Psychology