Simultaneous segmentation and generalisation of non-adjacent dependencies from continuous speech

Rebecca L.A. Frost*, Padraic Monaghan

*Corresponding author for this work

Research output: Contribution to journal › Article (journal) › peer-review

60 Citations (Scopus)
1 Download (Pure)


Language learning requires mastering multiple tasks, including segmenting speech to identify words and learning the syntactic roles those words play within sentences. A key question in language acquisition research is whether these tasks proceed in sequence or simultaneously, and consequently whether they are driven by distinct or shared computations. We explored a classic artificial language learning paradigm in which the language's structure is defined in terms of non-adjacent dependencies. We show that participants can use the same statistical information, at the same time, both to segment continuous speech into words and to generalise over its structure, even when the generalisations concerned novel speech that participants had not previously experienced. We suggest that, in the absence of evidence to the contrary, the most economical explanation for these effects is that speech segmentation and grammatical generalisation depend on similar statistical processing mechanisms.
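The kind of artificial language described above can be sketched computationally. The following is a minimal illustration, not the study's actual materials: the syllables, frame count, and stream length are invented for the example. In an aXb language, an initial element a_i always predicts a final element b_i two positions later, whatever middle element X intervenes, so the informative statistic is the non-adjacent transitional probability rather than the adjacent one.

```python
import random
from collections import Counter

# Hypothetical stimuli (not the study's syllables): three a_i...b_i frames
# whose middle slot is filled by one of four interchangeable elements.
frames = [("pel", "rud"), ("vot", "jic"), ("dak", "tood")]
middles = ["wadim", "kicey", "puser", "fengle"]

random.seed(0)
stream = []  # continuous speech: aXb words concatenated with no pauses
for _ in range(200):
    a, b = random.choice(frames)
    stream += [a, random.choice(middles), b]

# Adjacent transitional probability TP(next | current). In this design it is
# relatively flat: a_i is followed by any of the middles, and any b_i can be
# followed by any a_j at a word boundary.
pairs = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])

def tp(s1, s2):
    return pairs[(s1, s2)] / firsts[s1]

# Non-adjacent TP(b | a, skipping one syllable). This captures the dependency
# itself: it is 1.0 for every a_i...b_i frame, which both marks the word
# frames and supports generalisation to novel middle elements.
skips = Counter(zip(stream, stream[2:]))

def nonadjacent_tp(a, b):
    return skips[(a, b)] / firsts[a]

print(tp("pel", "wadim"))            # adjacent, within a word: well below 1
print(tp("pel", "rud"))              # adjacent a->b: 0 (a middle always intervenes)
print(nonadjacent_tp("pel", "rud"))  # non-adjacent dependency: 1.0
```

The contrast between the flat adjacent statistics and the perfect non-adjacent ones is the point of the paradigm: a learner tracking only adjacent probabilities has weak cues to structure, whereas the non-adjacent dependency is fully reliable across all middle elements, including novel ones.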

Original language: English
Pages (from-to): 70-74
Number of pages: 5
Early online date: 27 Nov 2015
Publication status: Published - 1 Feb 2016
Externally published: Yes


Keywords:
  • Artificial grammar learning
  • Grammatical processing
  • Language acquisition
  • Speech segmentation
  • Statistical learning


