Publications





Articles

The Processing of Spanish Article–Noun Gender Agreement by Monolingual and Bilingual Toddlers

ABSTRACT: We assessed monolingual Spanish and bilingual Spanish-Basque toddlers’ sensitivity to gender agreement in correct vs. incorrect Spanish noun phrases (definite article + noun), using a spontaneous preference listening paradigm. Monolingual Spanish-learning toddlers exhibited a tendency to listen longer to the grammatically correct phrases (e.g., la casa; “the house”), as opposed to the incorrect ones (e.g., *el casa). This listening preference toward correct phrases is in line with earlier results obtained from French monolingual 18-month-olds (van Heugten & Christophe, 2015). Bilingual toddlers in the current study, however, tended to listen longer to the incorrect phrases. Basque was not a source of interference in the bilingual toddlers’ input, as Basque does not instantiate grammatical gender agreement. Overall, our results suggest that both monolingual and bilingual toddlers can distinguish between the correct and incorrect phrases by 18 months of age; however, monolinguals and bilinguals allocate their attention differently when processing grammatically incorrect forms.

Phonemic contrasts under construction? Evidence from Basque

ABSTRACT: Attunement theories of speech perception development suggest that native language exposure is one of the main factors shaping infants' phonemic discrimination capacity within the second half of their first year. Here, we focus on the role of acoustic-perceptual salience and language-specific experience by assessing the discrimination of acoustically subtle Basque sibilant contrasts. We used the infant-controlled version of the habituation procedure to assess discrimination in 6-7- and 11-12-month-old infants who varied in their amount of exposure to Basque and Spanish. We observed no significant variation in the infants' discrimination behavior as a function of their linguistic experience. Infants in both age groups exhibited poor discrimination, consistent with Basque adults finding these contrasts more difficult than others. Our findings agree with previous research showing that perceptual discrimination of subtle speech sound contrasts may follow a different developmental trajectory, one in which increased native language exposure appears to be a prerequisite.

Speaker Matters: Natural inter-speaker variation affects 4-month-olds’ perception of audio-visual speech

ABSTRACT: In the language development literature, studies often make inferences about infants’ speech perception abilities based on their responses to a single speaker. However, there can be significant natural variability across speakers in how speech is produced (i.e., inter-speaker differences). The current study examined whether inter-speaker differences can affect infants’ ability to detect a mismatch between the auditory and visual components of vowels. Using an eye-tracker, 4.5-month-old infants were tested on auditory-visual (AV) matching for two vowels (/i/ and /u/). Critically, infants were tested with two speakers who naturally differed in how distinctively they articulated the two vowels within and across the categories. Only infants who watched and listened to the speaker whose visual articulations of the two vowels were most distinct from one another were sensitive to the AV mismatch. This speaker also produced a visually more distinct /i/ than the other speaker. This finding suggests that infants are sensitive to the distinctiveness of AV information across speakers, and that characteristics of the speaker should be taken into account when making inferences about infants’ perceptual abilities.

Effect of prewhitening in resting-state functional near-infrared spectroscopy data

ABSTRACT: Near-infrared spectroscopy (NIRS) offers the potential to characterize resting-state functional connectivity (RSFC) in populations that are not easily assessed otherwise, such as young infants. Alongside these advantages, however, the resting-state NIRS (RS-NIRS) signal requires specific preprocessing and analysis. In particular, the RS-NIRS signal shows a colored frequency spectrum, observable as temporal autocorrelation, which introduces spurious correlations. To address this issue, prewhitening of the RS-NIRS signal has recently been proposed as a necessary step to remove the signal's temporal autocorrelation and thereby reduce false-discovery rates. However, the impact of this step on the analysis of experimental RS-NIRS data had not been thoroughly assessed prior to the present study. Here, the results of a standard preprocessing pipeline applied to an RS-NIRS dataset acquired in infants are compared with the results obtained after incorporating two different prewhitening algorithms. Our results with the standard preprocessing replicated previous studies. Prewhitening altered RSFC patterns and disrupted the antiphase relationship between oxyhemoglobin and deoxyhemoglobin. We conclude that a better understanding of the effect of prewhitening on RS-NIRS data is still needed before its incorporation into the standard preprocessing pipeline can be recommended.
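
The core idea behind prewhitening can be illustrated with a short sketch: fit an autoregressive model to each channel and compute correlations on the residuals rather than on the raw, autocorrelated signals. The example below is a generic AR-based illustration in Python (numpy only), not a reproduction of the two prewhitening algorithms compared in the paper; the model order and the simulated signals are arbitrary choices.

```python
import numpy as np

def ar_prewhiten(x, order=5):
    """Fit an AR(order) model by least squares and return the whitened
    residuals (the signal minus its autoregressive prediction).
    Generic illustration only; not the paper's specific algorithms."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # Lagged design matrix: predict x[t] from x[t-1] ... x[t-order]
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coeffs

# Channel-to-channel correlations are then computed on the whitened
# residuals instead of the raw (autocorrelated) time courses.
rng = np.random.default_rng(0)
raw_a = np.cumsum(rng.standard_normal(2000))   # strongly autocorrelated channel
raw_b = np.cumsum(rng.standard_normal(2000))   # independent, also autocorrelated
r_raw = np.corrcoef(raw_a, raw_b)[0, 1]
r_white = np.corrcoef(ar_prewhiten(raw_a), ar_prewhiten(raw_b))[0, 1]
print(f"raw r = {r_raw:.2f}, prewhitened r = {r_white:.2f}")
```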

The role of voice familiarity in bilingual speech processing

ABSTRACT: Interlocutor context affects proficient bilinguals’ spoken language processing. For instance, bilinguals in a visual-auditory lexical decision task are able to predict the context-appropriate language based on visual cues to interlocutor context (e.g., Molnar et al., 2015). Because it has also been demonstrated that bilinguals, as compared to monolinguals, process talker-voice information more efficiently (Levi, 2017), in the current study we asked whether bilinguals can predict the context-appropriate language based on voice information alone. First, in a same-different task, English monolingual and bilingual participants were familiarized with the voices of 4 female speakers who spoke either English (a language shared by the monolingual and bilingual participants) or Farsi (unknown to both groups). Then, in a lexical decision task, the participants heard the same 4 voices again, but this time all of the voices spoke English. We predicted that if participants established a voice-language link in the first part of the task, their response times should decrease when an “English voice” (as opposed to a “Farsi voice”) utters an English word in the lexical decision task. Accordingly, our preliminary results suggest that the bilinguals’ performance is facilitated by the established voice-language link.

The Role of Slow Speech Amplitude Envelope for Speech Processing and Reading Development

ABSTRACT: This study used a behavioral paradigm to examine the putative link between entrainment to the slow rhythmic structure of speech, speech intelligibility, and reading. Two groups of 20 children (Grades 2 and 5) were asked to recall a pseudoword embedded in sentences presented in either quiet or noisy listening conditions. Half of the sentences were primed with their syllabic and prosodic amplitude envelope to determine whether a boost in auditory entrainment to these speech features enhanced pseudoword intelligibility. Priming improved pseudoword recall only for the older children, in both the quiet and the noisy listening environment, and this benefit from the prime correlated with reading skills and pseudoword recall. Our results support a role for syllabic and prosodic tracking of speech in reading development.
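
The slow amplitude envelope referred to here is commonly extracted by taking the magnitude of the analytic signal and low-pass filtering it below the syllabic rate. The sketch below shows one standard way to do this with scipy; the cutoff frequency and the synthetic test signal are illustrative assumptions, not parameters taken from the study.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def slow_envelope(waveform, fs, cutoff_hz=8.0):
    """Slow amplitude envelope: magnitude of the analytic signal,
    low-pass filtered below the syllabic rate.
    cutoff_hz is an illustrative value, not one used in the study."""
    env = np.abs(hilbert(waveform))                       # instantaneous amplitude
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")   # 4th-order low-pass
    return filtfilt(b, a, env)                            # zero-phase smoothing

# Example with a synthetic 1-second signal sampled at 16 kHz:
# a voice-like carrier modulated at a ~4 Hz "syllabic" rate.
fs = 16000
t = np.arange(fs) / fs
carrier = np.sin(2 * np.pi * 150 * t)
modulation = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))
envelope = slow_envelope(carrier * modulation, fs)
```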

Directional asymmetries reveal a universal bias in adult vowel perception

ABSTRACT: Research on cross-language vowel perception in both infants and adults has shown that for many vowel contrasts, discrimination is easier when the same pair of vowels is presented in one direction than in the reverse direction. According to one account, these directional asymmetries reflect a universal bias favoring “focal” vowels (i.e., vowels whose adjacent formants are close in frequency, which concentrates acoustic energy into a narrower spectral region). An alternative, but not mutually exclusive, account is that such effects reflect an experience-dependent bias favoring prototypical instances of native-language vowel categories. To disentangle the effects of focalization and prototypicality, the authors first identified a region of phonetic space in which vowels were consistently categorized as /u/ by both Canadian-English and Canadian-French listeners but nevertheless varied in stimulus goodness (i.e., the best Canadian-French /u/ exemplars were more focal than the best Canadian-English /u/ exemplars). In subsequent AX discrimination tests, both Canadian-English and Canadian-French listeners were better at discriminating changes from less to more focal /u/'s than the reverse, regardless of variation in prototypicality. These findings demonstrate a universal bias favoring vowels with greater formant convergence that operates independently of biases related to language-specific prototype categorization.
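
As a rough illustration of focalization as defined above (adjacent formants close in frequency), a vowel can be scored by how close its nearest pair of adjacent formants lies. The toy proxy below only illustrates that definition; it is not the focalization measure used in the paper, and the formant values are hypothetical.

```python
def focalization_proxy(formants_hz):
    """Toy focalization score: the smaller the gap between the closest
    pair of adjacent formants, the larger the score. Illustration only;
    not the measure used in the study."""
    gaps = [f_next - f for f, f_next in zip(formants_hz, formants_hz[1:])]
    return 1.0 / min(gaps)

# A back /u/ with F1 and F2 close together scores higher (more focal)
# than a vowel whose formants are spread more evenly (hypothetical values).
focal_u = focalization_proxy([300, 700, 2200, 3300])
less_focal = focalization_proxy([400, 1500, 2500, 3500])
assert focal_u > less_focal
```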

The Development of Spontaneous Sound-Shape Matching in Monolingual and Bilingual Infants During the First Year

ABSTRACT: Recently, it has been proposed that sensitivity to nonarbitrary relationships between speech sounds and objects potentially bootstraps lexical acquisition. However, it is currently unclear whether preverbal infants (e.g., before 6 months of age) with different linguistic profiles are sensitive to such nonarbitrary relationships. Here, the authors assessed 4- and 12-month-old Basque monolingual and Spanish-Basque bilingual infants' sensitivity to cross-modal correspondences between sound-symbolic nonwords without syllable repetition (buba, kike) and drawings of rounded and angular shapes. The findings demonstrate that sensitivity to sound-shape correspondences emerges by 12 months of age in both monolinguals and bilinguals. This suggests that spontaneous sound-shape matching is likely a product of language learning and development and may not be readily available prior to the onset of word learning.

Language dominance shapes non-linguistic rhythmic grouping in bilinguals

ABSTRACT: The degree to which non-linguistic auditory rhythm perception is governed by universal biases (e.g., the Iambic-Trochaic Law; Hayes, 1995) or shaped by native language experience is debated. It has been proposed that rhythmic regularities in spoken language, such as phrasal prosody, affect the grouping abilities of monolinguals (e.g., Iversen, Patel, & Ohgushi, 2008). Here, we assessed the non-linguistic tone grouping biases of Spanish monolinguals and of three groups of Basque-Spanish bilinguals with different levels of Basque experience. It is usually assumed in the literature that Basque and Spanish have different phrasal prosodies and even different linguistic rhythms. To confirm this, we first quantified Basque and Spanish phrasal prosody (Experiment 1a) and the duration patterns used to classify languages into rhythm classes (Experiment 1b). The acoustic measurements revealed that regularities in phrasal prosody systematically differ across Basque and Spanish; by contrast, the rhythms of the two languages are only minimally dissimilar. In Experiment 2, participants’ non-linguistic rhythm preferences were assessed in response to tones alternating in either intensity (Intensity condition) or duration (Duration condition). In the Intensity condition, all groups showed a trochaic grouping bias, as predicted by the Iambic-Trochaic Law. In the Duration condition, the Spanish monolingual and the most Basque-dominant bilingual groups exhibited opposite grouping preferences, in line with the phrasal prosodies of their native/dominant languages: trochaic in Basque, iambic in Spanish. The two other bilingual groups, however, showed no significant biases. Overall, the results indicate that duration-based grouping mechanisms are biased toward the phrasal prosody of the native and dominant language, and that the presence of an L2 in the environment interacts with these auditory biases.
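
The duration patterns mentioned for Experiment 1b are commonly summarized with rhythm metrics such as %V (the proportion of utterance duration that is vocalic) and ΔC (the standard deviation of consonantal interval durations), in the tradition of Ramus and colleagues. The sketch below computes these two metrics from hand-segmented interval durations; the durations are hypothetical and this is only an illustration of the general approach, not the paper's analysis pipeline.

```python
import numpy as np

def rhythm_metrics(vocalic_ms, consonantal_ms):
    """Two classic duration-based rhythm metrics from hand-segmented
    interval durations (in milliseconds):
      %V     -- proportion of total duration that is vocalic
      deltaC -- standard deviation of consonantal interval durations
    Illustration only; not the paper's analysis pipeline."""
    v = np.asarray(vocalic_ms, dtype=float)
    c = np.asarray(consonantal_ms, dtype=float)
    percent_v = 100 * v.sum() / (v.sum() + c.sum())
    delta_c = c.std(ddof=1)
    return percent_v, delta_c

# Hypothetical segmented durations for one utterance per language.
pv_basque, dc_basque = rhythm_metrics([80, 95, 70, 110], [60, 55, 70, 65])
pv_spanish, dc_spanish = rhythm_metrics([75, 90, 85, 100], [50, 65, 60, 70])
```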

The proactive bilingual brain: Using interlocutor identity to generate predictions for language processing

ABSTRACT: The present study investigated the proactive nature of the human brain in language perception. Specifically, we examined whether early proficient bilinguals can use interlocutor identity as a cue for language prediction, using an event-related potentials (ERP) paradigm. Participants were first familiarized, through video segments, with six novel interlocutors who were either monolingual or bilingual. Then, the participants completed an audio-visual lexical decision task in which all the interlocutors uttered words and pseudo-words. Critically, speech onset occurred about 350 ms after the beginning of each video. ERP waves between the onset of the visual presentation of the interlocutors and the onset of their speech differed significantly between trials in which the language was not predictable (bilingual interlocutors) and trials in which it was predictable (monolingual interlocutors), revealing that visual interlocutor identity can function as a cue for language prediction, even before the onset of the auditory-linguistic signal.

Language discrimination and prosodic bootstrapping in early bilingual acquisition: Acoustic regularities in the Basque-Spanish bilingual speech input

January 2016

Interlocutor identity affects language activation in bilinguals

ABSTRACT: In bilingual communities, individuals often communicate in only one of their languages, and they adjust to the linguistic background of different interlocutors with ease. What facilitates such efficiency? We investigated whether bilinguals’ language activation is supported by non-linguistic cues (e.g., interlocutor identity). First, in an audio–visual task, early (proficient) and late (less proficient) Basque–Spanish bilinguals were familiarized with six novel interlocutors who spoke Spanish, Basque, or both languages. Then, participants completed an audio–visual lexical decision task in which the interlocutors produced test items in Spanish or Basque. Early, but not late, bilinguals’ processing times decreased when the language that the interlocutors spoke during familiarization matched the language they spoke at test, relative to test trials on which the interlocutors changed languages. Overall, the results suggest that proficient and/or early bilinguals benefit from an association between language and interlocutor during (or even before) language comprehension, because they are able to predict the context-appropriate language based on non-linguistic cues, such as interlocutor context.

The Amount of Language Exposure Determines Nonlinguistic Tone Grouping Biases in Infants From a Bilingual Environment

ABSTRACT: Duration-based auditory grouping preferences are presumably shaped by language experience in adults and infants, unlike intensity-based grouping, which is governed by a universal loud-soft preference. It has been proposed that duration-based rhythmic grouping preferences develop as a function of native language phrasal prosody. Additionally, it has been suggested that phrasal prosody supports syntax acquisition (e.g., prosodic bootstrapping of word order within phrases). In the current study, 9- to 10-month-old Spanish-dominant and Basque-dominant bilingual infants’ rhythmic preferences in response to nonlinguistic tones alternating in duration or intensity were assessed using a looking preference procedure. In the intensity-based condition, no effect of language experience was present. In the duration-based condition, however, infants exhibited grouping patterns as predicted by the phrasal prosody of their dominant input. Considering the proposed link between syntactic bootstrapping and perceptual tone grouping, our overall results suggest that syntax acquisition (e.g., learning the rules of word order) is supported by different auditory perceptual mechanisms for the dominant syntax than for the less dominant syntax in the infant's dual language input.

The Roots of Language Learning: Infant Language Acquisition

August 2014.

Learning two languages from birth shapes pre-attentive processing of vowel categories: Electrophysiological correlates of vowel discrimination in monolinguals and simultaneous bilinguals

ABSTRACT: Using event-related brain potentials (ERPs), we measured pre-attentive processing involved in native vowel perception, as reflected by the mismatch negativity (MMN), in monolingual and simultaneous bilingual (SB) users of Canadian English and Canadian French in response to various pairings of four vowels: English /u/, French /u/, French /y/, and a control /y/. The monolingual listeners exhibited a discrimination pattern that was shaped by their native language experience. The SB listeners, on the other hand, exhibited an MMN pattern that was distinct from both monolingual listener groups, suggesting that the SB pre-attentive system is tuned to access sub-phonemic detail with respect to both input languages, including detail that is not readily accessed by either group of monolingual peers. Additionally, simultaneous bilinguals exhibited sensitivity to the language context generated by the standard vowel in the MMN paradigm. This automatic access to fine phonetic detail may help SB listeners rapidly adjust their perception to the variable listening conditions that they frequently encounter.

Within-rhythm Class Native Language Discrimination Abilities of Basque-Spanish Monolingual and Bilingual Infants at 3.5 Months of Age

ABSTRACT: Language rhythm determines young infants' language discrimination abilities. However, it is unclear whether young bilingual infants exposed to rhythmically similar languages develop sensitivities to cross-linguistic rhythm cues to discriminate their dual language input. To address this question, 3.5-month-old monolingual Basque, monolingual Spanish, and bilingual Basque-Spanish infants' language discrimination abilities (across low-pass filtered speech samples of Basque and Spanish) were tested using the visual habituation procedure. Although they fall within the same rhythmic class, Basque and Spanish exhibit significant differences in their distributions of vocalic intervals (within-rhythmic-class variation). All infant groups in our study successfully discriminated between the languages, although each group exhibited a different pattern. Monolingual Spanish infants succeeded only when they heard Basque during habituation, suggesting that they were influenced by native language recognition. The bilingual and the Basque monolingual infants showed no such asymmetries and succeeded irrespective of the language of habituation. Additionally, bilingual infants exhibited longer looking times in the test phase than monolinguals, suggesting that bilingual infants attend to their native languages differently than monolinguals do. Overall, the results suggest that bilingual infants are sensitive to within-rhythm acoustic regularities of their native language(s), facilitating language discrimination and hence supporting early bilingual acquisition.

The formation of the perceptual vowel space in monolinguals and simultaneous bilinguals: Insights from a model

ABSTRACT: Recent reports on the perception and production of speech segments in simultaneous bilinguals, who acquired both of their languages within the same time frame (from birth), suggest that they do not always exhibit abilities similar to those of native monolinguals of the same languages [e.g., Guion (2003); Molnar et al. (2009); Sundara and Polka (2008)], despite the fact that simultaneous bilinguals are native users of both of their languages as well. Unlike second language users, who often assimilate acoustically similar non-native phonetic categories to their native ones, simultaneous bilinguals tend to dissimilate the phonetically similar sounds that occur across their two languages. In order to understand the mechanism underlying phonemic category formation in simultaneous bilingual language users across their two languages, the unsupervised formation of the phonetic perceptual space of English-French bilinguals was modeled by training self-organizing maps (SOMs) on vowels from both languages at the same time. The formation of phonetic categories in monolinguals was likewise modeled by training SOMs on vowels from French or English only. Differences in the perception of speech segments in the bilingual SOMs with respect to the monolingual SOMs will be discussed and compared to experimental findings.
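
The SOM approach described above can be sketched briefly: a small self-organizing map is trained on (F1, F2) formant values, and after training, neighboring map nodes come to respond to acoustically similar vowels, so the map's topology reflects the structure of the vowel input (monolingual or mixed bilingual). In the sketch below, the vowel categories, grid size, and learning schedule are placeholders chosen for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (F1, F2) formant values in Hz for three illustrative vowel
# categories -- placeholders, not the stimuli used in the study.
means = {"/u/": (300, 800), "/y/": (300, 1800), "/i/": (280, 2300)}
data = np.vstack([rng.normal(m, (30, 80), size=(200, 2)) for m in means.values()])
data = (data - data.mean(0)) / data.std(0)            # normalize the features

# A 10 x 10 self-organizing map: each node holds a weight vector in formant
# space that is pulled toward the inputs it "wins".
grid = np.stack(np.meshgrid(np.arange(10), np.arange(10)), -1).reshape(-1, 2)
weights = rng.normal(size=(len(grid), 2))

n_steps = 5000
for t in range(n_steps):
    x = data[rng.integers(len(data))]
    lr = 0.5 * (1 - t / n_steps)                      # decaying learning rate
    sigma = 3.0 * (1 - t / n_steps) + 0.5             # shrinking neighborhood
    bmu = np.argmin(((weights - x) ** 2).sum(1))      # best-matching unit
    dist2 = ((grid - grid[bmu]) ** 2).sum(1)          # grid distance to the BMU
    h = np.exp(-dist2 / (2 * sigma ** 2))             # neighborhood function
    weights += lr * h[:, None] * (x - weights)        # pull nodes toward the input

# Training on French-only, English-only, or mixed input yields maps whose
# category structure can then be compared, as in the modeling described above.
```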

ERPs reveal sensitivity to hypothetical contexts in spoken discourse

ABSTRACT: We used event-related potentials to examine the interaction between two dimensions of discourse comprehension: (i) referential dependencies across sentences (e.g., between the pronoun 'it' and its antecedent 'a novel' in: 'John is reading a novel. It ends quite abruptly'), and (ii) the distinction between reference to events/situations and entities/individuals in the real/actual world versus in hypothetical possible worlds. Cross-sentential referential dependencies are disrupted when the antecedent for a pronoun is embedded in a sentence introducing hypothetical entities (e.g., 'John is considering writing a novel. It ends quite abruptly'). An earlier event-related potential reading study showed that such disruptions yield a P600-like frontal positivity. Here we replicate this effect using auditorily presented sentences and discuss the implications for our understanding of discourse-level language processing.

Automatic auditory discrimination of vowels in simultaneous bilingual and monolingual speakers as measured by the mismatch negativity (MMN).

ABSTRACT: MMN responses reflect whether language users have developed long-term memory traces in response to phonemes and whether they are able to perceive small acoustic changes within speech sound categories. Subtle acoustic changes within phonemes are often irrelevant to monolingual perceivers, but can be crucial for bilingual perceivers if the acoustic change differentiates the phonemes of their two languages. In the present study, we investigated whether bilinguals are sensitive to such acoustic changes. We recorded MMN responses from monolingual (English, French) and simultaneous bilingual (English-French) adults using an auditory oddball paradigm in response to four vowels: English [u], French [u], French [y], and an acoustically distinct (control) [y]. In line with previous findings, monolinguals were more sensitive to the phonemic status of the vowels than to the acoustic properties differentiating the sounds. Bilingual speakers revealed a different pattern; they demonstrated overall slower discrimination responses to all sounds, but showed almost equal sensitivity to phonemic and phonetic acoustic differences. The results suggest that bilingual speakers exhibit a more flexible but less uniquely specified perceptual pattern compared to monolingual speakers.

Asymmetries in the mismatch negativity response to vowels by French, English, and bilingual adults: Evidence for a language-universal bias.

ABSTRACT: In infants, discrimination of a vowel change presented in one direction is often significantly better than the same change presented in the reverse direction. These directional asymmetries reveal a language-universal bias favoring vowels with extreme articulatory-acoustic properties (peripheral in the F1-F2 vowel space). In adults, asymmetries are observed for non-native but not for native vowel contrasts. To examine the neurophysiological correlates of these asymmetries, we recorded MMN responses from monolingual (English, French) and simultaneous bilingual (English-French) adults using an oddball paradigm with four vowels: English [u], French [u], French [y], and an acoustically distinct [y]. All vowels were tested in four conditions, with each vowel serving as deviant and standard. Within each vowel pair, MMN responses were larger and earlier when the deviant vowel was more peripheral. This pattern was consistent across the language groups and was observed for within-category (within [u]; within [y]) and for cross-category vowel pairs ([u] versus [y]). The findings indicate that a bias favoring peripheral vowels is retained in the neural pre-attentive processing of vowels in adults. As in infant behavioral findings, this bias is independent of the functional status of the vowel pair. Implications for the Natural Referent Vowel model [Polka and Bohn (2003)] will be discussed.

Bilinguals mind their language (mode): Vowel perception patterns of simultaneous bilingual and monolingual speakers.

ABSTRACT: It is well established that the speech perception abilities of monolingual speakers are highly tuned to the sounds of their native language, and that this language specificity affects how monolingual speakers distinguish the sounds of a non-native language. The present study addressed how the speech perception skills of simultaneous bilingual speakers, who are native speakers of two languages, may be affected by control of the active language mode. We tested monolingual (English and French) and simultaneous bilingual (English-French) adults in an identification and rating task with 42 vowels along a continuum from a high back rounded vowel (u) to a high front rounded vowel (y), both of which are phonemic in French, whereas only the back vowel is represented in English. Bilinguals completed the task in three language modes: English, French, and bilingual. As expected, monolingual speakers demonstrated a language-specific perceptual pattern for the vowels. Bilingual participants displayed different perceptual patterns in each active language mode, accommodating the vowel categories relevant to the target language. These findings indicate that simultaneous bilinguals rely on a finely detailed perceptual space and are flexible as they adapt their perception to different language environments.

Deviant standards: Effects of stimuli and oddball status on the MMN in speech sound discrimination of monolingual and bilingual speakers

ABSTRACT: Vowel perception patterns of monolingual (English and French) and simultaneous bilingual (English/French) adults were investigated using an unattended auditory oddball paradigm. The discrimination abilities of each language group were measured in response to four vowels whose phonemic status varied between the languages: French [u], English [u], French [y], and an acoustically distinct, non-phonemic control [y]. Stimuli were created using the Variable Linear Articulatory Model (VLAM) (Boë & Maeda, 1998), which simulates realistic vowels in terms of articulatory-to-acoustic relationships. ERPs were collected in four experimental blocks. Each block contained one of the vowels as a standard and the remaining three vowels as deviants; in this way, each token served both as a standard and as a deviant (three times) within the same paradigm. The overall results are in line with previous electrophysiological and behavioral findings: monolingual speakers exhibited greater sensitivity to the phonemic status of the vowels than to the acoustic properties differentiating the sounds. Bilingual speakers demonstrated slower responses but seemed more sensitive to acoustic patterns than monolinguals. In addition to the effect of language experience on the MMN, the oddball status (standard or deviant) of the vowels was also considered. Since each sound served as a standard and as a deviant relative to all the other sounds, not only was the MMN triggered by the acoustic/phonemic changes in the stimuli measured, but the MMN elicited purely by the oddball status of the same tokens was also examined. Possible methodological implications of this design will be discussed.
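
In oddball designs like the one described above, an MMN difference wave is typically computed by subtracting the ERP evoked by a token when it serves as the standard from the ERP evoked by the same (or a comparison) token when it serves as a deviant. The minimal sketch below illustrates that computation with simulated single-trial epochs; the array shapes, the simulated effect, and the specific contrast are illustrative assumptions rather than the paper's analysis.

```python
import numpy as np

def average_erp(epochs):
    """Average single-trial epochs (n_trials x n_samples) into an ERP."""
    return np.asarray(epochs, dtype=float).mean(axis=0)

def mmn_difference_wave(deviant_epochs, standard_epochs):
    """MMN difference wave: ERP to a token presented as a deviant minus
    the ERP to the same (or a comparison) token presented as a standard.
    Generic sketch; not necessarily the contrast computed in the paper."""
    return average_erp(deviant_epochs) - average_erp(standard_epochs)

# Illustrative use with simulated epochs (200 trials x 600 samples).
rng = np.random.default_rng(2)
standard_epochs = rng.standard_normal((200, 600))
deviant_epochs = rng.standard_normal((200, 600)) - 0.3   # simulated negativity
mmn = mmn_difference_wave(deviant_epochs, standard_epochs)
peak_sample = np.argmin(mmn)                              # MMN is a negativity
```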

Speech Perception by 6-to 8-Month-Olds in the Presence of Distracting Sounds

ABSTRACT: The role of selective attention in infant phonetic perception was examined using a distraction masker paradigm. We compared perception of /bu/ versus /gu/ in 6- to 8-month-olds using a visual fixation procedure. Infants were habituated to multiple natural productions of 1 syllable type and then presented with 4 test trials (old-new-old-new). Perception of the new syllable (indexed as novelty preference) was compared across 3 groups: habituated and tested on syllables in quiet (Group 1), habituated and tested on syllables mixed with a nonspeech signal (Group 2), and habituated with syllables mixed with a nonspeech signal and tested on syllables in quiet (Group 3). In Groups 2 and 3, each syllable was mixed with a segment spliced from a recording of bird and cricket songs. This nonspeech signal has no frequencies overlapping with the syllable; it is not expected to alter the sensory structure or perceptual coherence of the syllable. Perception was negatively affected by the presence of the auditory distracter during habituation; individual performance levels also varied more in these groups. The findings show that perceiving speech in the presence of irrelevant sounds poses a cognitive challenge for young infants. We conclude that selective attention is an important skill that supports speech perception in infants; the significance of this skill for language learning during infancy deserves investigation.

Development of coronal stop perception: Bilingual infants keep pace with their monolingual peers

ABSTRACT: Previous studies indicate that the discrimination of native phonetic contrasts in infants exposed to two languages from birth follows a different developmental time course from that observed in monolingual infants. We compared infant discrimination of dental (French) and alveolar (English) place variants of /d/ in three groups differing in language experience. At 6-8 months, infants in all three language groups succeeded; at 10-12 months, monolingual English and bilingual but not monolingual French infants distinguished this contrast. Thus, for highly frequent, similar phones, despite overlap in cross-linguistic distributions, bilingual infants performed on par with their English monolingual peers and better than their French monolingual peers.

The developmental course of lexical tone perception in the first year of life

ABSTRACT: Perceptual reorganization of infants' speech perception has been found from 6 months for consonants and earlier for vowels. Recently, similar reorganization has been found for lexical tone between 6 and 9 months of age. Given that there is a close relationship between vowels and tones, this study investigates whether the perceptual reorganization for tone begins earlier than 6 months. Non-tone language English and French infants were tested with the Thai low vs. rising lexical tone contrast, using the stimulus alternating preference procedure. Four- and 6-month-old infants discriminated the lexical tones, and there was no decline in discrimination performance across these ages. However, 9-month-olds failed to discriminate the lexical tones. This particular pattern of decline in nonnative tone discrimination over age indicates that perceptual reorganization for tone does not parallel the developmentally prior decline observed in vowel perception. The findings converge with previous developmental cross-language findings on tone perception in English-language infants [Mattock, K., & Burnham, D. (2006). Chinese and English infants' tone perception: Evidence for perceptual reorganization. Infancy, 10(3)], and extend them by showing similar perceptual reorganization for non-tone language infants learning rhythmically different non-tone languages (English and French).

Natural referent vowels guide the development of vowel perception

ABSTRACT: Certain vowels are favored across the languages of the world. This selection bias has received a great deal of attention in linguistic theories seeking to explain vowel system typologies. In comparison, the role that specific vowels might play in the ontogeny of vowel perception has remained more implicit. In this talk we will summarize recent findings that elucidate the functional significance of peripheral vowels in the development of vowel perception. Data from cross-language studies of infant vowel discrimination and vowel preference will be presented. This work shows that peripheral vowels have a perceptual priority for young infants and that this bias is independent of the phonemic status of the vowels presented in the perceptual task. Findings from cross-language experiments with adults reveal that language experience builds on the natural vowel biases observed in infancy. The adult data suggest that the natural bias remains in place in mature listeners unless the perceiver needs to override it to optimize perception of functional vowel differences. These findings support our proposal of the Natural Referent Vowel hypothesis as a framework for understanding the development of vowel perception and production. Specific avenues of research needed to elaborate this framework will be outlined. [Work supported by NSERC.]

The role of attention in infant phonetic perception

ABSTRACT: Attention is an important factor underlying phonetic perception that is not well understood. In this study we examined the role of auditory attention in infant phonetic perception using a distraction masker paradigm. We tested infant discrimination of /bu/ vs /gu/ with a habituation procedure and three natural productions of each syllable. For the quiet condition each token was copied into a separate sound file. For the distractor condition, a high frequency noise was added to each sound file so that it gated on and off with the onset and offset of the syllable. The distractor noise was a recording of bird and cricket songs whose frequencies did NOT overlap with the test syllables. Thus, the noise did not change the audibility of the syllable, but it could distract infants if they do not focus their attention well. Infants (6‐ to 8‐month‐olds) were tested in each condition. Infants tested in quiet performed significantly better than infants tested in the distractor condition; discrimination scores showed little overlap between the two groups. These findings indicate that in young infants, attention to subtle phonetic differences is easily disrupted. The implications for developmental models of speech perception will be discussed.

Preference patterns in infant vowel perception

ABSTRACT: Infants show directional asymmetries in vowel discrimination tasks that reveal an underlying perceptual bias favoring more peripheral vowels. Polka and Bohn (2003) propose that this bias is language independent and plays an important role in the development of vowel perception. In the present study we measured infant listening preferences for vowels to assess whether a perceptual bias favoring peripheral vowels can be measured more directly. Monolingual (French and English) and bilingual infants completed a listening preference task using multiple natural tokens of German /dut/ and /dyt/ produced by a male talker. In previous work, discrimination of this vowel pair by German-learning and by English-learning infants revealed a robust directional asymmetry in which /u/ acts as a perceptual anchor; specifically, infants had difficulty detecting a change from /u/ to /y/, whereas a change from /y/ to /u/ was readily detected. Preliminary results from preference tests with these stimuli show that most infants between 3 and 5 months of age also listen longer to /u/ than to /y/. Preference data obtained from older infants and with other vowel pairs will also be reported to further test the claim that peripheral vowels have a privileged perceptual status in infant perception.

Behavioural and electrophysiological correlates of vowel perception in monolingual and simultaneous bilingual users of Canadian English and Canadian French

ABSTRACT: In this dissertation, the phonetic perception abilities of simultaneous bilingual (SB) language users exposed to both Canadian English (CE) and Canadian French (CF) from birth were tested to examine the mechanisms underlying this process. It is well established that monolingual speakers' speech perception abilities are highly tuned to the sounds of their native language, and that this language specificity affects how they distinguish the sounds of a second language. However, it is not well understood how the speech perception skills of simultaneous bilinguals, who are native speakers of two languages, are shaped. In order to investigate speech processing in this population, two studies were designed to assess the vowel perception abilities of monolingual and SB users of CE and CF at two different stages of speech processing. In Study 1, using a behavioral vowel categorization paradigm, we measured how control of the active language mode or language context (English, French, or bilingual) affects the perception of acoustically similar cross-language vowel categories (specifically, the front /y/ and back /u/ high vowels). As expected, monolingual speakers demonstrated a language-specific perceptual pattern for the vowels; the SB participants, however, displayed different patterns in each active language mode and were able to accommodate the acoustically similar vowel categories relevant in the target language, which was achieved by dissimilation (separation) of the categories. These findings indicate that SB listeners rely on a finely detailed perceptual space and are flexible as they adapt their perception to different language environments. In Study 2, using event-related brain potentials, we measured pre-attentive processing involved in vowel perception as reflected by the mismatch negativity (MMN). The SB listeners exhibited an MMN pattern that was distinct from both monolingual listener groups even at the earliest levels of speech processing, suggesting that the SB pre-attentive system is tuned to access sub-phonemic detail with respect to both of their input languages, including detail that is not readily accessed by either set of monolingual peers. This automatic access to fine phonetic detail may be essential in supporting the SBs' ability to make rapid, effortless shifts in perception across different communication contexts (French, English, bilingual).